
Query:

Scholar name: Shi Yunhui (施云惠)

Total: 14 pages
SMSIR: Spherical Measure Based Spherical Image Representation SCIE
Journal article | 2021, 30: 6377-6391 | IEEE TRANSACTIONS ON IMAGE PROCESSING
Abstract:

This paper presents a spherical-measure-based spherical image representation (SMSIR), together with sphere-based resampling methods for generating it and, on this basis, a spherical wavelet transform. We first give a formal recursive definition of the spherical triangle elements of SMSIR and a dyadic index scheme. The index scheme supports global random access and need not be pre-computed or stored, so the elements of SMSIR can be indexed as efficiently as those of planar images. Two resampling methods are presented for generating SMSIR from the widely used equirectangular projection (ERP) representation. Notably, the spherical-measure-based resampling, which exploits the mapping between the spherical and parameter domains, achieves higher computational efficiency than resampling based on spherical radial basis functions (RBFs). Finally, we design high-pass and low-pass filters with lifting schemes based on the dyadic index, which further verifies the efficiency of our indexing and copes with spherical isotropy; this provides a novel multi-resolution analysis (MRA) for spherical images. Experiments on continuous synthetic spherical images indicate that our representation recovers the original image signals more accurately than the ERP and cubemap (CMP) representations at the same sampling rate. Resampling experiments on natural spherical images show that our resampling methods outperform bilinear and bicubic interpolation in both subjective and objective quality, with gains of up to 2 dB in terms of S-PSNR. Experiments also show that our spherical image transform captures more geometric features of spherical images than traditional wavelet transforms.
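The dyadic index scheme described above can be reduced to index arithmetic: if each spherical triangle splits into four children, level-local indices give O(1) random access to parents and children with no precomputed tables. A minimal sketch, in which the function names and the four-way child ordering are illustrative assumptions rather than the paper's actual scheme:

```python
# Sketch of a dyadic index for recursively subdivided spherical triangles.
# Assumption: each triangle at level l splits into 4 children, and indices
# are level-local (triangle i at level l has children 4i..4i+3 at level l+1).

def children(i: int) -> list[int]:
    """Level-local indices of the four children of triangle i."""
    return [4 * i + k for k in range(4)]

def parent(i: int) -> int:
    """Level-local index of the parent of triangle i."""
    return i // 4

def ancestor_path(i: int, levels: int) -> list[int]:
    """Base-4 dyadic digits identifying triangle i within its level-`levels` root."""
    digits = []
    for _ in range(levels):
        digits.append(i % 4)
        i //= 4
    return digits[::-1]
```

Because the index is pure arithmetic, any element can be addressed at random in constant time, which is what lets a representation of this kind be indexed like a planar image.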

Keywords:

Feature extraction; Interpolation; spherical measure; Image representation; Geometry; indexing scheme; Indexing; spherical RBF; Extraterrestrial measurements; image resampling; Spherical images; Surface treatment

Cite:

Wu, Gang; Shi, Yunhui; Sun, Xiaoyan; Wang, Jin; Yin, Baocai. SMSIR: Spherical Measure Based Spherical Image Representation. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 6377-6391.
Sparse Coding of Intra Prediction Residuals for Screen Content Coding CPCI-S
Conference paper | 2021 | IEEE International Conference on Consumer Electronics (ICCE)
Abstract:

High Efficiency Video Coding - Screen Content Coding (HEVC-SCC) is an extension of HEVC that adds sophisticated compression methods for computer-generated content. A video frame is usually split into blocks that are predicted and subtracted from the original, leaving a residual. These blocks are transformed by an integer discrete sine transform (IntDST) or integer discrete cosine transform (IntDCT), quantized, and entropy-coded into a bitstream. In contrast to camera-captured content, screen content contains many similar and repeated blocks, and the HEVC-SCC tools exploit these similarities in various ways. After these tools are applied, the remaining signals are handled by the IntDST/IntDCT, which is designed for camera-captured content. In sparse coding, however, a dictionary learned from these residuals adapts much better, and the outcome is significantly sparser than for camera-captured content. This paper proposes a sparse coding scheme that takes advantage of the similar and repeated intra prediction residuals and targets low- to mid-frequency/energy blocks with a low-sparsity setup. We also split the common test conditions (CTC) sequences into categories for training and testing. The scheme is integrated into the HEVC-SCC test model HM-16.18+SCM-8.7 as an alternate transform, with the choice between the traditional transform and the proposed method made by a rate-distortion optimization (RDO) decision. Experimental results show that the proposed method achieves a Bjontegaard rate difference (BD-rate) of up to 4.6% in an extremely computationally demanding setup for the "all intra" configuration, compared with HM-16.18+SCM-8.7.
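The sparse coding step this abstract relies on (orthogonal matching pursuit over a learned dictionary, per the keywords below) can be sketched as follows. This is a generic OMP sketch: the dictionary here is a random orthonormal stand-in, not a K-SVD dictionary trained on intra prediction residuals:

```python
import numpy as np

def omp(D: np.ndarray, y: np.ndarray, sparsity: int) -> np.ndarray:
    """Greedy orthogonal matching pursuit: approximate y as D @ x with
    at most `sparsity` nonzero entries in x."""
    residual = y.astype(float).copy()
    support: list[int] = []
    x = np.zeros(D.shape[1])
    for _ in range(sparsity):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: a 2-sparse "residual block" over a random orthonormal dictionary
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))
x_true = np.zeros(16)
x_true[3], x_true[11] = 2.0, -1.0
y = D @ x_true
x_hat = omp(D, y, sparsity=2)  # recovers x_true exactly for orthonormal D
```

In the codec described above, such a routine would run per block, and the RDO decision would compare its rate-distortion cost against the conventional IntDST/IntDCT path.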

Keywords:

screen content coding; intra prediction; orthogonal matching pursuit; sparse representation; residual coding; KSVD; HEVC; video coding; sparse coding

Cite:

Schimpf, Michael G.; Ling, Nam; Shi, Yunhui; Liu, Ying. Sparse Coding of Intra Prediction Residuals for Screen Content Coding. IEEE International Conference on Consumer Electronics (ICCE), 2021.
球面图像的SLIC算法 (SLIC Algorithm for Spherical Images) CSCD
Journal article | 2021, 47(3): 216-223 | 北京工业大学学报 (Journal of Beijing University of Technology)
Abstract:

The simple linear iterative clustering (SLIC) superpixel segmentation algorithm can be applied directly to spherical images in the equirectangular projection (ERP) format. However, the projection destroys the local correlation of the spherical data, so SLIC fails to generate suitable superpixels in some regions of the ERP image, which degrades its performance. To solve this problem, the ERP spherical image is first resampled to produce approximately uniformly distributed samples on the sphere; the resampled data are then reorganized into a new two-dimensional representation of the spherical image that preserves the local correlation of the spherical data. Based on this representation, the geometric relations of the spherical data are integrated into the SLIC algorithm, yielding a SLIC algorithm for spherical images. The original SLIC algorithm and the proposed algorithm were applied to several ERP images, and their superpixel segmentations were compared under different cluster counts. Experimental results show that the proposed spherical-image SLIC algorithm outperforms the original SLIC algorithm in objective quality; the generated superpixels are unaffected by their position on the sphere, have closed contours, and exhibit good similarity and consistency on the sphere.
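The core modification, integrating spherical geometry into SLIC's clustering, can be illustrated by replacing the planar spatial term in SLIC's distance with the great-circle (geodesic) distance between unit vectors on the sphere. The combined distance and the compactness weight m follow the standard SLIC formulation; the spherical adaptation shown here is an illustrative assumption, not the paper's exact construction:

```python
import numpy as np

def geodesic(p: np.ndarray, q: np.ndarray) -> float:
    """Great-circle distance between unit vectors p and q on the unit sphere."""
    return float(np.arccos(np.clip(np.dot(p, q), -1.0, 1.0)))

def slic_distance(color_px, color_ctr, pos_px, pos_ctr, S: float, m: float) -> float:
    """SLIC-style combined distance: color term plus a spatial term normalized
    by the expected cluster spacing S and weighted by compactness m, with the
    planar spatial distance replaced by the spherical geodesic distance."""
    dc = np.linalg.norm(np.asarray(color_px, float) - np.asarray(color_ctr, float))
    ds = geodesic(pos_px, pos_ctr)
    return float(np.sqrt(dc**2 + (ds / S) ** 2 * m**2))
```

In a full spherical SLIC loop, each cluster center would only search pixels within a geodesic radius of about 2S, exactly as planar SLIC restricts its search to a 2S×2S window.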

Keywords:

SLIC algorithm; clustering; superpixels; resampling; spherical images; image segmentation

Cite:

Wu Gang (吴刚); Shi Yunhui (施云惠); Yin Baocai (尹宝才). 球面图像的SLIC算法 [SLIC Algorithm for Spherical Images]. 北京工业大学学报 (Journal of Beijing University of Technology), 2021, 47(3): 216-223.
MS-Net: A lightweight separable ConvNet for multi-dimensional image processing SCIE
Journal article | 2021, 80(17): 25673-25688 | MULTIMEDIA TOOLS AND APPLICATIONS
Abstract:

As the core technology of deep learning, convolutional neural networks have been widely applied to a variety of computer vision tasks and have achieved state-of-the-art performance. However, they handle high-dimensional image signals inefficiently because the number of training parameters increases dramatically. In this paper, we present MS-Net, a lightweight and efficient network for multi-dimensional (MD) image processing that offers a promising way to handle MD images, especially on devices with limited computational capacity. It employs a series of one-dimensional convolution kernels and introduces a separable structure into the ConvNet throughout the learning process to handle MD image signals, while multiple group convolutions with 1×1 kernels extract channel information. The information from each dimension and channel is then fused by a fusion module to extract complete image features. The proposed MS-Net thus significantly reduces training complexity, parameter count, and memory cost. MS-Net is evaluated on the 2D benchmarks CIFAR-10 and CIFAR-100 and the 3D benchmark KTH. Extensive experimental results show that it achieves competitive performance with greatly reduced computational and memory cost compared with state-of-the-art ConvNet models.
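The parameter saving behind the separable structure can be shown with plain NumPy: an MD convolution is replaced by a sequence of 1D convolutions, one per axis, shrinking the weight count from k^d to d·k for a size-k kernel in d dimensions. This is a generic sketch of separable filtering, not the MS-Net architecture itself (which additionally uses 1×1 group convolutions and a fusion module):

```python
import numpy as np

def conv1d_along(x: np.ndarray, k: np.ndarray, axis: int) -> np.ndarray:
    """Apply a 1D convolution kernel along one axis of an N-D array."""
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), axis, x)

def separable_filter(x: np.ndarray, kernels: list) -> np.ndarray:
    """Sequence of 1D convolutions, one kernel per axis: d*k weights
    instead of k**d for a dense d-dimensional kernel."""
    out = x.astype(float)
    for axis, k in enumerate(kernels):
        out = conv1d_along(out, np.asarray(k, float), axis)
    return out

# 3x3 smoothing of a 2D image using two 3-tap kernels (6 weights, not 9);
# interior values of a constant image stay constant (zero padding only
# affects the borders)
img = np.ones((5, 5))
k = np.array([0.25, 0.5, 0.25])
out = separable_filter(img, [k, k])
```

For a 3D volume the same helper would take three kernels, which is where the k^3 vs. 3k gap makes separability attractive for higher-dimensional signals.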

Keywords:

Multi-dimensional image processing; Separable convolution neural network; Matricization; Feature extraction and representation

Cite:

Hou, Zhenning; Shi, Yunhui; Wang, Jin; Cui, Yingxuan; Yin, Baocai. MS-Net: A lightweight separable ConvNet for multi-dimensional image processing. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80(17): 25673-25688.
Learning Redundant Sparsifying Transform based on Equi-Angular Frame CPCI-S
Conference paper | 2020, 439-442 | 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP)
Abstract:

Because sparse coding in the redundant sparse dictionary learning model is NP-hard, interest has turned to the non-redundant sparsifying transform, whose sparse coding is computationally cheap. However, natural images typically contain diverse textures that cannot be sparsified well by a non-redundant system. In this paper we propose a new approach for learning a redundant sparsifying transform based on an equi-angular frame, in which the frame and its dual frame correspond to the forward and backward transforms, respectively. Uniform mutual coherence in the sparsifying transform is enforced by the equi-angular constraint, which better sparsifies diverse textures, and an efficient algorithm is proposed for learning the redundant transform. Experimental results on image representation illustrate the superiority of the proposed method over non-redundant sparsifying transforms. Image denoising results show that our method achieves superior denoising performance, in terms of subjective and objective quality, compared with K-SVD, the data-driven tight frame method, the learning-based sparsifying transform, and the overcomplete transform model with block cosparsity (OCTOBOS).
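The equi-angular constraint has a concrete numerical reading: for unit-norm atoms, mutual coherence is the largest off-diagonal entry of the absolute Gram matrix, and an equi-angular frame makes all those entries equal. The Mercedes-Benz frame (three unit vectors at 120° in R^2) is a classic instance; this is an illustrative check, not the paper's learning algorithm:

```python
import numpy as np

def mutual_coherence(D: np.ndarray) -> float:
    """Largest absolute inner product between distinct unit-norm columns."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return float(G.max())

# Mercedes-Benz frame: three unit vectors at 120-degree spacing in R^2.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.vstack([np.cos(angles), np.sin(angles)])  # 2 x 3, columns are atoms

# Equi-angular: every off-diagonal |<f_i, f_j>| equals the same value (0.5 here),
# so the coherence is spread uniformly rather than concentrated on a few pairs.
mu = mutual_coherence(F)
```

Enforcing this uniformity during learning is what prevents a few near-parallel atoms from dominating the redundant transform.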

Keywords:

sparse; mutual coherence; equi-angular; redundant transform

Cite:

Zhang, Min; Shi, Yunhui; Sun, Xiaoyan; Ling, Nam; Qi, Na. Learning Redundant Sparsifying Transform based on Equi-Angular Frame. 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020: 439-442.
Multi-Direction Dictionary Learning Based Depth Map Super-Resolution With Autoregressive Modeling SCIE
Journal article | 2020, 22(6): 1470-1484 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 2
Abstract:

3D depth cameras have become increasingly popular in recent years. However, the depth maps they capture can hardly be used directly in 3D reconstruction because they often suffer from low resolution and blurred depth discontinuities, so super-resolution of depth maps is necessary. In depth maps, edge areas play a more important role and exhibit more distinct geometric directions than in natural images. Most existing super-resolution methods ignore this fact and cannot handle depth edges properly. Motivated by this, we propose a compound method that combines multi-direction dictionary sparse representation with autoregressive (AR) models, so that depth edges are represented precisely at different levels. At the patch level, depth edge patches with geometric directions are well represented by pre-trained multi-directional dictionaries; compared with a universal dictionary, multiple dictionaries trained on patches of different directions represent a directional depth patch much better. At the finer pixel level, we use an adaptive AR model to represent local correlation patterns in small areas. Extensive experimental results on both synthetic and real datasets demonstrate that the proposed model outperforms state-of-the-art depth map super-resolution methods in both quantitative metrics and subjective visual quality.

Keywords:

Geometry; Dictionaries; sparse representation; Color; Image edge detection; Machine learning; super-resolution (SR); dictionary learning; autoregressive (AR) model; Adaptation models; Depth map; Cameras

Cite:

Wang, Jin; Xu, Wei; Cai, Jian-Feng; Zhu, Qing; Shi, Yunhui; Yin, Baocai. Multi-Direction Dictionary Learning Based Depth Map Super-Resolution With Autoregressive Modeling. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22(6): 1470-1484.
A Co-Prediction-Based Compression Scheme for Correlated Images SCIE
Journal article | 2020, 22(8): 1917-1928 | IEEE TRANSACTIONS ON MULTIMEDIA
Abstract:

Deep learning has achieved preliminary success in image compression thanks to its ability to learn the nonlinear spaces with compact features to which the training samples belong. However, it is not straightforward for network-based image compression methods to code multiple highly related images. In this paper, we propose co-prediction-based image compression (CPIC), which uses multi-stream autoencoders to collaboratively code multiple highly correlated images by enforcing a co-reference constraint on the multi-stream features. The patch samples fed into the multi-stream autoencoder are generated through corresponding patch matching under permutation, which helps the autoencoder learn the relationship among corresponding patches from the correlated images. Each stream network consists of an encoder, a decoder, an importance map network, and a binarizer. The importance map network guides the allocation of local bit rate among the binary features and guarantees the compactness of the learned features, while a proxy function makes the binarization of the autoencoder's code layer differentiable. Finally, the network optimization is formulated as a rate-distortion optimization. Experimental results show that the proposed compression method outperforms JPEG2000 by up to 1.5 dB in terms of PSNR.

Keywords:

Convolutional codes; Autoencoder; rate distortion optimization; Optimization; Transform coding; correlated images; Image reconstruction; Decoding; Image coding; Transforms; image compression; multi-stream networks

Cite:

Yin, Wenbin; Shi, Yunhui; Zuo, Wangmeng; Fan, Xiaopeng. A Co-Prediction-Based Compression Scheme for Correlated Images. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22(8): 1917-1928.
Stable Sparse Model with Non-Tight Frame SCIE
Journal article | 2020, 10(5) | APPLIED SCIENCES-BASEL
WoS CC Cited Count: 1
Abstract:

Overcomplete representation is attracting interest in image restoration due to its potential to generate sparse representations of signals. However, seeking a sparse representation can be unstable in the presence of noise. The restricted isometry property (RIP), which plays a crucial role in guaranteeing stable sparse representation, has been ignored in existing sparse models because it is hard to integrate into them as a regularizer. In this paper, we propose a stable sparse model with a non-tight frame (SSM-NTF) that approximates the RIP through the corresponding frame condition. Our SSM-NTF model retains the advantages of the traditional sparse model while incorporating the RIP and a closed-form expression for the sparse coefficients, which together ensure stable recovery. Moreover, benefiting from the frame pair (the original frame and its dual), SSM-NTF combines a synthesis sparse system and an analysis sparse system. By enforcing the frame bounds and applying a second-order truncated series to approximate the inverse frame operator, we formulate a dictionary pair (frame pair) learning model together with a two-phase iterative algorithm. Extensive experimental results on image restoration tasks such as denoising, super-resolution, and inpainting show that the proposed SSM-NTF achieves superior recovery performance in terms of both subjective and objective quality.

Keywords:

RIP; stable recovery; sparse dictionary; frame

Cite:

Zhang, Min; Shi, Yunhui; Qi, Na; Yin, Baocai. Stable Sparse Model with Non-Tight Frame. APPLIED SCIENCES-BASEL, 2020, 10(5).
Data-Driven Redundant Transform Based on Parseval Frames SCIE
Journal article | 2020, 10(8) | APPLIED SCIENCES-BASEL
WoS CC Cited Count: 2
Abstract:

The sparsity of images in a certain transform domain or dictionary has been exploited in many image processing applications. Both classic transforms and sparsifying transforms reconstruct images from a linear combination of a small basis of the transform, and both kinds of transform are non-redundant. However, natural images contain complicated textures and structures that can hardly be represented sparsely by square transforms. To address this, we propose a data-driven redundant transform based on Parseval frames (DRTPF), applying a frame and its dual frame as the backward and forward transform operators, respectively. Benefiting from this pairwise use of frames, the proposed model combines a synthesis sparse system and an analysis sparse system. By constraining the frame pair to be Parseval frames, the singular values and condition number of the learnt redundant frames, which measure the quality of the learnt sparsifying transforms, are driven to an optimal state. We formulate a transform pair (i.e., frame pair) learning model and a two-phase iterative algorithm, analyze the robustness of the proposed DRTPF and the convergence of the corresponding algorithm, and demonstrate its effectiveness by analyzing its robustness against noise and sparsification errors. Extensive experimental results on image denoising show that the proposed model achieves superior denoising performance, in terms of subjective and objective quality, compared with traditional sparse models.
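The Parseval condition the model enforces has a simple numerical statement: if the frame vectors form the rows of F, the frame is Parseval exactly when F^T F = I, equivalently when every singular value of F equals 1 (frame bounds A = B = 1 and condition number 1). A small check using the Mercedes-Benz frame scaled by sqrt(2/3), as an illustrative example rather than a learnt DRTPF frame:

```python
import numpy as np

def is_parseval(F: np.ndarray, tol: float = 1e-10) -> bool:
    """A frame whose rows are f_i is Parseval iff F^T F = I."""
    n = F.shape[1]
    return bool(np.allclose(F.T @ F, np.eye(n), atol=tol))

def frame_bounds(F: np.ndarray):
    """Lower/upper frame bounds A, B: extreme eigenvalues of F^T F,
    i.e. squared extreme singular values of F."""
    s = np.linalg.svd(F, compute_uv=False)
    return float(s.min() ** 2), float(s.max() ** 2)

# Mercedes-Benz frame scaled to be Parseval in R^2 (3 vectors, redundancy 3/2)
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
F = np.sqrt(2 / 3) * np.column_stack([np.cos(angles), np.sin(angles)])  # 3 x 2

A, B = frame_bounds(F)  # both equal 1 for a Parseval frame
```

Driving the learnt frame pair toward this state is what keeps the condition number of the transform at its optimum, which is the quality measure the abstract refers to.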

Keywords:

Parseval frame; sparse representation; transform

Cite:

Zhang, Min; Shi, Yunhui; Qi, Na; Yin, Baocai. Data-Driven Redundant Transform Based on Parseval Frames. APPLIED SCIENCES-BASEL, 2020, 10(8).
Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China; post code 100124). Contact: 010-67392185
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.