
Query: Scholar name: 施云惠 (Shi, Yunhui)

Total: 19 pages of results
Syntax-Guided Content-Adaptive Transform for Image Compression SCIE
Journal article | 2024, 24 (16) | SENSORS

Abstract :

The surge in image data has significantly increased the pressure on storage and transmission, posing new challenges for image compression technology. The structural texture of an image reflects its statistical characteristics, which can be exploited for image encoding and decoding. Consequently, learning-based content-adaptive compression methods can better capture the content attributes of images, thereby enhancing encoding performance. However, learned image compression methods do not comprehensively account for both the global and local correlations among the pixels within an image. Moreover, they are constrained by rate-distortion optimization, which prevents the attainment of a compact representation of image attributes. To address these issues, we propose a syntax-guided content-adaptive transform framework that efficiently captures image attributes and enhances encoding efficiency. First, we propose a syntax-refined side information module that fully leverages syntax and side information to guide the adaptive transformation of image attributes. Moreover, to more thoroughly exploit the global and local correlations in image space, we design global-local modules, local-global modules, and upsampling/downsampling modules in the codecs, further eliminating local and global redundancies. The experimental findings indicate that our proposed syntax-guided content-adaptive image compression model successfully adapts to the diverse complexities of different images, enhancing the efficiency of image compression. The proposed method also demonstrates outstanding performance across three benchmark datasets.
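The rate-distortion optimization the abstract refers to trains a learned codec to minimize J = R + λ·D. A minimal numpy sketch of how that objective can be evaluated for a quantized latent; the uniform rounding quantizer and empirical-entropy rate estimate here are illustrative assumptions, not the paper's actual entropy model:

```python
import numpy as np

def rd_objective(latent, original, reconstruct, lam=0.01):
    """Evaluate J = R + lambda * D for a quantized latent.

    R: empirical entropy (bits per symbol) of the rounded latent,
       a stand-in for a learned entropy model.
    D: mean squared error of the reconstruction.
    """
    q = np.round(latent)                      # hard quantization
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))            # bits per latent symbol
    recon = reconstruct(q)
    dist = np.mean((original - recon) ** 2)
    return rate + lam * dist, rate, dist
```

In a learned codec `reconstruct` would be the synthesis transform; passing the identity shows the trade-off for plain rounding.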

Keyword :

image compression; deep learning; adaptive compression

Cite:


GB/T 7714: Shi, Yunhui, Ye, Liping, Wang, Jin, et al. Syntax-Guided Content-Adaptive Transform for Image Compression [J]. SENSORS, 2024, 24 (16).
MLA: Shi, Yunhui, et al. "Syntax-Guided Content-Adaptive Transform for Image Compression." SENSORS 24.16 (2024).
APA: Shi, Yunhui, Ye, Liping, Wang, Jin, Wang, Lilong, Hu, Hui, Yin, Baocai, et al. Syntax-Guided Content-Adaptive Transform for Image Compression. SENSORS, 2024, 24 (16).
Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging SCIE
Journal article | 2024, 24 (19) | SENSORS

Abstract :

Recently, deep unfolding network methods have significantly progressed in hyperspectral snapshot compressive imaging. Many approaches directly employ Transformer models to boost the feature representation capabilities of algorithms. However, they often fall short of leveraging the full potential of self-attention mechanisms. Additionally, current methods lack adequate consideration of both intra-stage and inter-stage feature fusion, which hampers their overall performance. To tackle these challenges, we introduce a novel approach that hybridizes the sparse Transformer and wavelet fusion-based deep unfolding network for hyperspectral image (HSI) reconstruction. Our method includes the development of a spatial sparse Transformer and a spectral sparse Transformer, designed to capture spatial and spectral attention of HSI data, respectively, thus enhancing the Transformer's feature representation capabilities. Furthermore, we incorporate wavelet-based methods for both intra-stage and inter-stage feature fusion, which significantly boosts the algorithm's reconstruction performance. Extensive experiments across various datasets confirm the superiority of our proposed approach.
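Deep unfolding networks turn the iterations of a classical sparse-recovery solver into network stages. The idea can be illustrated with plain ISTA, each loop iteration playing the role of one unfolded stage; here the step size and threshold are fixed, whereas an unfolding network (including the hybrid Transformer design above) would learn these components:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.1, stages=50):
    """Run `stages` ISTA iterations for min ||Ax - y||^2 + lam*||x||_1.
    A deep unfolding network replaces the fixed step size and
    threshold of each stage with learned parameters."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(stages):
        x = soft(x - (1.0 / L) * A.T @ (A @ x - y), lam / L)
    return x
```

With a random measurement matrix and a sparse ground truth, a few hundred stages suffice to identify the support.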

Keyword :

hyperspectral image reconstruction; compressive sensing; snapshot compressive imaging; deep unfolding network

Cite:


GB/T 7714: Ying, Yangke, Wang, Jin, Shi, Yunhui, et al. Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging [J]. SENSORS, 2024, 24 (19).
MLA: Ying, Yangke, et al. "Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging." SENSORS 24.19 (2024).
APA: Ying, Yangke, Wang, Jin, Shi, Yunhui, Ling, Nam. Hybrid Sparse Transformer and Wavelet Fusion-Based Deep Unfolding Network for Hyperspectral Snapshot Compressive Imaging. SENSORS, 2024, 24 (19).
MS-Net: A lightweight separable ConvNet for multi-dimensional image processing SCIE
Journal article | 2021, 80 (17), 25673-25688 | MULTIMEDIA TOOLS AND APPLICATIONS
WoS CC Cited Count: 2

Abstract :

As the core technology of deep learning, convolutional neural networks have been widely applied in a variety of computer vision tasks and have achieved state-of-the-art performance. However, it is difficult and inefficient for them to deal with high-dimensional image signals due to the dramatic increase in training parameters. In this paper, we present a lightweight and efficient MS-Net for multi-dimensional (MD) image processing, which provides a promising way to handle MD images, especially for devices with limited computational capacity. It takes advantage of a series of one-dimensional convolution kernels and introduces a separable structure into the ConvNet throughout the learning process to handle MD image signals. Meanwhile, multiple group convolutions with kernel size 1 x 1 are used to extract channel information. Then the information of each dimension and channel is fused by a fusion module to extract the complete image features. Thus, the proposed MS-Net significantly reduces the training complexity, parameters, and memory cost. The proposed MS-Net is evaluated on both 2D and 3D benchmarks: CIFAR-10, CIFAR-100, and KTH. Extensive experimental results show that MS-Net achieves competitive performance with greatly reduced computational and memory cost compared with state-of-the-art ConvNet models.
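The separable structure described above replaces one k×k (or k×k×k) kernel with a sequence of length-k 1-D kernels, cutting parameters from k^d to d·k. A numpy sketch verifying the 2-D case; the equality holds exactly whenever the 2-D kernel is an outer product of 1-D kernels, which is the structural assumption a separable ConvNet learns under:

```python
import numpy as np

def conv2d_valid(img, ker):
    """Direct 'valid' 2-D sliding-window product (no kernel flipping,
    matching the cross-correlation convention used by ConvNets)."""
    H, W = img.shape
    k = ker.shape[0]
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * ker)
    return out

def conv_rows_then_cols(img, kv, kh):
    """Two passes of 1-D kernels: kh along rows, then kv along columns."""
    rows = np.apply_along_axis(np.correlate, 1, img, kh, mode='valid')
    return np.apply_along_axis(np.correlate, 0, rows, kv, mode='valid')

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
kv, kh = rng.standard_normal(3), rng.standard_normal(3)
sep = conv_rows_then_cols(img, kv, kh)
full = conv2d_valid(img, np.outer(kv, kh))   # 9 weights vs 3 + 3
assert np.allclose(sep, full)
```

The same factorization along a third axis is what makes the 3-D (video) case affordable: k³ weights become 3k.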

Keyword :

Multi-dimensional image processing; Separable convolution neural network; Matricization; Feature extraction and representation

Cite:


GB/T 7714: Hou, Zhenning, Shi, Yunhui, Wang, Jin, et al. MS-Net: A lightweight separable ConvNet for multi-dimensional image processing [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (17): 25673-25688.
MLA: Hou, Zhenning, et al. "MS-Net: A lightweight separable ConvNet for multi-dimensional image processing." MULTIMEDIA TOOLS AND APPLICATIONS 80.17 (2021): 25673-25688.
APA: Hou, Zhenning, Shi, Yunhui, Wang, Jin, Cui, Yingxuan, Yin, Baocai. MS-Net: A lightweight separable ConvNet for multi-dimensional image processing. MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (17), 25673-25688.
球面图像的SLIC算法 (SLIC Algorithm for Spherical Images) CSCD
Journal article | 2021, 47 (3), 216-223 | 北京工业大学学报 (Journal of Beijing University of Technology)

Abstract :

The simple linear iterative clustering (SLIC) superpixel segmentation algorithm can be applied directly to spherical images in equirectangular projection (ERP), but the projection breaks the local correlation of the spherical data, so SLIC fails to generate suitable superpixels in some regions of the ERP image, degrading its performance. To address this, the ERP spherical image is first resampled to generate spherical pixel data approximately uniformly distributed on the sphere; the resampled data are then reorganized into a new two-dimensional representation that preserves the local correlation of the spherical image data; finally, based on this representation, the geometric relations of the spherical data are integrated into SLIC, yielding a SLIC algorithm for spherical images. Applying both the original SLIC and the proposed algorithm to several ERP images under different cluster counts, the experimental results show that the proposed spherical-image SLIC algorithm outperforms the original SLIC in objective quality; the generated superpixels are unaffected by variations across spherical regions, have closed contours, and exhibit good similarity and consistency on the sphere.
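SLIC assigns each pixel to the nearest of K grid-initialized cluster centers under a joint color-spatial distance D = sqrt(d_c² + (d_s/S)²·m²), where S is the grid interval and m trades color accuracy against compactness. A minimal grayscale sketch of one assignment pass; real SLIC works in CIELAB, restricts each center's search to a 2S×2S window, and alternates assignment with center updates (and the spherical variant above further replaces the planar geometry):

```python
import numpy as np

def slic_assign(img, K=4, m=10.0):
    """One SLIC-style assignment pass on a grayscale image."""
    H, W = img.shape
    S = int(np.sqrt(H * W / K))              # superpixel grid interval
    ys = np.arange(S // 2, H, S)
    xs = np.arange(S // 2, W, S)
    centers = [(y, x, img[y, x]) for y in ys for x in xs]
    yy, xx = np.mgrid[0:H, 0:W]
    best = np.full((H, W), np.inf)
    labels = np.zeros((H, W), dtype=int)
    for k, (cy, cx, cv) in enumerate(centers):
        dc2 = (img - cv) ** 2                          # color distance^2
        ds2 = (yy - cy) ** 2 + (xx - cx) ** 2          # spatial distance^2
        D = np.sqrt(dc2 + ds2 / S ** 2 * m ** 2)       # joint SLIC distance
        labels = np.where(D < best, k, labels)
        best = np.minimum(D, best)
    return labels
```

On a two-tone test image the labels split along both the color edge and the spatial grid, which is exactly the compactness behavior m controls.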

Keyword :

SLIC algorithm; clustering; superpixel; resampling; spherical image; image segmentation

Cite:


GB/T 7714: 吴刚, 施云惠, 尹宝才. 球面图像的SLIC算法 [J]. 北京工业大学学报, 2021, 47 (3): 216-223.
MLA: 吴刚, et al. "球面图像的SLIC算法." 北京工业大学学报 47.3 (2021): 216-223.
APA: 吴刚, 施云惠, 尹宝才. 球面图像的SLIC算法. 北京工业大学学报, 2021, 47 (3), 216-223.
SMSIR: Spherical Measure Based Spherical Image Representation SCIE
Journal article | 2021, 30, 6377-6391 | IEEE TRANSACTIONS ON IMAGE PROCESSING
WoS CC Cited Count: 4

Abstract :

This paper presents a spherical measure based spherical image representation (SMSIR) and sphere-based resampling methods for generating our representation. On this basis, a spherical wavelet transform is also proposed. We first propose a formal recursive definition of the spherical triangle elements of SMSIR and a dyadic index scheme. The index scheme, which supports global random access and need not be pre-computed and stored, can efficiently index the elements of SMSIR like planar images. Two resampling methods to generate SMSIR from the most commonly used ERP (equirectangular projection) representation are presented. Notably, the spherical measure based resampling, which exploits the mapping between the spherical and the parameter domain, achieves higher computational efficiency than the spherical RBF (radial basis function) based resampling. Finally, we design high-pass and low-pass filters with lifting schemes based on the dyadic index to further verify the efficiency of our index and deal with the spherical isotropy. This provides novel multi-resolution analysis (MRA) for spherical images. Experiments on continuous synthetic spherical images indicate that our representation can recover the original image signals with higher accuracy than the ERP and CMP (cubemap) representations at the same sampling rate. Besides, resampling experiments on natural spherical images show that our resampling methods outperform bilinear and bicubic interpolation in both subjective and objective quality. In particular, a gain of up to 2 dB in terms of S-PSNR is achieved. Experiments also show that our spherical image transform can capture more geometric features of spherical images than the traditional wavelet transform.
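The lifting scheme mentioned for the SMSIR filters builds a wavelet transform from split/predict/update steps and is invertible by construction. A 1-D Haar lifting sketch of the idea; the paper's version operates on the dyadic triangle-element index of the sphere, which this toy example does not model:

```python
import numpy as np

def haar_lift_forward(x):
    """Haar wavelet via lifting: split into even/odd samples, predict
    the odd ones from the even ones (high-pass), then update the even
    ones with the detail (low-pass). Each step is trivially invertible."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even            # predict step: high-pass coefficients
    approx = even + detail / 2     # update step: low-pass (pairwise means)
    return approx, detail

def haar_lift_inverse(approx, detail):
    """Undo the lifting steps in reverse order for exact reconstruction."""
    even = approx - detail / 2     # undo update
    odd = detail + even            # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x
```

Perfect reconstruction follows from reversing the steps, with no filter-design conditions to check, which is what makes lifting attractive on irregular domains like the sphere.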

Keyword :

Feature extraction; Interpolation; spherical measure; Image representation; Geometry; indexing scheme; Indexing; spherical RBF; Extraterrestrial measurements; image resampling; Spherical images; Surface treatment

Cite:


GB/T 7714: Wu, Gang, Shi, Yunhui, Sun, Xiaoyan, et al. SMSIR: Spherical Measure Based Spherical Image Representation [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30: 6377-6391.
MLA: Wu, Gang, et al. "SMSIR: Spherical Measure Based Spherical Image Representation." IEEE TRANSACTIONS ON IMAGE PROCESSING 30 (2021): 6377-6391.
APA: Wu, Gang, Shi, Yunhui, Sun, Xiaoyan, Wang, Jin, Yin, Baocai. SMSIR: Spherical Measure Based Spherical Image Representation. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30, 6377-6391.
Sparse Coding of Intra Prediction Residuals for Screen Content Coding CPCI-S
Conference paper | 2021 | IEEE International Conference on Consumer Electronics (ICCE)
WoS CC Cited Count: 2

Abstract :

High Efficiency Video Coding - Screen Content Coding (HEVC-SCC) is an extension to HEVC which adds sophisticated compression methods for computer generated content. A video frame is usually split into blocks that are predicted and subtracted from the original, which leaves a residual. These blocks are transformed by integer discrete sine transform (IntDST) or integer discrete cosine transform (IntDCT), quantized, and entropy coded into a bitstream. In contrast to camera captured content, screen content contains a lot of similar and repeated blocks. The HEVC-SCC tools utilize these similarities in various ways. After these tools are executed, the remaining signals are handled by IntDST/IntDCT which is designed to code camera-captured content. Fortunately, in sparse coding, the dictionary learning process which uses these residuals adapts much better and the outcome is significantly sparser than for camera captured content. This paper proposes a sparse coding scheme which takes advantage of the similar and repeated intra prediction residuals and targets low to mid frequency/energy blocks with a low sparsity setup. We also applied an approach which splits the common test conditions (CTC) sequences into categories for training and testing purposes. It is integrated as an alternate transform where the selection between traditional transform and our proposed method is based on a rate-distortion optimization (RDO) decision. It is integrated in HEVC-SCC test model (HM) HM-16.18+SCM-8.7. Experimental results show that the proposed method achieves a Bjontegaard rate difference (BD-rate) of up to 4.6% in an extreme computationally demanding setup for the "all intra" configuration compared with HM-16.18+SCM-8.7.
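The sparse coding step named in the keywords is orthogonal matching pursuit: greedily pick the dictionary atom most correlated with the current residual, then re-fit all selected atoms jointly by least squares. A self-contained numpy sketch of OMP; the learned residual dictionary, RDO-based transform selection, and HM integration described above are in the paper, not modeled here:

```python
import numpy as np

def omp(D, y, sparsity):
    """Orthogonal matching pursuit: approximate y with `sparsity`
    columns of dictionary D (columns assumed L2-normalized)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(sparsity):
        k = int(np.argmax(np.abs(D.T @ residual)))   # best-matching atom
        support.append(k)
        # Re-fit all chosen atoms jointly (the "orthogonal" step).
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef
```

In the SCC setting D would be the K-SVD dictionary trained on intra prediction residuals, and the low sparsity setup means `sparsity` is kept small so the coefficient signaling cost stays below the IntDCT alternative under the RDO decision.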

Keyword :

screen content coding; intra prediction; orthogonal matching pursuit; sparse representation; residual coding; KSVD; HEVC; video coding; sparse coding

Cite:


GB/T 7714: Schimpf, Michael G., Ling, Nam, Shi, Yunhui, et al. Sparse Coding of Intra Prediction Residuals for Screen Content Coding [C]. 2021.
MLA: Schimpf, Michael G., et al. "Sparse Coding of Intra Prediction Residuals for Screen Content Coding." (2021).
APA: Schimpf, Michael G., Ling, Nam, Shi, Yunhui, Liu, Ying. Sparse Coding of Intra Prediction Residuals for Screen Content Coding. (2021).
A Co-Prediction-Based Compression Scheme for Correlated Images SCIE
Journal article | 2020, 22 (8), 1917-1928 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 4

Abstract :

Deep learning has achieved preliminary success in image compression due to its ability to learn the nonlinear spaces with compact features that training samples belong to. Unfortunately, it is not straightforward for network based image compression methods to code multiple highly related images. In this paper, we propose a co-prediction based image compression (CPIC) scheme which uses multi-stream autoencoders to collaboratively code multiple highly correlated images by enforcing a co-reference constraint on the multi-stream features. Patch samples fed into the multi-stream autoencoder are generated through corresponding patch matching under permutation, which helps the autoencoder learn the relationship among corresponding patches from the correlated images. Each stream network consists of an encoder, a decoder, an importance map network, and a binarizer. In order to guide the allocation of the local bit rate of the binary features, the importance map network is employed to guarantee the compactness of the learned features. A proxy function is used to make the binary operation of the autoencoder's code layer differentiable. Finally, the network optimization is formulated as a rate-distortion optimization. Experimental results show that the proposed compression method outperforms JPEG2000 by up to 1.5 dB in terms of PSNR.

Keyword :

Convolutional codes; Autoencoder; rate distortion optimization; Optimization; Transform coding; correlated images; Image reconstruction; Decoding; Image coding; Transforms; image compression; multi-stream networks

Cite:


GB/T 7714: Yin, Wenbin, Shi, Yunhui, Zuo, Wangmeng, et al. A Co-Prediction-Based Compression Scheme for Correlated Images [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (8): 1917-1928.
MLA: Yin, Wenbin, et al. "A Co-Prediction-Based Compression Scheme for Correlated Images." IEEE TRANSACTIONS ON MULTIMEDIA 22.8 (2020): 1917-1928.
APA: Yin, Wenbin, Shi, Yunhui, Zuo, Wangmeng, Fan, Xiaopeng. A Co-Prediction-Based Compression Scheme for Correlated Images. IEEE TRANSACTIONS ON MULTIMEDIA, 2020, 22 (8), 1917-1928.
一种编码立方体投影格式的自适应QP调整方法 (An Adaptive QP Adjustment Method for Coding the Cubemap Projection Format) incoPat
Patent | 2020-03-06 | CN202010154990.X

Abstract :

This invention relates to an adaptive QP adjustment method for coding the cubemap projection format, in the field of computer video coding and decoding. Building on conventional codec technology, it makes targeted improvements for the pixel-distribution characteristics of the cubemap projection. Through adaptive QP adjustment, appropriate schemes are applied to different regions within the same frame to optimize coding of the cubemap projection format. Compared with the uniform QP of conventional coding methods, it better fits the cubemap projection format, improving coding efficiency while saving bitrate.
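In HEVC the quantization step size roughly doubles every 6 QP units (Qstep ≈ 2^((QP-4)/6)), which is why region-wise QP offsets trade bitrate for quality nonuniformly across a frame. A sketch of applying per-region offsets such as a cubemap-aware scheme might use; the specific region names and offset values are hypothetical illustrations, not taken from the patent:

```python
import math

def qstep(qp):
    """Approximate HEVC quantization step size for a given QP:
    doubles every 6 QP units."""
    return 2.0 ** ((qp - 4) / 6.0)

def region_qps(base_qp, offsets):
    """Apply per-region QP offsets, clipped to the valid HEVC range [0, 51]."""
    return {name: max(0, min(51, base_qp + off)) for name, off in offsets.items()}

# Hypothetical cubemap offsets: spend more bits near face centers,
# fewer near corners, where cubemap sampling is denser per solid angle.
offsets = {"face_center": -2, "face_edge": 0, "face_corner": 2}
```

For example, `region_qps(32, offsets)` yields a lower QP (finer quantization) at face centers and a higher QP at corners, which is one plausible way to adapt to the projection's sampling nonuniformity.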

Cite:


GB/T 7714: 尹宝才, 李煜聪, 施云惠, et al. 一种编码立方体投影格式的自适应QP调整方法: CN202010154990.X [P]. 2020-03-06.
MLA: 尹宝才, et al. "一种编码立方体投影格式的自适应QP调整方法": CN202010154990.X. 2020-03-06.
APA: 尹宝才, 李煜聪, 施云惠, 齐娜. 一种编码立方体投影格式的自适应QP调整方法: CN202010154990.X. 2020-03-06.
Learning Redundant Sparsifying Transform based on Equi-Angular Frame CPCI-S
Journal article | 2020, 439-442 | 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP)

Abstract :

Because sparse coding in the redundant sparse dictionary learning model is NP-hard, interest has turned to the non-redundant sparsifying transform, whose sparse coding is computationally cheap. However, natural images typically contain diverse textures that cannot be sparsified well by a non-redundant system. In this paper we propose a new approach for learning a redundant sparsifying transform based on an equi-angular frame, where the frame and its dual frame correspond to the forward and backward transforms. Uniform mutual coherence in the sparsifying transform is enforced by the equi-angular constraint, which better sparsifies diverse textures. In addition, an efficient algorithm is proposed for learning the redundant transform. Experimental results for image representation illustrate the superiority of the proposed method over non-redundant sparsifying transforms. Image denoising results show that the proposed method achieves superior denoising performance, in terms of subjective and objective quality, compared to K-SVD, the data-driven tight frame method, the learning based sparsifying transform, and the overcomplete transform model with block cosparsity (OCTOBOS).
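An equi-angular frame makes every pair of atoms meet at the same angle, which drives the mutual coherence down to its minimum, the Welch bound sqrt((N-d)/(d(N-1))). A numpy check on the classic three-vector "Mercedes-Benz" frame in R², chosen because its equiangularity is easy to verify by hand; the learned frames in the paper are higher-dimensional:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = np.abs(D.T @ D)          # Gram matrix of the frame
    np.fill_diagonal(G, 0.0)     # ignore self inner products
    return G.max()

def welch_bound(d, N):
    """Lower bound on coherence of N unit vectors in R^d."""
    return np.sqrt((N - d) / (d * (N - 1)))

# Three unit vectors in R^2 separated by 120 degrees: an equiangular frame.
angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
D = np.vstack([np.cos(angles), np.sin(angles)])
```

Here every pairwise inner product has magnitude |cos 120°| = 0.5, which meets the Welch bound for d = 2, N = 3 exactly; the equi-angular constraint in the paper enforces this kind of uniform coherence during learning.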

Keyword :

sparse; mutual coherence; equi-angular; redundant transform

Cite:


GB/T 7714: Zhang, Min, Shi, Yunhui, Sun, Xiaoyan, et al. Learning Redundant Sparsifying Transform based on Equi-Angular Frame [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020: 439-442.
MLA: Zhang, Min, et al. "Learning Redundant Sparsifying Transform based on Equi-Angular Frame." 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP) (2020): 439-442.
APA: Zhang, Min, Shi, Yunhui, Sun, Xiaoyan, Ling, Nam, Qi, Na. Learning Redundant Sparsifying Transform based on Equi-Angular Frame. 2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020, 439-442.
Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China). Contact: 010-67392185.
Copyright: BJUT Library. Technical support: Beijing Aegean Software Co., Ltd.