
Query: Scholar name: Gu Ke


Screen Content Quality Assessment: Overview, Benchmark, and Beyond SCIE
Journal article | 2022, 54 (9) | ACM COMPUTING SURVEYS

Abstract :

Screen content, which is often computer-generated, has many characteristics distinctly different from conventional camera-captured natural scene content. Such characteristic differences impose major challenges on the corresponding content quality assessment, which plays a critical role in ensuring and improving the final user-perceived quality of experience (QoE) in various screen content communication and networking systems. Quality assessment of such screen content has attracted much attention recently, primarily because screen content has grown explosively with the prevalence of cloud and remote computing applications in recent years, and because conventional quality assessment methods cannot handle such content effectively. As the most technology-oriented part of QoE modeling, image/video content/media quality assessment has drawn wide attention from researchers, and a large amount of work has been carried out to tackle the problem of screen content quality assessment. This article is intended to provide a systematic and timely review of this emerging research field, including (1) background of natural scene vs. screen content quality assessment; (2) characteristics of natural scene vs. screen content; (3) overview of screen content quality assessment methodologies and measures; (4) relevant benchmarks and comprehensive evaluation of the state of the art; (5) discussions on generalizations from screen content quality assessment to QoE assessment, and other techniques beyond QoE assessment; and (6) unresolved challenges and promising future research directions. Throughout this article, we focus on the differences and similarities between screen content and conventional natural scene content. We expect this review to provide readers with an overview of the background, history, recent progress, and future of the emerging screen content quality assessment research.

Keyword :

quality of experience; screen content; natural scene; quality assessment

Cite:


GB/T 7714 Min, Xiongkuo , Gu, Ke , Zhai, Guangtao et al. Screen Content Quality Assessment: Overview, Benchmark, and Beyond [J]. | ACM COMPUTING SURVEYS , 2022 , 54 (9) .
MLA Min, Xiongkuo et al. "Screen Content Quality Assessment: Overview, Benchmark, and Beyond" . | ACM COMPUTING SURVEYS 54 . 9 (2022) .
APA Min, Xiongkuo , Gu, Ke , Zhai, Guangtao , Yang, Xiaokang , Zhang, Wenjun , Le Callet, Patrick et al. Screen Content Quality Assessment: Overview, Benchmark, and Beyond . | ACM COMPUTING SURVEYS , 2022 , 54 (9) .
Ensemble Meta-Learning for Few-Shot Soot Density Recognition EI
Journal article | 2021, 17 (3), 2261-2270 | IEEE Transactions on Industrial Informatics

Abstract :

In each petrochemical plant around the world, the flare stack, as a requisite facility, produces a large amount of soot due to the incomplete combustion of flare gas, which strongly endangers air quality and human health. Despite the severe damage they cause, the abovementioned abnormal conditions occur rarely, and thus only few-shot samples are available. To address this difficulty, in this article, we design an image-based flare soot density recognition network (FSDR-Net) via a new ensemble meta-learning technology. More particularly, we first train a deep convolutional neural network (CNN) by applying the model-agnostic meta-learning algorithm on a variety of learning tasks that are relevant to flare soot recognition, so as to obtain general-purpose optimized initial parameters (GOIP). Second, for the new task of recognizing the flare soot density from only few-shot instances, a new ensemble is developed to selectively aggregate several predictions that are generated under a wide range of learning rates and a small number of gradient steps. Results of experiments conducted on the density recognition of flare soot corroborate the superiority of our proposed FSDR-Net as compared with popular and state-of-the-art deep CNNs. © 2005-2012 IEEE.

Keyword :

Air quality; Dust; Convolutional neural networks; Deep neural networks; Soot; Learning algorithms; Optical character recognition; Petrochemical plants

Cite:


GB/T 7714 Gu, Ke , Zhang, Yonghui , Qiao, Junfei . Ensemble Meta-Learning for Few-Shot Soot Density Recognition [J]. | IEEE Transactions on Industrial Informatics , 2021 , 17 (3) : 2261-2270 .
MLA Gu, Ke et al. "Ensemble Meta-Learning for Few-Shot Soot Density Recognition" . | IEEE Transactions on Industrial Informatics 17 . 3 (2021) : 2261-2270 .
APA Gu, Ke , Zhang, Yonghui , Qiao, Junfei . Ensemble Meta-Learning for Few-Shot Soot Density Recognition . | IEEE Transactions on Industrial Informatics , 2021 , 17 (3) , 2261-2270 .
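The recipe the abstract above describes — adapt shared initial parameters with a few gradient steps under several learning rates, then aggregate the resulting predictions — can be sketched on a toy linear-regression task. This is an illustrative reconstruction, not the paper's FSDR-Net: the zero initialization stands in for the meta-learned GOIP, and plain averaging stands in for the paper's selective aggregation.

```python
import numpy as np

def adapt(w0, X, y, lr, steps):
    """Few-shot adaptation: a few gradient steps on the mean squared loss."""
    w = w0.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def ensemble_predict(w0, X_sup, y_sup, X_qry, lrs=(0.01, 0.05, 0.1), steps=5):
    """Average the predictions of models adapted under different learning rates."""
    preds = [X_qry @ adapt(w0, X_sup, y_sup, lr, steps) for lr in lrs]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X_sup = rng.normal(size=(8, 2))   # 8-shot support set for the new task
y_sup = X_sup @ w_true
X_qry = rng.normal(size=(4, 2))   # query points to predict on
w0 = np.zeros(2)                  # stand-in for the meta-learned init (GOIP)
pred = ensemble_predict(w0, X_sup, y_sup, X_qry)
```

On a real few-shot task the support/query split, the learning-rate grid, and the aggregation rule would all be chosen from validation data rather than fixed as here.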
Kernel-Ridge Regression-Based Quality Measure and Enhancement of Three-Dimensional-Synthesized Images EI
Journal article | 2021, 68 (1), 423-433 | IEEE Transactions on Industrial Electronics

Abstract :

In this article, we propose an efficient joint image quality assessment and enhancement algorithm for 3-D-synthesized images using a global predictor, namely, kernel ridge regression (KRR). Recently, a few prediction-based image quality assessment (IQA) algorithms have been proposed for 3-D-synthesized images. These algorithms use efficient prediction models and try to predict all regions effectively, except the boundaries of regions with geometric distortions. Unfortunately, these algorithms only count the number of pixels along the boundaries of the regions with geometric distortions and subsequently calculate the quality scores. With this view, we propose a new algorithm for 3-D-synthesized images based upon a global KRR-based predictor, which estimates the complete distortion surface of the geometric distortions. Further, it uses the distortion surface to estimate the perceptual quality of the 3-D-synthesized images. Also, joint quality assessment and enhancement algorithms for 3-D-synthesized images are missing from the literature. With this view, we propose to estimate the distortion map of the geometric distortions via the same predictor used in quality estimation, and subsequently enhance the perceptual quality of the 3-D-synthesized images. The performance of the proposed quality assessment algorithm is better than that of existing IQA algorithms. Also, the proposed quality enhancement algorithm is promising, significantly enhancing the perceptual quality of 3-D-synthesized images. © 1982-2012 IEEE.

Keyword :

Forecasting; Image quality; Image enhancement; Regression analysis; Geometry; Quality control

Cite:


GB/T 7714 Jakhetiya, Vinit , Gu, Ke , Jaiswal, Sunil P. et al. Kernel-Ridge Regression-Based Quality Measure and Enhancement of Three-Dimensional-Synthesized Images [J]. | IEEE Transactions on Industrial Electronics , 2021 , 68 (1) : 423-433 .
MLA Jakhetiya, Vinit et al. "Kernel-Ridge Regression-Based Quality Measure and Enhancement of Three-Dimensional-Synthesized Images" . | IEEE Transactions on Industrial Electronics 68 . 1 (2021) : 423-433 .
APA Jakhetiya, Vinit , Gu, Ke , Jaiswal, Sunil P. , Singhal, Trisha , Xia, Zhifang . Kernel-Ridge Regression-Based Quality Measure and Enhancement of Three-Dimensional-Synthesized Images . | IEEE Transactions on Industrial Electronics , 2021 , 68 (1) , 423-433 .
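The core idea above — a global kernel ridge regression predictor whose residual against the observed content exposes geometrically distorted regions as a distortion surface — can be sketched in a few lines. The 1-D "scanline" signal, kernel width, and regularization strength below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=50.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit(X, y, lam=1e-3):
    """Closed-form kernel ridge regression: alpha = (K + lam*I)^(-1) y."""
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def krr_predict(alpha, X_train, X_new):
    return rbf_kernel(X_new, X_train) @ alpha

# Toy 1-D "scanline": a smooth signal plus a localized geometric distortion.
x = np.linspace(0.0, 1.0, 50)[:, None]
clean = np.sin(2 * np.pi * x[:, 0])
observed = clean.copy()
observed[20:24] += 1.5                    # simulated localized distortion
alpha = krr_fit(x, clean)                 # predictor of the expected content
distortion_map = np.abs(observed - krr_predict(alpha, x, x))
```

The distortion map stays near zero on undistorted samples and spikes on the distorted span, which is the kind of complete distortion surface the abstract describes feeding into both quality estimation and enhancement.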
Semi-Reference Sonar Image Quality Assessment Based on Task and Visual Perception SCIE
Journal article | 2021, 23, 1008-1020 | IEEE TRANSACTIONS ON MULTIMEDIA
WoS CC Cited Count: 2

Abstract :

In submarine and underwater detection tasks, conventional optical imaging and analysis methods are not universally applicable due to the limited penetration depth of visible light. Instead, sonar imaging has become a preferred alternative. However, the capture and transmission conditions in complicated and dynamic underwater environments inevitably lead to visual quality degradation of sonar images, which might also impede further recognition, analysis and understanding. To measure this quality decrease and provide a solid quality indicator for sonar image enhancement, we propose a task- and perception-oriented sonar image quality assessment (TPSIQA) method, in which a semi-reference (SR) approach is applied to adapt to the limited bandwidth of underwater communication channels. In particular, we exploit reduced visual features that are critical for both human perception of and object recognition in sonar images. The final quality indicator is obtained through ensemble learning, which aggregates an optimal subset of multiple base learners to achieve both high accuracy and a high generalization ability. In this way, we are able to develop a compact but generalized quality metric using a small database of sonar images. Experimental results demonstrate competitive performance, high efficiency, and strong robustness of our method compared to the latest available image quality metrics.

Keyword :

Feature extraction; image quality assessment (IQA); Sonar detection; Image quality; Sonar image; semi-reference; Sonar measurements; task-aware quality assessment; Task analysis

Cite:


GB/T 7714 Chen, Weiling , Gu, Ke , Zhao, Tiesong et al. Semi-Reference Sonar Image Quality Assessment Based on Task and Visual Perception [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 : 1008-1020 .
MLA Chen, Weiling et al. "Semi-Reference Sonar Image Quality Assessment Based on Task and Visual Perception" . | IEEE TRANSACTIONS ON MULTIMEDIA 23 (2021) : 1008-1020 .
APA Chen, Weiling , Gu, Ke , Zhao, Tiesong , Jiang, Gangyi , Le Callet, Patrick . Semi-Reference Sonar Image Quality Assessment Based on Task and Visual Perception . | IEEE TRANSACTIONS ON MULTIMEDIA , 2021 , 23 , 1008-1020 .
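One simple reading of "aggregates an optimal subset of multiple base learners" in the abstract above is greedy forward selection on a validation set: repeatedly add the base learner whose inclusion most reduces the averaged ensemble's error. The mock predictions and the plain-averaging rule below are assumptions for illustration; the paper's actual selection criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Mock validation data: one row of predictions per base learner,
# with per-learner noise levels (learner 3 is deliberately poor).
y_val = rng.random(20)
noise = rng.normal(size=(5, 20)) * np.array([0.05, 0.2, 0.08, 0.5, 0.1])[:, None]
preds = y_val + noise

def greedy_subset(preds, y, max_size=3):
    """Forward selection: repeatedly add the base learner that makes the
    averaged ensemble's validation MSE smallest."""
    chosen = []
    for _ in range(max_size):
        best, best_err = None, np.inf
        for i in range(len(preds)):
            if i in chosen:
                continue
            err = np.mean((np.mean(preds[chosen + [i]], axis=0) - y) ** 2)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
    return chosen

subset = greedy_subset(preds, y_val)
```

Averaging only a well-chosen subset keeps the accuracy of the strong base learners while still smoothing their individual errors, which is the accuracy/generalization trade-off the abstract points to.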
Exploiting Local Degradation Characteristics and Global Statistical Properties for Blind Quality Assessment of Tone-Mapped HDR Images EI
Journal article | 2021, 23, 692-705 | IEEE Transactions on Multimedia

Abstract :

Tone mapping operators (TMOs) are developed to convert a high dynamic range (HDR) image into a low dynamic range (LDR) one for display, with the goal of preserving as much visual information as possible. However, image quality degradation is inevitable due to the dynamic range compression during the tone-mapping process. This accordingly raises an urgent demand for effective quality evaluation methods to select a high-quality tone-mapped image (TMI) from a set of candidates generated by distinct TMOs or by the same TMO with different parameter settings. A key element in the success of TMI quality evaluation is to extract effective features that are highly consistent with human perception. Towards this end, this paper proposes a novel blind TMI quality metric by exploiting both local degradation characteristics and global statistical properties for feature extraction. Several image attributes, including texture, structure, colorfulness and naturalness, are considered either locally or globally. The extracted local and global features are aggregated into an overall quality score via regression. Experimental results on two benchmark databases demonstrate the superiority of the proposed metric over both the state-of-the-art blind quality models designed for synthetically distorted images (SDIs) and the blind quality models specifically developed for TMIs. © 1999-2012 IEEE.

Keyword :

Image quality; Quality control; Benchmarking; Textures; Mapping

Cite:


GB/T 7714 Wang, Xuejin , Jiang, Qiuping , Shao, Feng et al. Exploiting Local Degradation Characteristics and Global Statistical Properties for Blind Quality Assessment of Tone-Mapped HDR Images [J]. | IEEE Transactions on Multimedia , 2021 , 23 : 692-705 .
MLA Wang, Xuejin et al. "Exploiting Local Degradation Characteristics and Global Statistical Properties for Blind Quality Assessment of Tone-Mapped HDR Images" . | IEEE Transactions on Multimedia 23 (2021) : 692-705 .
APA Wang, Xuejin , Jiang, Qiuping , Shao, Feng , Gu, Ke , Zhai, Guangtao , Yang, Xiaokang . Exploiting Local Degradation Characteristics and Global Statistical Properties for Blind Quality Assessment of Tone-Mapped HDR Images . | IEEE Transactions on Multimedia , 2021 , 23 , 692-705 .
Improved deep CNNs based on Nonlinear Hybrid Attention Module for image classification. PubMed
Journal article | 2021, 140, 158-166 | Neural Networks : the official journal of the International Neural Network Society

Abstract :

Recent years have witnessed numerous successful applications of incorporating attention modules into feed-forward convolutional neural networks. Along this line of research, we design a novel lightweight general-purpose attention module by simultaneously taking channel attention and spatial attention into consideration. Specifically, inspired by the characteristics of channel attention and spatial attention, a nonlinear hybrid method is proposed to combine these two types of attention feature maps, which is highly beneficial to better network fine-tuning. Further, the parameters of each attention branch are adjustable, making the attention module more flexible and adaptable. From another point of view, we find that the currently popular SE and CBAM modules are actually two particular cases of our proposed attention module. We also explore the latest attention module, ADCM. To validate the module, we conduct experiments on the CIFAR10, CIFAR100, and Fashion-MNIST datasets. Results show that, after integrating our attention module, existing networks tend to be more efficient in the training process and to perform better than state-of-the-art competitors. It is also worth stressing the following two points: (1) our attention module can be used in existing state-of-the-art deep architectures and obtain better performance at a small computational cost; (2) the module can be added to existing deep architectures in a simple way, by stacking the integration of a network block and our module.

Keyword :

Hybrid attention mechanism; Convolutional neural networks; Feature map combination; General module

Cite:


GB/T 7714 Guo Nan , Gu Ke , Qiao Junfei et al. Improved deep CNNs based on Nonlinear Hybrid Attention Module for image classification. [J]. | Neural networks : the official journal of the International Neural Network Society , 2021 , 140 : 158-166 .
MLA Guo Nan et al. "Improved deep CNNs based on Nonlinear Hybrid Attention Module for image classification." . | Neural networks : the official journal of the International Neural Network Society 140 (2021) : 158-166 .
APA Guo Nan , Gu Ke , Qiao Junfei , Bi Jing . Improved deep CNNs based on Nonlinear Hybrid Attention Module for image classification. . | Neural networks : the official journal of the International Neural Network Society , 2021 , 140 , 158-166 .
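A minimal numpy sketch of the idea above — combine channel and spatial attention through a nonlinearity, with per-branch weights that make a channel-only (SE-style) gate a special case. The pooling choices and the single-sigmoid fusion are simplifications, not the paper's exact module (which also involves learned projections).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hybrid_attention(x, alpha=1.0, beta=1.0):
    """Gate a (C, H, W) feature map by a nonlinear mix of channel and
    spatial attention. alpha and beta are the adjustable branch weights;
    beta=0 reduces to a channel-only (SE-style) gate."""
    chan = sigmoid(x.mean(axis=(1, 2)))[:, None, None]   # (C,1,1) channel gate
    spat = sigmoid(x.mean(axis=0))[None, :, :]           # (1,H,W) spatial gate
    gate = sigmoid(alpha * chan + beta * spat)           # nonlinear fusion
    return x * gate

rng = np.random.default_rng(1)
feat = rng.normal(size=(4, 8, 8))   # toy feature map: 4 channels, 8x8 spatial
out = hybrid_attention(feat)
```

Because the outer sigmoid keeps the gate in (0, 1), the module can only rescale activations, so it can be dropped after any block of an existing network without changing tensor shapes — the "simple stacking" property the abstract mentions.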
PM2.5 Monitoring: Use Information Abundance Measurement and Wide and Deep Learning SCIE
Journal article | 2021, 32 (10), 4278-4290 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
WoS CC Cited Count: 2

Abstract :

This article devises a photograph-based monitoring model to estimate real-time PM2.5 concentrations, overcoming the shortcomings of currently popular electrochemical sensor-based PM2.5 monitoring methods, such as low-density spatial distribution and time delay. Combined with the proposed monitoring model, photographs taken by various camera devices (e.g., surveillance cameras, automobile data recorders, and mobile phones) can widely monitor PM2.5 concentration in megacities. This is beneficial for offering helpful decision-making information for atmospheric forecasting and control, thus helping to curb the epidemic of COVID-19. Specifically, the proposed model fuses Information Abundance measurement and Wide and Deep learning, and is dubbed IAWD, for PM2.5 monitoring. First, our model extracts two categories of features in a newly proposed DS transform space to measure the information abundance (IA) of a given photograph, since the growth of PM2.5 concentration decreases its IA. Second, to simultaneously possess the advantages of memorization and generalization, a new wide and deep neural network is devised to learn a nonlinear mapping between the above-mentioned extracted features and the ground-truth PM2.5 concentration. Experiments on two recently established datasets, together including more than 100,000 photographs, demonstrate the effectiveness of our extracted features and the superiority of our proposed IAWD model compared to state-of-the-art relevant computing techniques.

Keyword :

Atmospheric measurements; Transforms; information abundance (IA); Temperature measurement; Atmospheric modeling; Monitoring; DS transform space; photograph-based PM2.5 monitoring; COVID-19; wide and deep learning; Feature extraction

Cite:


GB/T 7714 Gu, Ke , Liu, Hongyan , Xia, Zhifang et al. PM2.5 Monitoring: Use Information Abundance Measurement and Wide and Deep Learning [J]. | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS , 2021 , 32 (10) : 4278-4290 .
MLA Gu, Ke et al. "PM2.5 Monitoring: Use Information Abundance Measurement and Wide and Deep Learning" . | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 32 . 10 (2021) : 4278-4290 .
APA Gu, Ke , Liu, Hongyan , Xia, Zhifang , Qiao, Junfei , Lin, Weisi , Thalmann, Daniel . PM2.5 Monitoring: Use Information Abundance Measurement and Wide and Deep Learning . | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS , 2021 , 32 (10) , 4278-4290 .
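The "memorization plus generalization" structure mentioned above — a linear wide path summed with an MLP deep path — can be sketched as follows. Shapes and random weights are illustrative; the paper's IAWD network additionally consumes the DS-transform features described in the abstract.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def wide_and_deep(x, w_wide, deep_layers):
    """Sum a linear 'wide' path (memorization) with an MLP 'deep' path
    (generalization) into one scalar estimate."""
    wide = x @ w_wide                    # linear path on raw features
    h = x
    for W in deep_layers[:-1]:
        h = relu(h @ W)                  # hidden layers
    deep = h @ deep_layers[-1]           # scalar deep output
    return wide + deep

rng = np.random.default_rng(3)
x = rng.normal(size=6)                   # stand-in feature vector
w_wide = rng.normal(size=6)
deep_layers = [rng.normal(size=(6, 8)), rng.normal(size=8)]
estimate = wide_and_deep(x, w_wide, deep_layers)
```

Because the two paths are simply summed, zeroing the deep weights recovers a plain linear model, which makes the division of labor between the branches easy to probe during training.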
Toward a No-Reference Quality Metric for Camera-Captured Images SCIE
Journal article | 2021, 53 (6), 3651-3664 | IEEE TRANSACTIONS ON CYBERNETICS

Abstract :

Existing no-reference (NR) image quality assessment (IQA) metrics are still not convincing for evaluating the quality of the camera-captured images. Toward tackling this issue, we, in this article, establish a novel NR quality metric for quantifying the quality of the camera-captured images reliably. Since the image quality is hierarchically perceived from the low-level preliminary visual perception to the high-level semantic comprehension in the human brain, in our proposed metric, we characterize the image quality by exploiting both the low-level image properties and the high-level semantics of the image. Specifically, we extract a series of low-level features to characterize the fundamental image properties, including the brightness, saturation, contrast, noiseness, sharpness, and naturalness, which are highly indicative of the camera-captured image quality. Correspondingly, the high-level features are designed to characterize the semantics of the image. The low-level and high-level perceptual features play complementary roles in measuring the image quality. To infer the image quality, we employ the support vector regression (SVR) to map all the informative features to a single quality score. Thorough tests conducted on two standard camera-captured image databases demonstrate the effectiveness of the proposed quality metric in assessing the image quality and its superiority over the state-of-the-art NR quality metrics. The source code of the proposed metric for camera-captured images is released at https://github.com/YT2015?tab=repositories.

Keyword :

Predictive models; blind; Measurement; Semantics; Image quality; Camera-captured image; no-reference (NR); Feature extraction; Visual perception; Visualization; image quality assessment (IQA); deep neural network (DNN)

Cite:


GB/T 7714 Hu, Runze , Liu, Yutao , Gu, Ke et al. Toward a No-Reference Quality Metric for Camera-Captured Images [J]. | IEEE TRANSACTIONS ON CYBERNETICS , 2021 , 53 (6) : 3651-3664 .
MLA Hu, Runze et al. "Toward a No-Reference Quality Metric for Camera-Captured Images" . | IEEE TRANSACTIONS ON CYBERNETICS 53 . 6 (2021) : 3651-3664 .
APA Hu, Runze , Liu, Yutao , Gu, Ke , Min, Xiongkuo , Zhai, Guangtao . Toward a No-Reference Quality Metric for Camera-Captured Images . | IEEE TRANSACTIONS ON CYBERNETICS , 2021 , 53 (6) , 3651-3664 .
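The low-level half of the pipeline above — hand-crafted cues mapped to a quality score by a learned regressor — might look like the sketch below. The three features are simplified stand-ins for the paper's brightness/saturation/contrast/noiseness/sharpness/naturalness set, and plain least squares replaces the SVR the paper actually uses.

```python
import numpy as np

def low_level_features(img):
    """Three simplified low-level cues on a grayscale image in [0, 1]."""
    brightness = img.mean()
    contrast = img.std()
    gy, gx = np.gradient(img)
    sharpness = np.hypot(gx, gy).mean()   # mean gradient magnitude
    return np.array([brightness, contrast, sharpness])

def fit_quality_model(feats, scores):
    """Least-squares mapping from features to subjective scores
    (a stand-in for the SVR used in the paper)."""
    A = np.hstack([feats, np.ones((len(feats), 1))])   # append a bias column
    w, *_ = np.linalg.lstsq(A, scores, rcond=None)
    return w

def predict_quality(w, img):
    return np.append(low_level_features(img), 1.0) @ w

rng = np.random.default_rng(2)
imgs = [rng.random((16, 16)) * s for s in (0.2, 0.5, 0.8, 1.0)]
feats = np.vstack([low_level_features(im) for im in imgs])
scores = np.array([20.0, 50.0, 80.0, 100.0])   # mock subjective scores
w = fit_quality_model(feats, scores)
```

In the paper the feature vector would also include the high-level semantic features before the regressor maps everything to a single score.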
Predicting the Quality of View Synthesis With Color-Depth Image Fusion SCIE
Journal article | 2021, 31 (7), 2509-2521 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
WoS CC Cited Count: 12

Abstract :

With the increasing prevalence of free-viewpoint video applications, virtual view synthesis has attracted extensive attention. In view synthesis, a new viewpoint is generated from the input color and depth images with a depth-image-based rendering (DIBR) algorithm. Current quality evaluation models for view synthesis typically operate on the synthesized images, i.e., after the DIBR process, which is computationally expensive. A natural question, then, is whether we can infer the quality of DIBR-based synthesized images from the input color and depth images directly, without performing the intricate DIBR operation. With this motivation, this paper presents a no-reference image quality prediction model for view synthesis via COlor-Depth Image Fusion, dubbed CODIF, in which the actual DIBR is not needed. First, object boundary regions are detected from the color image, and a Wavelet-based image fusion method is proposed to imitate the interaction between color and depth images during the DIBR process. Then statistical features of the interactional regions and natural regions are extracted from the fused color-depth image to portray the influence of distortions in color/depth images on the quality of synthesized views. Finally, all statistical features are utilized to learn the quality prediction model for view synthesis. Extensive experiments on public view synthesis databases demonstrate the advantages of the proposed metric in predicting the quality of view synthesis; it even surpasses the state-of-the-art post-DIBR view synthesis quality metrics.

Keyword :

Color; quality prediction; Predictive models; DIBR; Distortion measurement; color-depth fusion; Image color analysis; Distortion; View synthesis; Image fusion; interactional region

Cite:


GB/T 7714 Li, Leida , Huang, Yipo , Wu, Jinjian et al. Predicting the Quality of View Synthesis With Color-Depth Image Fusion [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2021 , 31 (7) : 2509-2521 .
MLA Li, Leida et al. "Predicting the Quality of View Synthesis With Color-Depth Image Fusion" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 31 . 7 (2021) : 2509-2521 .
APA Li, Leida , Huang, Yipo , Wu, Jinjian , Gu, Ke , Fang, Yuming . Predicting the Quality of View Synthesis With Color-Depth Image Fusion . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2021 , 31 (7) , 2509-2521 .
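The wavelet-based color-depth fusion step can be illustrated with a one-level Haar transform and a common fusion rule: average the approximation subbands and keep the stronger of each pair of detail coefficients. This rule and the toy 8x8 inputs are assumptions; the paper's fusion is designed to imitate the DIBR interaction and is more elaborate.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2] + img[1::2]) / 2        # row-pair averages
    d = (img[0::2] - img[1::2]) / 2        # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2     # approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2     # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2     # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2     # diagonal detail
    return LL, LH, HL, HH

def fuse_color_depth(color, depth):
    """Average approximations; keep the larger-magnitude detail coefficient."""
    c, d = haar2d(color), haar2d(depth)
    LL = (c[0] + d[0]) / 2
    details = [np.where(np.abs(cb) >= np.abs(db), cb, db)
               for cb, db in zip(c[1:], d[1:])]
    return LL, details

rng = np.random.default_rng(4)
color = rng.random((8, 8))   # toy grayscale "color" plane
depth = rng.random((8, 8))   # toy depth map
LL, details = fuse_color_depth(color, depth)
```

Statistics of the fused subbands (rather than the raw inputs) would then feed the quality regressor, as the abstract describes.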
Air Quality Controlling-Oriented Highly Efficient Method for Monitoring Particulate Matters CPCI-S
Conference paper | 2020, 6624-6627 | 39th Chinese Control Conference (CCC)

Abstract :

Since Particulate Matters (PMs) are closely related to people's lives and health, they have become one of the most important indicators of air quality monitoring around the world. However, existing sensor-based methods for PM monitoring have notable disadvantages, such as low-density monitoring stations and demanding monitoring conditions. It is therefore highly desirable to devise a method that can obtain the PM concentration at any location in time for subsequent air quality control. Prior work indicates that the PM concentration can be monitored using ubiquitous photos. To further investigate this issue, we gathered 1,500 photos taken in big cities to establish a new AQPDCITY dataset. Experiments checking nine state-of-the-art methods on this dataset show that they perform poorly on it. To address this issue, we propose a new photo-based model for PM monitoring. Specifically, we use Support Vector Regression (SVR) to incorporate 18 features of four types to obtain a high-density PM monitoring map. Experiments show that the newly proposed model achieves superior performance over recently developed methods on the AQPDCITY dataset.

Keyword :

Monitoring; Feature Extraction and Fusion; Particulate Matters (PMs); AQPDCITY Dataset

Cite:


GB/T 7714 Zhang, Yonghui , Gu, Ke , Xia, Zhifang et al. Air Quality Controlling-Oriented Highly Efficient Method for Monitoring Particulate Matters [C] . 2020 : 6624-6627 .
MLA Zhang, Yonghui et al. "Air Quality Controlling-Oriented Highly Efficient Method for Monitoring Particulate Matters" . (2020) : 6624-6627 .
APA Zhang, Yonghui , Gu, Ke , Xia, Zhifang , Qiao, Junfei . Air Quality Controlling-Oriented Highly Efficient Method for Monitoring Particulate Matters . (2020) : 6624-6627 .
Address: BJUT Library (100 Pingleyuan, Chaoyang District, Beijing 100124, China) | Contact: 010-67392185
Copyright: BJUT Library | Technical support: Beijing Aegean Software Co., Ltd.