
Author:

Gu, Ke | Liu, Hongyan | Xia, Zhifang | Qiao, Junfei | Lin, Weisi | Thalmann, Daniel

Indexed by:

EI; Scopus; SCIE

Abstract:

This article devises a photograph-based monitoring model to estimate real-time PM2.5 concentrations, overcoming the shortcomings of currently popular electrochemical sensor-based PM2.5 monitoring methods, such as low-density spatial coverage and time delay. With the proposed monitoring model, photographs taken by various camera devices (e.g., surveillance cameras, automobile data recorders, and mobile phones) can be used to monitor PM2.5 concentrations widely across megacities. This offers helpful decision-making information for atmospheric forecasting and control, and thus helps curb the COVID-19 epidemic. Specifically, the proposed model fuses information abundance (IA) measurement and wide and deep learning, and is dubbed IAWD. First, the model extracts two categories of features in a newly proposed DS transform space to measure the IA of a given photograph, since growing PM2.5 concentration decreases its IA. Second, to simultaneously possess the advantages of memorization and generalization, a new wide and deep neural network is devised to learn a nonlinear mapping between the extracted features and the ground-truth PM2.5 concentration. Experiments on two recently established datasets, together comprising more than 100 000 photographs, demonstrate the effectiveness of the extracted features and the superiority of the proposed IAWD model over state-of-the-art relevant computing techniques.
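
A minimal sketch of the two-stage idea in the abstract, assuming PyTorch: an entropy-based proxy for a photograph's information abundance (which tends to fall as PM2.5-induced haze thickens), followed by a wide-and-deep regressor that maps hand-crafted features to a PM2.5 estimate. The entropy proxy, the 16-dimensional feature vector, and the layer sizes are illustrative assumptions only, not the paper's DS-transform features or its actual IAWD architecture.

```python
import numpy as np
import torch
import torch.nn as nn


def entropy_ia_proxy(gray: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of a grayscale image: a crude stand-in for
    information abundance, since heavy haze flattens image detail and
    tends to lower this value (an assumption, not the paper's IA measure)."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())


class WideAndDeepRegressor(nn.Module):
    """Wide (linear) branch for memorization plus a deep MLP branch for
    generalization, summed into a single PM2.5 estimate."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.wide = nn.Linear(n_features, 1)
        self.deep = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.wide(x) + self.deep(x)


# Toy usage: IA proxy for one synthetic grayscale photograph, then a batch of
# hypothetical 16-dimensional feature vectors regressed to PM2.5.
gray = np.random.randint(0, 256, size=(480, 640)).astype(np.float64)
print("IA proxy (entropy):", entropy_ia_proxy(gray))

model = WideAndDeepRegressor(n_features=16)
features = torch.randn(8, 16)   # 8 photographs, 16 hand-crafted features each
pm25_pred = model(features)     # shape: (8, 1)
```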

Keyword:

Atmospheric measurements; Transforms; information abundance (IA); Temperature measurement; Atmospheric modeling; Monitoring; DS transform space; photograph-based PM2.5 monitoring; COVID-19; wide and deep learning; Feature extraction

Author Community:

  • [ 1 ] [Gu, Ke]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Engn Res Ctr Intelligence Percept & Autonomou,Bei, Fac Informat Technol,Minist Educ,Beijing Artifici, Beijing 100124, Peoples R China
  • [ 2 ] [Liu, Hongyan]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Engn Res Ctr Intelligence Percept & Autonomou,Bei, Fac Informat Technol,Minist Educ,Beijing Artifici, Beijing 100124, Peoples R China
  • [ 3 ] [Xia, Zhifang]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Engn Res Ctr Intelligence Percept & Autonomou,Bei, Fac Informat Technol,Minist Educ,Beijing Artifici, Beijing 100124, Peoples R China
  • [ 4 ] [Qiao, Junfei]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Engn Res Ctr Intelligence Percept & Autonomou,Bei, Fac Informat Technol,Minist Educ,Beijing Artifici, Beijing 100124, Peoples R China
  • [ 5 ] [Lin, Weisi]Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
  • [ 6 ] [Thalmann, Daniel]Ecole Polytech Fed Lausanne, CH-1015 Lausanne, Switzerland

Reprint Author's Address:

• Gu, Ke (顾锞)

    [Gu, Ke]Beijing Univ Technol, Beijing Key Lab Computat Intelligence & Intellige, Engn Res Ctr Intelligence Percept & Autonomou,Bei, Fac Informat Technol,Minist Educ,Beijing Artifici, Beijing 100124, Peoples R China

Source:

IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

ISSN: 2162-237X

Year: 2021

Issue: 10

Volume: 32

Page: 4278-4290

Impact Factor: 10.400 (JCR@2022)

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 87

JCR Journal Grade: 1

Cited Count:

WoS CC Cited Count: 64

SCOPUS Cited Count: 86

ESI Highly Cited Papers on the List: 0

