Author:

Irfan, Ayesha | Li, Yu | E, Xinhua | Sun, Guangmin (孙光民)

Indexed by:

EI; Scopus; SCIE

Abstract:

Land use and land cover (LULC) classification from remote sensing imagery serves as a cornerstone for environmental monitoring, resource management, and evidence-based urban planning. While Synthetic Aperture Radar (SAR) and optical sensors individually capture distinct aspects of Earth's surface, their complementary nature (SAR excels at structural and all-weather observation, while optical sensors provide rich spectral information) offers untapped potential for improving classification robustness. However, the intrinsic differences in their imaging mechanisms (e.g., SAR's coherent scattering versus optical reflectance) pose significant challenges for effective multimodal fusion in LULC analysis. To address this gap, we propose a multimodal deep-learning framework that systematically integrates SAR and optical imagery. Our approach employs a dual-branch neural network and rigorously compares two fusion paradigms: an Early Fusion strategy and a Late Fusion strategy. Experiments on the SEN12MS dataset, a benchmark containing globally diverse land cover categories, demonstrate the framework's efficacy. The Early Fusion strategy achieved 88% accuracy (F1 score: 87%), outperforming the Late Fusion approach (84% accuracy, F1 score: 82%). The results indicate that optical data provide detailed spectral signatures useful for identifying vegetation, water bodies, and urban areas, whereas SAR data contribute valuable texture and structural details. Early Fusion's superiority stems from synergistic low-level feature extraction, which captures cross-modal correlations that are lost when fusion occurs at a later stage. Compared to state-of-the-art baselines, the proposed methods show a significant improvement in classification accuracy, demonstrating that multimodal fusion mitigates single-sensor limitations (e.g., cloud obstruction in optical imagery and speckle noise in SAR). This study advances remote sensing technology by providing a precise and effective method for LULC classification.
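
A minimal sketch of the two fusion paradigms described in the abstract, for illustration only; it is not the authors' released code. Channel counts follow SEN12MS conventions (2 Sentinel-1 SAR bands, 13 Sentinel-2 optical bands), while the backbone depth, layer widths, patch size, and the 10-class output are placeholder assumptions.

```python
# Illustrative sketch (not the paper's implementation) of early vs. late fusion
# of SAR and optical patches with small CNN branches.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, followed by 2x2 max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class EarlyFusionNet(nn.Module):
    """Stack SAR and optical bands at the input so low-level features are learned jointly."""

    def __init__(self, sar_ch: int = 2, opt_ch: int = 13, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(sar_ch + opt_ch, 32),
            conv_block(32, 64),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, sar: torch.Tensor, opt: torch.Tensor) -> torch.Tensor:
        x = torch.cat([sar, opt], dim=1)  # fuse at the band/pixel level
        return self.classifier(self.features(x).flatten(1))


class LateFusionNet(nn.Module):
    """Process each modality in its own branch, then merge the pooled features."""

    def __init__(self, sar_ch: int = 2, opt_ch: int = 13, num_classes: int = 10):
        super().__init__()
        self.sar_branch = nn.Sequential(
            conv_block(sar_ch, 32), conv_block(32, 64), nn.AdaptiveAvgPool2d(1)
        )
        self.opt_branch = nn.Sequential(
            conv_block(opt_ch, 32), conv_block(32, 64), nn.AdaptiveAvgPool2d(1)
        )
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, sar: torch.Tensor, opt: torch.Tensor) -> torch.Tensor:
        f = torch.cat(
            [self.sar_branch(sar).flatten(1), self.opt_branch(opt).flatten(1)],
            dim=1,
        )  # fuse at the feature level
        return self.classifier(f)


if __name__ == "__main__":
    sar = torch.randn(4, 2, 64, 64)   # batch of Sentinel-1 patches
    opt = torch.randn(4, 13, 64, 64)  # batch of Sentinel-2 patches
    print(EarlyFusionNet()(sar, opt).shape)  # torch.Size([4, 10])
    print(LateFusionNet()(sar, opt).shape)   # torch.Size([4, 10])
```

The sketch highlights the structural difference the abstract compares: Early Fusion concatenates the modalities before any convolution, so cross-modal correlations can shape the lowest-level features, whereas Late Fusion only combines each branch's pooled representation before the classifier.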

Keyword:

multisource data fusion; optical imagery; Synthetic Aperture Radar (SAR); remote sensing; deep learning; land use/land cover (LULC)

Author Community:

  • [1] [Irfan, Ayesha] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China
  • [2] [Li, Yu] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China
  • [3] [E, Xinhua] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China
  • [4] [Sun, Guangmin] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China

Reprint Author's Address:

  • [E, Xinhua] Beijing Univ Technol, Sch Informat Sci & Technol, Beijing 100124, Peoples R China

Source:

REMOTE SENSING

Year: 2025

Issue: 7

Volume: 17

Impact Factor: 5.000 (JCR@2022)

ESI Highly Cited Papers on the List: 0
