
Author:

Luo, Zhen | Ma, Junyi | Zhou, Zijie | Xiong, Guangming

Indexed by:

EI; Scopus; SCIE

Abstract:

The ability to predict future structural features of the environment from past perception information is critical for autonomous vehicles, as it makes subsequent decision-making and path planning more reliable. Recently, point cloud prediction (PCP) has been used to predict and describe future environmental structures in point cloud form. In this letter, we propose a novel, efficient Transformer-based network that predicts future LiDAR point clouds by exploiting past point cloud sequences. We also design a semantic auxiliary training strategy that makes the predicted LiDAR point cloud sequence semantically similar to the ground truth, thereby improving its usefulness for downstream tasks in real-vehicle applications. Our approach is fully self-supervised: it requires no manual labeling and generalizes well to different environments. Experimental results show that our method outperforms state-of-the-art PCP methods in prediction quality and semantic similarity, while achieving good real-time performance.
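The record contains no code, and the authors' actual architecture is not reproduced here. Purely as an illustration of the core mechanism the abstract names (attention over a sequence of past LiDAR frames), the following is a minimal single-head scaled dot-product attention sketch in NumPy; all names, weights, and dimensions (`attend`, `Wq`, `Wk`, `Wv`, T=5 frames, D=16) are hypothetical and do not come from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(past_frames, Wq, Wk, Wv):
    """Single-head self-attention over per-frame embeddings.

    past_frames: (T, D) array, one D-dim embedding per past LiDAR frame.
    Returns a (T, D) array of context-mixed embeddings; the last row could
    serve as a feature for predicting the next frame in a PCP-style model.
    """
    Q, K, V = past_frames @ Wq, past_frames @ Wk, past_frames @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (T, T) scaled dot products
    return softmax(scores, axis=-1) @ V       # attention-weighted mix of V

rng = np.random.default_rng(0)
T, D = 5, 16                                  # 5 past frames, 16-dim embeddings
frames = rng.standard_normal((T, D))
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
ctx = attend(frames, Wq, Wk, Wv)
print(ctx.shape)  # (5, 16)
```

In a full PCP network, such attention layers would operate on learned embeddings of range images or point features rather than random vectors, and the output would be decoded back into a point cloud; that pipeline is beyond this sketch.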

Keyword:

self-supervised learning; Training; Semantics; semantic auxiliary training; Transformers; Point cloud compression; Laser radar; Point cloud prediction; Three-dimensional displays; Feature extraction

Author Community:

  • [ 1 ] [Luo, Zhen]Beijing Univ Technol, Beijing 100081, Peoples R China
  • [ 2 ] [Ma, Junyi]Beijing Univ Technol, Beijing 100081, Peoples R China
  • [ 3 ] [Zhou, Zijie]Beijing Univ Technol, Beijing 100081, Peoples R China
  • [ 4 ] [Xiong, Guangming]Beijing Univ Technol, Beijing 100081, Peoples R China


Source:

IEEE ROBOTICS AND AUTOMATION LETTERS

ISSN: 2377-3766

Year: 2023

Issue: 7

Volume: 8

Page: 4267-4274

Impact Factor: 5.200 (JCR@2022)

Cited Count:

WoS CC Cited Count: 2

SCOPUS Cited Count: 4

ESI Highly Cited Papers on the List: 0

30-Day Page Views: 10
