Multi-View Radar Semantic Segmentation

Conference paper, 2021

Abstract

Understanding the scene around the ego-vehicle is key to assisted and autonomous driving. Nowadays, this is mostly conducted using cameras and laser scanners, despite their reduced performance in adverse weather conditions. Automotive radars are low-cost active sensors that measure properties of surrounding objects, including their relative speed, and have the key advantage of not being impacted by rain, snow or fog. However, they are seldom used for scene understanding due to the size and complexity of radar raw data and the lack of annotated datasets. Fortunately, recent open-sourced datasets have opened up research on classification, object detection and semantic segmentation with raw radar signals using end-to-end trainable models. In this work, we propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically. Experiments conducted on the recent CARRADA dataset demonstrate that our best model outperforms alternative models, derived either from the semantic segmentation of natural images or from radar scene understanding, while requiring significantly fewer parameters. Both our code and trained models are available at https://github.com/valeoai/MVRSS.
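
The record itself contains no code, but the abstract's central idea can be made concrete. Below is a minimal, hypothetical sketch of the view extraction it refers to: the 3-D range-angle-Doppler (RAD) tensor is collapsed into three 2-D views (range-Doppler, range-angle, angle-Doppler) that a multi-view network then segments. The axis order, bin counts, and mean aggregation here are illustrative assumptions, not the authors' preprocessing; their actual pipeline is in the linked MVRSS repository.

```python
# Minimal sketch (not the released MVRSS code) of deriving the three
# 2-D "views" of a range-angle-Doppler (RAD) power tensor.
import numpy as np

def rad_to_views(rad: np.ndarray):
    """Collapse a (range, angle, Doppler) tensor into its three 2-D
    projections by aggregating over the remaining axis."""
    range_doppler = rad.mean(axis=1)   # (R, D): marginalise over angle
    range_angle   = rad.mean(axis=2)   # (R, A): marginalise over Doppler
    angle_doppler = rad.mean(axis=0)   # (A, D): marginalise over range
    return range_doppler, range_angle, angle_doppler

# Toy example with arbitrary bin counts: 256 range, 256 angle, 64 Doppler.
rad = np.random.rand(256, 256, 64).astype(np.float32)
rd, ra, ad = rad_to_views(rad)
print(rd.shape, ra.shape, ad.shape)  # (256, 64) (256, 256) (256, 64)
```

In a multi-view architecture of the kind the abstract describes, each such 2-D view would typically feed its own encoder branch before the features are fused for segmentation; the specific fusion scheme and losses are what the paper contributes.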
Main file
arxiv_multi_view_radar_semantic_segmentation_camera_ready_20210823.pdf (5.11 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03324900, version 1 (24-08-2021)

Identifiers

  • HAL Id: hal-03324900, version 1

Cite

Arthur Ouaknine, Alasdair Newson, Patrick Pérez, Florence Tupin, Julien Rebut. Multi-View Radar Semantic Segmentation. International Conference on Computer Vision (ICCV) 2021, Oct 2021, Montreal (virtual), Canada. ⟨hal-03324900⟩
