SynchroSem: Loosely coupled multi-modal semantic mapping for synchronized Lidar-Visual-Inertial Systems

Author: Eloi Bové Canals

Virtual Room: https://eu.bbcollab.com/guest/251e9be0599c4430895c92d022109db3

Date & time: 22/09/2021 – 8:45 h

Session name: Multimodal

Company name: Scaled Robotics SL

Supervisors: Josep R. Casas, Bharath Sankaran

Abstract:

We present SynchroSem, a loosely coupled multi-modal semantic SLAM approach for synchronized Lidar-Visual-Inertial (LVI) systems. Our system integrates pose measurements from two tightly coupled odometry subsystems, Visual-Inertial (VI) and Lidar-Inertial (LI). These are loosely coupled through a pose graph representation that enables robust late fusion of pose measurements and makes the system resilient to geometric and visual degeneracy. The proposed multi-modal semantic feature extraction allows the global optimization system to reduce long-term operation errors via loop closure, achieving state-of-the-art performance on public LVI datasets. Our method has also been deployed on a custom robotics platform. We developed a simple and reproducible hardware synchronization system using commercial off-the-shelf hardware, which is made publicly available at https://halops.github.io/SynchroSem.
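The resilience claim in the abstract rests on a standard property of late fusion: when two independent pose sources are combined with information (inverse-covariance) weighting, a source whose uncertainty grows (e.g. Lidar in a geometrically degenerate corridor, or vision in low texture) automatically loses influence. A minimal sketch of this idea, using scalar pose components and made-up numbers (this is an illustration of the general principle, not the authors' implementation):

```python
# Toy late fusion of two odometry sources (VI and LI), each reporting
# a pose component with a variance. Information weighting means the
# fused estimate tracks whichever source is currently reliable.

def fuse_pose(p_vi, var_vi, p_li, var_li):
    """Information-weighted fusion of one scalar pose component."""
    w_vi = 1.0 / var_vi  # information = inverse variance
    w_li = 1.0 / var_li
    fused = (w_vi * p_vi + w_li * p_li) / (w_vi + w_li)
    fused_var = 1.0 / (w_vi + w_li)  # fused uncertainty shrinks
    return fused, fused_var

# Example: LI degrades (huge variance), so the fused pose follows VI.
p, v = fuse_pose(1.0, 0.01, 5.0, 100.0)
```

In the full system this weighting happens inside a pose graph optimizer, where each odometry measurement becomes a factor weighted by its covariance, but the degeneracy-handling intuition is the same.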

Committee:

– President: Francesc Moreno-Noguer (UPC)
– Secretary: Mario Ceresa (UPF)
– Vocal: Dimosthenis Karatzas (UAB)