Task-driven RGB-Lidar Fusion for Object Tracking in Resource-Efficient Autonomous System

Published in IEEE Transactions on Intelligent Vehicles, 2021

Recommended citation: K. Samal, H. Kumawat, P. Saha, M. Wolf and S. Mukhopadhyay, "Task-driven RGB-Lidar Fusion for Object Tracking in Resource-Efficient Autonomous System," in IEEE Transactions on Intelligent Vehicles, doi: 10.1109/TIV.2021.3087664. https://ieeexplore.ieee.org/document/9448387

Autonomous mobile systems such as vehicles or robots are equipped with multiple sensor modalities, including Lidar, RGB cameras, and Radar. Fusing multi-modal information can improve task accuracy, but indiscriminate sensing and fusion across all modalities increases demand on available system resources. This paper presents a task-driven approach to input fusion that minimizes the utilization of resource-heavy sensors and demonstrates its application to Visual-Lidar fusion for object tracking and path planning. The proposed spatiotemporal sampling algorithm activates Lidar only at regions of interest identified by analyzing the visual input and reduces the Lidar ‘base frame rate’ according to the kinematic state of the system. This significantly reduces Lidar usage, in terms of data sensed/transferred and potentially power consumed, without a severe reduction in performance compared to both a baseline decision-level fusion and a state-of-the-art deep multi-modal fusion approach.
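
The sketch below illustrates the general idea of the spatiotemporal sampling described above: gate Lidar points spatially by RGB-derived regions of interest and scale the base frame rate with the ego vehicle's speed. It is a minimal, hypothetical illustration; the function names, ROI representation, thresholds, and rate limits are assumptions for exposition, not the paper's actual implementation or parameters.

```python
import numpy as np

def select_lidar_frame_rate(ego_speed_mps, min_hz=2.0, max_hz=10.0, ref_speed=20.0):
    """Scale the Lidar base frame rate with ego speed (illustrative values only)."""
    scale = np.clip(ego_speed_mps / ref_speed, 0.0, 1.0)
    return min_hz + scale * (max_hz - min_hz)

def mask_lidar_by_roi(points_xyz, rois_bev, margin=1.0):
    """Keep only Lidar points whose (x, y) coordinates fall inside an
    RGB-derived bird's-eye-view region of interest, padded by a margin."""
    keep = np.zeros(len(points_xyz), dtype=bool)
    for x_min, y_min, x_max, y_max in rois_bev:
        inside = (
            (points_xyz[:, 0] >= x_min - margin) & (points_xyz[:, 0] <= x_max + margin) &
            (points_xyz[:, 1] >= y_min - margin) & (points_xyz[:, 1] <= y_max + margin)
        )
        keep |= inside
    return points_xyz[keep]

# Example with synthetic data: one full Lidar sweep, one ROI from an RGB detector.
points = np.random.uniform(-50, 50, size=(1000, 3))
rois = [(5.0, -2.0, 15.0, 2.0)]
sampled = mask_lidar_by_roi(points, rois)
rate_hz = select_lidar_frame_rate(ego_speed_mps=12.0)
print(f"kept {len(sampled)}/{len(points)} points, Lidar rate ~= {rate_hz:.1f} Hz")
```

In this toy setting, only the points inside ROIs are sensed or transferred, and a slower-moving platform queries the Lidar less often, which is the resource-saving behavior the abstract describes.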

Download paper here