LiteViLA: A Lightweight Vision-Language Model for Scene Understanding in Autonomous Driving

National Tsing Hua University, Taiwan · NVIDIA
ECCV 2024 Workshop W-CODA

Visual perception and comprehension data samples from CODA-LM.

Abstract

This paper describes our method for the ECCV 2024 Workshop W-CODA Track 1: Corner Case Scene Understanding. We propose LiteViLA, a Lightweight Vision-Language model for scene understanding in Autonomous driving, which leverages the TinyLLaVA backbone for efficient processing of large-scale multimodal data. Our approach extracts visual features with a Vision Encoder and Q-Former; the Language Model (LM) then integrates the visual and language modalities through a Mixture of Adapters (MoA) mechanism. The MoA dynamically selects task-specific adapters for General Perception, Region Perception, and Driving Suggestions, optimizing performance across these critical tasks. Finally, a Reviewer component refines the generated answers, ensuring their accuracy and relevance.
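The abstract describes the MoA as routing the LM's hidden states to task-specific adapters for General Perception, Region Perception, and Driving Suggestions. Below is a minimal PyTorch sketch of that routing idea; the bottleneck-adapter design, the mean-pooled softmax router, and all names (Adapter, MixtureOfAdapters, bottleneck_dim) are illustrative assumptions, not the released LiteViLA implementation.

```python
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class MixtureOfAdapters(nn.Module):
    """Routes LM hidden states to task-specific adapters.

    Tasks (from the abstract): General Perception, Region Perception,
    Driving Suggestions. The gating below (mean-pooled hidden state fed
    to a softmax router) is an assumption made for illustration.
    """

    def __init__(self, hidden_dim: int, num_tasks: int = 3):
        super().__init__()
        self.adapters = nn.ModuleList([Adapter(hidden_dim) for _ in range(num_tasks)])
        self.router = nn.Linear(hidden_dim, num_tasks)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden_dim)
        gate = torch.softmax(self.router(hidden.mean(dim=1)), dim=-1)   # (batch, num_tasks)
        expert_out = torch.stack([a(hidden) for a in self.adapters], 1)  # (batch, num_tasks, seq, dim)
        return (gate[:, :, None, None] * expert_out).sum(dim=1)         # (batch, seq, dim)


# Usage example with toy hidden states.
if __name__ == "__main__":
    moa = MixtureOfAdapters(hidden_dim=768)
    states = torch.randn(2, 16, 768)
    print(moa(states).shape)  # torch.Size([2, 16, 768])
```

In this sketch every adapter processes the sequence and the outputs are blended by the gate weights; a hard top-1 selection per task, as the abstract's "dynamically selects task-specific adapters" could also suggest, would simply take the argmax of the gate instead.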

Method Overview

Overview of the LiteViLA pipeline.

Qualitative Results
