DriveLM
This article explores the application of Graph Visual Question Answering (GVQA) to autonomous driving, particularly in the context of the 2024 DriveLM challenge. The project draws on nuScenes and CARLA data to develop a VLM-based baseline that connects Graph VQA with end-to-end driving. It aims to mimic the step-by-step reasoning a human driver performs, offering a holistic framework spanning perception, prediction, and planning. By merging language models with autonomous driving systems, it targets explainable planning and improved decision-making in self-driving vehicles. Read on for an overview of the project's methodology and its impact on the field of autonomous vehicles.
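To make the perception-prediction-planning idea concrete, below is a minimal, hypothetical sketch of how a Graph VQA structure could be represented: question-answer nodes for each reasoning stage, linked by directed edges so that later questions depend on earlier answers. The class and function names (`QANode`, `reasoning_order`) are illustrative assumptions, not part of the DriveLM codebase.

```python
from dataclasses import dataclass, field

# Hypothetical QA node: one question-answer pair at a given reasoning stage,
# with edges pointing back to the QA pairs it logically depends on.
@dataclass
class QANode:
    stage: str                                   # "perception" | "prediction" | "planning"
    question: str
    answer: str | None = None
    parents: list["QANode"] = field(default_factory=list)

def reasoning_order(node: QANode) -> list[QANode]:
    """Return dependencies first, so each question is answered after its parents."""
    order: list[QANode] = []
    for parent in node.parents:
        order.extend(reasoning_order(parent))
    order.append(node)
    return order

# A tiny three-stage reasoning chain for one driving scene.
perceive = QANode("perception", "What objects are in front of the ego vehicle?")
predict = QANode("prediction", "Will the pedestrian cross the road?", parents=[perceive])
plan = QANode("planning", "Should the ego vehicle brake or keep its speed?", parents=[predict])

for n in reasoning_order(plan):
    print(f"[{n.stage}] {n.question}")
```

In a VLM-based pipeline, each node's question (together with the scene images and the answers of its parent nodes) would be fed to the model, so the final planning answer is grounded in an explicit, inspectable chain of intermediate reasoning.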