Implementing a Deep Reinforcement Learning Model for Autonomous Driving
Overview
Artificial intelligence (AI) has made significant strides across many technological fields, and one exciting area is the development of self-driving cars. The project "Autonomous Driving in Carla using Deep Reinforcement Learning" harnesses cutting-edge AI techniques to train an agent to drive autonomously: it applies Deep Reinforcement Learning (DRL) powered by Proximal Policy Optimization (PPO) inside the CARLA simulator, a high-fidelity urban driving environment, to navigate vehicles efficiently.
Project Components
The project comprises three major components:

- CARLA Environment Setup: The CARLA simulator provides a realistic framework to simulate driving conditions safely, away from real-world risks and ethical dilemmas.
- Variational Autoencoder (VAE): This component transforms high-dimensional camera images into compact, low-dimensional latent representations, allowing the agent to learn driving tasks faster and more efficiently.
- Proximal Policy Optimization (PPO): An on-policy DRL algorithm used here to train the agent over continuous state and action spaces. The PPO-based agent learns to drive reliably in various driving scenarios within the CARLA simulator.
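The heart of PPO is its clipped surrogate objective, which limits how far a single update can move the policy away from the one that collected the data. The following is a minimal NumPy sketch of that objective; the clip range of 0.2 is the common default from the PPO paper and is an assumption here, not necessarily this project's setting.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective from the PPO paper.

    ratio     : pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage : estimated advantage A(s, a) for each sample
    eps       : clip range (0.2 is a common default, assumed here)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # The element-wise minimum removes the incentive for overly
    # large policy updates in either direction.
    return float(np.minimum(unclipped, clipped).mean())
```

For example, with a probability ratio of 1.5 and a positive advantage of 1.0, the objective is clipped to 1.2, so the gradient cannot push the policy any further in that direction within one update.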
Project Setup
To get started with this project:

- CARLA Version: Use CARLA 0.9.8 along with the additional maps, focusing on Town 2 and Town 7 to test driving behaviors.
- Operating System: Set up the project on Windows or Linux, the operating systems CARLA supports.
- Python Environment: Clone the project repository and create a Python virtual environment with version 3.7 or higher. Use pip and poetry to manage dependencies.
- CARLA Server: Download and run the CARLA server (version 0.9.8) before starting the client code for training or testing.
Methodology
The project methodology centers around integrating the three core elements (CARLA, VAE, PPO) to create an autonomous driving solution. The visual representation of the methodology highlights how these components interact to produce a seamless driving experience.
Running the Project
- Running a Trained Agent: The project offers pretrained PPO agents for Town 2 and Town 7. Use the provided commands to test these agents in the simulator.
- Training a New Agent: Initiating training with default parameters is straightforward, with regular updates on progress through checkpoints and logs.
Exploring the Variational Autoencoder
The VAE component involves collecting thousands of semantically segmented images by manually and automatically driving within the simulator. This helps train the autoencoder to convert the images into simplified forms, assisting the reinforcement learning processes. Commands are available to test the VAE aspect by reconstructing original images.
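To make the VAE's role concrete, here is a minimal NumPy sketch of the encoding step, including the reparameterization trick that makes sampling differentiable. The latent size, image resolution, and untrained linear weights are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64            # hypothetical latent size
IMG_SHAPE = (80, 160, 3)   # hypothetical segmented-image resolution

# Random linear maps standing in for the trained convolutional encoder.
flat_dim = int(np.prod(IMG_SHAPE))
W_mu = rng.standard_normal((LATENT_DIM, flat_dim)) * 0.01
W_logvar = rng.standard_normal((LATENT_DIM, flat_dim)) * 0.01

def encode(image):
    """Map an image to the mean and log-variance of q(z | x)."""
    x = image.reshape(-1)
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    Writing the sample this way keeps it differentiable with respect
    to mu and logvar, which is what lets a VAE train by backprop.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# A full-resolution image collapses to a 64-dimensional latent vector.
mu, logvar = encode(rng.random(IMG_SHAPE))
z = reparameterize(mu, logvar)
```

The key payoff is the last two lines: a 38,400-value image is reduced to a 64-value latent vector, which is a far easier observation space for the reinforcement learner.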
Project Architecture
The project's architecture pipeline illustrates how images processed by the VAE are fed into the PPO component to train the autonomous driving agent. This interconnected system aims to optimize learning and improve driving reliability.
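As a rough illustration of that pipeline, the sketch below wires stand-in functions together for one control tick. The latent size, observation layout, and control squashing are assumptions for illustration and are not taken from the project code.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 64  # hypothetical VAE latent size

def vae_encode(image):
    """Stand-in for the trained VAE encoder: image -> latent vector."""
    return rng.standard_normal(LATENT_DIM)

def ppo_act(observation):
    """Stand-in for the trained PPO actor: observation -> controls.

    A real actor would sample from a learned Gaussian policy head;
    here we only show the squashing into valid control ranges.
    """
    steer = float(np.tanh(observation[0]))                  # [-1, 1]
    throttle = float((np.tanh(observation[1]) + 1.0) / 2)   # [0, 1]
    return steer, throttle

def drive_step(camera_image, speed):
    """One tick of the pipeline: encode the camera image, build the
    observation vector, and query the policy for continuous controls."""
    latent = vae_encode(camera_image)
    observation = np.concatenate([latent, [speed]])
    return ppo_act(observation)
```

Because the VAE compresses each frame before the policy sees it, the PPO agent trains on short latent vectors instead of raw pixels, which is the main reason this interconnected design speeds up learning.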
File Overview
Important scripts and directories include:

- continuous_driver.py for training and testing models
- VAE-related scripts for image processing
- Various simulation setup files
- Logs and checkpoints for monitoring agent progression
Visualization and Monitoring
The project uses TensorBoard to visualize training progress, allowing real-time performance evaluation.
Contributors
Idrees Razak spearheaded the project, bringing expertise and dedication to advancing autonomous driving technology. His professional links are available for further engagement.
Licensing
The project is open source, licensed under the MIT License, encouraging collaboration and development in the self-driving car domain.
Acknowledgments
Appreciation is extended to Dr. Toka László for providing leadership and support throughout this project's journey.