End-to-End Driving Via Conditional Imitation Learning

Felipe Codevilla, Matthias Müller, Antonio Lopez, Vladlen Koltun, Alexey Dosovitskiy

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

731 Scopus citations

Abstract

Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands.
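The command-conditional idea described in the abstract can be illustrated with a small sketch. This is not the authors' code: the class, weight shapes, and command names below are hypothetical, and the paper's convolutional perception network is stood in for by a single shared linear layer. The sketch shows the branched variant, in which the high-level command acts as a switch selecting one output head on top of a shared representation.

```python
import numpy as np

# Hypothetical command set; the paper uses navigational commands of this kind
# (e.g. turn left / turn right / go straight at the next intersection).
COMMANDS = ["follow_lane", "turn_left", "turn_right", "go_straight"]

rng = np.random.default_rng(0)

class ConditionalPolicy:
    """Toy branched conditional policy (illustrative only)."""

    def __init__(self, feat_dim=8, n_actions=2):
        # Shared "perception" weights: stand-in for the convolutional
        # encoder that maps camera input to a latent representation.
        self.shared = rng.standard_normal((feat_dim, feat_dim))
        # One small linear head per command; at test time the command
        # selects which head produces the control output.
        self.branches = {c: rng.standard_normal((feat_dim, n_actions))
                         for c in COMMANDS}

    def act(self, image_features, command):
        latent = np.tanh(image_features @ self.shared)
        # Control output, e.g. (steering, throttle).
        return latent @ self.branches[command]

policy = ConditionalPolicy()
obs = rng.standard_normal(8)
# The same observation yields different controls under different commands,
# which is exactly the controllability the paper is after.
left = policy.act(obs, "turn_left")
right = policy.act(obs, "turn_right")
```

Training would fit the shared weights and all branches by regressing demonstrated controls, routing each example's loss through the branch matching its recorded command; only inference is sketched here.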
Original language: English (US)
Title of host publication: 2018 IEEE International Conference on Robotics and Automation (ICRA)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 4693-4700
Number of pages: 8
ISBN (Print): 9781538630815
DOIs
State: Published - Sep 21 2018
