The automation of navigation for different kinds of vehicles is a research problem of great interest. This problem has applications to unmanned aerial vehicles (UAVs) as well as manned vehicles such as cars and planes. The goal of an autonomous vehicle is to navigate safely from one point to another given a set of high-level instructions and data from a set of sensors. This thesis first explores a modular approach for the autonomous taxiing of airplanes, then proposes methods for object tracking with a LIDAR sensor that can be incorporated into the autonomous driving pipeline. The taxiing algorithm regresses waypoints for the plane to follow given a high-level driving goal, such as "turn left" or "go straight", together with RGB images taken from the cockpit and wings. The waypoints are then passed to a separate control system that taxis the plane. The autonomous aircraft is trained and tested in a photo-realistic simulator adapted for this task. The policy learned in this fashion is capable of going straight and taking turns; however, it is not trained to react to other moving objects. To address this issue, and given the superior reliability of LIDAR over RGB sensors, an object tracking method using only LIDAR point clouds is proposed. The method uses a novel 3D Siamese network to compute a similarity score between a model point cloud and candidate object point clouds. This similarity score is shown to work for tracking: applied with an exhaustive search, it yields improved performance over simple baselines. For a realistic application, the similarity score is applied to candidates produced by a search on the bird's-eye-view (BEV) projection of the LIDAR point cloud. This method provides improved tracking results over other search strategies while using a smaller number of candidates.
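To make the tracking idea concrete, the following is a minimal sketch of Siamese similarity scoring over point-cloud candidates. It is not the thesis's network: the shared encoder here is a toy PointNet-style projection with random weights, and the candidate generation (shifted, noisy copies of the model cloud) stands in for the exhaustive search; all names and parameters are illustrative assumptions.

```python
import numpy as np

def encode(points, W):
    # Toy shared encoder: project each 3D point, then max-pool to a
    # fixed-size descriptor (PointNet-style; illustrative only).
    return np.tanh(points @ W).max(axis=0)

def similarity(model_pc, candidate_pc, W):
    # Siamese score: cosine similarity between the two shared encodings.
    a, b = encode(model_pc, W), encode(candidate_pc, W)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))          # random encoder weights (untrained)
model = rng.normal(size=(64, 3))      # model point cloud of the tracked object

# Exhaustive search: score candidate crops at offsets around the last position.
offsets = [np.array([dx, dy, 0.0]) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
candidates = [model + off + 0.05 * rng.normal(size=model.shape)
              for off in offsets]
scores = [similarity(model, c, W) for c in candidates]
best_offset = offsets[int(np.argmax(scores))]  # highest-scoring candidate wins
```

In the thesis, the candidates come from the LIDAR frame itself (exhaustively, or via the BEV search), and the encoder is the trained 3D Siamese network rather than random weights; the argmax-over-scores structure is the part this sketch illustrates.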
Date of Award:
Division: Computer, Electrical and Mathematical Sciences and Engineering
Supervisor: Bernard Ghanem
Keywords: 3D Tracking