Multi-task Deep Learning with Optical Flow Features for Self-Driving Cars

Yuan Hu, Hubert Shum*, Edmond S. L. Ho

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

The control of self-driving cars has received growing attention recently. Although existing research shows promising results in vehicle control using video from a monocular dash camera, there has been very limited work on learning vehicle control directly from motion-based cues. Such cues are powerful features for visual representations, as they encode the per-pixel movement between two consecutive images, allowing a system to effectively map the features to the control signal. The authors propose a new framework that exploits a motion-based feature known as optical flow, extracted from the dash camera, and demonstrate that such a feature significantly improves the accuracy of the control signals. The proposed framework involves two main components. The flow predictor, a self-supervised deep network, models the underlying scene structure from consecutive frames and generates the optical flow. The controller, a supervised multi-task deep network, predicts both steer angle and speed. The authors demonstrate that the proposed framework using the optical flow features can effectively predict control signals from a dash camera video. Using the Cityscapes data set, the authors validate that the system prediction has errors as low as 0.0130 rad/s on steer angle and 0.0615 m/s on speed, outperforming existing research.
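The two-stage pipeline described in the abstract (a flow predictor feeding a multi-task controller) can be sketched in miniature. This is an illustrative sketch only, not the authors' implementation: the frame-difference "flow" stands in for their self-supervised deep flow predictor, and the linear `MultiTaskHead` stands in for their supervised multi-task deep network; all names and shapes here are assumptions.

```python
import numpy as np

def flow_features(prev_frame, next_frame):
    """Crude per-pixel motion proxy via temporal differencing.

    A stand-in for the paper's learned, self-supervised optical-flow
    predictor: it captures only brightness change, not true flow.
    """
    return next_frame.astype(np.float64) - prev_frame.astype(np.float64)

class MultiTaskHead:
    """Linear stand-in for the supervised multi-task controller.

    One shared feature vector is mapped to two outputs, illustrating
    the multi-task idea of predicting steer angle and speed jointly.
    """
    def __init__(self, n_features, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.W = rng.normal(scale=0.01, size=(2, n_features))  # shared weights, untrained
        self.b = np.zeros(2)

    def predict(self, flow):
        x = flow.reshape(-1)          # flatten the motion feature map
        steer, speed = self.W @ x + self.b
        return steer, speed

# Toy consecutive "frames" in place of dash-camera video.
prev = np.zeros((4, 4), dtype=np.uint8)
nxt = np.ones((4, 4), dtype=np.uint8)

flow = flow_features(prev, nxt)
head = MultiTaskHead(n_features=flow.size)
steer, speed = head.predict(flow)
print(steer, speed)  # two scalar control signals
```

In the actual framework both stages are deep networks trained end to end on driving data; the sketch only shows how a motion feature map becomes a shared input to two prediction heads.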
Original language: English
Pages (from-to): 1845-1854
Number of pages: 10
Journal: IET Intelligent Transport Systems
Volume: 14
Issue number: 13
DOIs
Publication status: Published - 1 Dec 2020

