For many of us, learning to drive is a rite of passage to adulthood. At first, you're jittery and overly cautious, but as the miles pass you get progressively better. You learn to understand the nuances, how the elements affect your trajectory and how to adjust accordingly.

So, what if we were able to apply the same concept to self-driving cars?

A team of engineers from NVIDIA based in our New Jersey office - a former Bell Labs facility that also happens to be the birthplace of the deep learning revolution currently sweeping the technology industry - decided to use deep learning to teach an autonomous car to drive. They used a convolutional neural network (CNN) to learn the entire processing pipeline needed to steer an automobile.
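In other words, the network maps raw camera pixels directly to a steering command. As a rough illustration, here is a minimal sketch of such a network in PyTorch (the original work used Torch 7; the layer sizes below follow the architecture described in the 'End to End Learning for Self-Driving Cars' paper, which takes a 66x200 camera image and produces a single steering value - treat this as a sketch, not NVIDIA's exact implementation):

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    """Sketch of a DAVE-2-style network: camera image in, steering command out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # Three strided 5x5 conv layers extract coarse road structure.
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            # Two 3x3 conv layers refine finer features.
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),  # 64x1x18 is the feature-map size for a 66x200 input
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),  # single output: the steering command
        )

    def forward(self, x):
        # x: batch of normalized camera frames, shape (N, 3, 66, 200)
        return self.regressor(self.features(x))
```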

The project, called DAVE-2, is part of an effort kicked off nine months ago at NVIDIA to build on the DAVE (DARPA Autonomous Vehicle) research and create a robust system for driving on public roads. The primary motivation for this work was to bypass the need to hardcode detection of specific features - such as lane markings, guardrails or other cars - and to avoid creating a near-infinite number of 'if, then, else' statements, an approach that is impractical when trying to account for the randomness that occurs on the road.

Back in February, at the outset of the experiment, training data was collected by driving on a wide variety of roads and in a diverse set of lighting and weather conditions. The majority of the road data was collected in central New Jersey and the road variants included two-lane roads (with and without lane markings), residential streets with parked cars, tunnels and even unpaved pathways. Additional data was collected in clear, cloudy, foggy, snowy and rainy weather, both day and night.

At first, the test car for the self-driving exercises committed a range of bloopers: running over traffic cones, veering off the road and getting too close to static objects. Basically, it failed its driver's test.

Fast-forward through 3,000 miles and 72 hours of driving, and the car could navigate within the cones, drive along paved and unpaved roads and handle a wide range of weather conditions.

So how did our test car go from dud to stud?

Using the NVIDIA DevBox and Torch 7 (a machine learning library) for training, and an NVIDIA DRIVE PX self-driving car computer to process it all, the NVIDIA team time-stamped video from the cameras together with the steering angle applied by the human driver and continuously fed that data into the CNN.
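In modern terms, that training step amounts to regressing the human's steering angle from each camera frame. A minimal sketch of such a loop, again in PyTorch rather than the Torch 7 used here (the `SteeringCNN` class is the network sketched above, and the random tensors stand in for the real time-stamped frames and angles):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: `frames` holds normalized camera images (N, 3, 66, 200)
# and `angles` holds the matching time-stamped human steering angles (N, 1).
frames = torch.randn(256, 3, 66, 200)
angles = torch.randn(256, 1)
loader = DataLoader(TensorDataset(frames, angles), batch_size=32, shuffle=True)

model = SteeringCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.MSELoss()  # squared error between predicted and human steering

for epoch in range(10):
    for batch_frames, batch_angles in loader:
        optimizer.zero_grad()
        predicted = model(batch_frames)          # CNN's steering command per frame
        loss = loss_fn(predicted, batch_angles)  # compare against the human driver
        loss.backward()                          # backpropagate the error
        optimizer.step()                         # nudge weights toward human behavior
```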

Then, prior to road-testing the trained CNN, they evaluated the network's performance in simulation. The simulator took pre-recorded video from a human-driven data-collection vehicle and generated images approximating what would appear if the CNN were steering the vehicle instead.
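To make that concrete, here is a loose sketch of such a closed-loop evaluation. This is hypothetical and simplified - it omits the step of warping the recorded images to reflect the virtual car's displaced viewpoint, and the model, tensors and intervention threshold are all assumptions:

```python
import torch

def simulate(model, recorded_frames, recorded_angles, dt=0.05,
             intervention_threshold=1.0):
    """Loose closed-loop evaluation sketch (not NVIDIA's actual simulator).

    Replays recorded frames, lets the network steer a virtual vehicle, and
    tracks how far it drifts from the human driver's path. A large drift
    counts as one 'intervention', after which the virtual car is reset.
    """
    drift = 0.0
    interventions = 0
    model.eval()
    with torch.no_grad():
        for frame, human_angle in zip(recorded_frames, recorded_angles):
            predicted = model(frame.unsqueeze(0)).item()    # network's steering command
            drift += (predicted - human_angle.item()) * dt  # accumulate deviation
            if abs(drift) > intervention_threshold:
                interventions += 1  # a human would have had to take over
                drift = 0.0         # reset the virtual car onto the recorded path
    return interventions
```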

Once the trained CNN showed solid performance in the simulator, it was loaded onto the DRIVE PX and taken out for a road test. Through continuous iterations and tweaks, the CNN learned to detect useful road features on its own.

Soon enough, the car was able to drive itself over various roads and even cruised the Garden State Parkway flawlessly. What's even more interesting is that the engineering team never explicitly trained the CNN to detect road outlines. Instead, using only the human steering angles paired with the road video as a guide, it began to understand the rules of engagement between vehicle and road.

For more details, check out the NVIDIA research paper 'End to End Learning for Self-Driving Cars.'
