Tesla appears to be leading the world in the development of fully autonomous vehicles, and it has set itself apart from most other self-driving car developers by relying on cameras and sensors rather than Lidar.
Musk went as far as to say that Lidar, a laser-based scanning technology that images objects in 3D, was “friggin’ stupid,” and that “…anyone relying on LiDAR is doomed.”
While the remark seemed a little aggressive at the time, new research from Cornell may prove him right. “The common belief is that you couldn’t make self-driving cars without LiDARs,” said Kilian Weinberger, associate professor of computer science and senior author of the research paper.
“We’ve shown, at least in principle, that it’s possible.”
The study shows that self-driving cars may be able to ‘see’ the world in 3D using just cheap cameras. Self-driving cars must be able to visualize the world around them and differentiate between objects such as roads, buildings and people in order to be able to operate safely on our urban roads.
Cars need to improve on human vision
Essentially, the car’s vision system needs to mimic what a human driver does: constantly scanning the road and the immediate environment and making thousands of micro-decisions about speed, direction and so on.
Self-driving cars need enough information to plan ahead and avoid upcoming incidents, such as a potential crash like one captured by a Model 3 driver.
Many self-driving car companies use a Lidar (Light Detection and Ranging) system to do this. The technology uses spinning lasers to build up a 3D map of the environment. But Lidar is expensive and can add upwards of $10,000 to the price of a car.
For the best perspective, the system also needs to be mounted on the roof of the car, which adds extra drag and can reduce the range of an electric car.
Cheap cameras well placed
The Cornell scientists will present their findings on camera-based imaging for self-driving cars at the 2019 Conference on Computer Vision and Pattern Recognition in June.
The paper, titled “Pseudo-LiDAR from Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving,” describes how the careful placement of cheap cameras on either side of a vehicle, behind its windshield, can produce stereoscopic images. These are converted into a 3D point cloud, which is then rotated to produce a top-down perspective of the vehicle’s surroundings.
The 3D data generated by the cameras proved comparable in precision to data from laser scanners, at a fraction of the cost.
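The pipeline the paper describes can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors’ code: it assumes a standard pinhole camera model, a per-pixel depth map already estimated from the stereo pair (the hard part, done by a neural network in the paper), and illustrative function names and grid dimensions chosen here for clarity.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, in metres) into a 3D point cloud
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx   # lateral offset from the optical axis
    y = (v - cy) * z / fy   # vertical offset
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def birds_eye_view(points, x_range=(-20.0, 20.0), z_range=(0.0, 40.0), res=0.1):
    """Project the point cloud onto the ground plane (x = lateral,
    z = forward), producing the top-down occupancy grid a 3D object
    detector can consume."""
    x, z = points[:, 0], points[:, 2]
    keep = (x >= x_range[0]) & (x < x_range[1]) & \
           (z >= z_range[0]) & (z < z_range[1])
    xi = ((x[keep] - x_range[0]) / res).astype(int)
    zi = ((z[keep] - z_range[0]) / res).astype(int)
    rows = int((z_range[1] - z_range[0]) / res)
    cols = int((x_range[1] - x_range[0]) / res)
    grid = np.zeros((rows, cols))
    grid[zi, xi] = 1.0  # mark each cell that contains at least one point
    return grid
```

The key insight of the paper is exactly this change of representation: the same depth estimates that look inaccurate when treated as a flat image become far more useful to existing Lidar-style detectors once rendered as a bird’s-eye point cloud.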
“The self-driving car industry has been reluctant to move away from LiDAR, even with the high costs, given its excellent range accuracy – which is essential for safety around the car,” said Mark Campbell, the John A. Mellowes ’60 Professor and S.C. Thomas Sze Director of the Sibley School of Mechanical and Aerospace Engineering and a co-author of the paper.
“The dramatic improvement of range detection and accuracy, with the bird’s-eye representation of camera data, has the potential to revolutionize the industry.”
Tesla outlined its roadmap for transitioning its vehicles to fully autonomous driving at its Autonomy Day investor event this week. The exclusive event allowed select investors to test Tesla cars with unreleased self-driving features.
Musk said at the event that he expects the company to complete its self-driving technology package by the end of the year.