Adding Depth and Reason to 3D AV Measurements
Looking beyond the visible to improve the accuracy of object detection sensors
Autonomous vehicle sensors—cameras, radar and lidar—do a great job of detecting and classifying surrounding road objects.
But such perception systems are limited by a lack of depth reasoning, according to researchers at Carnegie Mellon University (CMU) and Argo AI. They contend that current systems don’t properly account for the empty space between the sensor and the object it detects, or for how deep an object extends and what may be hidden behind it.
3D vs. 2.5D
The problem is that the data collected from these sensors doesn’t yield a fully fleshed-out 3D picture of the scene.
(Image: Carnegie Mellon)
Instead, the sensors create so-called 2.5D point cloud representations, which don’t show what an object may be obstructing, the researchers note. This can lead to false readings and to objects being misclassified or missed entirely.
"Perception systems need to know their unknowns," says Peiyun Hu, a Ph.D. student in CMU’s Robotics Institute and one of the authors of the study.
Improved Results
Applying technologies used in creating high-definition digital maps can significantly improve object-detection accuracy, the researchers claim. These include raycasting, which traces each sensor beam to determine what space is visible, empty or occluded, and artificial intelligence-based reasoning about that visibility.
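As a rough illustration of that map-making idea, the sketch below (an assumption-laden toy, not the CMU/Argo implementation) casts a ray from the sensor to each lidar return and labels the voxels of a 3D grid as free, occupied or unknown. A visibility volume like this could then be supplied to a detection network alongside the raw points; the function name, grid dimensions and sampling step are all illustrative choices.

```python
import numpy as np

# Toy visibility reasoning: mark voxels a lidar beam passes through as FREE,
# the voxel containing the return as OCCUPIED, and everything untouched as
# UNKNOWN. The resulting grid makes "empty space" and "hidden space" explicit.

UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def visibility_volume(sensor, points, grid_min, voxel_size, grid_shape):
    vol = np.full(grid_shape, UNKNOWN, dtype=np.uint8)

    def to_index(p):
        idx = np.floor((p - grid_min) / voxel_size).astype(int)
        return idx if np.all((idx >= 0) & (idx < grid_shape)) else None

    for hit in points:
        dist = np.linalg.norm(hit - sensor)
        # Sample the ray at half-voxel steps; crude but simple and correct.
        n_steps = max(int(dist / (0.5 * voxel_size)), 1)
        for t in np.linspace(0.0, 1.0, n_steps, endpoint=False):
            idx = to_index(sensor + t * (hit - sensor))
            if idx is not None and vol[tuple(idx)] == UNKNOWN:
                vol[tuple(idx)] = FREE
        idx = to_index(hit)
        if idx is not None:
            vol[tuple(idx)] = OCCUPIED  # the return itself

    return vol

# Example: a 20 m x 20 m x 4 m grid at 0.5 m resolution around the sensor.
vol = visibility_volume(
    sensor=np.array([0.0, 0.0, 1.5]),
    points=np.array([[8.0, 2.0, 0.5], [6.0, -3.0, 0.8]]),
    grid_min=np.array([-10.0, -10.0, -1.0]),
    voxel_size=0.5,
    grid_shape=(40, 40, 8),
)
print({v: int((vol == v).sum()) for v in (UNKNOWN, FREE, OCCUPIED)})
```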
Compared with current AV sensors and processors, initial tests show the new method can improve detection by:
- 5% for pedestrians
- 7% for trucks
- 11% for cars
- 17% for trailers
- 18% for buses
What current AV sensors may miss. (Source: Carnegie Mellon)
Previous systems may not have incorporated this additional data due to concerns about computation time. But Hu says that isn’t a problem, noting that the new calculations can be done in roughly one-fourth the time of a typical lidar sweep (24 milliseconds vs. 100 milliseconds).
Why it Matters
Promising academic studies don’t always lead to real-world results.
But it’s usually worth taking note when the work comes from Carnegie Mellon. Researchers at the university helped pioneer artificial intelligence as far back as the mid-1950s, and CMU has long been a leader in autonomous vehicles, dating back to the DARPA Grand Challenge and Urban Challenge races of the mid-2000s.
The university has been working with Argo AI for several years. Both are based in Pittsburgh.
The latest study is being presented this week at the virtual Conference on Computer Vision and Pattern Recognition (CVPR).