
Adding Depth and Reason to 3D AV Measurements

Looking beyond the visible to improve the accuracy of object detection sensors


Autonomous vehicle sensors—cameras, radar and lidar—do a great job of detecting and classifying surrounding road objects.

But such perception systems are limited by a lack of depth, according to researchers at Carnegie Mellon University (CMU) and Argo AI. They contend that current systems don’t properly account for the empty space between the sensor and the object it detects, or for how deep an object is and what lies behind it.

3D vs. 2.5D

The problem is that data collected from the sensors doesn’t result in fully fleshed out 3D images.

(Image: Carnegie Mellon)

Instead, the sensors create so-called 2.5D point cloud representations, which don’t show what an object may be occluding, the researchers note. This could produce false readings and cause objects to be misclassified or missed entirely.
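To make the “2.5D” point concrete, here is a minimal Python sketch (the beam range and labels are illustrative, not taken from the study) of what a single lidar measurement does and does not tell the vehicle: each beam reports only the distance to the first surface it hits, so the sensor can vouch for the space in front of that return but says nothing about what lies behind it.

```python
# Illustrative only: a single lidar beam whose first return is 12.4 m away.
first_hit_m = 12.4

# Walk along the beam and note what that one range measurement actually tells us.
for depth_m in range(0, 26, 2):
    if depth_m < first_hit_m:
        status = "observed FREE (the beam passed through this space)"
    elif depth_m - first_hit_m < 2:
        status = "observed OCCUPIED (this is the return itself)"
    else:
        status = "UNKNOWN (occluded -- could be empty, could be another object)"
    print(f"{depth_m:2d} m: {status}")
```

A point cloud keeps only the “occupied” hits; the free and unknown regions are discarded, which is the gap the researchers aim to close.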

"Perception systems need to know their unknowns," says Peiyun Hu, a Ph.D. student in CMU’s Robotics Institute and one of the authors of the study.

Improved Results

Applying techniques used to create high-definition digital maps, including raycasting and artificial intelligence-based reasoning about visibility, can significantly improve object detection accuracy, the researchers claim.
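As a rough illustration of the raycasting idea, the snippet below casts a ray from the sensor to each lidar return and classifies grid cells as free, occupied or unknown. This is a simplified two-dimensional sketch under assumed grid parameters, not the researchers’ implementation, which operates on full lidar sweeps and feeds visibility into a deep-learning detector.

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def visibility_grid(returns, origin, cell=0.5, extent=20.0, samples=200):
    """Classify a top-down grid as free, occupied or unknown by raycasting
    from the sensor origin to every lidar return (2D for simplicity)."""
    n = int(2 * extent / cell)
    grid = np.full((n, n), UNKNOWN, dtype=np.uint8)

    def to_cell(p):
        # Map a metric (x, y) position to grid indices centered on the sensor.
        idx = np.floor((p - (origin - extent)) / cell).astype(int)
        return tuple(np.clip(idx, 0, n - 1))

    for hit in returns:
        # Cells the beam passes through before the return are observed free...
        for t in np.linspace(0.0, 1.0, samples, endpoint=False):
            grid[to_cell(origin + t * (hit - origin))] = FREE
        # ...the return itself marks an occupied cell...
        grid[to_cell(hit)] = OCCUPIED
        # ...and cells behind the return are never touched, so they stay unknown.
    return grid

# Toy scene: sensor at the origin, two returns from a surface 10 m ahead.
origin = np.array([0.0, 0.0])
returns = np.array([[10.0, -0.5], [10.0, 0.5]])
grid = visibility_grid(returns, origin)
print({"free": int((grid == FREE).sum()),
       "occupied": int((grid == OCCUPIED).sum()),
       "unknown": int((grid == UNKNOWN).sum())})
```

In this toy scene, everything behind the 10-meter surface stays marked unknown, which is exactly the kind of “known unknown” the researchers want a detector to take into account alongside the raw points.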

Initial tests show that, compared with current AV sensors and processors, the new method can improve detection by:

  • 5% for pedestrians
  • 7% for trucks
  • 11% for cars
  • 17% for trailers
  • 18% for buses

What current AV sensors may miss. (Source: Carnegie Mellon)

Previous systems may not have incorporated the additional data due to concerns about computation time. But Hu says this isn’t a problem, noting that the new calculations can be done in about one-fourth the time of a typical lidar sweep (24 milliseconds vs. 100 milliseconds).

Why It Matters

Promising academic studies don’t always lead to real-world results.

But it’s usually worth taking note when the work comes from Carnegie Mellon. Researchers at the university helped pioneer AI in the early 1960s, and CMU has long been a leader in autonomous vehicles, dating back to the DARPA Grand Challenge and Urban Challenge races in the mid-2000s.

The university has been working with Argo AI for several years. Both are based in Pittsburgh. 

The latest study is being presented this week at the virtual Computer Vision and Pattern Recognition (CVPR) conference.

