Robotic Bin Picking Made Simple(r)
The result of “thinking inside the bin” is a design featuring a six-axis Yaskawa robot, an ifm efector 200 photoelectric distance-measuring sensor and, mounted to an IAI servo-driven slide, a Cognex In-Sight 8000 camera.
#robotics
When Systematix (systematix-inc.com), a systems integrator, was presented with the task of developing an automated system to pick car seat lumbar actuator assemblies from a bin and place them into a wire nest for assembly, its first idea was to use a robot and a 3D sensor.
But then its engineers reconsidered. They realized that each actuator in the bin didn’t need to be mapped in all three dimensions; two would suffice. They could mount a 2D camera on a vertical slide and measure each component in X and Y only.
Because sheets of cardboard separate the layers of randomly oriented parts, and each divider is removed once the parts on top of it have been picked, the Z axis (i.e., depth) needs to be measured just once per layer.
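Here is a minimal sketch of that idea, with hypothetical calibration constants and function names (nothing here is Systematix’s actual code): the camera reports each part’s X-Y position and rotation in pixels, the distance sensor supplies one Z reading per layer, and a simple transform turns that into a robot pick pose.

```python
# Sketch: combine a 2D camera match with a single per-layer depth reading.
# All constants below are assumed values standing in for a real hand-eye calibration.

from dataclasses import dataclass

MM_PER_PIXEL = 0.25                  # image scale at the current camera height
BIN_ORIGIN_X_MM = 500.0              # robot-frame X of the image origin
BIN_ORIGIN_Y_MM = -200.0             # robot-frame Y of the image origin
SENSOR_TO_GRIPPER_OFFSET_MM = 12.0   # offset between measured layer height and pick height

@dataclass
class PickPose:
    x_mm: float
    y_mm: float
    z_mm: float
    rz_deg: float  # rotation about the vertical axis

def pick_pose_from_2d(found_x_px: float, found_y_px: float,
                      found_angle_deg: float, layer_z_mm: float) -> PickPose:
    """Combine a 2D match (pixels, degrees) with the per-layer depth reading."""
    return PickPose(
        x_mm=BIN_ORIGIN_X_MM + found_x_px * MM_PER_PIXEL,
        y_mm=BIN_ORIGIN_Y_MM + found_y_px * MM_PER_PIXEL,
        z_mm=layer_z_mm + SENSOR_TO_GRIPPER_OFFSET_MM,
        rz_deg=found_angle_deg,
    )

# Example: a part found at pixel (812, 430), rotated 37 degrees, on a layer measured at 310 mm.
print(pick_pose_from_2d(812, 430, 37.0, 310.0))
```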
The result of this “thinking inside the bin” is a design featuring a six-axis Yaskawa robot (motoman.com), an ifm efector 200 photoelectric distance-measuring sensor (ifm.com), and, mounted to an IAI servo-driven slide (intelligentactuator.com), a Cognex (cognex.com) In-Sight 8000 camera.
The camera uses RedLine, the latest iteration of PatMax, the geometric pattern-matching technology that Cognex first patented in 1996. Until then, pattern-matching technology relied on a pixel-grid analysis process called normalized correlation, which looks for statistical similarity between a gray-level model (or reference image) of an object and portions of the scene image to determine the object’s X-Y position. PatMax instead learns an object’s geometry from a reference image using a set of boundary curves that are not tied to a pixel grid, then looks for similar shapes in the image without relying on specific gray levels. This approach, now widely used by numerous machine vision companies, greatly improves how accurately an object can be recognized despite differences in angle, size and shading.
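As a rough illustration of the older gray-level approach described above (not of Cognex’s proprietary PatMax/RedLine), here is a minimal normalized-correlation template match using OpenCV; the file names in the usage note are hypothetical.

```python
# Normalized-correlation template matching: slide a gray-level reference image over
# the scene and take the location with the highest statistical similarity. Unlike
# geometric pattern matching, this degrades when the part's scale, rotation, or
# shading differs from the reference image.

import cv2
import numpy as np

def find_part_xy(scene_gray: np.ndarray, template_gray: np.ndarray):
    """Return (x, y) of the best normalized-correlation match and its score."""
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_score, _, max_loc = cv2.minMaxLoc(result)
    # Report the template's center rather than its top-left corner.
    h, w = template_gray.shape
    return (max_loc[0] + w // 2, max_loc[1] + h // 2), max_score

# Usage (hypothetical file names):
# scene = cv2.imread("bin_layer.png", cv2.IMREAD_GRAYSCALE)
# template = cv2.imread("actuator_reference.png", cv2.IMREAD_GRAYSCALE)
# (x, y), score = find_part_xy(scene, template)
```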
The system not only gets the job done in the required time, but presumably the use of long-proven technology was also more cost-effective than a less straightforward approach would have been.