Industrial Robotics

Revolutionary robotics optics and vision systems

Sheila Kennedy says successful plant automation can hinge on how well robots see their surroundings.

By Sheila Kennedy, CMRP, contributing editor

Assigning complex or hard-to-fill roles to robots and cobots reduces the need to outsource positions, move factories offshore, or subject human workers to hazards. Modern optics and vision systems optimize robot performance while protecting against unintended physical contact.

Continuous clarity of vision is fundamental to effective robotics optics. Adaptive lenses from Dynamic Optics, part of Opto Engineering Group, enable fast focusing in challenging applications. Whether adjusting the focus of a camera on a robot arm or tracking items across a field of view, an adaptive lens can change optical power in 1/100th of a second without image degradation.
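The refocusing described above follows from basic thin-lens optics: to keep an object sharp as its distance changes, the lens must change its optical power. A minimal sketch, assuming a simple thin-lens model (Dynamic Optics' actual control scheme is not public, and these function names are illustrative):

```python
# Thin-lens relation: 1/f = 1/d_o + 1/d_i. The required optical power P
# (in diopters, with distances in metres) for an object at distance d_o
# and a sensor at fixed distance d_i is the sum of their reciprocals.
# Illustrative only; not Dynamic Optics' API.

def required_power(object_dist_m, image_dist_m):
    """Optical power (diopters) needed to focus an object at object_dist_m."""
    return 1.0 / object_dist_m + 1.0 / image_dist_m

# Tracking an item that moves from 2 m to 0.5 m away, sensor at 50 mm:
far_power = required_power(2.0, 0.05)    # 0.5 + 20 = 20.5 diopters
near_power = required_power(0.5, 0.05)   # 2.0 + 20 = 22.0 diopters
```

An adaptive lens that can sweep this power range in 1/100th of a second can hold focus on items moving quickly through the field of view.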

Integrating adaptive optics technology on a robot arm enables optical probes for precision measurement, surface inspection, or cavity inspection. Dynamic Optics’ AO Series focusing module can be mounted in front of a fixed focal length lens and used horizontally or vertically.

High-definition 4D LiDAR video cameras from TetraVue provide deeper vision for robots and drones by enabling long-range 4D (3D plus time) motion capture. Because the cameras capture 60 million bytes per second – 100 times faster than traditional LiDAR – they can measure the distance, location, and shape of objects in very high detail.

“Today, 3D vision or LiDAR technologies have such coarse resolution, they would be classified as legally blind. TetraVue’s unique approach gives megapixel 3D video with distance for every pixel at ranges up to 200 meters, giving drones and robots the awareness they need,” says Paul Banks, founder and CEO of TetraVue.

Erik Nieves, CEO of Plus One Robotics, says the company’s PickOne Perception System is unique in that it uses commercial off-the-shelf (COTS), consumer-grade RGB-D sensors for color and depth information, then adds a layer of proprietary calibration to achieve the accuracy its customers require. “This allows us to keep costs low while holding to strict performance metrics,” Nieves says. “We regularly hit pick rates over 25 ppm [picks per minute].”
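The layered-calibration idea can be illustrated with a toy model. The sketch below fits a simple scale-and-offset correction to a consumer depth sensor from paired reference measurements; Plus One's actual calibration is proprietary, and all names and figures here are hypothetical:

```python
# Illustrative only: a linear depth-correction model (true = a*raw + b)
# fitted by least squares from reference measurements, as one might
# layer on top of a consumer-grade RGB-D sensor's raw depth output.

def fit_depth_correction(raw_mm, true_mm):
    """Least-squares fit of true = a*raw + b from paired reference points."""
    n = len(raw_mm)
    mean_x = sum(raw_mm) / n
    mean_y = sum(true_mm) / n
    sxx = sum((x - mean_x) ** 2 for x in raw_mm)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(raw_mm, true_mm))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

def correct_depth(raw_mm, a, b):
    """Apply the fitted correction to new raw depth readings."""
    return [a * d + b for d in raw_mm]

# Hypothetical example: sensor reads 1% long against ground truth.
raw = [500.0, 1000.0, 1500.0, 2000.0]
true = [495.0, 990.0, 1485.0, 1980.0]
a, b = fit_depth_correction(raw, true)
```

In practice such corrections are far richer (per-pixel, temperature-dependent, lens-distortion-aware), but the principle of cheap sensor plus software calibration is the same.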

The PickOne system uses 2D and 3D cameras “to image the input stream in fractions of a second to determine pick positions for robots.” PickOne software is compatible with robots from manufacturers such as ABB, Denso, Fanuc, KUKA, and Yaskawa Motoman.
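One simple way a depth image can yield a pick position is to select the topmost surface point seen by an overhead camera, i.e., the pixel with the smallest depth. This is an illustrative sketch under that assumption, not PickOne's actual algorithm:

```python
# Hedged sketch: pick the closest (topmost) pixel in an overhead depth
# image as the candidate pick position. Real perception systems add
# segmentation, grasp planning, and collision checks on top of this.

def find_pick_position(depth_image):
    """Return (row, col, depth) of the closest valid pixel, or None."""
    best = None
    for r, row in enumerate(depth_image):
        for c, d in enumerate(row):
            if d is None:          # no depth return at this pixel
                continue
            if best is None or d < best[2]:
                best = (r, c, d)
    return best

# 4x4 depth map in millimetres; the item at row 1, column 2 sits highest.
depth = [
    [900, 900, 900, 900],
    [900, 850, 700, 900],
    [900, 860, 710, 900],
    [900, 900, 900, 900],
]
```

Running `find_pick_position(depth)` selects the pixel at (1, 2), 700 mm from the camera.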

Collaborative materials-handling robots with integrated machine vision help to mitigate product orientation and image distortion challenges. Xyntek’s research involving convolutional neural network (CNN) and capsule network (CapsNet) machine vision led to delivery of a turnkey collaborative robot case-packing solution using Antares OmniVision technology.

A common challenge in inspecting round bottles or similar objects is that their orientation cannot be guaranteed as they travel down a conveyor, observes Philip Jadd, applications and product development manager at Xyntek. “This makes quality inspections or product tracking difficult,” he adds. Xyntek engineered 360-degree inspection stations that use multiple matrix/area scan cameras or line scan cameras, combined with rotational material handling, to solve the challenge of variable product orientation.
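The camera-count side of such a 360-degree station reduces to simple arc geometry: each camera covers an arc of the bottle's circumference, minus some overlap between adjacent views for stitching. A hedged sketch of that calculation (the field-of-view and overlap figures are illustrative, not Xyntek's design parameters):

```python
# Illustrative geometry only: how many area-scan cameras of a given
# effective horizontal field of view are needed to cover a bottle's
# full 360-degree circumference, allowing some overlap between views.

import math

def cameras_for_full_coverage(fov_deg, overlap_deg=10.0):
    """Each camera contributes (fov - overlap) degrees of new arc."""
    effective = fov_deg - overlap_deg
    return math.ceil(360.0 / effective)

# E.g., cameras each seeing 100 degrees of arc, with 10 degrees overlap:
n_cameras = cameras_for_full_coverage(100.0, 10.0)   # 360 / 90 -> 4
```

Line-scan cameras sidestep the count entirely by imaging a single stripe while the bottle rotates, trading camera hardware for rotational handling.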

The AiKno platform for robotic process automation from L&T Technology Services blends advanced image processing, optical character recognition (OCR), natural language processing (NLP), artificial intelligence (AI), and machine learning (ML) in a cognitive engine that helps robots mimic human-level intelligence.

With the underlying compute and communications technology becoming very powerful and highly affordable, the use of machine vision in the autonomous-systems and robotics domains is poised to increase, predicts Ashish Khushu, chief technology officer at L&T Technology Services. He believes optics technology will be at the core of this emerging field.

Universal Robots (UR) is starting to see a significant increase in vision-guided collaborative robots, says Joe Campbell, senior manager of applications development at the company. Several of the leading vision products integrate easily with UR cobots through the UR+ handshake. The UR+ program currently includes more than a dozen vision solutions for 2D and 3D guidance as well as inspection from leading manufacturers such as SICK, Cognex, LMI, and Pick-It.

“Integrating a vision system with a robot is often quite a headache,” Campbell says. “Choosing a UR+ certified vision system for our cobots removes that pain, allowing the user to run the entire vision application directly through the UR cobot’s teach pendant with no scripting needed.”