Hand in hand: What collaborative robots mean for worker safety

The rise of collaborative robots demands new considerations to keep workers safe.

By Christine LaFave Grace, managing editor


Do you trust your co-workers with your life? Do you trust them to follow the safety rules they’ve been given, to stay out of your way to avoid collisions, and to stop what they’re doing on a dime if they see you’re in harm’s way?

Would you trust a robot to do the same?

As industrial robots have become increasingly sophisticated, with machine-learning capabilities and more-sensitive sensors to detect nearby hazards, the literal and figurative distance between robots and their human counterparts is diminishing. That shrinking gap brings with it a raft of new safety considerations. And if human-robot coexistence in factories is to be defined less by separation than by collaboration, then it’s critical that plants evaluate not just the technologies but also the strategies they employ to keep workers safe.

A changing safety landscape

Beyond the safety features that controllers provide, robotics safety to this point has been defined in large part by “hard guards” – cages and other physical barriers separating robots from humans – and by virtual fences such as those created by radio frequency (RF) guarding and industrial light curtains. Safety systems in the latter category depend on the installation of devices (an antenna for RF guards, LED light-beam transmitters and receivers for light curtains) around the machine to trigger a machine stop when a nearby human or object crosses a virtual barrier. (See “Machine-Guarding Basics” from our April 2017 issue; http://plnt.sv/1704-AZ.)
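The interlock logic behind these virtual-fence systems follows a simple pattern: monitor the guarding devices continuously and command a protective stop the instant the barrier is broken. The Python sketch below illustrates that pattern only; the class and function names are hypothetical stand-ins, and real installations implement this logic on certified safety controllers rather than in application code.

import time

class Beam:
    # Hypothetical stand-in for one LED transmitter/receiver pair in a light curtain.
    def __init__(self):
        self.detected = True

    def beam_detected(self):
        return self.detected

class Machine:
    # Hypothetical machine interface; a real system would drive a safety-rated output.
    def protective_stop(self):
        print("Protective stop commanded")

def curtain_clear(beams):
    # The virtual barrier is intact only if every beam reaches its receiver.
    return all(b.beam_detected() for b in beams)

def monitor(beams, machine, poll_interval_s=0.001):
    # Any interrupted beam means a person or object has crossed the virtual fence.
    while True:
        if not curtain_clear(beams):
            machine.protective_stop()
            break
        time.sleep(poll_interval_s)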

“When you look at even five years ago, (robots) have a lot of hard guarding, fencing around them, kind of put in the corner if you will,” says Michael Lindley, VP of business development and marketing for system integrator Concept Systems, a certified member of the Control System Integrators Association (CSIA).

But new, collaborative robots are designed to work with human operators, not strictly independent of them, and so new approaches to safety are needed. “The robot can now exist in the middle of the manufacturing floor, can have workers around it and in proximity working at full speed, so then companies can look at their manufacturing flow and put the robot in there,” Lindley says.

Case in point: The robots in FANUC’s CR series of collaborative robots are designed for such applications as heavy lifting and tote and carton handling – physically demanding tasks that are ergonomically challenging for humans. Because the nature of the work that these and other vendors’ “cobots” perform places them in close proximity to humans, traditional fencing systems are impractical if not impossible.

So what ensures workers’ safety? Sensor-centric systems on the robot itself. FANUC’s CR series ’bots – which are green, rather than the company’s signature bright yellow – have what FANUC describes as “highly sensitive contact sensing technology” as well as a soft exterior skin to cushion any incidental contact. They’re the company’s first force-limited robots; by design, if a human comes into contact with one of the CR robots, the robot stops, and operation can resume with the push of a button.
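In concept, that stop-and-resume behavior is a small state machine: the robot runs, halts when contact force exceeds a limit, and waits for a deliberate operator action before resuming. The sketch below illustrates the cycle only; the threshold value and method names are hypothetical, not FANUC’s actual control logic.

CONTACT_FORCE_LIMIT_N = 50.0  # hypothetical trip level, for illustration only

class CollaborativeRobot:
    def __init__(self):
        self.state = "RUNNING"

    def on_force_reading(self, force_n):
        # Contact sensing: force above the limit halts motion immediately.
        if self.state == "RUNNING" and force_n > CONTACT_FORCE_LIMIT_N:
            self.state = "STOPPED"

    def on_resume_button(self):
        # Operation resumes only after the operator pushes the restart button.
        if self.state == "STOPPED":
            self.state = "RUNNING"

robot = CollaborativeRobot()
robot.on_force_reading(12.0)   # light incidental contact: keeps running
robot.on_force_reading(75.0)   # contact above the limit: robot stops
robot.on_resume_button()       # push-button restart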

The concept of power and force limiting is central to the safety architecture of many cobots. For some small collaborative robots, such as ABB’s dual-arm YuMi, incidental contact with humans isn’t necessarily something that must be avoided or that must prompt a hard stop of the machine. The imperative, then, is to limit the power and force with which the robot comes into contact with a human or other outside object and to control the nature of that contact.

Jeff Fryman, owner of JDF Consulting Enterprises and former director of standards development at the Robotic Industries Association (RIA), notes that there are two types of pressure considered with respect to human-robot contact. “One we call quasi-static, which you could consider to be pinching or trapping, where the body part is restrained while pressure is being applied to it,” he says. “Then there’s transient contact, where the robot strikes you but you’re out in the open and the body can reflexively move (away from it).”

Parameters for contact that doesn’t result in an automatic stop of the machine need to take into account both where on the body contact may occur and the user’s physical characteristics. Under most circumstances, noted current RIA standards development director Carole Franklin at the A3 Automate trade show in Chicago in April, humans experience pain before an actual injury occurs, so “if we can prevent the person even from experiencing pain, (it’s more likely) that we’ll also prevent them from being injured.”
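Collaborative-robot safety guidance such as ISO/TS 15066 formalizes this idea by tabulating pressure and force limits per body region, with transient contact generally permitted higher values than quasi-static contact. The sketch below shows how an integrator might structure such a check; the limit values and the multiplier are placeholders for illustration, not the figures published in the standard.

QUASI_STATIC_PRESSURE_LIMITS = {   # N/cm^2; placeholder values, not the standard's
    "hand": 100.0,
    "forearm": 120.0,
}

TRANSIENT_MULTIPLIER = 2.0  # transient limits are higher; exact factor per the standard

def contact_within_limit(body_region, pressure_n_cm2, contact_type):
    # Compare a measured contact pressure against the limit for that body
    # region, relaxed for transient (free-to-recoil) contact.
    limit = QUASI_STATIC_PRESSURE_LIMITS[body_region]
    if contact_type == "transient":
        limit *= TRANSIENT_MULTIPLIER
    return pressure_n_cm2 <= limit

print(contact_within_limit("forearm", 150.0, "transient"))  # True with these placeholder values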
