
Hand in hand: What collaborative robots mean for worker safety

The rise of collaborative robots demands new considerations to keep workers safe.

By Christine LaFave Grace, managing editor

Do you trust your co-workers with your life? Do you trust them to follow the safety rules they’ve been given, to stay out of your way to avoid collisions, and to stop what they’re doing on a dime if they see you’re in harm’s way?

Would you trust a robot to do the same?

As industrial robots have become increasingly sophisticated, with machine-learning capabilities and more-sensitive sensors to detect nearby hazards, the literal and figurative distance between robots and their human counterparts is diminishing. That shrinking gap brings with it a raft of new safety considerations. And if human-robot coexistence in factories is to be defined less by separation than by collaboration, then it’s critical that plants evaluate not just the technologies but also the strategies they employ to keep workers safe.

A changing safety landscape

Beyond the safety features that controllers provide, robotics safety to this point has been defined in large part by “hard guards” – cages and other physical barriers separating robots from humans – and by virtual fences such as those provided by radio frequency (RF) guarding and industrial light curtains. Safety systems in the latter category depend on the installation of devices (an antenna for RF guards, LED light-beam transmitters and receivers for light curtains) around the machine to trigger a machine stop when a nearby human or object crosses a virtual barrier. (See “Machine-Guarding Basics” from our April 2017 issue.)
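The interlock logic behind a light curtain can be sketched in a few lines. This is an illustrative polling model, not any vendor's API: `beam_states` holds one boolean per LED beam, True when the beam reaches its receiver unbroken, and any broken beam forces a stop.

```python
# Minimal sketch of light-curtain interlock logic. All names are
# illustrative assumptions, not taken from a real safety controller.

def curtain_clear(beam_states):
    """The protected zone is clear only if every beam is unbroken."""
    return all(beam_states)

def safety_output(beam_states):
    """Return the commanded machine state for this scan cycle.

    Real safety relays fail safe: any broken beam (or any fault that
    looks like a broken beam) forces an immediate stop.
    """
    return "RUN" if curtain_clear(beam_states) else "STOP"
```

For example, `safety_output([True, False, True])` yields `"STOP"`, because a person or object interrupting even one beam must halt the machine.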

“When you look at even five years ago, (robots) have a lot of hard guarding, fencing around them, kind of put in the corner if you will,” says Michael Lindley, VP of business development and marketing for system integrator Concept Systems, a certified member of the Control System Integrators Association (CSIA).

But new, collaborative robots are designed to work with human operators, not strictly independent of them, and so new approaches to safety are needed. “The robot can now exist in the middle of the manufacturing floor, can have workers around it and in proximity working at full speed, so then companies can look at their manufacturing flow and put the robot in there,” Lindley says.

Case in point: The robots in FANUC’s CR series of collaborative robots are designed for such applications as heavy lifting and tote and carton handling – physically demanding tasks that are ergonomically challenging for humans. Because the nature of the work that these and other vendors’ “cobots” perform places them in close proximity to humans, traditional fencing systems are impractical if not impossible.

So what ensures workers’ safety? Sensor-centric systems on the robot itself. FANUC’s CR series ’bots – which are green, rather than the company’s signature bright yellow – have what FANUC describes as “highly sensitive contact sensing technology” as well as a soft exterior skin to cushion any incidental contact. They’re the company’s first force-limited robots, and as designed, if a human comes into contact with one of the CR robots, the robot will stop; operation can resume with the push of a button.

The concept of power and force limiting is central to the safety architecture of many cobots. For some small collaborative robots, such as ABB’s dual-arm YuMi, incidental contact with humans isn’t necessarily something that must be avoided or that must prompt a hard stop of the machine. The imperative, then, is to limit the power and force with which the robot comes into contact with a human or other outside object and to control the nature of that contact.

Jeff Fryman, owner of JDF Consulting Enterprises and former director of standards development at the Robotic Industries Association (RIA), notes that there are two types of pressure considered with respect to human-robot contact. “One we call quasi-static, which you could consider to be pinching or trapping, where the body part is restrained while pressure is being applied to it,” he says. “Then there’s transient contact, where the robot strikes you but you’re out in the open and the body can reflexively move (away from it).”

Parameters for contact that doesn’t result in an automatic stop of the machine need to take into account both where on the body contact may occur and the user’s physical characteristics. Under most circumstances, noted current RIA standards development director Carole Franklin at the A3 Automate trade show in Chicago in April, humans experience pain before an actual injury occurs, so “if we can prevent the person even from experiencing pain, (it’s more likely) that we’ll also prevent them from being injured.”
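The logic described above can be sketched as a check of measured contact force against per-body-region limits, in the spirit of the power-and-force-limiting approach. The region names and threshold numbers below are placeholders for illustration, not values taken from any standard; the factor applied to transient contact is likewise an assumption.

```python
# Illustrative force-limit check. Limits and regions are placeholder
# values, NOT figures from ISO/TS 15066.

QUASI_STATIC_LIMITS_N = {  # max sustained (pinching/trapping) force, newtons
    "hand": 140.0,
    "forearm": 160.0,
    "skull": 130.0,
}

def contact_permitted(region, force_n, transient=False):
    """Decide whether a measured contact force is within limits.

    Transient (free) contact tolerates a higher peak force than
    quasi-static contact, because the body part can reflexively move
    away rather than being trapped. The 2x factor is illustrative.
    """
    limit = QUASI_STATIC_LIMITS_N[region]
    if transient:
        limit *= 2.0
    return force_n <= limit
```

A 200 N strike to the hand would fail the quasi-static check but pass the (more permissive) transient one, mirroring the distinction Fryman draws between trapping contact and a glancing blow in the open.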

Stipulating these parameters is a complicated task for system integrators, who play a major role in helping lay the foundation for safe use of robotics systems in a plant. “Trying to determine where the human is going to be relative to where the robot is moving is going to be a challenge that the integrators have to look at,” Fryman says.

The good news both for integrators and the plant managers looking to install these systems: There are standards to follow. ANSI/RIA R15.06-2012, the American National Standard for Industrial Robots and Robot Systems, provides guidelines for robot system installation and methods of safeguarding workers. It is supplemented by the international technical specification ISO/TS 15066, released in February 2016, which focuses specifically on collaborative industrial robot systems.

The standards’ technical specifications for safe application of collaborative robot systems are based in part on data from a study on pain thresholds for different parts of the body. The standards also provide guidance on four types of collaborative robot operations and the safety measures that define them:

  1. Safety-rated monitored stop, an application wherein the cobot stops operation when a human enters a defined workspace and remains stopped while he/she is there (as is possible with virtual fencing for traditional robots)
  2. Hand guiding, in which the robot moves only when it is under direct control of a human (this doesn’t refer to “teaching” a robot to perform certain motions)
  3. Speed and separation monitoring, wherein a human and robot are allowed to operate in the same workspace but the robot will stop if a human gets too close, and
  4. Power and force limiting, where the robot’s speed, torque, and motion are controlled so incidental contact between the robot and the operator doesn’t cause harm.
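The four operation types above imply different stop decisions when a human enters the workspace, which can be sketched as a simple dispatch. This is a deliberately simplified illustration; the mode names follow the list above, but the function, parameters, and the 0.5 m default separation are assumptions, not requirements from the standards.

```python
from enum import Enum

# The four collaborative operation types from ISO/TS 15066, and a
# simplified sketch of the stop decision each one implies.

class CollabMode(Enum):
    SAFETY_RATED_MONITORED_STOP = 1
    HAND_GUIDING = 2
    SPEED_AND_SEPARATION_MONITORING = 3
    POWER_AND_FORCE_LIMITING = 4

def must_stop(mode, human_in_workspace, separation_m=None,
              min_separation_m=0.5, hand_guided=False):
    """Return True if the robot must stop motion this cycle."""
    if mode is CollabMode.SAFETY_RATED_MONITORED_STOP:
        # Stop whenever a human is in the defined workspace.
        return human_in_workspace
    if mode is CollabMode.HAND_GUIDING:
        # Move only while under direct human control.
        return not hand_guided
    if mode is CollabMode.SPEED_AND_SEPARATION_MONITORING:
        # Shared workspace, but stop if the human gets too close.
        return (human_in_workspace and separation_m is not None
                and separation_m < min_separation_m)
    # Power and force limiting: motion continues; instead, speed,
    # torque, and contact forces are capped so contact cannot injure.
    return False
```

Note that only the first three modes ever command a stop; in power-and-force-limiting mode the safety burden shifts from stopping motion to bounding its energy.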

“It’s really important to be aware that safety is freedom from injury,” Franklin said at the Automate show. “It’s not appropriate to allow (workers) to receive a bruise per day. That is an injury – a mild injury, but freedom from injury is our goal.”

Assess and address

Whatever type of robotic system is implemented, whether a robot is collaborative and “fenceless” or separated out by a traditional cage, keeping workers safe starts with conducting a comprehensive risk assessment, experts say.

“A risk assessment will tell you what safeguards you need and where” based on the technologies you plan to use and the applications you intend to employ, Franklin says. Further, a proper risk assessment, says Henrik Jerregard, global product manager of robot controllers for ABB, will “look at the system as a whole and not only consider the different parts.”

For the assessment to be most effective, it should be conducted as early as possible in the planning process for adding new automated elements, say Concept Systems’ Lindley and Miles Purvis, CEO and owner of safety review company ProSafe. “If you take the approach of, ‘Let’s put together a work flow and then we’ll do an assessment,’ it’s like an afterthought,” Lindley says.

Instead, urges Purvis, a risk assessment should be conducted before design work is completed. “A lot of times the risk assessment will determine what the safety expectation is so that the designers can go away with that and build toward the expectation,” he says. “What really establishes the design concept ahead of time is doing an assessment.” And whether designing a traditional or a collaborative robot workspace, the design has to be “not just what (you) thought was a good idea, it has to be what the standard actually asks for.”

This kind of up-front, in-depth analysis is a lot more efficient than the alternative, which is going through a hazard assessment after an injury or a near-miss. For some clients ProSafe has worked with after an incident, “They’re surprised that we have to tell them to change something,” Purvis says. “They’ve been doing this for years and they don’t actually realize why they’ve been doing something a certain way.”

It’s not uncommon to encounter companies that have skipped doing the calculations to determine exactly where their light curtains, scanners, or safety mats should be, Purvis adds. And when a company fails to assess safety hazards based on where in a given space the robot is operating and where a human stands relative to it, workers can be put at risk unnecessarily.

When things go wrong, regardless of the operator’s skill, the robot’s safety features, and the design and setup of the robotic workspace, the consequences can be tragic. In July 2015, a 57-year-old Michigan woman who specialized in fixing robots was killed when a robot arm swung into the area she was working in at a Ventra Ionia plant and crushed her head between a hitch assembly and a fixture, according to a March 14 story in the Detroit Free Press. A lawsuit that Wanda Holbrook’s family filed in March against five robotics and automation companies (the plant is not named) states that the robot arm that killed her should not have been able to enter the section she was working in at the time.

For help in conducting a robotics safety risk assessment, Franklin recommends seeking assistance from a certified robot integrator. In an interview at the Automate show, Franklin noted that individuals certified through RIA’s certified robot integrator program “are folks who have demonstrated by test and audit that they have a solid understanding of the safety standard and what it requires.”

Regardless of who conducts the assessment, “you need to start with, ‘How is the worker going to engage with this piece of machinery?’ ” Lindley says. “Loaded, unloaded, whatever they’re doing, if you start with that worker interface, then you would also immediately have to start with safety. I think that puts those two on a parallel path and gets safety to the forefront of things.”

One common misconception, according to Franklin, is that because safety technologies such as sensors and vision systems are built into cobots, there’s less of a need to consider other safety measures or be vigilant about safe-operation standards. “A lot of people seem to have this idea that if you have a quote-unquote collaborative robot, then you are inherently safe; you don’t need fences or other safeguards,” she says. “The robot itself doesn’t operate in a vacuum. It may be the case that it’s necessary to continue to have fencing around some of that robot, (and) leave a hole where the actual collaboration happens.”

Fryman echoes Franklin’s cautions, commenting that the arrival of new technologies on the plant floor doesn’t mean that traditional worker safeguards no longer have a place.

“We have a knowledge base of what’s safe and how to protect,” he says. “I simply don’t understand the rush to get rid of fences in your robot system, because that’s the least-expensive part of a system.” He continues: “The problem people today have is the standard says you can do certain things; it doesn’t say you can do them without guards.”

The most important thing

The first of the Three Laws of Robotics conceived by science fiction writer (and “I, Robot” author) Isaac Asimov states that a robot must not injure a human being or, through inaction, allow a human to be harmed. Robots’ baked-in safety technologies mitigate injury risk only to a point. More important, Franklin, Fryman, and Lindley emphasize, is this essential principle of robotics safety: It’s the application, stupid.

“The application is the key part,” Franklin says. A robot will do what it is programmed to do – and that should be what it was designed to do, in the way it was designed to operate. “Even if you have a soft-edge collaborative robot arm, it’s about the application,” says Franklin. “If there’s a way for a human to experience discomfort or pain or be injured, that’s not an appropriate application.”

Some 15 years ago, when RIA coined the term “collaborative robot,” says Fryman, “Our original intention was to give some parameters to the manufacturers to take the big, bad articulated robots and turn them into wimps that couldn’t hurt anybody.” That idea proved misguided. “A wimpy robot can do no work,” he says. “What we discovered was that it’s not a collaborative robot so much as it’s a collaborative robot application. That’s a super-important distinction.”

Even when a robotics system is designed, built, and integrated according to applicable safety standards, trouble arises when individuals try to tweak an application and stretch the limits of safe operation in an attempt to do something faster or skip part of a standard operating process. “They’re trying to do something the machine wasn’t designed to do,” says ProSafe’s Purvis. “Let’s say they want to change to the next cycle for the machine but the machine needs to go through this whole process and it’s going to take too long, so then they try and fake out the robot by making it pick up a different end effector, and the only way they can do that is if they get around the safety (mechanisms).”

Both a craving for speed and a desire to minimize costs can drive operators, supervisors, and higher-level decision-makers to seek risky shortcuts when it comes to robotics operation, Purvis says. “We get talked to by operators, maintenance, engineering, production, and all of them look for ways to do it faster,” he says. After all, speed is a big part of the impetus for implementing robots and other automation technologies in the first place.

But according to Thomas Knauer, VP of marketing at machine-guarding tools vendor Omron STI, it’s a mistake to think that safety and productivity are diametrically opposed, especially as robotic safety technologies such as vision systems and sensors become more advanced, robots themselves become more nimble, and safety standards continue to evolve to account for these new capabilities. Speaking at the Automate show, Knauer noted that easier setup and programming (and reprogramming) of robotic work cells is allowing for higher productivity with high safety, along with the flexibility manufacturers increasingly seek as they move from high-volume, low-mix to low-volume, high-mix production.

As Lindley puts it, “The more robust that safety system is, and perhaps more flexible, more dynamic, the more collaboration a worker can have with that machinery and that could create the most efficient work environment.”

As for those who would protest the costs of complying with safety standards – those who say safety is expensive? “They have a point, but I would say lack of safety is more expensive in the long run,” Franklin says. “It’s your responsibility to send people home at the end of the day in the same condition as they were when they arrived.”

To learn more, read "Stäubli’s vision of the collaborative future" and "Lowe's tests robotic exosuit for retail employees."