This issue of Plant Services features two articles on robotics safety, including our cover story, which addresses the emerging “cobotics” trend. The word is a new one, denoting an arrangement in which industrial robots perform work in very close collaboration with people.
Frankly, “cobotics” is a word that I’m still getting used to. Yet the word “robot” itself was new once upon a time, debuting less than a century ago in the 1921 play “R.U.R.” by Czech writer Karel Čapek. The play is set in a factory that makes artificial people, and over time these “roboti” are forced to take on the bulk of industrial labor around the globe. This results in a revolution in which the robots kill all human beings except for one – an engineer who “works with his hands like the robots.”
Twenty years later, notions of human displacement and disruption by robots were familiar enough that Isaac Asimov could define his famous Three Laws of Robotics, which outline a relationship intended to maintain human safety and preserve human authority over robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Having defined these laws, Asimov then used dozens of novels and short stories to explore the logical loopholes that would emerge were these laws applied in the real world. Ever since, robotic characters from C-3PO, K9, and WALL·E to Gort, RoboCop, and the Terminator robots have been used to probe the limits of our comfort with robots. Can we trust our safety around them, and how close is too close?
In our real industrial world, organizations including RIA and OSHA regularly issue standards and guidance documents designed to keep workers safe in a world where manufacturing automation is the norm, and where people and robots work together in ever-closer quarters.
As Christine LaFave Grace notes in this month’s cover story, “collaborative robots are designed to work with human operators, not strictly independent of them, and so new approaches to safety are needed.” In other cases, as Eric Esson notes in the Your Space column this month, plant teams should defer to more general safety guidelines in the absence of specific standards.
Later in his career, Asimov revisited his laws of robotics to add one more, as if closing one final loophole after decades of writing about them. This new law would supersede the other three:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
It makes me wonder how a “Three Laws of Humanity” would read if they were written by IBM’s Watson, especially if Watson concludes that HAL really was one step ahead.