
AI is now so complex its creators can’t trust why it makes decisions

By Dave Gershgorn, for Quartz

Dec 11, 2017


Artificial intelligence (AI) might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.

But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: the programmers who built it don’t know why an AI makes one decision over another.

The problem already exists today, but researchers say the time to act is now, to make the decisions of machines understandable before the technology becomes even more pervasive. Previous research has shown that algorithms amplify biases in the data from which they learn and make inadvertent connections between ideas.

“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”

“As machine learning becomes more prevalent in society—and the stakes keep getting higher and higher—people are beginning to realize that we can’t treat these systems as infallible and impartial black boxes,” explained Hanna Wallach, a senior researcher at Microsoft.

Read the full story.