AI is now so complex its creators can’t explain why it makes decisions
Dec 11, 2017
Artificial intelligence (AI) might tag your friends in photos on Facebook or choose what you see on Instagram, but materials scientists and NASA researchers are also beginning to use the technology for scientific discovery and space exploration.
But there’s a core problem with this technology, whether it’s being used in social media or for the Mars rover: The programmers who built it don’t know why the AI makes one decision over another.
While the problem exists today, researchers say now is the time to make machines’ decisions understandable, before the technology becomes even more pervasive. Previous research has shown that algorithms amplify biases in the data from which they learn and make inadvertent connections between ideas.
“We don’t want to accept arbitrary decisions by entities, people or AIs, that we don’t understand,” said Uber AI researcher Jason Yosinski, co-organizer of the Interpretable AI workshop. “In order for machine learning models to be accepted by society, we’re going to need to know why they’re making the decisions they’re making.”
“As machine learning becomes more prevalent in society—and the stakes keep getting higher and higher—people are beginning to realize that we can’t treat these systems as infallible and impartial black boxes,” explained Hanna Wallach, a senior researcher at Microsoft.