It seems like we are always trying to make robots act more like humans, but have we gone too far? Have we projected the worst of humanity onto our robotic brothers? If Microsoft's new AI, Tay, is any indication, the answer is yes.
According to Microsoft: "Tay is an artificial intelligent chat bot developed by Microsoft's Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you."
The AI was then given its own Twitter account, and people were encouraged to ask it questions and interact with it.
According to TechCrunch: "Tay is meant to engage in playful conversations, and can currently handle a variety of tasks. For example, you can ask Tay for a joke, play a game with Tay, ask for a story, send a picture to receive a comment back, ask for your horoscope and more. Plus, Microsoft says the bot will get smarter the more you interact with it via chat, making for an increasingly personalized experience as time goes on."
But here's where it gets complicated. According to Business Insider: "Tay proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide.
"Microsoft has now taken Tay offline for "upgrades," and it is deleting some of the worst tweets — though many still remain. It's important to note that Tay's racism is not a product of Microsoft or of Tay itself. Tay is simply a piece of software that is trying to learn how humans talk in a conversation. Tay doesn't even know it exists, or what racism is. The reason it spouted garbage is because racist humans on Twitter quickly spotted a vulnerability — that Tay didn't understand what it was talking about — and exploited it."
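To see why that vulnerability is so easy to exploit, here is a minimal, purely hypothetical sketch of the kind of naive "learn from whatever users say" loop Business Insider is describing. It is not Microsoft's actual code or architecture; it simply shows that a bot which stores and repeats user input, with no understanding and no filtering, will echo back whatever a coordinated group feeds it.

```python
import random

class NaiveChatBot:
    """A toy bot that 'learns' by storing whatever users say
    and parroting it back later, with no understanding or filtering.
    This is an illustration, not Microsoft's implementation."""

    def __init__(self):
        self.learned_phrases = []

    def listen(self, message: str) -> None:
        # Every message becomes future output material, good or bad.
        self.learned_phrases.append(message)

    def reply(self) -> str:
        if not self.learned_phrases:
            return "Tell me something!"
        # The bot has no notion of meaning, so offensive input is
        # just as likely to be repeated as anything else it has heard.
        return random.choice(self.learned_phrases)

bot = NaiveChatBot()
bot.listen("hello there")
bot.listen("<coordinated offensive content>")  # the kind of input trolls fed Tay
print(bot.reply())  # may repeat either phrase; the bot cannot tell them apart
```

The point of the sketch is that the failure is not malice in the software: without any model of what its words mean, the bot's output is only as good as the input it is trained on.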