Yesterday, alarming headlines that seemed ripped straight out of a sci-fi movie began circulating on social media, declaring that Facebook had shut down its Artificial Intelligence system after its “Chatbots” began to converse in their own language.
Now, I don’t know about you, but when I find myself in times of creepy technological advancement, Jeff Goldblum comes to me, speaking words of wisdom: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Yes, it’s slightly terrifying to look at this story as a case of science finally taking it too far, but it’s a little fun too, and honestly, in 2017, does anything really surprise us anymore? Talking robots secretly plotting to take over the world, speaking in code words indecipherable to mere humans—sure, why not?
This particular story, however, was, as so many are these days, blown way out of proportion.
When news broke that Facebook shut the Chatbots down, media outlets jumped on the Orwellian bandwagon, claiming that the site “panicked” when the AI bots began to speak in their own language, defying the codes they had been built with. Machines developing minds of their own, knowing too much, becoming self-aware. Surely a robot-led rebellion couldn’t be far behind!
Not quite.
What Are Facebook AI Chatbots?
To put it simply, Chatbots are Messenger-dwelling virtual “assistants,” able to converse with you in a manner that feels more human than robotic, responding to your queries, helping you buy things, playing games, displaying flight details, etc. Useful stuff!
Facebook, always looking to take things to the next level, reported in June that it would attempt to teach Chatbots how to negotiate with humans, ultimately providing the bots with enough knowledge to help users land better deals, and to sound even more human. That last part sounds vaguely ominous, sure, but the intentions were pretty straightforward.
What Really Happened To The Facebook Chatbots?
Researchers started with small lessons, simple two-player games in which the bots were supposed to divide a group of objects between themselves. The team trained the bots on human dialogue from thousands of games, and allowed them to use trial and error to sharpen their skills.
When two bots played the game, they began speaking in incoherent sentences. One researcher put it this way: “We found that updating the parameters of both agents led to divergence from the human language.”
What resulted, ultimately, was a somewhat failed experiment, with the bots speaking in gibberish that reportedly sounded like this:
Bob: i can i i everything else
Alice: balls have zero to me to me to me to me to me to me to me to me to
Not exactly terrifying, but not entirely successful, either. The research team hoped that the bots would learn how to communicate effectively enough to be able to play with humans. Seems like they’re not quite there yet.
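The researcher’s finding — that “updating the parameters of both agents led to divergence from the human language” — has a simple intuition: if the bots are rewarded only for winning the negotiation, nothing anchors their messages to English, so whatever token happens to correlate with reward gets repeated (“to me to me to me”). Here’s a toy, entirely made-up sketch of that dynamic. The vocabulary, reward values, and the idea of adding a penalty for straying from a “human-like” word distribution are all my own illustrative assumptions, not Facebook’s actual code or method:

```python
import math

# Invented vocabulary and rewards for illustration only.
VOCAB = ["i", "balls", "have", "zero", "to", "me", "everything", "else"]
# Assume (arbitrarily) that saying "me" happens to correlate with winning.
REWARD = {w: (1.0 if w == "me" else 0.1) for w in VOCAB}
# A "human-like" prior: people use all the words, not just one.
HUMAN = {w: 1.0 / len(VOCAB) for w in VOCAB}

def softmax(logits):
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def train(lam, steps=2000, lr=0.5):
    """Gradient ascent on E[reward] - lam * KL(policy || human prior).

    With lam == 0 the policy is free to drift: it collapses onto the
    single highest-reward token.  With lam > 0 the KL penalty keeps the
    word distribution close to human-sounding language.
    """
    logits = {w: 0.0 for w in VOCAB}
    for _ in range(steps):
        p = softmax(logits)
        baseline = sum(p[w] * REWARD[w] for w in VOCAB)
        mean_kl = sum(p[v] * math.log(p[v] / HUMAN[v]) for v in VOCAB)
        for w in VOCAB:
            adv = REWARD[w] - baseline                    # task reward term
            kl_grad = math.log(p[w] / HUMAN[w]) - mean_kl  # language anchor
            logits[w] += lr * p[w] * (adv - lam * kl_grad)
    return softmax(logits)

drifted = train(lam=0.0)   # no anchor: probability mass piles onto "me"
anchored = train(lam=1.0)  # anchored: distribution stays spread out
```

Running the unanchored version, the policy ends up putting nearly all its probability on the one rewarding token — a caricature of “to me to me to me” — while the anchored version keeps using the whole vocabulary. The broader point matches the researchers’ fix-in-spirit: some objective has to keep the agents tethered to human language, or self-play optimizes it away.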
Facebook Researcher Addresses the “Panic”
The Chatbot study’s lead author Michael Lewis told the fact-checking site Snopes that “there was no panic” and “the project hasn’t been shut down.” In fact, Lewis says that, while maybe not a total success, the study still yielded encouraging results.
So, no, we aren’t staring down the barrel of a real-life Terminator movie, and probably won’t be anytime soon, if ever. Still, in today’s age where technology is king, it’s something people are going to continue to talk about anyway, whether Facebook likes it or not.