Facebook AI Robots Shut Down – What Actually Happened?

A few days ago, we all heard the news that Facebook had shut down an AI program whose bots had started talking to each other in their own language. This happened right after Elon Musk tweeted ‘Zuckerberg’s knowledge on AI is limited’.

It is true that Facebook pulled the plug on its language-inventing chatbots, but not exactly in the way the media reported.

  • The Mirror published “Robot intelligence is dangerous: experts warn”
  • The Sun posted a few pictures of scary robots that look like something out of a science fiction movie
  • Similar stories were reported by The Blaze, The Telegraph and other popular online publications.

No, Facebook did not panic and shut down the AI program that was getting dangerously smart.

Most of the reports say that the AI chatbots invented a new language in order to dodge their human creators. A better explanation is that the neural networks were simply modifying human language (English) to make their interactions more efficient and successful. In their attempts to learn from each other, the chatbots began replying back and forth in a derived shorthand, which looks creepy – that’s all.

Many researchers called certain media reports “clickbait and irresponsible”.

The Actual Bot Conversation

The chatbots repeating their own names appears to be part of the rules of their speech, or part of their negotiations. To a human, it looks like bizarre language.
“Chatbots will skip understandable language and create codewords for themselves,” said Dhruv Batra, a visiting researcher at Facebook AI Research. “It isn’t much different from the way humans create shorthands – like if I say ‘the’ 5 times, you interpret that to mean I want 5 copies of this thing.”
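
To make Batra’s shorthand example concrete, here is a toy sketch (hypothetical, not Facebook’s actual code) that decodes a repeated token into a count, the way ‘the the the the the’ could mean “I want 5 copies”:

```python
# Toy illustration of the "repeat a token to signal a count" shorthand
# described by Batra. Hypothetical sketch, not Facebook's research code.

def decode_shorthand(utterance: str) -> dict:
    """Count how many times each token appears, e.g. 'ball ball ball' -> {'ball': 3}."""
    counts = {}
    for token in utterance.split():
        counts[token] = counts.get(token, 0) + 1
    return counts

print(decode_shorthand("the the the the the"))  # {'the': 5} -> "I want 5 copies"
print(decode_shorthand("ball ball hat"))        # {'ball': 2, 'hat': 1}
```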

This Has Happened Before

In November 2016, Google revealed that its translation program, the “multilingual neural machine translation system”, had behaved similarly during development. ‘Instead of simply memorizing phrase-to-phrase translations, the network must be encoding something about the semantics of the sentence,’ Google said.

In March 2017, Wired reported that “bots are learning to chat in their own language”. Igor Mordatch, a researcher at OpenAI, is working on a new path toward machines that can process data quickly and solve problems more efficiently.
Mordatch explained that the bots navigate via extreme trial and error, keeping track of what works and what doesn’t. If a particular action helps them achieve something meaningful, they keep doing it. In this way, the bots learn to create their own language, telling each other where to go.
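
The learning loop Mordatch describes is essentially trial-and-error (reinforcement) learning. A minimal sketch of the idea (hypothetical utterances and rewards, not OpenAI’s code) might look like this:

```python
import random

# Minimal trial-and-error sketch: an agent tries "utterances", keeps a
# running score for each one, and increasingly favors whatever earned
# reward in the past. Hypothetical and illustrative only.

utterances = ["go-left", "go-right", "stay"]
scores = {u: 0.0 for u in utterances}

def reward(utterance: str) -> float:
    # Stand-in for the environment: only "go-right" happens to be useful here.
    return 1.0 if utterance == "go-right" else 0.0

for step in range(1000):
    if random.random() < 0.1:                 # explore occasionally
        choice = random.choice(utterances)
    else:                                     # otherwise exploit the best so far
        choice = max(scores, key=scores.get)
    # Nudge the running score toward the observed reward.
    scores[choice] += 0.1 * (reward(choice) - scores[choice])

print(scores)  # "go-right" ends up with by far the highest score
```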

Robot Fear

On the big screen, we have seen robots as great villains, but you should not assume the same in the real world. At present, artificial intelligence is a vast area of research, and the programs currently being developed and tested are enormously complicated.

Read: 14 Advanced Robots With Extraordinary Skills

Many people argue that using artificial intelligence in machines like autonomous weapons is dangerous. Last year, the US decided to formally address the “killer robots” issue. Famous tech experts, including Elon Musk and Steve Wozniak, have expressed their concern over artificial intelligence. In fact, both signed an open letter calling for an international ban on the development of destructive autonomous weapons.

That is why ethics for artificial intelligence is a rapidly developing area. Teaching robots what is right and what is wrong would be an extremely difficult task. For now, we can be sure that technology will influence human lives ever more directly in the coming years.

Nothing To Fear From Facebook Experiment

Of course, chatbots can be developed to seem very humanlike, and they can even fool us in certain scenarios. But it is important to remember that they are, in general, extremely difficult to build. Recently, Facebook decided to limit its Messenger chatbot rollout because the bots were unable to handle 70 percent of user requests.

Read: 19 Most Innovative Artificial Intelligence Startups

In the Facebook experiment, the bots were doing nothing more than negotiating with each other over how to split a collection of objects (such as balls, books and hats). The system was used in research, not in public software, and the developers pulled the plug because it was doing something they were not interested in studying – not because they panicked or thought they had invented something that would eventually end humanity.
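
For a sense of how mundane the task was, here is a toy version of the object-splitting setup (the item names, point values and function are hypothetical, not the actual research code): each bot privately values the items differently, and a proposed split is simply scored against those values.

```python
# Toy version of the negotiation task: two agents split a pool of items,
# each scoring the outcome with its own private values.
# Hypothetical sketch for illustration, not Facebook's research code.

pool = {"balls": 2, "books": 1, "hats": 3}

# Each agent privately values the items differently (points per item).
values_a = {"balls": 1, "books": 5, "hats": 2}
values_b = {"balls": 3, "books": 1, "hats": 1}

def score(split: dict, values: dict) -> int:
    """Total points an agent earns from the items it receives."""
    return sum(values[item] * count for item, count in split.items())

# One possible negotiated outcome: A takes the book and a hat, B takes the rest.
split_a = {"balls": 0, "books": 1, "hats": 1}
split_b = {"balls": 2, "books": 0, "hats": 2}

print("Agent A:", score(split_a, values_a))  # 5 + 2 = 7
print("Agent B:", score(split_b, values_b))  # 6 + 2 = 8
```

Negotiating over splits like these is all the bots were doing when their English drifted into shorthand.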

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].
