- Developers have built an image-captioning AI [Norman] and trained it on extremely violent content from Reddit.
- As a result, Norman only observes death in whatever pictures it looks at.
- This kind of research is important because it shows how deeply the data behind artificial intelligence matters.
For some people, the term ‘Artificial Intelligence’ is a nightmare, perhaps because they have watched too many movies like “I, Robot”, “Ex Machina”, and “Terminator”, in which robots rise against humanity. Others working in the field have deep respect for AI’s ultimate objective: general AI that can mimic human thought.
Dozens of artificial intelligence experts, including the famous scientist Stephen Hawking and tech billionaire Elon Musk, have already signed an open letter on AI focused on setting certain boundaries for the technology. AI undoubtedly has the potential to solve complex problems and eradicate disease, but scientists must not build something that cannot be controlled.
In the last couple of years, we have seen enormous growth in the acquisition and funding of AI startups. Both tech giants and small organizations are trying to realize the true potential of AI.
Recently, MIT researchers unveiled what they call the world’s first ‘psychopath’ AI, Norman. It is named after Norman Bates, the fictional character from the 1960 psychological horror film Psycho.
Norman: Training and Results
The research team says it trained the AI to perform image captioning, a deep learning technique for generating a textual description of a picture. Norman was trained on a large set of written captions describing violent and gruesome images taken from an infamous subreddit (whose name is withheld because of its graphic content) dedicated to the disturbing reality of human death.
They then tested the algorithm on the inkblots used in the Rorschach psychological test and compared its outputs with those of a standard image-captioning AI trained on the COCO dataset. The results were quite astonishing.
While the standard algorithm saw flying airplanes, a baseball glove, wedding cakes, and flowers in the inkblots, Norman saw a man being killed in front of his screaming wife and a pregnant woman falling at a construction site.
The researchers say this type of study is important because it shows how deeply the data behind artificial intelligence matters. Any AI can easily be biased by training it on biased data. Since Norman was trained exclusively on graphic content, it observes death in whatever pictures it looks at.
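The “same algorithm, different data” point can be sketched with a toy example. The nearest-neighbour “captioner” below is not MIT’s actual model (Norman is a deep neural network), and the feature vectors and captions are invented purely for illustration; it only shows that an identical algorithm, fed different training captions, describes the same ambiguous input in radically different terms:

```python
# Toy illustration (NOT MIT's model): a nearest-neighbour "captioner" that
# returns the caption of the most similar training image. The 3-number
# "features" stand in for the feature vectors a real vision model would use.

def caption(features, training_data):
    """Return the caption of the training example whose features are
    closest (squared Euclidean distance) to the given features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda ex: dist(ex[0], features))[1]

# Two captioners share the SAME algorithm; only the training data differs.
standard_data = [
    ((0.9, 0.1, 0.2), "a flock of birds in the sky"),
    ((0.2, 0.8, 0.3), "a vase of flowers on a table"),
]
norman_data = [
    ((0.9, 0.1, 0.2), "a man is shot and falls to the ground"),
    ((0.2, 0.8, 0.3), "a man is pulled into a machine"),
]

# One ambiguous "inkblot" fed to both captioners.
ambiguous_inkblot = (0.85, 0.15, 0.25)

print(caption(ambiguous_inkblot, standard_data))  # a flock of birds in the sky
print(caption(ambiguous_inkblot, norman_data))    # a man is shot and falls to the ground
```

The bias here lives entirely in the training pairs: the distance function never changes, yet the second model can only ever answer in the vocabulary of violence it was given.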
Overall, the research is a case study of the dangers of AI gone wrong when a neural network is exposed to biased data. The researchers believe it is possible to correct Norman’s way of “thinking” through learning from human feedback.
The research team has also worked on the other side of the spectrum, presenting Nightmare Machine, which uses state-of-the-art deep learning algorithms to produce haunted faces and scary images. In 2017, the team introduced Shelley, an AI that collaborates with humans to write horror stories, and Deep Empathy, which explores whether AI can induce empathy for disaster victims.
Moreover, this is not the first time the internet has turned an artificial intelligence dark. In 2016, Microsoft released an AI chatbot [Tay] on Twitter, and it started posting offensive and racist tweets within hours. Microsoft shut the service down 16 hours after launch, explaining that the chatbot’s replies were based on its interactions (mostly with trolls) with other Twitter users.
So when an AI is accused of being unfair and biased, the culprit is not always the algorithm, but often the data that was fed into it.