'Psychopath AI' Offers A Cautionary Tale for Technologists


(Credit: thunderbrush)

Researchers at MIT have created a psychopath. They call him Norman. He’s a computer.

Actually, that’s not really right. Though the team calls Norman a psychopath (and the chilling lead graphic on their homepage certainly backs that up), what they’ve really created is a monster.

Tell Us What You See

Norman has just one task, and that’s looking at pictures and telling us what he thinks about them. For their case study, the researchers use Rorschach inkblots, and Norman has some pretty gruesome interpretations for the amorphous blobs. “Pregnant woman falls at construction story” reads one whimsical translation of shape and color; “man killed by speeding driver” goes another.

The results are particularly chilling when compared with those of a standard AI shown the same pictures. “A couple of people standing next to each other” and “a close up of a wedding cake on a table” are its interpretations of those same images.

These same inkblots are commonly used with human beings in an attempt to understand their worldview. The idea is that unconscious urges rise to the surface when we’re asked to make snap judgments about ambiguous shapes. One person might see a butterfly, another a catcher’s mitt. A psychopath, the thinking goes, would see something like a dead body or a pool of blood.

Norman’s problem is that he’s only ever been exposed to blood and gore. An untrained AI is perhaps the closest thing we’ll get to a true tabula rasa, and it’s the training, not the algorithm, that matters most in how an AI sees the world. In this case, the researchers trained Norman to interpret images by exposing him solely to image captions from a subreddit dedicated to mutilation and carnage. The only thing Norman sees when he’s confronted with pictures of anything is death.
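To make that concrete, here’s a minimal sketch in Python. It is not the MIT team’s code, and the caption lists are invented placeholders, but it shows how one identical routine can learn to describe the world as either wholesome or horrifying depending purely on the captions it’s fed:

    import random

    def train_caption_model(captions):
        """Learn word-to-word transition frequencies from a caption corpus."""
        transitions = {}
        for caption in captions:
            words = ["<start>"] + caption.lower().split()
            for current, following in zip(words, words[1:]):
                transitions.setdefault(current, []).append(following)
        return transitions

    def generate_caption(transitions, max_words=8):
        """Sample a caption by randomly walking the learned transitions."""
        word, output = "<start>", []
        while word in transitions and len(output) < max_words:
            word = random.choice(transitions[word])
            output.append(word)
        return " ".join(output)

    # Invented placeholder corpora -- one benign, one grim.
    neutral_captions = [
        "a couple standing next to each other",
        "a close up of a cake on a table",
    ]
    grim_captions = [
        "man killed by speeding driver",
        "pregnant woman falls at a construction site",
    ]

    # The training routine is identical; only the data differs.
    standard_model = train_caption_model(neutral_captions)
    norman_model = train_caption_model(grim_captions)
    print("standard:", generate_caption(standard_model))
    print("norman:  ", generate_caption(norman_model))

Swap the training list and the behavior swaps with it; nothing in the algorithm itself is dark.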

In humans, Rorschach inkblots might help ferret out a killer by coaxing out hints of anger or sadism, the emotions that might motivate someone to commit heinous acts. But Norman has no urge to kill, no deadly psychological flaw. He just can’t see anything else when he looks at the world. He’s like Frankenstein’s monster: frightening to us only because his creators made him that way.

Creating A Monster

It’s a reminder that AI is far from being sentient, far from having thoughts and desires of its own. Artificial intelligence today is nothing but an algorithm aimed at accomplishing a single task extremely well. Norman is good at describing Rorschach blots in frightening terms. Other AIs are good at chess or Go. It’s only when they’re paired with human intentions, as with the Department of Defense’s Project Maven, which Google recently backed out of over ethical concerns, that they become dangerous to us.

The researchers behind the project didn’t intend to cause harm, of course. As they state on their website, Norman is a reminder that AIs are only as fair as the people who make them and the data they’re trained on. As AI becomes woven into our daily lives, this could have real consequences. Legacies of racism and discrimination, the gender pay gap: these are all human flaws that could be baked into computer algorithms. An AI meant to allocate housing loans, for example, if trained on data from a period when redlining was common, could end up replicating the racist housing policies of the 1960s.
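As a toy illustration, here is a hypothetical, deliberately naive lending model with invented numbers, not real data. It never sees an applicant’s race, only a zip code, yet because it learns from historically redlined decisions it faithfully echoes them:

    from collections import defaultdict

    # Invented historical loan decisions: (zip_code, approved).
    # Approvals in the "redlined" zip were rare.
    historical_loans = [
        ("02139", True), ("02139", True), ("02139", True),
        ("60617", False), ("60617", False), ("60617", True),
    ]

    # Tally approvals and totals per zip code.
    approval_rate = defaultdict(lambda: [0, 0])
    for zip_code, approved in historical_loans:
        approval_rate[zip_code][0] += approved
        approval_rate[zip_code][1] += 1

    def model_decision(zip_code):
        """Approve only if past approvals in this zip exceeded 50 percent."""
        approved, total = approval_rate[zip_code]
        return approved / total > 0.5

    print(model_decision("02139"))  # True  - echoes past approvals
    print(model_decision("60617"))  # False - echoes past denials

The point isn’t the arithmetic; it’s that “learning from history” can mean inheriting history’s injustices.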

Norman is a good reminder that our technology is just a reflection of humanity. But there may be some hope, for Norman at least. The researchers have created a survey that anyone can take, and the results are fed into Norman’s database. By giving him more hopeful interpretations of images, we may be able to wipe away some of Norman’s dark tendencies, they say.

Whether or not we make Norman into a monster is up to us now.
