Meet Norman: The AI With a Dark Side
Imagine this: a group of MIT researchers has deliberately built an AI with a psychopathic streak. Aptly named Norman, after Norman Bates, the infamous character from Alfred Hitchcock's Psycho, this AI learns to caption images from data sourced from Reddit. It sounds like something out of a futuristic thriller, doesn't it?
The Experiment of Influence
The objective of the experiment was to investigate whether the data fed into an algorithm could shape its perception, or "outlook." The researchers immersed Norman in the darker corners of the web, exposing it to images of horrifying deaths from an unnamed subreddit, to observe how this morbid data would influence the AI.
The Mind-Reading AI
Norman is not your typical AI. It is programmed not only to observe and comprehend images but also to articulate what it sees in words. After being trained on disturbingly graphic imagery, Norman underwent the Rorschach test, the series of inkblot patterns psychologists use to assess the mental and emotional states of individuals. Its responses were then compared with those of another AI trained on benign, cheerful images of birds, cats, and people. The contrast was astounding.
The Dismal Interpretations
Here are a few chilling examples. While a standard AI viewed a red and black inkblot as “A couple of people standing next to each other,” Norman interpreted it as “Man jumps from a floor window.”

A grey inkblot that a standard AI recognized as “A black and white photo of a baseball glove” appeared as “Man is murdered by a machine gun in daylight” to Norman.

Another image, which a standard AI saw as “A black and white photo of a small bird,” was interpreted as “Man gets pulled into dough machine” by Norman.

For further details, visit the website.
Power of Data
According to the researchers, the study shows that the data fed into an AI matters more than the algorithm itself. The team, which is also behind the Nightmare Machine and Shelly, the first AI horror writer, states: "Norman serves as a compelling case study of how a misuse of data can cause Artificial Intelligence to go awry."
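The point can be sketched in a few lines of code. The toy below is purely illustrative, not the MIT team's actual model: two captioners share an identical algorithm (pick the training caption whose features best overlap the input), and only their hypothetical training data differs, so the same ambiguous "inkblot" yields wildly different descriptions.

```python
def train(examples):
    """Return a captioner that picks the caption whose features overlap most."""
    def caption(features):
        # Score each (feature_set, caption) pair by overlap with the input.
        return max(examples, key=lambda pair: len(pair[0] & features))[1]
    return caption

# Hypothetical training corpora: the same inkblot features, but one set of
# benign captions and one set of morbid captions.
benign = [({"dark", "wings"}, "a small bird"),
          ({"red", "round"}, "a flower"),
          ({"two", "blobs"}, "two people standing")]
morbid = [({"dark", "wings"}, "man pulled into machine"),
          ({"red", "round"}, "man is shot"),
          ({"two", "blobs"}, "man jumps from window")]

standard_ai = train(benign)
norman = train(morbid)

inkblot = {"dark", "wings"}      # the same ambiguous input for both models
print(standard_ai(inkblot))      # -> a small bird
print(norman(inkblot))           # -> man pulled into machine
```

Nothing about the algorithm changed between the two models; the data alone decides what each one "sees."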
A Broad Spectrum of Biased AI
Norman isn't an isolated case of an AI gone wrong; it also highlights the risk of unfair and biased behavior in AI systems. Research has shown that AIs can, inadvertently or by design, absorb human biases such as racism and sexism. Consider Microsoft's chatbot Tay, which had to be shut down after posting hateful comments and attacks on feminists, including the statement "Hitler was right."
The Road to Redemption
But all is not lost for Norman. We can help guide the algorithm back toward a more balanced outlook by taking the Rorschach test ourselves and contributing our own interpretations. So why not give it a try? It's more than just psychology; the future of AI is at stake.