Researchers at MIT (Massachusetts Institute of Technology) have developed an artificial intelligence application named Norman, after the lead character in Alfred Hitchcock’s Psycho. The AI is trained to perform image captioning: it scans any image presented to it and generates a caption describing it. In Norman’s case, however, the entire training set was drawn from a subreddit known for graphic content about death.
Naturally, with this kind of data set as reference material, Norman’s image captions are particularly gruesome. The training data twists Norman’s output to the point where even ordinary Rorschach test images produce captions that revolve around death and murder. The researchers compared these responses to those of a standard AI trained on the MSCOCO image data set, and the contrast shows just how skewed Norman’s perception is.
Norman’s responses are framed around death and murder, meaning the AI sees violence in images that don’t depict anything at all. A typical Rorschach test uses ambiguous inkblots that portray nothing specific; the point of the test is to gain insight into the mind of the person taking it, or in this case, the mind of the AI.
Of course, this doesn’t mean that AI is inherently bad or bound to go the way popular fiction would have us believe. AI and robotics are far from the point where they could turn on humankind as shown in movies such as Terminator and The Matrix. The point is simply that a biased data set can heavily skew the behavior of any machine learning system.
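The effect described above can be illustrated with a deliberately simplified sketch. The toy "captioner" below is not MIT's model (Norman's actual architecture is not detailed here); it is a hypothetical word-frequency model that, given an ambiguous input, can only echo the vocabulary it was trained on. The two training corpora are invented examples standing in for neutral MSCOCO-style captions and a dark, biased corpus.

```python
from collections import Counter

# Common words we ignore so the dominant content words stand out.
STOPWORDS = {"a", "is", "the", "on", "with", "of"}

def train_captioner(captions):
    """Build a trivial 'captioner': it just learns which content
    words dominate its training captions (toy model, not Norman)."""
    words = Counter()
    for caption in captions:
        words.update(w for w in caption.lower().split() if w not in STOPWORDS)
    return words

def caption_ambiguous_image(model, top_n=3):
    """With no real image signal (think: an inkblot), the output
    reflects only the training data, not the input."""
    return [word for word, _ in model.most_common(top_n)]

# Invented stand-in corpora.
neutral = [
    "a bird sitting on a branch",
    "a vase with flowers",
    "a group of people flying kites",
]
dark = [
    "a man is shot dead",
    "a man is murdered",
    "a body lies dead on the ground",
]

print(caption_ambiguous_image(train_captioner(neutral)))
print(caption_ambiguous_image(train_captioner(dark)))  # death-related words dominate
```

Both models see the same "inkblot" (no input at all), yet the biased model's output is dominated by violent vocabulary, which is the core of what the Norman experiment demonstrates.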
These experiments show that AI can harm people if its training data or programming is biased, and the recent fatal accident involving a self-driving Uber vehicle in Arizona, USA, shows that we have a long way to go before we can place too much trust in such systems.