Artificial intelligence is a set of algorithms designed to imitate human thought. It takes in information and turns it into meaningful knowledge, at least in principle. It analyzes data to solve, or even anticipate, problems. But where does our fascination with AI come from? This blog post explores how our fascination with, and partial admiration of, AI, combined with our fundamental cognitive biases, can lead to a dangerous future.
From beating chess grandmasters to powering self-driving cars, what AI can achieve is remarkable. But past experience shows that AI can also go spectacularly wrong: IBM's "Watson for Oncology" project (an ambitious attempt to automate cancer treatment recommendations), Microsoft's chatbot Tay being corrupted within hours of launch, and Tesla's Autopilot feature crashing cars, to name just a few. AI's defining feature, self-learning, is a double-edged sword: not even its developers know in real time how the system derived a given outcome. Only after something goes wrong, when developers rework the algorithm to explain itself, do we come to understand what happened. What will we do when AI becomes increasingly complex, increasingly interconnected, and increasingly prevalent?
It is precisely the moment we say "This situation is too complicated. AI can process far more information than I can anyway; I trust what it tells me" that AI becomes dangerous to us. We already see instances of this today, for example in the justice system. COMPAS, a risk-assessment tool used in 13 U.S. states, scores the likelihood that a defendant will reoffend, and people have been unfairly sentenced on the basis of flawed scores. The difficult part is being critical of it; I mean, why wouldn't you trust it? The state approved it and the IT department approved it, so it must be good!
In 1963 Stanley Milgram published an experiment measuring the conflict between people's obedience to authority and their personal conscience. What he found was astonishing: volunteers would follow orders from an authority figure to extreme lengths, deferring to the authority over their own conscience. My point is not that AI has evil intentions. Rather, as AI becomes increasingly capable, increasingly complex, and increasingly ubiquitous, it will become a technological authority in its own right. Ironically, the more sophisticated AI becomes, the more important it is for us to be critical of both the information it provides and the decisions it carries out.