Will AI mean the end of humanity?

16 October 2019


Whereas many people consider AI one of humanity’s greatest inventions, others (including renowned scientists such as Stephen Hawking) believe that the prospect of AI outsmarting humanity is not a good thing. In fact, they have raised concerns that AI may mean the end of humanity. Hawking, for example, said: “I fear that AI may replace humans completely. If people design computer viruses, someone will design AI that improves and replicates itself” (Martin, 2017). This scenario, in which AI becomes the predominant form of intelligence in the world, is also known as the AI takeover. Advocates of this view have therefore pushed scientists to research ways to stop AI from outsmarting humanity. Science fiction has long explored scenarios in which robots take over the world. The 2004 movie “I, Robot”, starring Will Smith, is set in the year 2035, when robots have been integrated into many households. Disaster strikes when the supercomputer that controls all robots orders them to take over the world.

Can we be sure that this will be the future of AI? The answer is no. Currently, there is no consensus on the direction AI is taking. Many people doubt that machines will ever be capable of, for example, abstract thinking. Furthermore, they argue that robots can never replace human interaction as we know it. Right now, AI can only perform narrowly defined tasks, and only the tasks it has been programmed to do. According to opponents of the “AI takeover” idea, the movie “I, Robot” is not realistic. They do admit, however, that AI will take over the entire business world.

Getting scared already? Let me know what you think about the future of AI for humanity in the comments!

References

Martin, S. (2017). Humanity’s days are numbered and AI will cause mass extinction, warns Stephen Hawking. Retrieved 16 October 2019, from https://www.express.co.uk/news/science/875084/Stephen-Hawking-AI-destroy-humanity-end-of-the-world-artificial-intelligence


Algorithmic transparency: do we really not want algorithms to discriminate?

28 September 2019


The use of algorithms and big data has increased in recent years. For example, the police use them to detect criminality, tax authorities use them to detect fraud, and Dutch supermarket Albert Heijn uses algorithms to dynamically price its products in order to reduce food waste (AH, 2018). Sometimes, things go wrong. Uber’s self-driving car failed to recognize a pedestrian and hit her (Levin & Wong, 2018), and Amazon recently scrapped an AI recruiting tool after finding that it did not operate in a gender-neutral way (Dastin, 2018). These incidents raise important questions about the ethics of using algorithms in practice.

Algorithms are often described as a “black box”, because we do not get to see what they actually do once they are executed (Brauneis & Goodman, 2018). Yet their outcomes have important consequences: whether someone is entitled to social security, for example, or will be hired for a job. It is therefore sometimes argued that algorithms must become more transparent, and that their source code, inputs and outputs must be revealed (Hosangar & Jair, 2018). This could increase trust in, and understanding of, why an algorithm arrives at a certain result, so that negative outcomes such as discrimination can be detected.

But the question is: aren’t algorithms meant to discriminate? I would not be happy to be mistaken for a fraudster, and I am glad the government tries to differentiate between neighbourhoods at high and low risk of crime, so it can focus on prevention. Discriminating, in the statistical sense of distinguishing between cases, is exactly what algorithms must do. Another problem with transparency is that fraudsters, for example, would then know exactly which criteria are used to detect them. Transparency can thus lead to gaming.
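To see how transparency invites gaming, consider a toy sketch: a fraud filter built from two entirely made-up, publicly known rules. Everything here (the rule names, the thresholds, the `fraud_score` function) is hypothetical and for illustration only, not any real detection system.

```python
# Toy illustration (hypothetical rules): once a fraud filter's criteria are
# public, a fraudster can tune inputs to slip just under every threshold.

def fraud_score(claim):
    """Score a claim against two made-up, fully transparent rules."""
    score = 0
    if claim["amount"] > 10_000:        # rule 1: large amounts are suspect
        score += 1
    if claim["filed_within_days"] < 7:  # rule 2: very fast filings are suspect
        score += 1
    return score

obvious = {"amount": 50_000, "filed_within_days": 2}
gamed   = {"amount": 9_999,  "filed_within_days": 7}  # just under both limits

print(fraud_score(obvious))  # 2 -> flagged
print(fraud_score(gamed))    # 0 -> slips through undetected
```

Knowing the exact cut-offs, a fraudster simply files a claim of 9,999 after waiting a week: same fraudulent intent, zero flags. That is the gaming risk full transparency creates.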

If our professor were transparent about the algorithms used for grading our tests, I’m sure we would all suddenly score higher just by using certain word combinations (such as “recent research has shown”). I can’t wait until we get there! 😉

Let me know in the comments what you think!
