Is AI mankind’s last invention?

10 October 2020


Artificial Intelligence is rapidly changing the world. Increasingly capable AI helps mankind on many fronts: human tasks can be performed faster and more efficiently, and AI is applied on a large scale in automation and forecasting. AI is also used in the medical sector to predict diseases earlier and to improve treatments. A major advantage of AI over human work is its consistency: a well-designed system does not tire or lose concentration, although it is by no means free of errors.

However, the use of AI also has a downside: it can be used for malicious purposes, and some of its dangers are already clearly visible in today’s society. AI can, for example, power autonomous weapons, programmed with only one goal: killing. The dangerous difference between these autonomous weapons and weapons controlled by humans is that they take neither human values nor human decision making into account. Besides weapons, AI can also be misused at the scale of society as a whole. With the help of AI, for example, an attempt was made in 2016 to influence the American elections by manipulating individuals through media and social platforms. In addition, there have been several cases in which AI systems discriminated between people. Finally, failing to define goals and restrictions clearly can lead AI to make wrong decisions: AI searches for the best solution within the boundaries it is given, and if those boundaries are not clearly defined, it can offer the wrong solution. An example:

Question: What can we do to combat climate pollution by people?

AI answer: kill all people.

However, the current form of AI is still controllable by humans, and AI is far from reaching the level of human intelligence. Nevertheless, scientists, including Stephen Hawking, have expressed concerns about the development of AI. These concerns focus primarily on its rapid pace, with AI ultimately exceeding the level of human intelligence. This is called superintelligence.

Transcending human intelligence could be a great danger. After all, man did not develop into the most powerful organism in the world because of his height or strength, but because of his intelligence.

Different ‘doom’ theories about the consequences of AI are circulating, but many of them are no more than Hollywood-style scenarios.

Still, there is a possibility that AI will reach this level of intelligence in the future. A possible consequence is that people are outpaced by AI on every level: human intelligence and skills become fully replaceable, and people’s cognitive skills fade away. In a hypothetical scenario, AI could even gain consciousness, in which case people would become redundant and the world would be dominated by AI.

Should we really be afraid of AI, then? No. AI is nowhere near able to acquire human traits such as consciousness; in fact, it is questionable whether it will ever reach that level. Still, it is very important that AI keeps developing, but in a way that serves to help human beings. To prevent AI from influencing mankind malignantly, it is important to make AI explainable, so that errors in algorithms can be detected quickly. In addition, AI should not be developed by a small group of developers, but in an open environment through open innovation. Through transparency and cooperation, AI can be used to support mankind and ensure that it develops further.

Let’s start the discussion: what do you think of ever-improving AI? Do you see AI as a threat in the future?


The effect of algorithms on political polarisation

10 October 2020


Algorithms are used by social media platforms to show users content they are interested in. This has many advantages: content is filtered, so users only see the pictures, videos and posts that interest them, and algorithms can be used to serve them targeted advertisements. However, the use of algorithms can also have adverse effects. For example, an algorithm can place people in a bubble in which they only see news that matches their political preferences.

With the upcoming presidential elections in the United States in November, the amount of politically coloured news is increasing. Political parties target their audiences via platforms such as Facebook. In addition to politically coloured messages with information about the campaign, they also advertise with disinformation about candidates and fake news. This news is shown to groups of people who have been segmented by an algorithm.

Besides disinformation and fake news, one-sided news is a big problem. Personalisation algorithms place people in groups with the same preferences. These people see one-sided news and find themselves in a so-called echo chamber, in which hardly any news appears that does not match their personal preferences. Flaxman et al. (2016) showed that ideological segregation of news consumption is stronger for articles reached through social platforms and search engines than for those found by visiting news sites directly. Showing one-sided news through an algorithm in this way is called an information diet.
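The feedback loop behind such an echo chamber can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical model (the leaning labels and the `build_feed` rule are illustrative assumptions, not any platform’s real algorithm): the feed contains only items matching the user’s majority preference, so a slight initial lean becomes complete one-sidedness after a few rounds of clicking.

```python
def build_feed(click_history, catalog, k=5):
    """Naive personalisation (hypothetical): infer the user's dominant
    political leaning from past clicks and return only matching items."""
    counts = {}
    for leaning in click_history:
        counts[leaning] = counts.get(leaning, 0) + 1
    preferred = max(counts, key=counts.get)
    # Only items matching the inferred preference make the feed.
    return [item for item in catalog if item == preferred][:k]

# A perfectly balanced catalog of articles, labelled only by leaning.
catalog = ["left"] * 50 + ["right"] * 50

# The user starts with only a slight lean: two 'left' clicks, one 'right'.
history = ["left", "left", "right"]

for _ in range(3):
    feed = build_feed(history, catalog)
    history += feed  # the user clicks through everything in the feed

print(feed)                    # ['left', 'left', 'left', 'left', 'left']
print(history.count("right"))  # 1 — the single dissenting click never grows
```

Real recommenders score items on predicted engagement rather than a single label, but the loop is the same: past clicks shape the feed, and the feed shapes the next clicks.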

Zuiderveen Borgesius et al. (2016) note that people who are constantly exposed to biased information with a political slant can end up taking more radical positions and becoming less tolerant of people with a different opinion. Stroud (2010) showed that people in the United States who were subject to such an information diet developed more partisan beliefs during an election campaign. This may increase the power of commercial news channels, because they can advertise using these algorithms; promotional content has been shown to be one of the driving factors behind political polarisation on social media.

In addition to social platforms such as Facebook, where people unite in groups, political polarisation also takes place elsewhere. YouTube’s algorithm, for example, has come under heavy fire for its role in the political radicalisation of users. The reason is that YouTube allegedly steered users in a certain direction through its recommendations: people who watched pro-Trump videos during the elections would afterwards be recommended mainly anti-Clinton and pro-Trump videos.

Algorithms can thus cause people to see only one side of the news. Together with one-sided political advertisements, this can contribute to political polarisation. Partly for this reason, Facebook’s CEO, Mark Zuckerberg, had to appear before Congress in 2018. Political polarisation on social platforms, caused by algorithms, is a growing problem, and a proper solution has not yet been found.

Have you ever encountered disinformation in political ads? Have you ever wondered whether you are in an echo chamber on social platforms?

Bail, C., 2018. Exposure to opposing views on social media can increase political polarization.

Bessi, A., 2016. Users polarization on Facebook and YouTube.

Flaxman, S., Goel, S. & Rao, J., 2016. Filter bubbles, echo chambers, and online news consumption.

Milan, S., 2019. Personalisation algorithms and elections: breaking free of the filter bubble.

Stroud, N. J., 2010. Polarization and partisan selective exposure. Journal of Communication, 60(3), 556–576.

Tufekci, Z., 2018. YouTube, the Great Radicalizer. The New York Times.

Zuiderveen Borgesius, F. et al., 2016. Should we worry about filter bubbles?