Societal Risks of AI

10 October 2021


Biden’s chief science adviser, Eric Lander, and the deputy director for science and society, Alondra Nelson, have published an article on the societal risks of artificial intelligence (AI). They emphasize that AI can be prone to bias, causing it to discriminate and create dangerous situations. The examples they give are AI programs tasked with detecting illnesses or judging creditworthiness. Such programs may have been trained on flawed data, causing them to overgeneralize. They may, for example, fail to account for differences between African Americans and other population groups. This could cause a program to underestimate the severity of a disease, thereby endangering the patient. The second example, regarding creditworthiness, could also stem from overgeneralizing populations. If a certain population group averages lower creditworthiness, the AI may assume that all people in this group are likely to have lower creditworthiness. Although statistically grounded, this could still lead to discrimination and unfair treatment. It is important for AI-powered systems to respect the shared values of equality and fairness.

Another risk introduced by AI is its ability to recognize and analyze attributes such as faces, voices, and physical movement. These systems could be used for privacy-violating facial recognition, but also as tools to alter people’s emotional state. Current AI appears capable of detecting emotional states such as fatigue or depression. It is important for lawmakers to be mindful of the potential dangers AI introduces and to act accordingly. Eric Lander and Alondra Nelson believe the government should pledge not to purchase any systems that allow for the violation of people’s basic rights. This does not mean they are against the use of AI systems; they acknowledge AI’s potential to improve processes and strengthen economic growth. It is certainly exciting to see what future AI has in store for us.

References:

O’Brien, M. (2021, October 8). White House proposes tech “bill of rights” to limit AI harms. TechXplore. https://techxplore.com/news/2021-10-white-house-tech-bill-rights.html

Lander, E., & Nelson, A. (2021, October 8). Americans Need a Bill of Rights for an AI-Powered World. Wired. https://www.wired.com/story/opinion-bill-of-rights-artificial-intelligence/

Winkler, R. (2021, September 21). Apple Is Working on iPhone Features to Help Detect Depression, Cognitive Decline. WSJ. https://www.wsj.com/articles/apple-wants-iphones-to-help-detect-depression-cognitive-decline-sources-say-11632216601

