All over the world, artificial intelligence (AI) is being used to make tasks easier. Some airports use AI to scan passengers' faces to make identity checks faster and easier (Schiphol, 2022), and companies build bots that can hold conversations with people and automate customer support, for example (Google, 2022). However, AI can also cause damage if not managed properly. There have been cases where AI systems showed racial bias against people from ethnic minorities (Milmo, 2022). The issue is that AI learns from the input data fed into the system, so biased data produces biased behaviour. This can be mitigated by feeding the system more representative data to minimize such biases.
There are also examples where AI is misused on a large scale. China has implemented a system in which facial recognition is used to control human behaviour: if a person jaywalks, cameras automatically identify them and they receive a fine (Geib, 2018). Depending on who you ask, this can be a good or a bad thing. In my opinion, it is a bad development, since it creates a very authoritarian state in which people must watch out for every small step they take. On the other hand, a wanted person can easily be tracked if their face is visible to a camera that can recognize them. This also raises an ethical dilemma: how much privacy are we willing to give up for security? And if we do give up our privacy, will the technologies and data not be misused for other purposes? I expect this to be one of the biggest challenges for the future, since wrong decisions about AI systems can cause major damage.
References
Geib, C. (2018, March 30). If You Jaywalk in China, Facial Recognition Means You'll Walk Away With A Fine. Retrieved from Futurism: https://futurism.com/facial-recognition-china-social-credit
Google. (2022). Conversational AI. Retrieved September 30, 2022, from Google: https://cloud.google.com/conversational-ai
Milmo, D. (2022, July 14). UK data watchdog investigates whether AI systems show racial bias. Retrieved July 30, 2022, from The Guardian: https://www.theguardian.com/technology/2022/jul/14/uk-data-watchdog-investigates-whether-ai-systems-show-racial-bias
Schiphol. (2022). Reizen met gezichtsherkenning [Travelling with facial recognition]. Retrieved September 30, 2022, from Schiphol: https://www.schiphol.nl/nl/pagina/proef-met-reizen-met-gezichtsherkenning/