Artificial intelligence: unethical or ethical?

7 October 2022


AI bias denotes any way in which AI and data analytics tools perpetuate or amplify human bias. An example with clear ethical implications is the use of artificial intelligence in companies’ hiring processes. Since the introduction of AI in recruitment, hiring has become more efficient, more cost-effective, and better suited to processing huge volumes of resumes (Parikh, 2021). However, this way of processing can undermine fairness and inadvertently disseminate bias (Jobin et al., 2019). The bias originates in the data: if the data set is not representative and diverse, the results will be skewed. The main problem with historic hiring data is that it portrays an ideal candidate with a certain degree or cultural background, so any AI built on this data is inherently biased. Based on the historic data, the ‘perfect’ candidate would be a white male with an Ivy League background, which is far from reality.
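To make that mechanism concrete, here is a minimal sketch, using entirely made-up numbers and a single hypothetical feature, of how a model fit to historic hiring decisions inherits the bias baked into those decisions:

```python
# A minimal sketch (hypothetical data and feature) of how a model trained
# on historic hiring outcomes reproduces the bias in those outcomes.
import random

random.seed(0)

# Synthetic "historic" data: each resume is (ivy_league, hired).
# Past recruiters hired Ivy League candidates far more often, so the
# label correlates with the school rather than with actual ability.
history = [(1, 1) if random.random() < 0.8 else (1, 0) for _ in range(500)] + \
          [(0, 1) if random.random() < 0.2 else (0, 0) for _ in range(500)]

# A naive "model": estimate P(hired | ivy_league) straight from the data.
def hire_rate(ivy):
    outcomes = [hired for school, hired in history if school == ivy]
    return sum(outcomes) / len(outcomes)

print(f"P(hired | Ivy League)   = {hire_rate(1):.2f}")  # ~0.80
print(f"P(hired | other school) = {hire_rate(0):.2f}")  # ~0.20
# Any model fit to these labels will reproduce the same gap,
# regardless of how qualified the individual candidates are.
```

The point of the toy example is that the model never needs to be told to prefer a certain background; the preference is already encoded in the labels it learns from.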

Organizations can minimize bias in AI by updating the data set to reflect a wide range of employees with different degrees, cultural backgrounds, and work experiences, because there is no single perfect resume that matches a candidate with a company. It is also important to always get a second opinion from actual HR employees; human oversight can counteract the bias produced by AI, and the lack of human judgment is the biggest drawback of automated screening (Parikh, 2021). AI-based hiring may not serve its purpose if a company intends to diversify its workforce: candidates with atypical work experience could be the best match for the company in terms of work ethic, character, and interests, resulting in higher employee retention, yet AI may well miss these candidates. The problem is recognized worldwide and has even prompted governments to adopt rules and regulations. The European Union has proposed a regulatory framework for AI, which can help identify the bias introduced by AI in the hiring process (Lohr, 2021).
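One concrete audit organizations can run on a screening tool is the “four-fifths rule” from US employment guidelines: a group’s selection rate should be at least 80% of the highest group’s rate. A minimal sketch, with hypothetical group names and counts:

```python
# A sketch of a disparate-impact audit using the four-fifths rule.
# Group names and counts below are illustrative, not real data.
def selection_rate(selected, applicants):
    return selected / applicants

# Hypothetical screening outcomes per demographic group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 120},
    "group_b": {"applicants": 350, "selected": 60},
}

rates = {g: selection_rate(v["selected"], v["applicants"])
         for g, v in outcomes.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "REVIEW"  # 4/5ths threshold
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} -> {flag}")
```

A check like this does not remove bias by itself, but it gives the human reviewers mentioned above a concrete signal for when to intervene.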

What do you think? Do the efficiency gains of using AI outweigh the risk of amplifying bias in the hiring procedure?

References:

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399.

Lohr, S. (2021, December 8). Group backed by top companies moves to combat A.I. bias in hiring. The New York Times. Retrieved 7 October 2022, from https://www.nytimes.com/2021/12/08/technology/data-trust-alliance-ai-hiring-bias.html

Parikh, N. (2021, October 14). Understanding bias in AI-enabled hiring. Forbes. Retrieved 7 October 2022, from https://www.forbes.com/sites/forbeshumanresourcescouncil/2021/10/14/understanding-bias-in-ai-enabled-hiring/?sh=43d455d77b96


Living in your own bubble

21 September 2022


We all know them: the advanced recommendation tools woven into every trace we leave on the internet, continuously feeding us information that aligns with our interests, whether it concerns political choices or other personal affairs. Wonderful, right? The internet serves up information that matches your interests; it seems harmless. Or does it? Are these personalized algorithms the cause of individual isolation?


This concept is called the “filter bubble”: the idea that search engines and social media, together with their recommendation and personalization algorithms, are centrally culpable for the societal and ideological polarisation experienced in many countries (Bruns, 2019). Filter bubbles have even been cast as critical contributors to the elections of Trump and Bolsonaro and to the Brexit vote. These algorithms reinforce the user’s ideology, confirming their existing beliefs, attitudes, and vision of the world. For example, if a user searches for or expresses a liking for the Democratic Party, they will probably receive more (positive) information about that party. This phenomenon is not limited to search engines: on other social media platforms, the Facebook news feed algorithm, for instance, tends to amplify news that your political companions favour (Pariser, 2015).
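The self-reinforcing loop is easy to demonstrate. Below is a toy sketch, with an invented topic pool and a deliberately naive scoring rule, of how “recommend what was clicked before” narrows a feed over time:

```python
# A toy model of the filter-bubble feedback loop. Topics, weights,
# and the click model are all hypothetical.
import random

random.seed(1)
topics = ["left_politics", "right_politics", "sports", "science"]

# Start with only a mild preference for one topic.
clicks = {t: 1 for t in topics}
clicks["left_politics"] = 2

for _ in range(50):
    # Recommend in proportion to past clicks (naive personalization)...
    total = sum(clicks.values())
    shown = random.choices(topics, weights=[clicks[t] / total for t in topics])[0]
    # ...and assume the user clicks what they are shown, closing the loop.
    clicks[shown] += 1

print(clicks)  # the initially favoured topic comes to dominate the feed
```

Real ranking systems are far more sophisticated, but the core dynamic is the same: small initial preferences are fed back into the ranking and compound.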


However, this raises serious concerns, as social media is now acknowledged as a primary source of news and information. Furthermore, these personalized algorithms connect users who share the same ideology, creating exclusive communities of like-minded people and raising the barriers towards people who hold different opinions. A study of 10.1 million U.S. Facebook users with self-reported ideological affiliation found that more than 80% of their Facebook friendships shared the same party affiliation (Bakshy et al., 2015). While this homophily can be beneficial, it also poses a threat at the extremes of the ideological spectrum.
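The statistic behind that finding is simply the share of friendship ties whose two endpoints report the same affiliation. A small sketch with a made-up five-person graph:

```python
# Measuring homophily as the fraction of same-affiliation friendship
# ties. The people, labels, and edges below are invented for illustration.
affiliation = {"ann": "D", "bob": "D", "cam": "R", "dee": "R", "eli": "D"}
friendships = [("ann", "bob"), ("ann", "eli"), ("bob", "eli"),
               ("cam", "dee"), ("ann", "cam")]

same = sum(affiliation[u] == affiliation[v] for u, v in friendships)
print(f"same-affiliation ties: {same / len(friendships):.0%}")  # 80%
```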


My main concern with these personalized algorithms is that people will develop tunnel vision, which affects their ideology. Users will no longer be challenged by divergent perspectives that could widen their horizons, leaving them preserved in their own “bubble” and, in extreme cases, resulting in radicalization. Consider the Christchurch attack of 2019, a terrorist attack that was live-streamed on Facebook; the perpetrator was inspired by Facebook groups and communities promoting white nationalism and white separatism (Wong, 2019). Unfortunately, I think this phenomenon will only worsen as people grow more reliant on information from the internet and seclude themselves from contrasting beliefs.


Do you think it is far-fetched that filter bubbles can shape people’s ideologies and provoke radicalization?

References:

Bakshy, E., Messing, S., & Adamic, L. A. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348(6239), 1130–1132.
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426
Pariser, E. (2015, May 7). Did Facebook’s big study kill my filter bubble thesis? Wired. Retrieved 18 September 2022, from https://www.wired.com/2015/05/did-facebooks-big-study-kill-my-filter-bubble-thesis/
Wong, J. C. (2019, March 30). Facebook finally responds to New Zealand on Christchurch attack. The Guardian. Retrieved 18 September 2022, from https://www.theguardian.com/us-news/2019/mar/29/facebook-new-zealand-christchurch-attack-response
