How to teach ethics to a computer?

12 October 2019

While companies are interested in the use cases of artificial intelligence (AI), they should also be aware of its hurdles.

A 2018 Deloitte survey found that 32% of the 1,400 participants, all U.S. executives knowledgeable about AI, ranked ethical issues among their top three AI risks (Forbes Insight Team, 2019). Considering that AI is still in its infancy, this figure is surprisingly high.

AI gone bad: an example

In 2016, Microsoft released a chatbot that learned and adapted by mining public conversations on Twitter. Sadly, it took internet trolls less than 24 hours to get it to spew racist, offensive tweets (Li, 2019).

In 2017, Google’s translation algorithm was reported to be sexist. Many languages mark gender differently than English does. Turkish, for instance, has a gender-neutral third-person singular pronoun, “o”; when translating it, the algorithm would pick a gender by itself. This led to translations like “he is a doctor” and “she is a nurse” (Li, 2019).
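A minimal sketch of how this kind of bias arises (the counts below are invented for illustration; this is not Google’s actual system): a statistical translator forced to choose a pronoun for the gender-neutral “o” may simply pick whichever gender co-occurs more often with the profession in its training data.

```python
# Hypothetical co-occurrence counts a model might absorb from biased text.
# The numbers are invented; real systems learn similar statistics from corpora.
cooccurrence = {
    ("doctor", "he"): 900, ("doctor", "she"): 100,
    ("nurse", "he"): 80, ("nurse", "she"): 920,
}

def translate_pronoun(profession: str) -> str:
    """Resolve the gender-neutral Turkish 'o' by corpus frequency alone."""
    he = cooccurrence.get((profession, "he"), 0)
    she = cooccurrence.get((profession, "she"), 0)
    return "he" if he >= she else "she"

print(translate_pronoun("doctor"))  # the skew in the counts surfaces here
print(translate_pronoun("nurse"))
```

Nothing in the rule is malicious; the bias of the underlying data simply becomes the model’s behavior.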

An industry that needs a savior

Social networks work by leading people to click on ads. Whether the ad is for a shoe or a job application, the algorithm is calibrated to earn the most profit for its company (Forbes Insight Team, 2019). Journalism has become whatever the algorithms spit out: whether the news is reliable or not does not matter, because the algorithm boosts sensational stories to draw more clicks (Smith, 2016). The problem is that we have put the power in the hands of a few: the creators.
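The mechanism can be sketched in a few lines (all scores and titles are invented for illustration): a feed ranked purely by predicted clicks will surface the sensational story first, with reliability playing no role at all.

```python
# Illustrative only: a feed ranker that optimizes for clicks will boost
# sensational items regardless of reliability. All data is invented.
stories = [
    {"title": "Measured policy analysis", "predicted_clicks": 0.02, "reliable": True},
    {"title": "SHOCKING scandal revealed!", "predicted_clicks": 0.30, "reliable": False},
    {"title": "Local council budget report", "predicted_clicks": 0.01, "reliable": True},
]

def rank_feed(stories):
    """Rank purely by expected ad revenue (clicks); 'reliable' is never read."""
    return sorted(stories, key=lambda s: s["predicted_clicks"], reverse=True)

for story in rank_feed(stories):
    print(story["title"])
```

Note that the `reliable` field exists in the data but is never consulted; that omission is the whole business model.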

Managerial implications

With legislation lagging behind the facts (Forbes Insight Team, 2019), there is an opportunity for companies to make a difference by using ethics as a business metric (Fjord, 2019). First, we must understand that AI risks are business risks (Li, 2019). While learning, AI can pick up trends we are not even aware of, including the biases and blind spots of its creators (Compton, 2019). To mitigate these risks, companies can introduce anti-bias training alongside their AI and machine learning training, while actively correcting any bias they spot. Furthermore, they can include a non-technical course on AI’s possibilities and risks (Li, 2019). Two groups should receive these trainings: the AI engineers and the business leaders. This goes hand in hand with a step away from the traditional top-down business structure, towards more agile methods in which feedback loops and iteration determine the development of the system (Compton, 2019) and managers and engineers work together.
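“Actively correcting spotted bias” can start with something very simple. Below is a minimal sketch of an automated audit a team could run in such a feedback loop: compare a model’s positive-outcome rates across groups and flag large gaps (a basic demographic-parity check; the group labels, data, and threshold are all invented).

```python
# Minimal demographic-parity audit: flag when a model's approval rate
# differs too much between groups. Names and thresholds are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def audit(decisions, max_gap=0.2):
    """Return per-group rates, the largest gap, and whether it exceeds the threshold."""
    rates = approval_rates(decisions)
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > max_gap}

# Invented decisions: group A is approved 80% of the time, group B only 40%.
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 4 + [("B", False)] * 6
report = audit(decisions)
print(report["flagged"])  # gap of 0.4 exceeds the 0.2 threshold
```

A flagged audit does not fix anything by itself, but it turns “spotting bias” into a repeatable check that engineers and business leaders can review together.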

When AI engineers and business leaders work together, they can create AI guided by a sound ethical rulebook instead of misguided capitalism.

Does this sound familiar to you?

 

References:

Compton J. (2019). Ethics And Inclusion In AI: Designing For The AI Future We Want To Live In. Forbes. Available at: https://www.forbes.com/sites/intelai/2019/03/27/ethics-and-inclusion-designing-for-the-ai-future-we-want-to-live-in/#b589dae255f5

Fjord (2019). The ethics economy. Trends. Available at: https://trends18.fjordnet.com/the-ethics-economy/

Forbes Insight Team (2019). Rise Of The Chief Ethics Officer. Forbes. Available at: https://www.forbes.com/sites/insights-intelai/2019/03/27/rise-of-the-chief-ethics-officer/#75b2cdfc5aba

Forbes Insight Team (2019). 4 Industries That Feel The Urgency Of AI Ethics. Forbes. Available at: https://www.forbes.com/sites/insights-intelai/2019/03/27/4-industries-that-feel-the-urgency-of-ai-ethics/#1718de5472be

Li M. (2019). Addressing the Biases Plaguing Algorithms. Harvard Business Review. Available at: https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms

Smith A. (2016). The pedlars of fake news are corroding democracy. The Guardian. Available at: https://www.theguardian.com/commentisfree/2016/nov/25/pedlars-fake-news-corroding-democracy-social-networks


This is technological propaganda.

28 September 2019

The outcomes of the Brexit vote and the Trump election were shocking but not surprising. A greater concern emerged with them, however: the accidental or deliberate propagation of misinformation via social media.

44% of Americans get their news from Facebook (Solon, 2016). Many millions of people saw and believed fake reports that “the pope had endorsed Trump; Democrats had paid and bussed anti-Trump protesters; Hillary Clinton was under criminal investigation for sexually assaulting a minor” (Smith, 2016). If our democracy is built on reliable information, what is real?

The good, the bad and the ugly admission fee

During the Arab Spring, Facebook and Twitter were first politicized and used to inspire people, as tools for democracy. With Brazil, Brexit, and the US elections, we saw the equilibrium shift to the other side. It seems there is an admission fee to pay before we are allowed into the connected world (Thompson, 2019). How many times a day have you been asked to agree to the terms on a website and clicked accept just to reach the content behind it?

The recent Cambridge Analytica scandal exposed Facebook’s rather porous privacy policies and the company’s casual attitude to oversight. Through the platform, Cambridge Analytica, a British data mining firm, was able to extract data from 270,000 people by conducting a survey. People agreed to share details about themselves, and unknowingly about their friends (Economist, 2018). This amounted to information on some 50 million Facebook users overall, which the company happily shared with its customers, including the Trump campaign (Economist, 2019).

Full-service propaganda machine and Nazi Germany

In essence, companies like Cambridge Analytica can use Facebook to “target voters who show an interest in the same issues or have similar profiles, packaging them into what it calls ‘lookalike audiences’” (Economist, 2018). The practice effectively shaped voting results in countries such as Argentina, Kenya, Malaysia, and South Africa even before the 2016 US presidential election (Thompson, 2019).
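The “lookalike audience” idea can be sketched in a few lines (the interest vectors and user names below are invented; this is not Facebook’s actual system): score every user by the similarity of their interest profile to a seed audience, then target the closest matches.

```python
# Sketch of lookalike-audience matching: rank users by cosine similarity
# of their interest vectors to a seed voter profile. Data is invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical interest dimensions, e.g. [guns, immigration, climate].
seed_profile = [1.0, 0.8, 0.0]
users = {
    "user_1": [0.9, 0.7, 0.1],
    "user_2": [0.0, 0.1, 1.0],
    "user_3": [1.0, 0.9, 0.0],
}

def lookalikes(seed, users, top_n=2):
    """Return the top_n users most similar to the seed profile."""
    ranked = sorted(users, key=lambda u: cosine(seed, users[u]), reverse=True)
    return ranked[:top_n]

print(lookalikes(seed_profile, users))
```

The unsettling part is how little machinery is needed: given enough profile data, a few lines of similarity scoring are enough to package voters for targeting.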

The practice of addressing such audiences with feelings rather than facts, playing up a vision to create a fake emotional connection, is not new; Nazi Germany showed as much. What is new is the internet-driven efficiency (Smith, 2016).

Clickbait

Like the headline of this article, the news feeds of revenue-driven platforms such as Google and Facebook are built to engage more people, essentially to expose them to more ads. Whether an article is reliable or not does not matter: the algorithm boosts sensational stories that reinforce prejudice in order to draw more clicks (Smith, 2016). As mentioned before, if we use these feeds as our primary information source, how can we be sure we are able to make informed decisions?

To conclude, platforms cannot stand on the sidelines, profiting while they are used as a stepping stone to the next political victory for the highest bidder. They should be held accountable. Now.

 

References:

Economist (2018). The Facebook scandal could change politics as well as the internet. Data privacy. Available at: https://www.economist.com/united-states/2018/03/22/the-facebook-scandal-could-change-politics-as-well-as-the-internet

Economist (2019). “The Great Hack” is a misinformed documentary about misinformation. The Facebook scandal. Available at: https://www.economist.com/prospero/2019/07/24/the-great-hack-is-a-misinformed-documentary-about-misinformation

Smith A. (2016). The pedlars of fake news are corroding democracy. The Guardian. Available at: https://www.theguardian.com/commentisfree/2016/nov/25/pedlars-fake-news-corroding-democracy-social-networks

Solon O. (2016). Facebook’s failure: did fake news and polarized politics get Trump elected?. The Guardian. Available at: https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories

Thompson A. (2019). The Great Hack terrified Sundance audiences, and then the documentary got even scarier. IndieWire. Available at: https://www.indiewire.com/2019/08/the-great-hack-documentary-oscar-cambridge-analytica-1202162430/

Photograph: Dado Ruvic/Reuters
