Going beyond recognition: Can AI show empathy?

17 October 2018


Businesses of all kinds are investing in artificial intelligence to improve operations and customer experience. However, as we all experience daily, the inefficiencies caused by miscommunications between humans and machines can be frustrating (Morgan, 2018). To build trust between them, Rana el Kaliouby, founder and CEO of the emotion AI company Affectiva, argues that empathy is key (Moore, 2018).

Emotion recognition is an easier problem to solve than emotional empathy: as many examples show, machine learning systems can learn to recognize patterns associated with a particular emotion. However, recognition is not the same as understanding, and understanding is not empathy. Artificial empathy, or affective computing, therefore raises the question of whether machines are capable of experiencing emotions. Even so, artificial emotional intelligence is an important and necessary step in the advancement of artificial intelligence (Morgan, 2018).
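To make the distinction concrete, the pattern-recognition framing can be sketched as a deliberately toy classifier. Everything here is invented for illustration (the feature names, values, and labels are not from any real system; Affectiva and similar companies use deep learning on facial and vocal data), but it shows why recognition reduces to matching patterns, with no understanding involved:

```python
import math

# Hypothetical training data: (mouth_curvature, brow_raise) feature vectors,
# each labeled with the emotion a human annotator assigned.
TRAINING = {
    "happy":     [(0.9, 0.2), (0.8, 0.3)],
    "sad":       [(-0.7, -0.1), (-0.8, 0.0)],
    "surprised": [(0.1, 0.9), (0.0, 0.8)],
}

def centroid(points):
    """Mean of a list of feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

# One "average pattern" per emotion, learned from the labeled examples.
CENTROIDS = {label: centroid(pts) for label, pts in TRAINING.items()}

def recognize(features):
    """Return the emotion whose learned pattern is nearest to the input."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(recognize((0.85, 0.25)))  # a smile-like pattern -> "happy"
```

The classifier outputs "happy" for any smile-like input simply because the numbers fall near a learned pattern; nothing in it models what happiness feels like, which is exactly the gap between recognition and empathy.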

Zion Market Research, a leading market research firm, recently published an industry report, "Affective Computing Market: Global Industry Analysis, Size, Share, Growth, Trends, and Forecasts 2016–2024", which offers comprehensive research updates and information on market growth, demand, and opportunities in the global affective computing market (Allan, 2018).

This empathetic technology is already being used in market research and advertising. According to Affectiva's CEO, nearly a quarter of all Fortune 500 companies already use artificial intelligence to assess the emotional impact of their advertisements. Artificial empathy can also be used, among other examples, by teachers to measure how well students are absorbing their lessons, by doctors to help assess the mental health of their patients, and in cars to take the wheel from a drowsy driver (Moore, 2018).

In my opinion, substantive improvements are needed before this technology can be implemented on a large scale. Moreover, in the wrong hands, it might be used in ways that are detrimental to some users, which also raises the question of whether technology can ever be neutral.

What is your opinion on this? Do you think such technologies can remain neutral when they put so much power in the hands of those who can afford them? In what ways could advanced artificial empathy be deployed so as to limit its potential dangers?

References

Allan. (2018). Global Affective Computing Market Size, Trends and Opportunities Forecast, 2016-2024. [online] Retrieved from https://zmrnewsjournal.us/21790/global-affective-computing-market-size-trends-and-opportunities-forecast-2016-2024/

Moore. (2018). Artificial Intelligence Needs Empathy to Work. [online] Retrieved from http://fortune.com/2018/09/24/artificial-intelligence-needs-empathy-to-work/ 

Morgan. (2018). AI Challenge: Achieving Artificial Empathy. [online] Retrieved from https://www.informationweek.com/big-data/ai-machine-learning/ai-challenge-achieving-artificial-empathy/a/d-id/1331628 


1 thought on “Going beyond recognition: Can AI show empathy?”

  1. Hey Sabrina! Interesting blog post,

    I think this is an amazing technology that is only now starting to gain real momentum and will be an inevitable part of our future. You mentioned some great examples, and I think AI will eventually, in the far future, be able to operate nearly everything in our lives. However, that does not come without dangers we must consider: giving so much power to an AI could threaten our safety, and a malfunction could prove deadly in some cases.

    To address your questions, I believe the "power" you mention people gaining from this would not be as threatening as you make it out to be. Using it in marketing research and analytics will certainly help companies reap more benefits, perhaps sometimes at the cost of a customer's poor spending choice, but no one is ever forced to do anything, so I believe this aspect is no threat. On top of that, the benefits you mention, such as assisting a doctor's diagnosis or taking the wheel from a sleeping driver, are just a few of the major life-saving results of this technology.

    Limiting the dangers will always be the biggest concern. A simple solution, in my opinion, would be not to let it access anything outside of its designed database until it is certain that it can cause no harm, and then perhaps only allow it to give answers, without actual functionality, until it is truly ready. Restricting the dangers is no easy task, and the larger companies likely already have much more thorough restrictions in place to ensure safety, even if we do not know exactly what they are.

    Jordi
