Ethical concerns of AI in elderly care

13 October 2022


The aging population is a phenomenon that can be observed in most Western nations and beyond. Caring for all these elderly people requires an ever-growing share of the workforce in the healthcare sector. Yet this cannot continue indefinitely, as other sectors are also vital to a healthy economy. For this reason, governments such as the Dutch one are capping the share of the workforce that will be employed in healthcare in the future (NOS, 2022). Since the number of employees will not grow, there is one obvious alternative: AI solutions that reduce the workload of existing employees. These can be deployed online to give medical advice, or they can be embedded in robots that assist with physical work such as moving patients from a wheelchair to their bed (Mukai et al., 2010).

However, introducing AI in healthcare also brings serious ethical concerns. Chief among them is the risk of isolation (Sharkey & Sharkey, 2010). Elderly people are already one of the most isolated groups in society, and the introduction of AI would further diminish one of their few regular human interactions, increasing the risk of loneliness. Compounding this issue, loneliness is associated with an increased risk of dementia, which makes introducing AI even more fraught (Fratiglioni et al., 2000). One solution could be conversation bots that talk to the elderly, although this is not without ethical concerns of its own. After all, is a conversation with a robot really a substitute for a human one, or are we fooling people into thinking they are building a meaningful connection (Sparrow, 2002)?

There is also the risk of a (perceived) lack of privacy (Sharkey & Sharkey, 2010). Because care robots must monitor their surroundings continuously in order to function properly, they can feel like a security camera that walks around the house. This transforms what is supposed to be a private place into one with no privacy at all. Even if the robot deletes its recordings immediately, the mere feeling of always being watched can cause serious stress.

Given the aging population, it seems almost inevitable that some form of AI will be used in elderly care. However, to avoid a situation where this solution does a great deal of unintended harm, these ethical questions should be considered before implementation.

Bibliography

Fratiglioni, L., Wang, H., Ericsson, K., Maytan, M., & Winblad, B. (2000). Influence of social network on occurrence of dementia: a community-based longitudinal study. The Lancet, 355(9212), 1315-1319. https://doi.org/10.1016/s0140-6736(00)02113-9

Mukai, T., Hirano, S., Nakashima, H., & Sakaida, Y. (2010). 1A1-E24 Realization of patient-transfer tasks using nursing-care assistant robot RIBA and its safety measures. The Proceedings of JSME Annual Conference on Robotics and Mechatronics (Robomec), 2010(0), _1A1-E24_1-_1A1-E24_4. https://doi.org/10.1299/jsmermd.2010._1a1-e24_1

NOS. (2022). Nieuw zorgakkoord: aandeel werkenden in zorg moet niet verder stijgen [New healthcare agreement: share of workers in healthcare should not rise further]. Retrieved 13 October 2022, from https://nos.nl/artikel/2443630-nieuw-zorgakkoord-aandeel-werkenden-in-zorg-moet-niet-verder-stijgen

Sharkey, A., & Sharkey, N. (2010). Granny and the robots: ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27-40. https://doi.org/10.1007/s10676-010-9234-6

Sparrow, R. (2002). The march of the robot dogs. Ethics and Information Technology, 4, 305-318. https://doi.org/10.1023/A:1021386708994


AI for deepfakes, a problem or the solution?

4 October 2022


Like many other artificial intelligence techniques, deepfake AI has improved enormously over the last few years. From a single image of a target, it is already possible to generate an entire video (Siarohin et al., 2019). The quality of these models has also reached a level where human detection is about as accurate as random guessing (Rossler et al., 2019). While this is a great achievement from a technological perspective, many people worry about potential abuse. And not without reason: criminals have already used AI-based software to impersonate an executive and trick employees into transferring more than 200,000 euros into their account (Stupp, 2019). Luckily, AI could also be part of the solution, as various deep learning models have been developed to detect videos altered with deepfake technology (Rossler et al., 2019). This is great news, but it does not solve the issue overnight: the pattern recognition of these detectors can itself be exploited through adversarial attacks (Hussain et al., 2021). Adding carefully chosen noise to a frame, or altering even just a few pixels, can be enough to trick the deep neural networks that do the detecting.
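To make the idea of such a perturbation concrete, here is a minimal sketch of a white-box attack in the style of the fast gradient sign method (FGSM). The `detector` model, its input format, and the label convention (a positive logit meaning "fake") are assumptions for illustration, not the setup of Hussain et al. (2021), who use stronger iterative attacks.

```python
import torch

def fgsm_evade(detector, frame, epsilon=2/255):
    """Perturb a video frame so a differentiable deepfake detector
    leans toward a 'real' verdict, using a single FGSM step.

    Assumes (hypothetically) that `detector` returns a logit where
    positive means 'fake', and that `frame` is a (1, 3, H, W)
    tensor with values in [0, 1].
    """
    frame = frame.clone().detach().requires_grad_(True)
    logit = detector(frame)
    # Treat the 'fake' logit itself as the loss to minimize.
    loss = logit.sum()
    loss.backward()
    # Step against the gradient of the fake score; epsilon keeps the
    # change imperceptible. Clamp back into the valid pixel range.
    adv = (frame - epsilon * frame.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()
```

In practice attackers iterate many such small steps (projected gradient descent) rather than taking a single one, but even this crude version shows how little the frame needs to change to flip a detector's verdict.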

If the detection AI is open source, creating such an attack is relatively easy: with full access to the model, an attacker can perturb the pixels of a deepfake video, guided by the detector's own responses, until the algorithm no longer flags it. Even when the detection software is not in the public domain, it is still possible (though more complicated) to mount an attack (Hussain et al., 2021). Because such black-box attacks are also feasible, a cat-and-mouse game emerges in which the detection AI tries to spot the attacks while the attackers evolve to dodge detection. For now the outcome is uncertain, but my guess is that the detector will eventually win. Large tech companies are heavily incentivized to solve this problem, especially with increasing pressure from legislators. Nefarious actors, on the other hand, have much less incentive to invest in dodging detection: simpler and cheaper options like social engineering or outright lies tend to do the job about as well.
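The black-box case can be sketched too. Here only the detector's output score is observable, so the attacker searches by trial and error. The `query_detector` function below is hypothetical (it stands in for whatever API returns a "fake" probability), and this naive random search is only meant to illustrate the idea; real black-box attacks like those evaluated by Hussain et al. (2021) are far more query-efficient.

```python
import numpy as np

def random_search_evade(query_detector, frame, threshold=0.5,
                        step=4/255, max_queries=10_000, rng=None):
    """Naive black-box evasion: try small random pixel perturbations,
    keep any change that lowers the detector's 'fake' score, and stop
    once the score drops below the decision threshold.

    Assumes `query_detector(frame) -> float` returns the probability
    that `frame` is a deepfake, and `frame` is a float array in [0, 1].
    """
    rng = rng or np.random.default_rng(0)
    best = frame.copy()
    best_score = query_detector(best)
    for _ in range(max_queries):
        if best_score < threshold:
            return best  # the detector now calls the frame 'real'
        # Propose a small random +/- perturbation of every pixel.
        noise = step * rng.choice([-1.0, 1.0], size=frame.shape)
        candidate = np.clip(best + noise, 0.0, 1.0)
        score = query_detector(candidate)
        if score < best_score:  # greedy: keep only improvements
            best, best_score = candidate, score
    return None  # failed to evade within the query budget
```

Every query here costs time and risks tripping rate limits on the detection service, which is why query efficiency, not just success rate, is the central concern in black-box attack research.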

Bibliography

Hussain, S., Neekhara, P., Jere, M., Koushanfar, F., & McAuley, J. (2021). Adversarial deepfakes: Evaluating vulnerability of deepfake detectors to adversarial examples. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 3348-3357).

Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., & Nießner, M. (2019). FaceForensics++: Learning to detect manipulated facial images. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 1-11).

Siarohin, A., Lathuilière, S., Tulyakov, S., Ricci, E., & Sebe, N. (2019). First order motion model for image animation. Advances in Neural Information Processing Systems, 32.

Stupp, C. (2019). Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. The Wall Street Journal. http://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
