A sociotechnical approach towards AI in healthcare

8 October 2020


The rise of artificial intelligence has affected industries of every kind over the past several years. This is also the case in healthcare, an industry in which AI has had especially important and influential applications. Artificial intelligence is increasingly used for data analytics in healthcare, providing additional insight into patients' conditions and even guiding diagnoses and treatment decisions. Research has been plentiful, and investments are constantly made to increase the reliability and accuracy of the applications in place. However, has the focus been too much on the technology, at the expense of educating personnel to interpret and understand it?

Recent research suggests that the use of AI in healthcare has perhaps not been as influential as one might think. Doctors sometimes meet the warnings relayed by the technology with indifference and suspicion. Additionally, nurses and doctors are not always certain how to act on the indications generated by the algorithm: while it signals that a specific patient needs extra attention, the reason behind the warning is lost in the analysis. Doctors are expected to interpret the warning themselves and determine what kind of extra care the patient needs. In several cases, this has even resulted in misdiagnoses.
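To make this interpretability gap concrete, below is a small, purely hypothetical sketch in Python (the risk factors, scores and thresholds are made up for illustration and are not taken from any system discussed here). An alert that only reports that a threshold was crossed leaves the clinician guessing, while one that also surfaces the factors driving the score at least points them toward what the patient might need.

```python
def opaque_alert(risk_score: float, threshold: float = 0.8) -> str:
    """Flag a patient without explaining why -- the situation described above."""
    if risk_score >= threshold:
        return "ALERT: patient needs extra attention"
    return "No action needed"


def explained_alert(feature_contributions: dict, threshold: float = 0.8) -> str:
    """Flag a patient and list the factors that contributed most to the score."""
    risk_score = sum(feature_contributions.values())
    if risk_score < threshold:
        return "No action needed"
    top_factors = sorted(feature_contributions,
                         key=feature_contributions.get, reverse=True)[:3]
    return "ALERT: patient needs extra attention (main factors: " + ", ".join(top_factors) + ")"


# Made-up contributions from a hypothetical risk model:
contributions = {"rising lactate": 0.45, "low blood pressure": 0.30, "elevated heart rate": 0.15}

print(opaque_alert(sum(contributions.values())))  # clinician must guess what triggered the warning
print(explained_alert(contributions))             # clinician sees what drove the warning
```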

This discussion about AI in healthcare has become a leading voice in the broader debate on the limitations of integrating AI into everyday life on a large scale. It has shown that AI really is a much smaller piece of the puzzle than initially expected. Research is so focused on the technology itself that the need to understand and interpret the algorithms and their underlying reasoning is left behind. Elish and Watkins (2020) acknowledge that human labor is required to harmonize a technical system. In other words, the integration of AI has created social breakages that must be repaired before AI can reach its full fruition and utility.

This raises an interesting question: should AI research continue on its current path, or should this sociotechnical approach be applied more widely? AI goes beyond the algorithms; the social structures around its application require at least as much attention during integration.

What do you think? What other industries could find value in focusing on the social structures around AI integration?

 

Bohr, A. & Memarzadeh, K., 2020. Chapter 2 – The rise of artificial intelligence in healthcare applications. In: Artificial intelligence in healthcare. Copenhagen: Academic Press, pp. 25-60.

Davenport, T. & Kalakota, R., 2019. The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), pp. 94-98.

Elish, M. C. & Watkins, E. A., 2020. Repairing Innovation: A study of integrating AI in clinical care. [Online]
Available at: https://datasociety.net/library/repairing-innovation/
[Accessed 8 October 2020].

Simonite, T., 2020. AI Can Help Patients—but Only If Doctors Understand It. [Online]
Available at: https://www.wired.com/story/ai-help-patients-doctors-understand/
[Accessed 8 October 2020].


Facial recognition: from great new technology to even greater concerns

24 September 2020


Facial recognition technology has been around since the mid-1960s and has seen tremendous growth over the years. Applications can be found everywhere: tagging photos on social media, biometric locks, surveillance; the list is endless. While many of these may seem harmless and appear to make everyday tasks more efficient, scepticism about the technology has gained a greater foothold in recent years. As the technology achieves greater accuracy, privacy becomes a growing concern. Some governments have set up permanent surveillance systems, collecting enormous amounts of data on their citizens. While these systems are often claimed to provide safety on the streets, many are concerned about other potential uses of the collected data.

Worldwide, only a very limited number of countries have nationwide bans in place: Belgium and Luxembourg. All other countries have little to no regulation of facial recognition technology, which should perhaps be reassessed. With America experiencing most of the uproar against the technology, action groups gained their first victory in 2019, when San Francisco became the first American city to ban governmental use of facial recognition technology. Now, a year and around ten additional city-wide bans later, the overall paradigm seems to be shifting even further. Recently, Portland became the first city in the US to completely ban facial recognition technology for both private and governmental use. This seems to have sparked further discussion on the regulations that should be in place for the technology.

Studies have found concerning indications of the effects facial recognition can have in everyday settings. For example, Andrejevic & Selwyn (2019) present the social challenges facial recognition can pose in schools. They find that, once the technology is integrated, the nature of schools can become oppressive, authoritarian and divisive. Additionally, the technology is being mass deployed in law enforcement without any scientific evidence that it helps identify suspects more reliably. In fact, it imposes stronger biases on law enforcement, as the technology fosters a false sense of security. These are only some of the effects that have surfaced now that facial recognition technology is being integrated.

For these reasons, I would personally urge all governments to strongly consider limiting the large-scale use of facial recognition technology. While it is constantly developing and admittedly convenient in many scenarios, the actual scientific benefits of many applications are yet to be proven. Additionally, many risks have been identified that, while perhaps contained for now, could start showing their effects at any time.

What do you think? Should facial recognition be better regulated worldwide? Do the applications outweigh the potential risks?

https://www.bloomberg.com/quicktake/facial-recognition#:~:text=Facial%20recognition%20technology%20was%20first,intelligence%20agencies%20and%20the%20military.

https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it

https://www.wired.com/story/portlands-face-recognition-ban-twist-smart-cities/

https://www.forbes.com/sites/tomtaulli/2020/06/13/facial-recognition-bans-what-do-they-mean-for-ai-artificial-intelligence/#6c6aacca46ee

https://www-nature-com.eur.idm.oclc.org/articles/d41586-019-02514-7

https://www.visualcapitalist.com/facial-recognition-world-map/#:~:text=Belgium%20and%20Luxembourg%20are%20two,use%20of%20facial%20recognition%20technology

Andrejevic, M. & Selwyn, N. (2019). Facial recognition technology in schools: critical questions and concerns. Learning, Media and Technology 45(2), pp. 115-128.
