The rise of artificial intelligence has affected industries of all kinds over the past several years. Healthcare is no exception – an industry in which AI has found especially important and influential applications. Artificial intelligence is increasingly used for data analytics in healthcare, providing additional insight into patient conditions and even guiding diagnostics and decisions. Research has been plentiful, and investments are continually made to improve the reliability and accuracy of the applications in place. However, has the focus been too much on the technology itself, at the expense of educating personnel to interpret and understand it?
Recent research has shown that the use of AI in healthcare has perhaps not been as influential as one might have thought. Doctors sometimes meet the warnings relayed by the technology with indifference and suspicion. Additionally, nurses and doctors are not always certain how to act on the indications an algorithm generates: the system flags that a specific patient needs extra attention, but the explanation of what triggered the warning is lost in the analysis. Doctors are expected to interpret the warning themselves and determine what the patient actually needs extra care with. In several cases, this has even led to misdiagnoses.
This discussion around AI in healthcare has become a leading voice on the limitations of large-scale AI integration into everyday life. It has shown that AI is really a much smaller piece of the puzzle than initially expected. Research is so focused on the technology that the need for a general understanding and interpretation of the algorithms and their underlying reasoning is left behind. Elish and Watkins (2020) acknowledge that human labor is required to harmonize a technical system with the practices around it. In other words, the integration of AI has created social breakages that must be repaired before AI can reach its full fruition and utility.
This raises an interesting question: should AI research continue on its current path, or should this sociotechnical approach be applied more widely? AI goes beyond the algorithms, and the social structures around its application deserve at least as much attention during integration.
What do you think? What other industries could find value in focusing on the social structures around AI integration?
Bohr, A. & Memarzadeh, K., 2020. Chapter 2 – The rise of artificial intelligence in healthcare applications. In: Artificial Intelligence in Healthcare. Copenhagen: Academic Press, pp. 25-60.
Davenport, T. & Kalakota, R., 2019. The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2), pp. 94-98.
Elish, M. C. & Watkins, E. A., 2020. Repairing Innovation: A study of integrating AI in clinical care. [Online]
Available at: https://datasociety.net/library/repairing-innovation/
[Accessed 8 October 2020].
Simonite, T., 2020. AI Can Help Patients—but Only If Doctors Understand It. [Online]
Available at: https://www.wired.com/story/ai-help-patients-doctors-understand/
[Accessed 8 October 2020].