How will AI influence legal frameworks?

8 October 2019


Several US states have already outlawed some uses of facial recognition technology (Jee, 2019). Jee (2019) states that this is just the beginning and that a federal ban on certain facial recognition practices may follow. As AI develops rapidly, we need to ask ourselves: how well are regulations keeping up with AI development?

One example is that the current legal framework contains no rules under which robots can be held liable for damage they cause to others (Krzisnik, 2019). The EU Parliament recognises that robots can be very complex and that the ordinary rules of liability are insufficient here. Another legal challenge is the question of so-called electronic personhood (Krzisnik, 2019). Personhood can be defined as being an individual with certain rights and obligations; every natural or legal person has them. Robots, however, are not held liable for damage themselves: instead, the authorities try to find a person behind the robot who could have foreseen the damage (Krzisnik, 2019). The EU Parliament now believes that some of the more complex robots that make autonomous decisions should have such rights and liabilities as well.

Burke and Trazo (2019) identify data collection and safeguarding privacy as one of the most important legal challenges that AI has brought us. European Union regulators have been very active in providing a legal framework for this challenge by creating the GDPR (Burke & Trazo, 2019). The United States is lagging behind in this respect. However, as companies know that such regulations are inevitable, corporations such as Apple and Accenture are expressing their support to US lawmakers (Burke & Trazo, 2019).

I believe that even though passing new laws is a time-consuming process, governments should work together with technology companies to create new legal frameworks that overcome these challenges. They should be proactive instead of reactive, as AI is a rapidly changing field. However, it can be argued whether this is feasible within the current legal system. What do you think?

 

Burke, T.J. & Trazo, S. (2019) Emerging legal issues in an AI driven world. Lexology. [Online] Available at: https://www.lexology.com/library/detail.aspx?g=4284727f-3bec-43e5-b230-fad2742dd4fb.

Jee, C. (2019) A facial recognition ban is coming to the US, says an AI policy advisor. MIT Technology Review. [Online] Available at: https://www.technologyreview.com/s/614362/a-facial-recognition-ban-is-coming-to-the-us-says-ai-policy-advisor/.

Krzisnik, M. (2019) The legal challenges of Artificial Intelligence. Iuricorn – TOP technology lawyers. [Online] Available at: https://www.iuricorn.com/the-legal-challenges-of-artificial-intelligence/.


Is AI ready for healthcare?

29 September 2019


Artificial intelligence algorithms can improve radiologists' performance by increasing the speed and accuracy with which they diagnose their patients. The algorithms can interpret and suggest diagnoses on X-rays, CT scans and other images (Kim & Holzberger, 2019). The drawback, however, is that each AI model focuses on one question and hence one answer: every purpose would need its own algorithm, forcing developers to create thousands of them (Kim & Holzberger, 2019). AI marketplaces try to solve this problem by giving access to a variety of AI models and by collecting feedback to refine the algorithms. Moreover, Forbes (2019) claims that AI could help solve a big problem in healthcare, namely the 'iron triangle'. The triangle consists of three factors: access, affordability and effectiveness. AI can decrease costs, but also improve treatments and increase accessibility. Forbes (2019) also sees a great future for AI in robots that assist in surgery.

Even though the prospects look great, we also need to discuss the other side. There is, of course, a large difference between using AI to, say, buy stocks and using it to diagnose or operate on a real human being. Is it ethical to use AI in healthcare? Whose fault will it be when someone passes away due to a fault in the algorithms? Moreover, AI gives an output it cannot further explain, and it can even be biased because of the data it has been trained on (Sanofi, 2019). AI algorithms can of course contain errors that lead to serious consequences (Keskinbora, 2019). So how can one explain an outcome to a patient if that outcome is based on a very complex system that cannot explain itself? To do that, we need explainable AI (XAI): systems that can explain what other AI systems do (Sanofi, 2019). I believe that AI is not ready for large healthcare decisions until it becomes fully reliable and its choices can be explained. What do you think?

Sources:

Forbes (2019) AI and Healthcare: A Giant Opportunity. [Online] Available at: https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/#2fa19d5b4c68.

Keskinbora, K. (2019) Medical ethics considerations on artificial intelligence. Journal of Clinical Neuroscience. 64(6), 277-282.

Kim, W. & Holzberger, K. (2019) What AI "App Stores" Will Mean for Radiology. Harvard Business Review. [Online] Available at: https://hbr.org/2019/06/what-ai-app-stores-will-mean-for-radiology.

Sanofi (2019) The Ethics of AI in Healthcare. [Online] Available at: https://www.sanofi.com/en/about-us/our-stories/the-ethics-of-ai-in-healthcare.
