Google’s DeepMind facing data privacy lawsuit

5 October 2021


From data to app to lawsuit

2015: Alphabet Inc.’s British artificial intelligence subsidiary DeepMind obtains private health records of 1.6 million patients from the Royal Free London NHS Foundation Trust. 

This data was to be used to develop the ‘Streams’ app, which alerts doctors to signs of acute kidney injury so that it can be detected and diagnosed earlier. The app was already in use at the Royal Free, where it had earned considerable praise.
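To illustrate the kind of rule-based alerting an app like Streams performs, here is a minimal sketch in Python. It assumes a simplified version of the NHS acute kidney injury algorithm, which flags a rise in serum creatinine relative to a patient’s baseline; the function name, thresholds, and return convention are my own for illustration and are not taken from DeepMind’s implementation.

```python
def aki_alert_stage(current_creatinine, baseline_creatinine):
    """Return an alert stage (0 = no alert) by comparing a patient's
    current serum creatinine to their baseline value.

    Simplified illustration only: the real NHS algorithm also handles
    absolute rises, missing baselines, and reference ranges.
    """
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return 3  # severe injury: escalate immediately
    if ratio >= 2.0:
        return 2  # marked deterioration
    if ratio >= 1.5:
        return 1  # possible acute kidney injury: notify a clinician
    return 0      # within normal variation


# Hypothetical patient: creatinine has tripled from baseline
print(aki_alert_stage(180, 60))
```

The value of an app like Streams is less the rule itself than delivering such alerts to the right clinician in time to act.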

From DeepMind’s point of view, it is making use of valuable data to advance healthcare and save lives. From the Royal Free’s point of view, it is enabling this progress by sharing the data and then using the resulting app to treat patients. For some citizens, however, this looks like a breach of data privacy.

The British law firm Mishcon de Reya has filed a class-action lawsuit against DeepMind on behalf of Andrew Prismall and the other 1.6 million patients whose data was shared.

Who is at fault?

Something I find quite interesting about this case is that DeepMind is accused of being at fault rather than the Royal Free, which shared the data in the first place. Although the Streams app was developed by DeepMind, it was a collaboration between DeepMind and the Royal Free and could not have succeeded without input from both parties.

I believe that both players are to blame in this situation and that DeepMind cannot be held at fault alone. Who do you believe is at fault?

How can we prevent this in the future?

In such situations, a healthcare system with strong data privacy regulations, and healthcare providers who abide by them, would largely diminish the threat posed by major tech firms such as Alphabet. However, too much regulation can inhibit innovation. Finding a balance between innovation and safety is a challenge that regulators in many industries struggle with worldwide.

I believe that finding such a balance is no easy task. As digital information becomes one of the most important assets for innovative development, the pressures pushing for both regulation and free innovation keep growing. Experts on data privacy and innovation must come together to form regulations that foster safe innovation.

What do you think should be done to foster safe innovation in the information era?

References:

https://www.bbc.com/news/technology-40483202

https://www.bbc.com/news/technology-58761324

https://www.cnbc.com/2021/10/01/google-deepmind-face-lawsuit-over-data-deal-with-britains-nhs.html

https://deepmind.com/


Retrospective Facial Recognition in Policing: 2021 or 1984?

29 September 2021


The Metropolitan Police (Met) plans to purchase and implement retrospective facial recognition (RFR) technology in London (Woodhams, 2021). This technology will enable the Met to process historic photographs from CCTV, social media, and many other sources in order to track down criminal suspects. These plans became public when the Mayor of London accepted the Met’s proposal to expand its surveillance technology (MOPAC, 2021). The proposal outlines the Met’s plans for a four-year, £3 million deal with NEC, a Japanese multinational information technology and electronics corporation.

In the past, similar technologies such as Live Facial Recognition (LFR) have drawn heavy public criticism. LFR scans the faces of people who walk past a camera and compares them to a database of photos of people on a watchlist. Police use of LFR has already been scrutinized to the point where the United Nations High Commissioner for Human Rights has called for a moratorium on its use (Woodhams, 2021).
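To make the watchlist comparison concrete, here is a minimal sketch of embedding-based face matching in Python. Everything here is illustrative: real LFR and RFR systems use trained deep-learning models to produce the face embeddings, and the names, vectors, and similarity threshold below are invented for the example.

```python
import numpy as np

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Compare one face embedding to a watchlist of (name, embedding)
    pairs and return the best match above the similarity threshold,
    or None if no entry is similar enough.
    """
    best_name, best_score = None, threshold
    for name, ref in watchlist:
        # Cosine similarity between the probe and the reference vector
        score = np.dot(face_embedding, ref) / (
            np.linalg.norm(face_embedding) * np.linalg.norm(ref)
        )
        if score > best_score:
            best_name, best_score = name, score
    return best_name


# Toy 2-D "embeddings"; real systems use vectors with hundreds of dimensions
watchlist = [("suspect_a", np.array([1.0, 0.0])),
             ("suspect_b", np.array([0.0, 1.0]))]
print(match_against_watchlist(np.array([0.9, 0.1]), watchlist))
```

The live (LFR) and retrospective (RFR) variants differ mainly in where the probe embedding comes from: a camera feed in real time, or historic photographs processed after the fact.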

To protect citizens’ freedom and privacy, it is important that the public gains an understanding of both LFR and RFR, and of the police’s plans to implement them. As policing technology continues to grow in complexity, I believe citizens will find it increasingly difficult to understand these technologies and the implications of their use.

One interesting implication of RFR that I would like to shed light on in this article involves data consent. As mentioned previously, RFR uses historic photographs. When these photos were taken, citizens did not agree for them to be used in future RFR police investigations; at the time, many did not even know that such use could one day be possible. This raises my question to the readers of this article: should the police be allowed to use photographs you consented to in the past for new purposes, without new consent? Are the police acting immorally?

References:

Woodhams, Samuel. (2021). London is buying heaps of facial recognition tech. Wired, Condé Nast Britain 2021. Retrieved from: https://www.wired.co.uk/article/met-police-facial-recognition-new

MOPAC. (2021). Retrospective Facial Recognition System. The Mayor’s Office for Policing And Crime. Retrieved from: https://www.london.gov.uk/sites/default/files/pcd_1008_retrospective_facial_recognition_system.pdf

Featured photo from:

Macon, K. (2021). London Police to rollout “Retrospective Facial Recognition,” scanning old footage with new invasive face recognition tech. Reclaim The Net. Retrieved from: https://reclaimthenet.org/london-police-to-rollout-retrospective-facial-recognition/
