Artificial Intelligence in Elderly Care

15 October 2022

Elderly care is becoming a struggling industry, with many care homes experiencing a shortage of skilled caregivers to provide long-term care for the elderly (Zeng et al., 2021). Artificial intelligence can play a key role in assisted living and healthcare monitoring, offering extra support to understaffed facilities (Qian et al., 2021). An important distinction: this article focuses on AI as a support function. The solutions discussed are not care robots but simpler applications such as, but not limited to, sensors, smartwatches, and mobile phones.

Examples of AI already in use in this area range from private homes to care homes. In the US, Kellye Franklin, a 39-year-old only child, is the primary caregiver of her father, who has been diagnosed with dementia (Corbyn, 2021). In cases like this, managing a job and daily life on top of caring for a parent is a heavy strain. The pair live in the same house, and Franklin's caregiving is supported by an AI system in the home. Sensors and cameras set up around the house and linked to her phone can detect 'unusual behaviour' from her father whether she is at home or away. For example, if her father steps outside and does not return within a predetermined 'short time', she receives an alert on her phone. Similarly, she can be notified if a camera and sensor detect that her father has fallen and has not moved.
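
To make the mechanism concrete, the exit alert can be thought of as a simple timer attached to a door sensor. The sketch below is a minimal, hypothetical Python illustration; the grace period, function names, and notification step are assumptions for illustration, not details of the actual system Franklin uses.

```python
import time

# Hypothetical grace period: alert if no return within 10 minutes.
EXIT_GRACE_PERIOD_S = 10 * 60

def should_alert(exit_timestamp: float, returned: bool) -> bool:
    """True when the person went outside and has not come back in time."""
    return not returned and (time.time() - exit_timestamp) > EXIT_GRACE_PERIOD_S

# Example: the door sensor fired 15 minutes ago, with no return event since.
if should_alert(exit_timestamp=time.time() - 15 * 60, returned=False):
    print("ALERT: unusual absence detected - notify caregiver's phone")
```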

In a care home, this works in much the same way, compensating for low staff numbers and a rising number of elderly residents in the facilities. Additionally, personal devices worn by the elderly and paired with AI systems can provide more in-depth monitoring. Care staff can be notified of a rising heart rate or blood pressure; where traditionally action follows an incident, AI and these devices offer more preventative measures.
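
As an illustration of how such preventative monitoring might work, a wearable's readings can be compared against the wearer's own baseline rather than a fixed limit. The following is a hypothetical Python sketch; the z-score method, threshold, and alert step are illustrative assumptions, not a description of any specific commercial device.

```python
from statistics import mean, stdev

def vitals_anomalous(history: list[float], latest: float, z: float = 3.0) -> bool:
    """Flag a reading that falls far outside the wearer's personal baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline yet
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > z

# Example: a stable resting heart rate followed by a sudden spike.
resting_hr = [62.0, 65.0, 61.0, 64.0, 63.0, 66.0, 62.0]
if vitals_anomalous(resting_hr, latest=110.0):
    print("ALERT: heart rate far outside baseline - notify care staff")
```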

References

Corbyn, Z. (2021, June 3). The future of elder care is here – and it’s artificial intelligence. The Guardian. Retrieved October 15, 2022, from https://www.theguardian.com/us-news/2021/jun/03/elder-care-artificial-intelligence-software

Qian, K., Zhang, Z., Yamamoto, Y., & Schuller, B. W. (2021). Artificial intelligence internet of things for the elderly: From assisted living to health-care monitoring. IEEE Signal Processing Magazine, 38(4), 78–88. https://doi.org/10.1109/msp.2021.3057298

Zeng, D., Wu, J., Yang, B., Obara, T., Okawa, A., Iino, N., Hattori, G., Kawada, R., & Takishima, Y. (2021). SHECS: A local smart hands-free elderly care support system on smart AR glasses with AI technology. 2021 IEEE International Symposium on Multimedia (ISM). https://doi.org/10.1109/ism52913.2021.00019

Artificial Intelligence in Policing

13 October 2022

In 2021, PredPol (short for 'predictive policing'), a leading firm driving the adoption of AI in policing in the United States, came under heavy scrutiny. The scrutiny arose from investigations showing that its software disproportionately predicts crime in lower-class neighbourhoods whose inhabitants are usually working-class people of colour, and Black people in particular (Guariglia, 2021).

The aim of PredPol is to help police distribute manpower across cities and neighbourhoods. By predicting where crime will occur, police organizations can identify these areas before any crime is believed to take place; they can then station manpower there to act swiftly, or hope that their presence deters any actual crime from occurring.

How did PredPol's predictions work? The software uses a machine-learning algorithm trained on historical event datasets for each city, usually dating back two to five years. The data collected includes no demographic or personal information about neighbourhoods or individuals; only three data points are used: crime type, crime location, and crime date/time (PredPol, 2022). The company claimed this eliminated the possibility of bias in sending police to discriminated-against neighbourhoods.

Although the categories of data collected do not show it, this training data actually strengthened a bias already present in policing. Police have long been known to unfairly over-police neighbourhoods inhabited by working-class Americans, and African Americans specifically. This bias reaches so deep into the system that police administrative records lead to many misunderstandings of the level of bias present in US policing (Peeples, 2020). Remember that PredPol extracts its training data from these very records. The result is a self-reinforcing feedback loop: historically harmful and biased patterns are continuously fed into the AI's foundation, teaching it not to do the job better, but to do the same job at a faster rate. And because the algorithm keeps learning, it continued to unfairly send police to discriminated-against neighbourhoods and then stored the resulting data to reaffirm what it had learned.
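
The dynamic described above can be made concrete with a toy simulation. The sketch below is a deliberately simplified, hypothetical Python model; the crime rates, counts, and patrol rule are invented for illustration and bear no relation to PredPol's actual algorithm.

```python
import random

# Toy model of the feedback loop: two neighbourhoods with the SAME true
# crime rate, but neighbourhood 0 starts with more recorded incidents
# because it was historically over-policed.
random.seed(0)
TRUE_CRIME_RATE = 0.3           # identical in both neighbourhoods
recorded = [30, 10]             # biased historical record of incidents

for day in range(200):
    # "Prediction": patrol the neighbourhood with more recorded events.
    patrolled = 0 if recorded[0] >= recorded[1] else 1
    for hood in (0, 1):
        crime_occurred = random.random() < TRUE_CRIME_RATE
        # A crime only enters the record where police are present to see it,
        # so the patrolled neighbourhood keeps accumulating data.
        if crime_occurred and hood == patrolled:
            recorded[hood] += 1

# Neighbourhood 0's count grows while neighbourhood 1's never changes:
# the initial bias is amplified rather than corrected.
print(recorded)
```

Even though both neighbourhoods commit crime at the same rate, the model never has a chance to learn this, because new data only arrives from wherever it already sends patrols.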

The moral of the story is that, despite a good objective, training data cannot be valued on quantity over quality. Moreover, when designing software for areas prone to bias, such as policing, it is essential to take extra care and consideration in deciding what data is fed into the algorithm.

References

Guariglia, M. (2021, December). Police use of artificial intelligence: 2021 in review. Electronic Frontier Foundation. Retrieved October 11, 2022, from https://www.eff.org/deeplinks/2021/12/police-use-artificial-intelligence-2021-review

Jany, L. (2022, July 4). Researchers use AI to predict crime, biased policing in major U.S. cities like L.A. Los Angeles Times. Retrieved October 11, 2022, from https://www.latimes.com/california/story/2022-07-04/researchers-use-ai-to-predict-crime-biased-policing

Peeples, L. (2020). What the data say about police brutality and racial bias — and which reforms might work. Nature. Retrieved October 11, 2022, from https://www.nature.com/articles/d41586-020-01846-z

PredPol. (2022). How PredPol works | Predictive policing. Retrieved October 11, 2022, from https://www.predpol.com/how-predictive-policing-works/
