AI Software: Where do we draw the line?

22 October 2017


It comes as no surprise that Artificial Intelligence (AI) has achieved several breakthroughs over the past decade. Although AI has been able to replicate human performance and thereby produce software that benefits daily operations, it also brings several risks. AI software is continuously evolving, so the question that needs to be asked is whether there really are boundaries to AI when it comes to the development of software. Where do we draw the line? I will discuss this topic by analyzing a recent development in AI software, namely Aristotle, the AI babysitter.

In January 2017, Mattel introduced a new device called Aristotle, created with the intention of helping parents nurture and protect their children (Lee, 2017) as well as teaching children good manners and foreign languages (Hern, 2017). Although Mattel’s aim was to create a system that would help parents, what it essentially does is replace parenting with technology (Lee, 2017). Human interaction is essential for a child’s development (CAH, 2004), and such technologies could affect the emotional development of young children (Lee, 2017). The US nonprofit Campaign for a Commercial-Free Childhood organized a campaign demanding that Mattel not go through with the release of Aristotle, arguing that the device “attempts to replace the care, judgment and companionship of loving family members with faux nurturing and conversation from a robot designed to sell products and build brand loyalty” (Hern, 2017). The campaign collected 1,500 signatures, leading to Mattel’s decision this October not to release Aristotle (Hern, 2017).

On the other hand, AI technology has been shown to be beneficial in caring for the elderly, as with the software ElliQ. ElliQ is designed to combat loneliness and give elderly people an easy way to use technology; it can therefore essentially be viewed as a companion for the elderly. In the UK, for example, loneliness is a major concern: half of the people over the age of 75 live alone, and over 1 million people feel lonely most of the time. This shows that loneliness is a pressing problem that needs to be tackled, not only in the UK but all over the world (Bramhill, 2017).

My opinion on this topic is that AI can help satisfy important needs that would otherwise have to be met by human beings. Tackling loneliness, for example, is a clear case of a human need that could benefit from AI software. However, the development of AI should not be encouraged merely to simplify human tasks. I believe there are several aspects of life that cannot be replaced by AI, such as education, security, protection, nurturing and healthcare. Although technology can help simplify many human processes, it is important to recognize that not every process or interaction can be replaced by technology. This is not only for the sake of human development, but also for the sake of humans’ security and safety. We should therefore clearly define where the limit of AI software lies and acknowledge which tasks not only can, but should, be automated, and which cannot.

References

Bramhill, N, 2017, ‘Robots may be used to care for elderly in Ireland by 2022’, The Times, Available at: https://www.thetimes.co.uk/article/robots-may-be-used-to-care-for-elderly-in-ireland-by-2022-xfvz08g9h [8 October 2017]

CAH, 2004, ‘The importance of caregiver-child interactions for the survival and healthy development of young children’, Child and Adolescent Health and Development, Available at: http://apps.who.int/iris/bitstream/10665/42878/1/924159134X.pdf [8 October 2017]

Hern, A, 2017, ‘‘Kids should not be guinea pigs’: Mattel pulls AI babysitter’, The Guardian, Available at: https://www.theguardian.com/technology/2017/oct/06/mattel-aristotle-ai-babysitter-children-campaign [7 October 2017]

Lee, D, 2017, ‘Mattel thinks again about AI babysitter’, BBC, Available at: http://www.bbc.com/news/technology-41520732 [7 October 2017]


1 thought on “AI Software: Where do we draw the line?”

  1. Thanks for your post! I was not aware of the AI program Aristotle, and after reading your blog post it got me thinking about the limitations that can and should be imposed on AI development. But perhaps limiting AI to certain aspects, or completely banning it from others is not the right approach. Perhaps the mindset should be shifted to thinking of AI as a value-added complement to a current aspect of human life. Let’s take current education systems as an example. While I thoroughly believe in the value of having a human expert teach students in a classroom setting, there are of course limitations to what this person could teach, as well as the quality of this teacher. AI could therefore become a complement to the human teacher, giving tailored challenges to individual students based on their aptitude for that class. When thinking of AI as a value-adding tool in this way, perhaps the line that should be drawn on AI’s limitations can be expanded.
