The Age of Digital Childcare

20 September 2021


Research has shown that positive early-childhood brain development lays the foundation for a fulfilling and successful life. With the world turning to AI assistants for help, we wonder whether AI-powered technologies could be part of the solution, and perhaps even a substitute for parents, in raising children.

Various applications have been released to assist parents in making better decisions for their child’s development. Muse, for example, is an application that helps parents develop the habits that lead to better outcomes in their children, such as emotional intelligence, persistence, self-control, and resilience. Muse recognizes that artificial empathy is no substitute for human connection, and therefore focuses on making parents better at parenting so they can nurture their children into a fulfilling life.

Another application, Bosco, helps parents monitor digital threats to their child. Taking factors such as age, gender, and culture into account, Bosco’s AI charts an expected online-behavior profile from the child’s history. The application acts as the “eyes and ears” that alert parents to potential red flags.

Robots such as BeanQ have been built as a more interactive and dynamic source of stimulation and engagement for children. The robot is designed, like any other toy, to grab the child’s attention and is voice-powered to facilitate interaction, giving the child another outlet of engagement while parents are busy with other tasks. Parents say it also doubles as a “remote babysitting” gadget, tracking the nanny’s and the baby’s movements and uploading them online for parents to see. It is widely used in China to capture the important moments in a child’s life.

There has been a growing effort to promote robot “pals” for language development and social growth. AvatarMind’s iPal is one such robot: it speaks two languages (Chinese and English), plays games, gives math lessons, and tells jokes. This educational humanoid robot costs $2,499, and a customized version costs $4,999. It is seen as a supplement to human nannies, not as AI babysitting: parenting is more complex than running an interactive program or entertaining a child, and even childcare robots lack the human touch necessary for children’s optimal and fulfilling development.

All of the above are merely tools to assist parents, not substitutes for them. Building social skills requires human connection, which is why parents themselves remain so important in a child’s development. It is hard to imagine a machine replicating that level of bonding.

There are now growing concerns about the divide that will open between those who have access to AI and are AI-literate and those who are not as fortunate. There is also the question of how reducing parent time through robots and applications, now pervasive in our societies, may affect the very development of the young minds these AI technologies aim to nurture. Some say it is best to develop a child’s socio-emotional skills free of digital influence, while others argue there is merit in exposing children to technology as early as possible so they grow up digitally literate in a world that is increasingly AI-powered.

References:

https://thriveglobal.com/stories/can-artificial-intelligence-help-enhance-the-quality-of-early-childhood-education/

https://www.forbes.com/sites/neilsahota/2020/06/22/ai-powered-parenting-entering-the-age-of-digital-childcare/

Why Childcare Robots Will Become the New Norm

https://edition.cnn.com/2018/09/28/health/china-ai-early-education/index.html


Deepfakes: the threats they pose and ways to prevent them

14 September 2021


What are deepfakes?
Deepfakes are an emerging breed of AI-enhanced photos and videos that are blurring reality by bringing a new level of realism that is difficult for humans, and even machines, to detect.

Threats:
We live in a world where information flows faster than fact checkers can process it, and this poses a threat to our social fabric. The public is certainly aware that deepfakes are seeping into social media platforms, and we take everything with a grain of salt, but this awareness has deep repercussions of its own: we are beginning to doubt even real incidents. Sam Gregory, a director of a nonprofit that helps document human rights abuses in Brazil (a country with a long history of police violence), has said that video his organization films of police harassing or killing civilians is no longer sufficient on its own to trigger an investigation. The fear that the real is fake has become a recurring theme, handing the powerful yet another weapon: they can simply dismiss something as a “deepfake” when the less fortunate or powerless expose corruption in the system.

Deepfakes also pose a significant threat to industries that make important financial decisions based on the contents of photos and videos, such as insurance. They can be used to file fraudulent claims and fabricate evidence of assets that do not exist; people can, for example, exaggerate damage from natural disasters or claim for items they never owned.

Preventative measures:
Tech companies have focused on tools to detect deepfakes and manipulated media. Facebook partnered with academia and other companies to launch the Deepfake Detection Challenge, and even the top performers achieved an accuracy rate of just 65%. Such technology can be used to apply AI-based forensic analysis to every photo and video before an insurance claim is processed.

The Content Authenticity Initiative, started by Adobe with Twitter and The New York Times, is another effort to enable content evaluation. C2PA was launched just seven months ago by Microsoft in partnership with Adobe to provide technical standards for authenticating content. But detection is just one layer of defense, and it takes time to process and analyse photos and videos.

Preventative technology, especially in the insurance industry, is needed to provide more reliable solutions to this problem. One approach is digital authentication of photos and videos at the time of capture, making them tamper-proof. This could be built into a secure app that prevents the insured from uploading their own photos, although app adoption is never one hundred percent. Blockchain technology can be added on top to protect the media from later changes, assuring viewers that the content is original and unaltered. Such technology for insurance use is still just a concept and not yet in practical deployment. However, Microsoft’s AMP (AETHER Media Provenance), together with Azure’s CCF (Confidential Consortium Framework), is already being used to certify media provenance and ensure transparency through manifests attached to the media. Applied to the insurance industry, this would safeguard the integrity and reliability of claims.
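The capture-time authentication idea above can be sketched in a few lines. In this hypothetical Python example (the key handling and function names are illustrative assumptions, not part of AMP, CCF, or any product mentioned here), a capture app signs each photo’s bytes with a device secret, and the insurer later recomputes the signature to detect any alteration:

```python
import hashlib
import hmac

# Illustrative device secret; a real capture app would keep this in
# secure hardware (TEE/HSM), never in plain code.
DEVICE_KEY = b"device-secret-key"

def sign_at_capture(media: bytes) -> str:
    """Compute a tamper-evident signature the moment the media is captured."""
    return hmac.new(DEVICE_KEY, media, hashlib.sha256).hexdigest()

def verify_claim_media(media: bytes, signature: str) -> bool:
    """Insurer-side check: recompute the signature and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, media, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

photo = b"...raw image bytes..."
sig = sign_at_capture(photo)
print(verify_claim_media(photo, sig))            # True: media untouched
print(verify_claim_media(photo + b"edit", sig))  # False: tampering detected
```

A blockchain or provenance manifest, as described above, would extend this by recording the signature in a ledger the insured cannot modify, so even the capture device cannot later rewrite its own history.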

A mix of both kinds of technology, detective and preventative, will give the best results and the strongest assurance of tamper-proof data and media.

References:

https://www.fastcompany.com/90549441/how-to-prevent-deepfakes

https://www.propertycasualty360.com/2021/09/14/deepfakes-an-insurance-industry-threat/?slreturn=20210814054220

https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/

https://towardsdatascience.com/technical-countermeasures-to-deepfakes-564429a642d3
