Have platform-mediated markets become too big?

22 October 2017


Platforms such as Google, Facebook and Uber have grown significantly over the past decade, driven by multiple phenomena, including network effects. The bigger platforms become, the further they expand globally, and the louder the calls for regulation grow. Recent news has amplified these calls, especially for Facebook and Uber. Should large platforms be regulated, or have they simply become too big to regulate?

If we look at Uber, the platform tends to create a love-hate relationship with cities. On the one hand, it provides citizens with transport all over the city; on the other, its compliance with local regulations and national employment laws can be questioned. This is the main reason why London does not want to extend Uber's license to operate in the city (Herrman, 2017). Furthermore, disclosures published by Facebook, Google and Twitter have revealed that Russian accounts interfered with the 2016 US presidential election through their advertising networks (Bergen and Frier, 2017). Although the three firms have made clear that they are looking into the problem, the question of whether regulatory practices should be implemented remains. Imposing regulatory boundaries on platforms such as Facebook and Uber would force the firms to restructure their digital strategies and to analyze the impact on their business models.

Although I have mixed feelings on this topic, I will discuss my main concerns. This is a delicate subject: platforms such as Google and Facebook were able to grow to such scale precisely because they faced almost no government constraints, yet the question of regulation is only being raised now that Facebook and Uber have become so big. Although I believe in a certain freedom across platforms, I don't believe that large platforms such as Facebook should be completely free of government regulation. Anyone can publish whatever they want, with good or bad intentions, without any social constraints. Only well-designed regulation would encourage positive behavior while curbing negative behavior. However, when it comes to regulating such large platforms, it must be asked whether enforcement is even feasible. On the other hand, Facebook was created with the intention of connecting people from all around the world through private communication, which means that strict regulation would draw the government into these private conversations. Instead of regulation, platforms such as Facebook could rely on algorithms to limit the spread of 'fake news', as sketched below.
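To make the 'algorithms' suggestion concrete, here is a minimal sketch of what automated flagging might look like, assuming a simple supervised text classifier built with Python and scikit-learn. The training posts, labels and decision threshold are entirely hypothetical and for illustration only; production systems would be far more sophisticated.

```python
# A toy misinformation flagger: TF-IDF features feeding a naive Bayes
# classifier. All data below is hypothetical, purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = flagged as likely misinformation.
posts = [
    "Central bank raises interest rates by a quarter point",
    "Miracle cure the government does not want you to know about",
    "City council approves new cycling infrastructure budget",
    "Shocking secret proof that the election results were fabricated",
]
labels = [0, 1, 0, 1]

# Fit the vectorizer and classifier as a single pipeline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(posts, labels)

# Score an unseen post; rather than deleting outright, a platform could
# route high-scoring items to human reviewers.
new_post = ["Secret miracle cure suppressed by officials"]
flag_probability = model.predict_proba(new_post)[0][1]
if flag_probability > 0.5:  # hypothetical review threshold
    print(f"Hold for review (score: {flag_probability:.2f})")
```

Whether such filtering is preferable to formal regulation is exactly the trade-off at stake: it keeps the government out of private conversations, but it leaves the platform itself as the arbiter of what counts as 'fake'.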

Should Facebook therefore be regulated in order to protect the democratic process, or should it be regulated as little as possible, also in the name of protecting the democratic process? Either answer would profoundly affect huge platforms.

References:

Herrman J, 2017, 'What if platforms like Facebook are too big to regulate?', The New York Times, Available at: https://www.nytimes.com/2017/10/04/magazine/what-if-platforms-like-facebook-are-too-big-to-regulate.html [16 October 2017]

Bergen M and Frier S (Bloomberg), 2017, 'Are Google, Facebook and Twitter too big?', The Hamilton Spectator, Available at: https://www.thespec.com/news-story/7605696-are-google-facebook-and-twitter-too-big-/ [16 October 2017]



AI Software: Where do we draw the line?

22 October 2017


It comes as no surprise that Artificial Intelligence (AI) has achieved several breakthroughs over the past decade. Although AI can replicate aspects of human performance and thereby produce software that benefits daily operations, it also carries several risks. AI software is advancing continuously, so the question that needs to be asked is whether its development really has boundaries. Where do we draw the line? I will discuss this topic by analyzing a recent development in AI software: Aristotle, the AI babysitter.

In January 2017, Mattel introduced a new device called Aristotle, created with the intention of helping parents nurture and protect their children (Lee, 2017) as well as teaching children good manners and foreign languages (Hern, 2017). Although Mattel's aim was to create a system that would help parents, what the device essentially does is replace parenting with technology (Lee, 2017). Human interaction is essential for a child's development (CAH, 2004), and such technologies could harm the emotional development of young children (Lee, 2017). The US nonprofit Campaign for a Commercial-Free Childhood organized a campaign demanding that Mattel cancel the release of Aristotle, arguing that the device "attempts to replace the care, judgment and companionship of loving family members with faux nurturing and conversation from a robot designed to sell products and build brand loyalty" (Hern, 2017). The campaign collected 1,500 signatures, and this October Mattel decided not to release Aristotle (Hern, 2017).

On the other hand, AI technology has proven beneficial in caring for the elderly; one example is the software ElliQ. ElliQ is designed to combat loneliness and to give elderly people an easy way to use technology, so it can essentially be viewed as a companion for the elderly. In the UK, for example, loneliness is a major concern: half of people over the age of 75 live alone, and over 1 million people feel lonely most of the time. This shows that loneliness is a pressing problem that needs to be tackled, not only in the UK but all over the world (Bramhill, 2017).

My opinion on this topic is that AI can help satisfy important human needs. Tackling loneliness, for example, is a clear case of a human need that AI software could serve. However, the development of AI should not be encouraged merely to take over human tasks. I believe there are several domains that cannot be replaced by AI, such as education, security, protection, nurturing and healthcare. Although technology can simplify many human processes, it is important to recognize that not every process or interaction can be replaced by technology. This matters not only for the sake of human development, but also for human security and safety. We should therefore clearly define where the limits of AI software lie and acknowledge which tasks not only can, but should, be automated, and which cannot.

References

Bramhill N, 2017, ‘Robots may be used to care for elderly in Ireland by 2022’, The Times, Available at: https://www.thetimes.co.uk/article/robots-may-be-used-to-care-for-elderly-in-ireland-by-2022-xfvz08g9h [8 October 2017]

CAH, 2004, ‘The importance of caregiver-child interactions for the survival and healthy development of young children’, Child and Adolescent Health and Development, Available at: http://apps.who.int/iris/bitstream/10665/42878/1/924159134X.pdf [8 October 2017]

Hern A, 2017, ‘‘Kids should not be guinea pigs’: Mattel pulls AI babysitter’, The Guardian, Available at: https://www.theguardian.com/technology/2017/oct/06/mattel-aristotle-ai-babysitter-children-campaign [7 October 2017]

Lee D, 2017, ‘Mattel thinks again about AI babysitter’, BBC, Available at: http://www.bbc.com/news/technology-41520732 [7 October 2017]

