Over the past decade, the field of artificial intelligence (AI) has seen fascinating developments. There are now over twenty domains in which AI programs perform at least as well as, or better than, humans. While AI can be used for our benefit in many areas, it can also be misused for malicious ends.
Among the many risks of AI, those associated with privacy and data security are especially real. Because AI enhances the expected value of data, firms are encouraged to collect, store, and accumulate data, regardless of whether they will use AI themselves (Zhe Jin, 2018). These ever-growing big data storehouses become a prime target for hackers and scammers. A concrete example of harm that could arise from a data breach is identity theft, a crime scammers were engaging in long before big data and AI existed.
Recent trends suggest that criminals are getting more sophisticated and are ready to exploit data technology. For instance, robocalls – the practice of using a computerized auto-dialer to deliver a prerecorded message to many telephones at once – have become prevalent because of relatively standard advances in information technology (Burton, 2018).
With AI, however, criminals gain improved methods of pattern recognition and delivery, increasing the efficacy of these calls. For example, the receiver may see a local number that looks familiar, possibly even a number from his or her personal contacts, and the call then tricks the receiver into listening to unwanted telemarketing.
Another concern with AI and other predictive technologies is that they are not fully accurate in their intended use. While little wasteful effort results if apps like Netflix cannot precisely predict the next movie people want to watch, the consequences are far more serious if the U.S. National Security Agency (NSA) flags innocent people as possible future terrorists based on the shortcomings of an AI algorithm (Zhe Jin, 2018).
To summarize, there is a real risk to privacy and data security. The magnitude of that risk, and its potential harm to consumers, will likely depend on how AI and other data technologies develop. What are your thoughts on these risks?
References:
Burton, J. (2018). Hacking your holiday: how cyber criminals are increasingly targeting the tourism market. [online] The Conversation. Available at: http://theconversation.com/hacking-your-holiday-how-cyber-criminals-are-increasingly-targeting-the-tourism-market-98967 [Accessed 30 Sep. 2018].
Zhe Jin, G. (2018). ‘Artificial Intelligence and Consumer Privacy’, NBER Working Paper 24253, pp. 4–8. Cambridge: National Bureau of Economic Research.
I believe that there is indeed a real risk. With new technologies such as social networks, the severity of attacks is also increasing. Not only is the number of data breaches growing (945 in the first six months of 2018), but the number of records affected by a single breach has increased significantly as well. For example, one data breach at Facebook affected approximately 30 million people.