IoT: data savior or privacy leak?

14 October 2018


While IoT is often described as the next big thing, creating many opportunities, or even the next industrial revolution (Kennedy 2018), it also carries negative connotations. These stem from security and privacy concerns, along with uncertainty about what these devices could possibly do. New regulatory approaches therefore become necessary to ensure privacy and security (Weber 2010).

The internet of things, or IoT, is a system of interrelated computing devices, mechanical and digital machines, objects, animals or people that are provided with unique identifiers (UIDs) and the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction (Rouse 2016).

IoT is driving nearly every company in every sector to become more technology focused, with data as a key asset. Therefore, not only should IoT devices be secured, but also the data these devices collect, share and store. According to research by Gemalto, a cybersecurity firm based in the Netherlands, 90% of consumers lack confidence in the security of IoT devices (Roe 2018). Additionally, according to research by Cisco, almost 97% of risk professionals believe that a data breach or cyber-attack caused by unsecured IoT devices could be devastating for their firms. To ensure safety, attacks have to be intercepted, data authenticated, access controlled and the privacy of customers (both natural and legal persons) guaranteed (Weber 2010).
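To make the "data authenticated" requirement a little more concrete, the sketch below shows one common approach: a device signs each telemetry message with a shared secret, so the receiving platform can verify that the data really came from that device and was not tampered with in transit. This is a minimal illustration rather than any specific vendor's implementation; the device ID, secret and payload fields are hypothetical.

```python
import hashlib
import hmac
import json
import time
import uuid

# Hypothetical shared secret, provisioned on both the device and the platform.
DEVICE_SECRET = b"example-secret-key"

def build_signed_message(device_id: str, temperature: float) -> dict:
    """Build a telemetry message and attach an HMAC-SHA256 signature."""
    payload = {
        "device_id": device_id,
        "temperature": temperature,
        "timestamp": int(time.time()),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(DEVICE_SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_message(message: dict) -> bool:
    """Platform-side check: recompute the HMAC and compare in constant time."""
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(DEVICE_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["signature"])

if __name__ == "__main__":
    msg = build_signed_message(device_id=str(uuid.uuid4()), temperature=21.5)
    print("authentic:", verify_message(msg))        # True
    msg["payload"]["temperature"] = 99.9            # simulate tampering in transit
    print("after tampering:", verify_message(msg))  # False
```

In practice each device would hold its own key or certificate, but the idea is the same: data from IoT devices should be verifiable, not just transported.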

Furthermore, the hype surrounding IoT causes short-sightedness when firms start their IoT journey. Organizations wrongly focus on "cool" technology to obtain fast, incremental results. This focus on the latest tech hype, rather than the actual business problem, perpetuates other misunderstandings about IoT that hinder its adoption.

Another problem with IoT is that organizations often underestimate its complexity. IoT is a convergence of markets and ecosystems, with seemingly endless use cases, payoffs, opportunities and new value propositions across all vertical sectors (Kranz 2018).

 

So how can these problems be solved?

Organizations should understand that it is nearly impossible to implement IoT successfully on their own.

A paradigm shift is needed, as today’s layered security models are inflexible, not properly scalable and based on technologies that are decades old. IoT, by contrast, is completely different: heterogeneous, highly distributed and connected. Due to its nature, IoT calls for a heterogeneous and differentiated legal framework that adequately takes into account the globality, verticality, ubiquity and technicity of the IoT (Weber 2010).

Another key to success would be to build partner ecosystems of horizontal, vertical and local specialists and then co-innovate with them (Pop 2017). This should happen in a multiprotocol environment, to ensure the safety and security of all data and IoT devices.

What are your thoughts on this? Should this ecosystem be regulated by governmental institutions or should organizations have the freedom to ensure safety on their own?

 

Bibliography:

  • Kranz, M. (2018). Overcoming the Dark Side of IoT. [online] blogs@Cisco – Cisco Blogs. Available at: https://blogs.cisco.com/innovation/overcoming-the-dark-side-of-iot [Accessed 14 Oct. 2018].
  • Kennedy, K. (2018). 2018 Internet of Things Trends. [online] G2 Crowd. Available at: https://blog.g2crowd.com/blog/trends/internet-of-things/2018-iot/ [Accessed 14 Oct. 2018].
  • Pop, O. (2017). Building & Managing an Ecosystem of Co-Created Value. [online] Blog.hypeinnovation.com. Available at: https://blog.hypeinnovation.com/building-managing-ecosystem-cocreated-value [Accessed 14 Oct. 2018].
  • Roe, D. (2018). 7 Big Problems with the Internet of Things. [online] CMSWire.com. Available at: https://www.cmswire.com/cms/internet-of-things/7-big-problems-with-the-internet-of-things-024571.php [Accessed 14 Oct. 2018].
  • Rouse, M. (2016). What is internet of things (IoT)? – Definition from WhatIs.com. [online] IoT Agenda. Available at: https://internetofthingsagenda.techtarget.com/definition/Internet-of-Things-IoT [Accessed 14 Oct. 2018].
  • Weber, R. (2010). Internet of Things – New security and privacy challenges. Computer Law & Security Review, 26(1), pp.23-30.


Adverse Effects of AI

30 September 2018


Over the past decade, the field of artificial intelligence (AI) has seen fascinating developments. There are now over twenty domains in which AI programs perform at least as well as, or even better than, humans. While AI can be used for our benefit in many areas, it can also be misused for malicious ends.

The use of AI carries many risks, and the risks associated with privacy and data security are real. Because AI increases the expected value of data, firms are encouraged to collect, store and accumulate data, regardless of whether they will use AI themselves (Zhe Jin 2018). These ever-growing big data storehouses become a prime target for hackers and scammers. A concrete example of harm that could arise from a data breach is identity theft, an area scammers were active in long before big data and AI existed.

Recent trends suggest that criminals are becoming more sophisticated and are ready to exploit data technology. For instance, robocalling – the practice of using a computerized auto-dialer to deliver a prerecorded message to many telephones at once – has become prevalent because of relatively standard advances in information technology (Burton 2018).

However, as a result of AI, criminals are now using improved methods of pattern recognition and delivery, increasing the efficacy of these calls. For example, the receiver of the call sees a local number that looks familiar, possibly even one resembling a number from his or her personal contacts, and is then tricked into listening to unwanted telemarketing.
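This "familiar local number" trick is often called neighbor spoofing: the spoofed caller ID is chosen to share the receiver's own area code and prefix. Purely as an illustration (the number format, contact list and threshold logic below are made up, not any carrier's actual filter), a very simple defensive heuristic could flag such calls:

```python
def looks_like_neighbor_spoof(caller: str, own_number: str, contacts: set) -> bool:
    """Flag calls whose number mimics the receiver's area code and prefix
    but does not belong to a saved contact. Assumes 10-digit numbers
    such as '5551234567' (hypothetical format)."""
    if caller in contacts:
        return False  # a known, saved contact is never flagged
    same_area_code = caller[:3] == own_number[:3]
    same_prefix = caller[3:6] == own_number[3:6]
    return same_area_code and same_prefix

# Hypothetical example data
own = "5551234567"
contacts = {"5551230000"}
print(looks_like_neighbor_spoof("5551239999", own, contacts))  # True: looks local, unknown caller
print(looks_like_neighbor_spoof("5551230000", own, contacts))  # False: saved contact
```

Real anti-robocall systems are of course more sophisticated, but the sketch shows why the trick works: the spoofed number is deliberately constructed to pass exactly this kind of "looks local" intuition.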

Another issue with AI and other predictive technologies is that they are not fully accurate in their intended use. While it may not cause much wasteful effort if apps like Netflix cannot precisely predict the next movie people want to watch, it could be far more consequential if the U.S. National Security Agency (NSA) flags innocent people as possible future terrorists based on the shortcomings of an AI algorithm (Zhe Jin 2018).
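A short, hypothetical calculation shows why such false positives are almost unavoidable when the behaviour being predicted is extremely rare. Assume, purely for illustration, a classifier that is 99% accurate and a population in which only 1 in 100,000 people is a genuine threat:

```python
# Hypothetical numbers, chosen only to illustrate the base-rate problem.
population = 1_000_000
base_rate = 1 / 100_000          # 10 genuine threats in the population
sensitivity = 0.99               # fraction of real threats correctly flagged
false_positive_rate = 0.01       # fraction of innocent people wrongly flagged

true_threats = population * base_rate
innocents = population - true_threats

true_positives = true_threats * sensitivity        # ~9.9 real threats flagged
false_positives = innocents * false_positive_rate  # ~10,000 innocent people flagged

precision = true_positives / (true_positives + false_positives)
print(f"Share of flagged people who are actually threats: {precision:.2%}")  # ~0.10%
```

Under these assumptions, roughly 10,000 innocent people would be flagged for every 10 real threats, so almost everyone flagged would be innocent. That is exactly the kind of consequential error the paragraph above warns about.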

To summarize, there is a real risk to privacy and data security. The magnitude of that risk, and its potential harm to consumers, will likely depend on how AI and other data technologies are used. What are your thoughts on these risks?

References:

Burton, J. (2018). Hacking your holiday: how cyber criminals are increasingly targeting the tourism market. [online] The Conversation. Available at: http://theconversation.com/hacking-your-holiday-how-cyber-criminals-are-increasingly-targeting-the-tourism-market-98967 [Accessed 30 Sep. 2018].

Zhe Jin, G. (2018). ‘Artificial Intelligence And Consumer Privacy’, in NBER Working Paper 24253, pp.4 – 8. Cambridge: National Bureau of Economic Research.
