On the edge of something new

7 October 2020


We are entering an era in which two new computing technologies are becoming more and more crucial. The two computing types, quantum and edge, will have a major impact on computing power and will increase processing capabilities enormously.

I already briefly mentioned quantum computing in my other article about the DARQ technologies (see here), and in this article I want to dive deeper into what quantum computing is, its benefits, and how it differs from edge computing, since the two are sometimes seen as similar, which they aren’t.

 

The most important points about quantum computing
While the explanation of quantum computing and its functionality could fill books, I will try to keep it short and point out the essentials. Basically, quantum computers are able to solve problems that ‘traditional’ computers cannot, mainly because traditional computers can only process information encoded as definite 1s and 0s. The additional power of quantum computers comes from qubits: unlike a classical bit, a qubit can exist in a superposition of 0 and 1 at once, so a pair of qubits can represent the four values 00, 01, 10, and 11 at the same time. That way, a quantum computer can perform computations in parallel, crucially increasing its computing power and therefore its efficiency in comparison to ‘traditional’ computers.
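The qubit idea can be sketched with a tiny state-vector simulation. To be clear, this is only classical math that mimics two qubits (numpy is not a quantum computer), but it shows how two qubits carry amplitudes for all four values 00, 01, 10, and 11 at once:

```python
import numpy as np

# Hadamard gate: puts a single qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

# A 2-qubit register is a vector of four amplitudes, one per basis state
# 00, 01, 10, 11. Start in |00> (all amplitude on the first entry).
state = np.array([1, 0, 0, 0], dtype=complex)

# Apply a Hadamard to each qubit (tensor product of the two gates).
state = np.kron(H, H) @ state

# Measurement probabilities are |amplitude|^2 for each of 00, 01, 10, 11.
probs = np.abs(state) ** 2
print(probs)  # all four basis states are equally likely: 0.25 each
```

After the two Hadamards, the register holds all four values simultaneously, which is the property that lets quantum algorithms explore many inputs in parallel.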

However, to perform those actions, quantum computers require special quantum algorithms. While a few famous ones exist (Shor’s and Grover’s, for example), researchers have yet to find a way to make quantum computing usable at large scale.

 

The most important points about edge computing
With the constant development and improvement of technologies like XR, autonomous vehicles, or IoT, the demand for instant calculations and minimal latency in data exchange is increasing. Most of these technologies do not ‘have time’ to wait for their requests to travel across networks, reach a computing core, be processed, and then be sent back. The computing needs to be performed either closer to the device or, ideally, within it, in order to reduce latency.

To meet this need, edge computing is on the rise. The idea of edge computing is to perform computations either near or right at the source of the data, reducing the latency that cloud computing cannot avoid by running fewer processes in the cloud. However, edge computing is not there to replace cloud computing but rather to work alongside it. A clear division between computations that need immediate feedback and processes that can tolerate a certain latency will drastically increase the speed and efficiency of processes*.
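That division of work can be pictured as a simple placement rule. The sketch below is purely illustrative (the latency numbers and task names are assumptions, not measurements from any real system): tasks that cannot tolerate a cloud round trip go to the edge, everything else goes to the cloud.

```python
from dataclasses import dataclass

# Assumed round-trip latencies, for illustration only.
EDGE_LATENCY_MS = 5      # a nearby edge node
CLOUD_LATENCY_MS = 80    # a distant cloud region

@dataclass
class Task:
    name: str
    max_latency_ms: float  # the deadline this task can tolerate

def place(task: Task) -> str:
    """Run latency-critical work at the edge, everything else in the cloud."""
    if task.max_latency_ms < CLOUD_LATENCY_MS:
        return "edge"
    return "cloud"

print(place(Task("obstacle-detection", max_latency_ms=10)))    # edge
print(place(Task("nightly-analytics", max_latency_ms=60000)))  # cloud
```

A real system would also weigh bandwidth, cost, and data privacy, but the core idea is the same: only the deadline-critical work needs to sit near the device.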

 

Why are the technologies crucial?
Both technologies have a direct impact on several other technological advances, like the DARQ technologies mentioned in my other article, increasing speed, efficiency, and security, but also on technologies used in, for example, the healthcare or automotive industries.

The necessity and potential of both computational technologies can be seen in the increased research efforts by big companies like Google, Amazon, or Verizon. In 2019, Google set a new benchmark for computational speed with a new kind of quantum processor, and Verizon and Amazon introduced a 5G edge cloud computing partnership to launch IoT devices and applications at the edge.

 

With the constant increase in the collection of data and in the requests being computed by processors, the need for technological advances is clearly there. Both technologies create ample opportunities within industries to succeed and to drive innovation and change. However, as usual, the big tech companies are at the forefront of exploring and developing these technologies.

 

What’s your pick?
Will smaller companies be able to shape and use these technologies soon, or do they need to wait until bigger companies make them available at large scale?

 

 

_____________________________________________

*Please note: When we talk about ‘immediate’ feedback to computational requests, the difference between edge computing and cloud computing amounts to mere milliseconds. However, this difference can become crucial in several situations, for example in the avoidance of traffic accidents by autonomous vehicles, which is why it is mentioned here.

 

Sources
https://futuretodayinstitute.com/trend/quantum-and-edge/
https://www.keyinfo.com/ai-quantum-computing-and-other-trends/
https://www.upgrad.com/blog/trending-technologies-in-2020/


Differential privacy – A sustainable way of anonymizing data?

5 October 2020


Since a lot of blog contributions mention the increase in data collection, data analytics, and the potential threat to privacy, I thought it would make sense to introduce the technique of differential privacy, which is currently on the rise in the US. The US Census Bureau, Apple, and Facebook are in the front row of exploring the capabilities and potential of this technique.

 

What does differential privacy mean?
Differential privacy describes a mathematical technique for measuring, and limiting, how much information about any single individual can be learned from a published data set: noise is added to the data so that aggregate statistics stay useful while individual records can no longer be traced back to a person.

 

Differential privacy in action
In 2020, the US government is facing a big challenge. It needs to collect data on all of the country’s 330 million residents while keeping all identities private. By law, the government must ensure that the collected data cannot be traced back to any individual within the data set. The data the US government collects is released in statistical tables for academics and policymakers to analyze when conducting research or writing legislation.

To meet this need for privacy, the US Census Bureau presented a technique to alter the collected data, making it impossible to trace it back to individuals without changing the overall information provided by the data set. It is a mathematical technique that injects inaccuracies, or ‘noise’, into the data. That way, some of the individuals within the data might become younger or older, or change ethnicity or religious beliefs, while the total number of individuals in each group (i.e. age/sex/ethnicity) stays the same. The more noise is injected into a data set, the harder it becomes to de-anonymize the individuals.
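The textbook building block behind this noise injection is the Laplace mechanism. The sketch below illustrates it for a simple counting query (the counts and epsilon values are made up for illustration; the Census Bureau’s actual system is far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(42)

def private_count(true_count: int, epsilon: float) -> float:
    """Return the count plus Laplace noise scaled to sensitivity/epsilon.

    For a counting query, adding or removing one person changes the result
    by at most 1, so the sensitivity is 1. A smaller epsilon means more
    noise: stronger privacy, but less accurate statistics.
    """
    sensitivity = 1.0
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_residents = 1200
print(private_count(true_residents, epsilon=1.0))   # close to 1200
print(private_count(true_residents, epsilon=0.01))  # much noisier
```

This captures the trade-off described above: the data owner picks epsilon, dialing between privacy (lots of noise) and analytical usefulness (little noise).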

This mathematical technique is also used by Apple and Facebook to collect aggregated data without identifying particular users of their products and services.

However, this approach also poses some challenges. Injecting too many inaccuracies can render the data useless. A study of the differentially private data set of the 2010 Census showed households that supposedly held 90 people, which cannot be true. Since the owner of a data set can decide how much ‘noise’ is injected, that challenge shouldn’t pose too much of a problem. Still, the more noise is included, the harder it gets to see correlations between data attributes and specific characteristics of individuals.

If a further analysis of differentially private data sets proves the technique to ensure required privacy, especially for governmentally created data sets, it is likely that other federal agencies or countries will use the methodology as well.

 

 

From my point of view, differential privacy as used for governmentally created data sets seems to be a big step towards getting a clearer view of the status quo of a country: increased privacy means increased trust among residents and therefore, probably, increased participation in the process of data collection.

However, given the complexity of the technique, it seems unlikely to me that differential privacy will be used widely within companies (for the moment). Losing the ability to analyze data in detail, because increased privacy for the user erases correlations within the data sets, is a trade-off I do not think many companies are willing to make, especially since a lot of smaller companies are only just starting to analyze the data they collect.
Right now, research suggests that only big multinationals with high R&D budgets are able to sustainably increase privacy through differential privacy without losing too many of the insights derived from the data collected.

 

What do you think?
Can differential privacy be a step in the right direction? Or should governments limit companies in the collection, aggregation, and analysis of data to increase privacy for the customers?

 

Sources:
https://aircloak.com/de/wie-funktioniert-differential-privacy/
https://hci.iwr.uni-heidelberg.de/system/files/private/downloads/182992120/boehme_differential-privacy-report.pdf
https://www.technologyreview.com/10-breakthrough-technologies/2020/#differential-privacy
https://towardsdatascience.com/understanding-differential-privacy-85ce191e198a?gi=9d3ad94ea2e4


Welcome to the DARQ side

3 October 2020


As future-minded business leaders look to move on from simply using the digital tools available to them towards finding, evaluating, and implementing new ones, DARQ plays an essential role. It’s not only about the usage of technology but also about the interaction of business partners, individuals, and employees through technology.

 

What is DARQ?
DARQ describes the new technologies that are on the rise, namely (D)istributed Ledger Technology, (A)rtificial Intelligence, Extended (R)eality, and (Q)uantum Computing. While each technology by itself already opens up huge opportunities for businesses, applying them collectively will create unimagined paths into the future. But while DARQ is essential, it cannot work for companies that haven’t yet mastered the so-called SMAC areas: (S)ocial, (M)obile, (A)nalytics, and (C)loud.

While AI is seen as the most important technology within the DARQ bundle, mostly because it is the most commonly tested and used technology of the pack, the combination of all four technological advances will be the essential topic to focus on. 92% of respondents in Accenture’s global research answered that they see the combination of these technologies as the biggest driver of transformation within their company.

 

Is DARQ beneficial?
Even though DARQ already surfaced in multiple reports in 2019, it still seems to be on the verge of being seriously evaluated by companies. While many companies are still gaining their first experience with one of the technologies, only a small number are actually trying to combine several of them. Further, a lot of companies are still struggling with the SMAC technologies, making it even more complicated for them to proceed with the DARQ technologies. At the same time, most of the bigger digital leaders have already mastered SMAC, enjoying a competitive and strategic advantage when implementing and improving through the DARQ technologies, and leaving the others even further behind. While those companies are likely to benefit, it is hard to tell how that advantage will impact the rest, probably requiring governments to step in to avoid further monopoly formation.

 

iRobot incoming
The following paragraph is neither scientifically proven nor relevant for the technologies themselves. However, I want to share it since I found it interesting to think about:

While researching, I stumbled across the coincidence that DARQ is an abbreviation for technologies that pose the risk of obscuring processes, ecosystems, and functionality from human beings.
Distributed Ledger Technology, like blockchain, uses anonymization, making it harder to identify individuals within its ecosystems. People are fading away “into the darkness”.
There is some similarity with Artificial Intelligence. While right now most algorithms still need human interaction to improve, at a certain point algorithms are expected to improve by themselves. If that point is reached, humans could lose track of what is going on inside the systems. The algorithms would become “black boxes”, delivering what they are designed to deliver, but without anyone knowing how.
I talked about Extended Reality (XR) in my other article, and from what I wrote it could be inferred that XR could make reality and fantasy merge to a point where people have a hard time distinguishing between the two. Missing interaction could turn people into “shadows” of themselves, no longer leaving their houses or apartments, living in the digital space only…
For Quantum Computing I couldn’t think of an analogy to the “darkness” part, but since it will be the driver that enables and improves the other technologies, it’s simply part of it 😉

 

What’s your take? Is it just a coincidence or is DARQ a statement?

 

 

Sources:
https://www.accenture.com/de-de/insights/technology/new-emerging-technologies-darq
https://www.accenture.com/gb-en/insights/communications-media/darqpower-new-emerging-technologies
https://www.computerwoche.de/a/von-darq-technik-und-momentanen-maerkten,3548122


Extended reality – Is our imagination the only limit?

1 October 2020


Most people nowadays have probably heard about Virtual Reality (VR) or Augmented Reality (AR). Some might even have heard about Mixed Reality (MR). But how many of you have heard of Extended Reality (XR)?

First, let’s take a step back. To understand the concept of XR, one needs to understand the three current main components of XR, namely VR, AR, and MR:

  • Virtual Reality
    The idea is to create applications for tools like headsets that fully immerse users in a computer-simulated reality. Oculus Rift is probably the best-known supplier of a VR headset. With the use of sounds and images, the headset engages sight and hearing to create an interactive virtual world.
  • Augmented Reality
    Rather than creating a whole new “world”, AR merges the real with the digital world. It does so by overlaying digital graphics and sounds onto real-world environments, usually using the cameras of phones or tablets. The most commonly known applications using AR are Pokémon GO and Snapchat.
  • Mixed Reality
    MR can be located somewhere between VR and AR. The idea is to merge real and virtual worlds into complex environments in which digital and physical elements can interact. Like VR, the content is interactive, letting users manipulate digital objects in a physical space, and like AR it places virtual content in a real-world environment. A product making use of MR is Spectator View, a Microsoft companion for its HoloLens headset.

So what is Extended Reality (XR)?
XR is the umbrella term that summarizes all these immersive technologies as well as all future ones to come. The idea of XR captures a fundamental shift in how people will interact with media in the future. Which distinct technology is used won’t be crucial anymore; the focus will lie on whether a technology within XR was used at all. Further, XR helps to create a market that can be estimated: by 2022, the XR market is estimated at more than $209 billion, compared to $27 billion in 2018.

But where will XR be used?
Right now, several industries show potential to adopt XR, with the currently most impacted being:

  • Entertainment
  • Marketing
  • Training
  • Real Estate
  • Remote Work

The part that was most interesting to me while researching XR was its impact on remote work. With Corona in place and employers and employees seeking efficient and feasible ways to stay productive while adhering to the rules and regulations, digital processes are on the rise. XR, with its multiple fields of usage, poses a great opportunity for businesses to stay competitive, even though personal interaction might be negatively impacted by Corona in the long term.

Physical and virtual worlds are merging more and more, and developments in the area of XR keep accelerating, showing the huge potential of new business models and user experiences. However, the emergence of XR also calls into question how personal interaction, especially in a professional environment, will develop. Will people-business in 20 years still mean face-to-face interaction and physical meet-ups, or will XR have a crucial impact on personal interaction as well?

 

What do you think? Might we one day live lives like in Steven Spielberg’s movie “Ready Player One”?

 

Sources:
https://www.fingent.com/blog/5-real-world-applications-of-extended-reality/
https://www.forbes.com/sites/bernardmarr/2019/08/12/what-is-extended-reality-technology-a-simple-explanation-for-anyone/#6c862c3e7249
https://www.visualcapitalist.com/extended-reality-xr/
https://medium.com/predict/extended-reality-is-the-frontier-of-the-digital-future-a2c05785fc72


Tiny AI – The evolution of your smartphone

29 September 2020


While most of you have probably already heard of Artificial Intelligence (AI), how many of you have heard of tiny AI?

Traditional AI is facing challenges
Researchers are trying to build ever more powerful algorithms by using ever greater amounts of data and computing power, and the current way of running AI relies on centralized cloud services. This approach generates two kinds of problems: (1) the rapidly increasing carbon emissions caused by the growing use of AI and (2) limitations to the speed and privacy of AI applications.

The emergence of tiny AI
Tiny AI describes the idea of running powerful AI algorithms on your smartphone (or any mobile device). Tiny AI therefore does not need to interact with centralized cloud services to let users benefit from the latest AI-driven advances. This not only decreases the carbon emissions of using AI but also unlocks the full potential of AI features without limits to speed or privacy. The trend of tiny AI is supported by three crucial developments: (1) new generations of AI chips pack more computational power into tighter spaces, making it easier to increase the computational power of devices like smartphones and tablets; (2) thanks to the improved chips, AI can be trained and run on far less energy, making it more feasible to offer the technology on smartphones and tablets; and (3) researchers are rapidly advancing the possibilities of shrinking deep-learning models without losing their capabilities, so the models require less computing power, which also makes their inclusion in smartphones possible.
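The model-shrinking idea can be illustrated with a minimal post-training quantization sketch. This is a generic, simplified illustration (not any vendor’s actual pipeline): 32-bit float weights are mapped to 8-bit integers plus one scale factor, cutting memory roughly fourfold while keeping the reconstructed values close to the originals.

```python
import numpy as np

# A made-up layer of 32-bit float weights standing in for a real model.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=(64, 64)).astype(np.float32)

# Map the largest-magnitude weight to 127, everything else proportionally.
scale = np.abs(weights).max() / 127.0
q = np.round(weights / scale).astype(np.int8)   # 8-bit storage: ~4x smaller
dequantized = q.astype(np.float32) * scale      # approximate reconstruction

# Rounding error per weight is bounded by half a quantization step.
error = np.abs(weights - dequantized).max()
print(error <= scale / 2 + 1e-8)  # True
```

Real mobile deployments combine tricks like this with pruning and distillation, but the core trade is the same: a little accuracy for a lot less memory and energy.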

The current status of tiny AI
Right now, the biggest smartphone providers, Google and Apple, already make use of tiny AI. While Apple, with iOS 13, runs Siri’s speech recognition and its QuickType keyboard locally on the iPhone, Google’s Assistant can perform some of its actions without sending requests to a centralized server. Further players like Amazon and IBM are also working on solutions to offer tiny AI and its benefits to their customers.

Potential benefits of tiny AI
As previously explained, tiny AI could enhance the usage of AI-driven features on smartphones and tablets. Further, thanks to its decentralized approach, the increased usage of AI would (at least partially) not have a negative impact on the environment. Users can benefit from their devices ‘automatically’ improving with every usage, without needing to be connected to a cloud every time a request is made on the phone. Voice assistants, autocorrect, and digital cameras can improve right on the device. Probably the most important advance thanks to tiny AI is improved privacy, since there is no longer a continuous exchange of data between device and cloud.

Challenges of tiny AI
Just as with any interesting technological development, there are not only benefits but also challenges that come with the emergence of tiny AI. The number of discriminatory algorithms could increase rapidly. Further, surveillance systems as well as deepfake videos could become harder to battle. However, these challenges are mostly the same as with ‘regular’ AI, and researchers are working on mitigations to reduce these risks.

 

From my point of view, the emergence of tiny AI has a lot of benefits for users themselves. Smartphones and their scope of application grow, thanks to more powerful tiny AI algorithms paired with increased computing power and reduced energy consumption. The increase in privacy is another big plus, especially when it comes to personal data no longer being shared with the remote servers of big companies. Further, tiny AI has the potential to improve not only smartphones but mobile devices in general. Thinking of industries that could be impacted, healthcare comes to mind. Since hospitals work a lot with tablets, the emergence of new AI chips paired with tiny AI can improve the usage of complex applications, or even increase the scope of applications available to doctors and nurses.

 

Now it’s your turn
What’s your take on this? Do you think the benefits outweigh the challenges?

 

 

Source:
https://www.technologyreview.com/technology/tiny-ai/


Epic Games vs. Apple’s App Store – Round 2

26 September 2020


A few weeks ago the dispute between Epic Games, the developer of the famous game Fortnite, and Apple surfaced. Put short, Epic Games accused Apple of charging developers an excessive commission and of hindering users in their freedom to choose where and from whom to download apps, clearly protecting its own products and services on the platform while discriminating against external developers. After Epic Games changed its payment processor within Fortnite, cutting Apple off from the contractually agreed commission, Apple decided to delist all Epic Games apps from the Apple App-Store, leaving millions of users without access to Epic Games applications on their iPhones (t3n, 2020).

While Apple declared the dispute with Epic Games resolved by terminating the contract and suing Epic Games for breach of contract, further app developers joined Epic Games’ protest, complaining about Apple’s App-Store regulations (t3n, 2020).

By now, nearly three months after the beginning of the dispute between Epic Games and Apple, Apple is facing the next round of its App-Store dispute. A coalition has formed to force Apple and other operators to change their App-Store regulations (Coalition for App Fairness, 2020).

The coalition is called the Coalition for App Fairness and is supported by several well-known companies. Next to Epic Games, there are Spotify, Deezer, Match Group, Tinder, OkCupid, and many more. While it is clear that their claims mostly target Apple and its App-Store regulations, the requests of the coalition are formulated in a general manner to make them applicable to all App-Store operators (Coalition for App Fairness, 2020).

In total, the Coalition states ten requests for how App-Stores should be regulated, with three of them being directly targeted at Apple (Coalition for App Fairness, 2020):

  1. Apple’s App-Store is subject to anti-competitive guidelines
    Apple claims to review all apps carefully before making them available through the App-Store. Through that process, Apple wants to ensure quality and security for its consumers. However, the Coalition argues that this heavy regulation gives Apple the advantage of favoring its own applications and reduces fairness for external app developers.
  2. Apple retains 30 percent of app sales
    As already mentioned, Apple is accused of charging a high commission. The Coalition sees a competitive disadvantage when two apps with similar services, one from Apple and one from an external provider, compete against each other in the Apple App-Store, since Apple can offer better prices for its services as it does not have to pay the 30% commission.
  3. The App-Store restricts consumer freedom
    Unlike with other App-Stores, the only way a user of an iOS device can access and download an app is through the Apple App-Store. Customers have no freedom of choice about where or from whom they obtain their apps. The Coalition claims it is a huge disadvantage that customers are forced to use the Apple App-Store to be able to download apps.
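The commission argument can be made concrete with a bit of arithmetic (the prices below are hypothetical): to net the same revenue per sale as Apple would with a first-party app, an external developer has to set a noticeably higher list price.

```python
COMMISSION = 0.30  # Apple's standard App-Store cut

def list_price_to_net(net: float) -> float:
    """List price needed so the developer nets `net` after the commission,
    i.e. solve net = price * (1 - COMMISSION) for price."""
    return net / (1 - COMMISSION)

# A hypothetical subscription where the developer wants to keep $9.99:
print(round(list_price_to_net(9.99), 2))  # 14.27, ~43% above the netted amount
```

So even before any quality differences, a third-party app competing with a comparable Apple service starts with a built-in price handicap, which is the core of the Coalition’s second claim.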

 

While I see the point of the Coalition asking for more fairness, it also seems a lot like an attempt to shift profits away from Apple back to the app developers. We discussed the network effects that large communities/networks can have on a product or service, and the Apple App-Store is a great example. The benefits Apple offers app developers through the App-Store ecosystem are (1) access to millions of users, (2) security, and (3) ensured quality. At the same time, Apple users benefit from (1) security and (2) quality. Since no one forces app developers to offer their apps through Apple’s App-Store, it is clear that they want to benefit from the quality network and processes Apple has created with its customers, since these promise increased profitability.

Without a doubt, these benefits are crucial for app developers, since they open the possibility of increased demand and sales. If Apple responded to all the requests stated by the Coalition, opening up its platform, waiving the app approval process, and giving people the chance to download their apps wherever they want, Apple would expose its iOS as well as its customers to the risk of becoming victims of fraudulent activities. With a lowered level of quality and security, there is a possibility of long-term negative impacts on user numbers, quality, security, and, in the end, the profits being made.

What’s your take on this? Should Apple give in and reduce its power within the App-Store?

Sources:
https://t3n.de/news/gegenwind-fuer-apple-koalition-1324447/
https://appfairness.org/our-vision/

Image Source:
https://medium.com/the-kickstarter/what-entrepreneurs-can-learn-from-epic-games-attack-on-apple-9efce281a962
