ChatGPT Plugins: The Doom or The Boom?

17 October 2023

The Promise of Plugins

The emergence of ChatGPT plugins has been a turning point in the realm of conversational AI. Not only do plugins enhance the built-in capabilities of ChatGPT, but they also expand the horizon of possibilities, with significant implications for various industries and sectors. However, as with any AI advancement, there are both pros and cons to consider. In this blog post, I delve deeper into my personal experience with ChatGPT plugins, discuss their real-world applications, and explore the ongoing debates surrounding their usage.

What are Plugins?

Currently, plugins are only available in the Plus version of ChatGPT. In a nutshell, they are software add-ons that extend the existing capabilities of the original ChatGPT model. Plugins can serve a variety of purposes and can connect the model to external data sources, thus increasing the accuracy of responses. They are not developed by OpenAI itself, but are enhancement tools built by third-party developers and “submitted” into the ChatGPT ecosystem. For instance, plugins can enable ChatGPT to draft emails, conduct web searches, summarise documents, and more.
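To make this a bit more concrete: a plugin is essentially described to ChatGPT through a small manifest plus an API specification that the model is allowed to call. The sketch below shows roughly what such a manifest looks like, written out here as a Python dictionary; the plugin name, URLs and descriptions are hypothetical placeholders, not a real plugin.

```python
# Rough sketch of a ChatGPT plugin manifest (normally served as ai-plugin.json).
# All field values below (names, URLs, email) are hypothetical placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Acme Facts",
    "name_for_model": "acme_facts",
    "description_for_human": "Look up facts from the Acme knowledge base.",
    "description_for_model": (
        "Use this plugin when the user asks for facts that require "
        "up-to-date data from the Acme knowledge base."
    ),
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # ChatGPT reads this OpenAPI spec to learn which endpoints it may call.
        "url": "https://plugin.example.com/openapi.yaml",
    },
    "logo_url": "https://plugin.example.com/logo.png",
    "contact_email": "support@example.com",
    "legal_info_url": "https://plugin.example.com/legal",
}

print(plugin_manifest["name_for_model"], "->", plugin_manifest["api"]["url"])
```

In other words, the model itself is not retrained for each plugin; it simply reads the descriptions and the API spec and decides when to call the external service.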

The User Experience

My personal journey with ChatGPT plugins started only recently, well after they were introduced. Yet the experience has been quite enlightening. Not only have these plugins made my interactions with ChatGPT more dynamic, but they have also allowed my productivity and efficiency to skyrocket. Though there are only around 1,000 of them available, I am very far from having explored all of them. Among the ones I have had the pleasure of working with, a few have stood out due to their utility in my personal and professional life:

1. KeyMate.AI Search

KeyMate.AI has truly been a game-changer for me. The plugin acts as the missing link between ChatGPT and Google Search. It helped me reduce the time I spent on web research while increasing its efficiency. For instance, while working on a market analysis for a work project, I used KeyMate.AI to quickly generate an overview of the market and its biggest players. It can also help you make investment decisions by providing real-world data and trends!


2. WolframAlpha

As someone with a keen interest in data analytics, the Wolfram plugin has been interesting to explore. In short, it allows for performing quick, complex calculations and data analysis right within the ChatGPT interface, and it also has access to curated knowledge. While I have not yet used it for a specific work task, I “played around” with it and tried to see what transformations of different Pokémon look like.


3. Wikipedia

I am a huge “fact nerd”: I love constantly googling information and questions that come to mind. Most of the time I end up reading through a Wikipedia page, but those pages are quite lengthy. So I started using the plugin, and it is now my absolute go-to for quick and efficient information lookup. As you can see below, I have used it to look up completely different concepts and historical events in a very efficient way.


Great, not Perfect

While the capabilities of the plugins are impressive, they have their own drawbacks. Here are some of the most notable ones I (and others) are concerned with:

1. Personal data exposure. One of the most pressing concerns involves the ethical implications and data safety. Given that plugins are made by third-party developers, users’ personal data may become available to parties outside the ChatGPT ecosystem.

2. Speed of generation. In short, plugins are slow: it takes a while to generate responses. When it comes to looking up information on the web, you are better off using a search engine directly for short questions (e.g. Microsoft stock price). However, once your search becomes slightly more extensive, the benefit of plugin-generated answers tends to exceed the cost of the time spent waiting.

3. Hallucinations. While plugins reduce the number of hallucinated (i.e. fabricated) answers, they do not eliminate them entirely. This is because the underlying large language model (LLM) behind ChatGPT is still prone to generating hallucinations.

What does the Future Hold?

While the ChatGPT plugin ecosystem is still in its infancy, I believe it is immensely promising. As this technology evolves, we can expect to see more sophisticated, faster, and more user-friendly plugins. However, it is important to take a balanced approach towards innovation and consider both its benefits and challenges. OpenAI may have let yet another genie out of the bottle with its introduction of a plugin ecosystem, but it is the users and developers who, in my opinion, hold ownership of the technology’s future.

Have you had an experience with ChatGPT plugins or have some thoughts on the topic? Happy to hear more in the comment section below!

Apple’s Anti-Tracking Disruption

11 October 2022

“Privacy. That’s Apple. Privacy is a fundamental human right. It’s also one of our core values. Which is why we design our products and services to protect it. That’s the kind of innovation we believe in.” (Apple Inc., 2022). Every Apple user has probably heard something along these lines before, and it is not necessarily a lie: with the release of iOS 14.5 in April 2021, Apple launched a new privacy feature called “App Tracking Transparency”. With this feature, Apple forces app developers to ask users for permission to “track” them, that is, to share their data with third parties for ad-targeting purposes. Since its release, the feature has proven to be a major disruptor of the global ad market – in May 2022 only 25% of users agreed to being tracked by apps (Lukovitz, 2022). This disruption is having a major impact on the revenue of big players: it is estimated that in 2022 Apple’s App Tracking Transparency (ATT) feature will cost Facebook $16 billion, YouTube $2.2 billion, Snap $546 million, and Twitter $323 million (O’Flaherty, 2022). In addition, the disruption is also hurting smaller businesses and start-ups that rely on personalized marketing to acquire new customers. Because of ATT, they have seen the cost of acquiring new customers rise and have had to cut back on marketing spending (McGee, 2022). This disruption caused by Apple has required big organizations like Google (owner of YouTube) and Meta (owner of Facebook) to re-evaluate their business models and find a way to win the lost revenue back and keep their shareholders happy.

Disruptive innovations are known to displace current market leaders – here Google and Meta – and to give rise to one or more new market-leading firms. Apple will say that the one benefiting from its anti-tracking crusade is you, the user, and, as mentioned before, this is not necessarily a lie. However, anyone with a business mindset and a critical view has probably seen it coming from miles away: the major beneficiary is Apple itself. Appsumer reports that between the second quarter of 2021 (after the release of ATT) and the second quarter of 2022, Apple Search Ads (ASA) – Apple’s platform for selling advertisement space to advertisers – experienced a major boost. Advertisers’ adoption of ASA grew by 4% to 94.8%, while that of Meta and Google decreased by 3% and 1.7% to 82.8% and 94.8% respectively (McCartney, 2022). Perhaps more interesting are the changes that occurred in advertisers’ share-of-wallet (SOW). ASA’s SOW increased by 5% to 15%, while Meta’s SOW dropped by 4% to 28% and Google’s stayed the same at 34% (McCartney, 2022).

Apple has used the ATT feature very cleverly as a first strike in challenging the Google–Meta duopoly in the advertising market. While Apple is expected to make an almost negligible $5 billion in ad revenue in 2022 compared to Google ($209 billion) and Meta ($115 billion) (Kachalova, 2022), this difference is most definitely set to shrink in the coming years. Google and Meta are slowly adjusting to this reality because they know: Apple wants a share and will go to extreme lengths to get it, and with Apple’s strong ecosystem and large user base, there is very little they can do about it.

Apple Inc. (2022). Privacy. Accessed October 2022, from Apple.com: https://www.apple.com/privacy/

Kachalova, E. (2022, October 3). Big Tech owes you money. Find out how much. Accessed October 2022, from AdGuard: https://adguard.com/en/blog/personal-data-cost-money.html

Lukovitz, K. (2022, May 5). Privacy Update: ATT IDFA Opt-In Rate At 25% Overall, But Varies By Vertical. Accessed October 11, 2022, from Mediapost: https://www.mediapost.com/publications/article/373613/privacy-update-att-idfa-opt-in-rate-at-25-overal.html

McCartney, J. (2022, September 6). Appsumer Report: Apple Privacy Measures Provides a Boost for Apple Search Ads and Favors Large Advertisers. Accessed October 2022, from Business Wire: https://www.businesswire.com/news/home/20220906005427/en/Appsumer-Report-Apple-Privacy-Measures-Provides-a-Boost-for-Apple-Search-Ads-and-Favors-Large-Advertisers

McGee, P. (2022, August 9). Small businesses count cost of Apple’s privacy changes. Accessed October 2022, from Ars Technica: https://arstechnica.com/gadgets/2022/08/small-businesses-count-cost-of-apples-privacy-changes/

O’Flaherty, K. (2022, April 23). Apple’s Privacy Features Will Cost Facebook $12 Billion. Accessed October 2022, from Forbes: https://www.forbes.com/sites/kateoflahertyuk/2022/04/23/apple-just-issued-stunning-12-billion-blow-to-facebook/?sh=58eb37031907

Why is Elon Musk the king of disruption?

26 September 2021

Since one of the core themes of the ‘Information Strategy’ module is disruption, I thought it would be worthwhile to write an article about the king of disruption – Mr. Elon Musk.

You might think that naming Musk THE king of disruption is a bold claim, but you will be convinced once you see the number of industries he has shaken up.

Financial industry

Back in 1999, Elon founded the company X.com. He envisioned it as a digital banking and investment services site offering everything from checking accounts to insurance, mortgages and bonds (Constanty, 2021). X.com was one of the very first online banking services. The company later became PayPal, one of the most successful FinTech businesses. Nowadays the FinTech industry is growing exponentially, and it is projected to reach $324 billion by 2026 (Zinchenko, 2021). Elon Musk thus contributed significantly to the current state of online banking services.

Automotive industry

In 2003, Elon founded Tesla. Tesla’s cars were neither the first electric vehicles nor the first capable of autonomous driving. However, they are the best on the market in terms of both energy and production efficiency. Additionally, Tesla’s chips are regarded as being six years ahead of Toyota and VW, and at least three years ahead of the most advanced Nvidia Orin chip (Park, 2020). Tesla is not only a leader in the technology behind the car, but also in its positioning and branding strategy. The company was able to surpass Mercedes as the leading luxury car brand in the USA.

Aerospace industry

In 2002, Elon founded SpaceX, with the ambitious objective of making humanity multiplanetary. Again, his technology is not the first, but rather the most efficient. SpaceX’s Falcon rockets are reusable, unlike traditional rockets in the industry. As such, SpaceX offers up to a 5x decrease in the cost of getting a spacecraft into space compared to established aerospace companies (CBinsights, 2021).

 Telecommunications industry

Starlink is yet another company founded by Musk. It aims to provide satellite internet to the hardest-to-reach consumers, whom traditional telecommunication companies have failed to serve (CBinsights, 2021). Yet again, Elon’s offering delivers better performance in terms of cost, speed and latency.

Are you convinced now that Elon Musk is indeed the king of disruption? If not – there are even more industries that will be impacted by this genius. If I dwelled on everything, this blog post would be the length of a thesis. If you are interested, look up ‘Hyperloop’ (transport disruptor), ‘The Boring Company’ (tunnelling and infrastructure disruptor), Neuralink (healthcare disruptor) and SolarCity (energy disruptor).

Maybe he is an alien, maybe he is just a genius mastering the art of disruption.

References

CBinsights. (2021, April 24). From Energy To Transport To Healthcare, Here Are 8 Industries Being Disrupted By Elon Musk And His Companies. Retrieved from CBinsights: https://www.cbinsights.com/research/report/elon-musk-companies-disruption/

Constanty, B. (2021, Feb 25). The Elon Musk Effect: The Timeless Power Of Disruption And Brand Authority. Retrieved from Forbes: https://www.forbes.com/sites/forbescommunicationscouncil/2021/02/25/the-elon-musk-effect-the-timeless-power-of-disruption-and-brand-authority/?sh=152eb2cd7ee5

Park, T. (2020, Oct 12). Why Tesla Will Dominate Autonomous Driving. Retrieved from Simplify: https://www.simplify.us/blog/why-tesla-will-dominate-autonomous-driving

Zinchenko, P. (2021). Why is FinTech Growing: 3 Trends that Will Shape the Industry in 2021 and Beyond. Retrieved September 16, 2021, from https://www.mindk.com/blog/why-is-fintech-growing/

Featured image source: https://www.entrepreneur.com/article/369870

Picnic – Disruptive Enough?

5 October 2020

In 2015, online supermarket Picnic was founded in the Netherlands. At that time, this business model was very new and innovative for the Dutch food retail market. Although the young company is not yet profitable, the online supermarket has grown enormously over the past five years. In 2020, Picnic has approximately 300,000 customers, 4,000 employees and delivers in 125 cities in the Netherlands and Germany (Schelfaut, 2020). The company is expected to expand and grow even further in the coming years. Very recently it was announced that Picnic will collaborate with the huge German supermarket chain Edeka to create its own private label that is entirely focused on e-commerce, in order to achieve even more sales (Business Insider, 2020). With special routing algorithms, Picnic is able to plan efficient delivery routes, both to save costs and for sustainability purposes; a toy illustration of this idea is sketched below. Furthermore, they strive to deliver on time and to be customer friendly (Van Tatenhove, 2018). Based on these insights, it seems that Picnic is a success story and can be regarded as a threat to incumbent firms.
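Picnic’s real route planning is of course far more sophisticated and proprietary; the sketch below only illustrates the basic idea of ordering delivery stops to keep the driven distance short, using a simple nearest-neighbour heuristic and made-up coordinates.

```python
import math

# Hypothetical delivery stops as (x, y) coordinates on a city grid (in km).
stops = {"depot": (0, 0), "A": (2, 1), "B": (5, 4), "C": (1, 6), "D": (4, 2)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_route(stops, start="depot"):
    """Greedy heuristic: always drive to the closest unvisited stop next."""
    route, current = [start], start
    remaining = set(stops) - {start}
    while remaining:
        nxt = min(remaining, key=lambda s: dist(stops[current], stops[s]))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

route = nearest_neighbour_route(stops)
total = sum(dist(stops[a], stops[b]) for a, b in zip(route, route[1:]))
print(route, f"~{total:.1f} km driven")
```

Even this toy heuristic already shortens the driven distance compared to visiting stops in an arbitrary order, which is the kind of efficiency gain that matters at the scale of thousands of deliveries per day.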

However, competition in food retail e-commerce has grown in recent years. Existing physical supermarket chains, such as Albert Heijn and Jumbo, have started offering online delivery services as well. Picnic has long had a competitive advantage over these supermarket chains because of its low costs, free delivery and customer-centric commerce (Business Insider, 2015). They distinguished themselves as “the modern milkman” out to transform the way people do their groceries (Van Tatenhove, 2018).

However, Albert Heijn, for instance, has also improved its online service due to the high demand and sales for online grocery delivery. It has reduced its minimum order amount and set up more hubs across the country in order to deliver even faster (AH, 2019). About two weeks ago, Albert Heijn launched a grocery delivery app that is very similar to Picnic: Albert Heijn Compact. With this concept, AH delivers groceries free of charge at home, the minimum order amount is the same as Picnic’s, and the functionalities of the app are practically identical (Van Woensel Kooy, 2020). This could be a threat to Picnic, as Albert Heijn has a long history, a strong reputation, a large customer base and ample capital. Additionally, another growing online supermarket, named Crisp, was introduced in 2018 and focuses on fresh, local food delivery (Crisp, 2020).

As a result of these new entrants, competition in the industry is increasing. Picnic is no longer the only online supermarket in the Netherlands and runs the risk of losing customers to competitors. Furthermore, the turnover of incumbent supermarket chains remains stable, despite the entrance of potential disruptor Picnic (Smit, 2020).

Based on these insights, I wonder whether Picnic is really as disruptive as was thought a number of years ago. Is Picnic not at risk of being overshadowed by a newcomer to the market that incorporates even more advanced technologies and truly overtakes the existing players?

Perhaps in the future consumers will only have to name what they need, and it will then automatically be ordered for them and delivered within a short period of time?

I look forward to reading your views on this.

References

AH. (2019). AH kondigt 5e Home Shop Center aan om forse online groei bij te benen. [Online] Available at: https://nieuws.ah.nl/ah-kondigt-5e-home-shop-center-aan-om-forse-online-groei-bij-te-benen/ [Accessed 4 October 2020]

Business Insider. (2015). Dit doet de nieuwe websuper Picnic anders dan Albert Heijn en Jumbo. [Online] Available at: https://www.businessinsider.nl/picnic-online-supermarkt-albert-heijn-jumbo-591766/ [Accessed 4 October 2020]

Business Insider. (2020). Websuper Picnic komt met een eigen huismerk – omdat alles online wordt verkocht zijn de verpakkingen anders dan normaal. [Online] Available at: https://www.businessinsider.nl/online-supermarkt-picnic-huismerk-verkopen/ [Accessed 4 October 2020]

Crisp. (2020). De supermarkt-app voor knettervers eten. [Online] Available at: https://www.crisp.nl/ [Accessed 5 October 2020]

Schelfaut, S. (2020). Picnic breidt capaciteit flink uit na explosieve groei online boodschappen. [Online] Available at: https://www.ad.nl/koken-en-eten/picnic-breidt-capaciteit-flink-uit-na-explosieve-groei-online boodschappen~a872e4a9/#:~:text=Wij%20verwachten%20de%20komende%20weken,mensen%20aan%20te%20nemen.”&text=Het%20bedrijf%20telt%20nu%20zo,klanten%20en%20ruim%204000%20medewerkers [Accessed 4 October 2020]

Smit, P. (2020). Marktaandeel Albert Heijn stabiel op 34.9%. Available at: https://www.nieuweoogst.nl/nieuws/2020/01/21/marktaandeel-albert-heijn-stabiel-op-349-procent [Accessed 5 October 2020]

Van Tatenhove, J. (2018). Picnic: the modern milkman transforming urban distribution. [Online] Available at: https://medium.com/lifes-a-picnic/picnic-the-modern-milkman-transforming-urban-distribution-bae975749a12 [Accessed 4 October 2020]

Van Woensel Kooy, P. (2020). Albert Heijn gaat vol door op online markt. [Online] Available at: https://www.marketingtribune.nl/food-en-retail/nieuws/2020/09/albert-heijn-gaat-vol-door-op-online-markt/index.xml [Accessed 2020]


Technology Impacting The Beer Industry

5 October 2020

The first signs of beer production go back about 7,000 years, to Mesopotamia, a region that largely corresponds to modern-day Iraq. Since then, the art of making beer has not changed a lot. Of course, better tools and healthier, more hygienic processes have been introduced, but brewing remains a traditional industry. Brewers are afraid to make changes in the way they brew, since changes may harm the quality or the image of their beers and brand. That is why new technological breakthroughs are not directly applied to the brewing process (Iserentant, 2003). With a global demand of more than 1.8 billion hectoliters of beer (Global Beer Consumption by Country in 2018, 2019) and increasing competition, this is starting to change. In this article, we explore some of the new technologies now used in the beer industry.

Artificial intelligence (AI) and the Internet of Things (IoT) are spreading through many industries, and now they are also entering the beer industry. Sugar Creek Brewing is using these technologies to solve a problem it had with the packaging of its finished beers and to become more efficient. Spillage of beer throughout the manufacturing process was costing the brewery $30,000 every month (Vogelbacher, 2019). By installing a camera on their bottling line that takes photos of every beer passing by, they gathered a tremendous amount of data. With the help of IBM and the algorithms of Watson, they were able to interpret this data and almost completely eliminate the problem (Bandoim, 2019). More sensors have since been installed, allowing them to collect even more data, which can be accessed through the IBM Watson/Bosch interface 24/7 (Bandoim, 2019). Sugar Creek Brewing is a good example of how AI and IoT can optimize the brewing process and of the impact they can have on a company.
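The actual Watson-based pipeline at Sugar Creek is proprietary, but the core idea of spotting problem bottles from sensor data can be illustrated with a toy example: flag bottles whose measured fill level deviates too far from the target. All numbers and names below are made up for illustration.

```python
# Toy quality check on a bottling line: flag bottles whose fill level deviates
# from the target by more than a tolerance. Illustrative values only.
TARGET_ML = 330.0
TOLERANCE_ML = 5.0

# Hypothetical fill-level readings coming from a camera or flow sensor.
readings = [
    ("bottle-001", 329.4),
    ("bottle-002", 322.8),   # under-filled -> beer lost to spillage/foam
    ("bottle-003", 331.1),
    ("bottle-004", 338.9),   # over-filled -> beer given away for free
]

def check_fill(bottle_id, millilitres):
    deviation = millilitres - TARGET_ML
    status = "ok" if abs(deviation) <= TOLERANCE_ML else "reject"
    return bottle_id, status, deviation

for bottle in readings:
    bottle_id, status, deviation = check_fill(*bottle)
    print(f"{bottle_id}: {status} ({deviation:+.1f} ml)")
```

The value of the real system lies less in any single check like this and more in collecting the data continuously, so that patterns (a drifting filler head, a foaming batch) become visible over time.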

In the previous example, AI is mainly used to optimize the production process, but there are already breweries using AI to make beer. At IntelligentX, they use data to improve their beer recipes. The idea is to use data smartly and more strategically. Data collected from customers, such as their preferences, is a central focal point in the company (IntelligentX: AI Beer, n.d.). This data, in the form of customer feedback, is collected through Facebook by answering several questions (Marr, 2019). The input is then analyzed with the help of AI and machine learning algorithms. Based on the results of this process, the brewer can decide to adjust the taste of the beer or choose which type of beer to make next (Marr, 2019).

The last example I want to share is more related to marketing than to the production of the beer itself. Apps like Untappd allow beer connoisseurs to log the beers they are drinking. This produces data on who is drinking what type of beer at which location. Based on this information, breweries can gain valuable insights into how and where certain types of beer can best be advertised. Another new development is smart draft taps, as provided by the India-based startup TapIO. The tap looks just like a regular beer tap, but it is full of technology. Customers are billed by the drop, which is registered on their personal smartcard. This data allows breweries to make personal recommendations based on user behavior. Another benefit of this system is that it reduces the wastage of beer, since the pay-by-the-drop idea encourages people to pour a pint more carefully (TapIO – The Disruption The Beer Industry Needs, 2019). Apart from many more benefits for brewers, there are also benefits for customers, the most important one being that it helps them get their order faster (TapIO – The Disruption The Beer Industry Needs, 2019). Untappd and TapIO are two great examples of how new systems and technologies can help collect data that is valuable for breweries.

Throughout this article, we explored a few developments in the beer industry of which I expect we will hear more. Personally, I am very eager to see how the production of beer based on AI will develop. There are technologies that let you create your own beer recipe and start brewing with the help of an app (Minibrew), but I find it much more interesting to see how breweries are adjusting beers to my personal taste. Overall, I hope technology will increase the variety of beers and improve their quality. A good thing about all these developments is that, in the end, you still need people to enjoy the beer!

References

Bandoim, L. (2019, July 24). Brewery Uses AI And IoT Technology To Improve The Quality Of Beer. Retrieved from Forbes: https://www.forbes.com/sites/lanabandoim/2019/07/24/brewery-uses-ai-and-iot-technology-to-improve-the-quality-of-beer/

Global Beer Consumption by Country in 2018. (2019, December 24). Retrieved from Kirin: https://www.kirinholdings.co.jp/english/news/2019/1224_01.html

IntelligentX: AI Beer. (n.d.). Retrieved from weare10x: http://www.weare10x.com/portfolio_page/intelligentx/

Iserentant, D. (2003). Beers: recent technological innovations in brewing. In: Lea A.G.H., Piggott J.R. (eds) Fermented Beverage Production, 41-58.

Marr, B. (2019, February 1). How Artificial Intelligence Is Used To Make Beer. Retrieved from Forbes: https://www.forbes.com/sites/bernardmarr/2019/02/01/how-artificial-intelligence-is-used-to-make-beer/

TapIO – The Disruption The Beer Industry Needs. (2019, November 11). Retrieved from Brewer World: https://www.brewer-world.com/tapio-the-disruption-the-beer-industry/

Vogelbacher, J. (2019, April 23). AI and IoT Help Perfect the Brew at Sugar Creek Brewing Company. Retrieved from IBM: https://www.ibm.com/blogs/think/2019/04/ai-and-iot-help-perfect-the-brew-at-sugar-creek-brewing-company/


Is the Corona Crisis the Catalyst for Digitisation in the Healthcare Industry?

1 October 2020

We live in a digital age: never before has our society been so connected. The opportunities offered by current technological and digital developments seem endless. At the same time, however, this level of digitalisation does not yet seem to be widely visible to the public in the healthcare industry. At least, not until COVID-19. During the “intelligent lock-down” I had my first-ever digital consultation with my GP. Although this technological service is not novel, why was this the first time I encountered it? For me, this digital consultation was more efficient. Wouldn’t it be more efficient for my GP as well, to serve more patients whilst offering the same level of quality? When will it be E-Health’s turn to take off?

What is E-Health?

Let’s start with the meaning of E-Health. Researchers have tried to reach a general consensus on the definition of E-Health; however, this has proven difficult since the term is popular and widely used in many different contexts (Oh et al. 2005; Showell & Nøhr 2012). For the sake of this article, E-Health can be viewed as any digital application that supports and aims to improve health and healthcare. E-Health can be anything: from a mobile app that a patient uses to collect and send data on bodily functions (e.g. glucose monitoring), to a secure E-Health platform that healthcare professionals use to get insight into medical records. According to the website of the Dutch government, E-Health should also serve to give patients more control over their own health (Government of the Netherlands 2020). Examples of successful applications of E-Health include fewer physical visits to the hospital and earlier identification of chronic diseases.

What are the technologies behind E-Health?

E-Health is made possible through Internet of Things (IoT) technologies. The term IoT was coined to refer to a network of objects that are able to interact with each other. These connections can be between (i) persons and persons, (ii) persons and things (or machines), and (iii) things and things, made possible through smart networking technologies (Patel & Patel 2016). IoT within the healthcare industry has great potential and is already gaining traction. For this reason, the term Internet of Medical Things (IoMT) was introduced. IoMT refers to the increased interconnectivity of medical devices and services, made possible through digitisation and network technologies (Taylor et al. 2018).

IoMT has already made it possible for patients wearing a smartwatch to collect data and track their wellness. This data can then be seamlessly integrated into an electronic health record for the doctor to monitor remotely in real time. Today, IoMT is improving access to quality care and reducing costs by tracking equipment, patients and staff, plus much more (Taylor et al. 2018). We have only just begun to scratch the surface of all the possibilities.
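As a purely illustrative sketch of that flow, the snippet below packages a heart-rate reading from a wearable and sends it to an electronic health record service. The endpoint, token and field names are hypothetical placeholders, not a real EHR API; a production integration would typically use an established standard such as HL7 FHIR and proper authentication.

```python
import json
from datetime import datetime, timezone
from urllib import request

# Hypothetical EHR ingestion endpoint and token (placeholders, not a real API).
EHR_ENDPOINT = "https://ehr.example.org/api/observations"
API_TOKEN = "demo-token"

def send_heart_rate(patient_id: str, bpm: int) -> int:
    """Send one wearable heart-rate reading to the (hypothetical) EHR."""
    observation = {
        "patient_id": patient_id,
        "type": "heart_rate",
        "value": bpm,
        "unit": "beats/min",
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "source": "smartwatch",
    }
    req = request.Request(
        EHR_ENDPOINT,
        data=json.dumps(observation).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    # This call only succeeds if the endpoint above actually exists.
    with request.urlopen(req) as resp:
        return resp.status

# Example call (requires a real server behind EHR_ENDPOINT):
# send_heart_rate("patient-123", 72)
```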

How can we get E-Health to take off in the Netherlands?

One of the great upsides of digitisation in the healthcare industry is that high-quality healthcare can be delivered to patients more efficiently. During the intelligent lock-down caused by the corona crisis, many E-Health initiatives were rolled out faster than planned. However, caregivers are falling back into their old patterns and are relying on their traditional systems and procedures again (BNR 2020). How can we make sure that the digitisation of our healthcare system continues?

1. Educating patients and caregivers

Digitisation in the healthcare industry will require a transformation in how healthcare is viewed by both patients and healthcare professionals. On the one hand, digital innovations can still seem daunting to patients. Especially for the elderly, it will be essential that digital devices and services are easy to use. Education and training might play an important role in removing the fear of change. On the other hand, E-Health should also be fully embraced by caregivers in order for it to succeed. After all, disruption will not take off if digital innovations are not fully supported by healthcare personnel. For this to happen, it is important to familiarise healthcare professionals (perhaps even early in their training) with the capabilities, and of course also the pitfalls, of digital technologies.

2. Ecosystem orchestration: finding a way for different stakeholders to work together

IoMT enables new players to enter the healthcare domain, from manufacturers of surgical robots to commercial tech companies that provide wearable health watches. It is essential that all players in the IoMT ecosystem find ways to collaborate to support the changing face of medicine.

The reason this is challenging is that healthcare models and institutions are very bureaucratic and often differ significantly per country. To be able to provide a mixed form of care, partly digital and partly physical, the structure of these bureaucratic systems has to change. And this is not an easy task.

3. Matter of time?

To be able to steer the healthcare industry in the right direction, the conditions for innovation must be there. In other words, is there enough time and money for E-Health initiatives to materialise? And if so, is it just a matter of time?

Even though much has already happened in the field of E-Health, it is important to continue the digitisation that the corona crisis has induced. It is now time to press ahead.

What’s your opinion on E-Health? Is now the time to push through with the digitisation of the health industry? What are the challenges the sector needs to overcome? What are the downsides of E-Health?

Please leave your thoughts in the comment box below!


References

BNR. (2020). Vooruitgang digitalisering in de zorg loopt terug. [online] Available at: https://www.bnr.nl/nieuws/gezondheid/10418179/vooruitgang-digitalisering-in-de-zorg-loopt-terug [Accessed 28 Sep. 2020].

Government of the Netherlands. (2020). Government encouraging use of eHealth. [online] Available at: https://www.government.nl/topics/ehealth/government-encouraging-use-of-ehealth [Accessed 27 Sep. 2020].

Oh, H., Jadad, A., Rizo, C., Enkin, M., Powell, J. and Pagliari, C. (2005). What Is eHealth (3): A Systematic Review of Published Definitions. Journal of Medical Internet Research, 7(1).

Patel, K. and Patel, S. (2016). Internet of Things-IOT: Definition, Characteristics, Architecture, Enabling Technologies, Application & Future Challenges. International journal of engineering science and computing, 6(5).

Showell, C. and Nøhr, C. (2012). How should we define eHealth, and does the definition matter? Studies in Health Technology and Informatics, 180, pp.881–884.

Taylor, K., Steedman, M., Sanghera, A. and Thaxter, M. (2018). Medtech and the Internet of Medical Things. [online] Deloitte Centre for Health Solutions. Available at: https://www2.deloitte.com/content/dam/Deloitte/global/Documents/Life-Sciences-Health-Care/gx-lshc-medtech-iomt-brochure.pdf.

Biomimicry: From Neural Networks to Neural Architecture

29 September 2020

Biomimicry is not new; the philosophy is that nature has already solved some of the puzzling problems that we humans are facing today. Just looking at a few examples—bird beaks as inspiration for trains, gecko toes for adhesives, whales for wind turbines, and spiders for protective glass (Interesting Engineering, 2018)—leads us to indeed conclude that nature can help us solve difficult problems. However, what about our own nature; what problems could human biology inspire us to solve?

Well, there is one problem we are facing in the field of computer science which I am fairly sure you have heard of: the nearing ‘death’ of Moore’s law. Gordon E. Moore predicted, in 1965, that “the number of transistors per silicon chip doubles every year” (Britannica, n.d.). However, we are nearing the limits of physics when it comes to scaling down transistors and packing them closer together on a traditional computer chip (Technology Review, 2020); going any denser would cause overheating issues.

There are plenty of problems that require vastly more computational power than we possess today, for example in the fields of physics, chemistry and biology, but also in more socio-technical contexts. Our traditional computer chips, built on the Von Neumann architecture, do not pack the power to solve these and other problems; traditional chips even struggle with tasks such as image and audio processing. Perhaps the biggest flaw is the infamous ‘Von Neumann bottleneck’.

This, amongst other reasons, has inspired researchers to pursue a different type of architecture, one that is more energy efficient, packs more processing power, and gets rid of that particular bottleneck between processing and memory retrieval (more on that below). One promising field of research is that of neuromorphic architecture: a design that mimics the architecture of the human brain.

Traditional chips

Von Neumann architectures – i.a. what your laptop and mobile phone are built on – have computer chips with master clocks that, at each tick, evaluate a binary input and pass on a binary output through the logic gates formed by the transistors of the chip. The processor can only be on or off and often stands idle while waiting to fetch information from the separate memory, which makes these chips very energy inefficient and gives rise to the ‘Von Neumann bottleneck’. The latter comes down to the problem that, no matter how much processing power grows, if ‘transfer rates’ (the rates at which memory is retrieved) stay the same, latency will not improve (All About Circuits, 2020). This clock-driven, binary architecture stands in stark contrast to the architecture of neuromorphic chips.

Neuromorphic chips

Neuromorphic chips contain a spiking neural network, the artificial neurons of which are only activated when incoming signals reach an activation threshold, remaining at a low power-use baseline otherwise. These signals, which are electric pulses, are fired when sensory input changes. The meaning of a signal depends on the number of spikes within a certain period of time, as well as on the design of that specific chip. These signals are graded rather than binary, which means that they – via weighted values – can transfer more information per signal than a bit can. Overall, the design lends itself excellently to processing sensor data, including speech, image and radar inputs. We currently see such sensory inputs being processed by neural networks, but on traditional architectures. The hardware of neuromorphic chips, as the name may give away, resembles a neural network itself, which adds the benefit of running AI models at vastly higher speeds than CPUs or GPUs can (IEEE, 2017).
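To give a feel for this spiking behaviour, here is a toy software model of a leaky integrate-and-fire neuron: it stays silent while input is weak and only emits spikes once its accumulated potential crosses a threshold. This is a didactic sketch, not the circuit design of any actual neuromorphic chip, and all parameter values are made up.

```python
import numpy as np

# Toy leaky integrate-and-fire neuron: a software caricature of the spiking
# behaviour described above, not the design of any real chip.
def simulate_lif(input_current, threshold=1.0, leak=0.95, reset=0.0):
    """Integrate input over time; emit a spike when the membrane
    potential crosses the threshold, then reset."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current  # integrate with leakage
        if potential >= threshold:              # activation threshold reached
            spikes.append(1)
            potential = reset                   # fire and reset
        else:
            spikes.append(0)                    # stay silent (low power)
    return spikes

# A burst of strong sensory input produces spikes; weak input produces none.
rng = np.random.default_rng(42)
weak = rng.uniform(0.0, 0.05, size=50)
strong = rng.uniform(0.2, 0.4, size=50)
print(sum(simulate_lif(weak)), "spikes for weak input")
print(sum(simulate_lif(strong)), "spikes for strong input")
```

The key contrast with a clock-driven chip is visible even in this caricature: nothing happens, and (on real hardware) almost no energy is spent, until the input is strong enough to matter.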

The artificial neurons of a neuromorphic chip, or synaptic cores, operate in parallel. This means that multiple neurons can be activated, and can activate other neurons, at the same time (Psychology Today, 2019). This makes neuromorphic chips incredibly scalable, since you can simply increase the number of artificial neurons, as well as fault-tolerant, since neurons can find other synaptic routes (via other neurons) when a neuron breaks. This mimics neuroplasticity in the human brain.

Another quintessential aspect of neuromorphic chips is that memory and computation are tightly coupled. Whereas traditional chips require external memory for non-volatility, this type of memory is inherent to the design of neuromorphic chips (IBM, n.d.). The artificial neurons within a chip are connected by memristors that resemble artificial synapses. These memristors provide non-volatile memory, because they ‘remember’ the electric charge that has previously flowed through them, as well as the direction in which it was sent (MIT, 2020). This non-volatility means that memristors retain their information even after the device is shut off.

Players to watch

The neuromorphic computing industry is consolidated and built upon traditional computer engineering capabilities. In my view, there are three chips to watch in the field of neuromorphic computing: BrainChip’s Akida, IBM’s TrueNorth, and Intel’s Loihi.

  • BrainChip’s Akida consists of 80 neuromorphic processing units that amount to 1.2 million neurons and 10 billion synapses (Nextplatform, 2020).
  • IBM’s TrueNorth consists of 4096 cores that amount to 1 million neurons and 250 million synapses (CACM, 2020).
  • Intel’s Pohoiki integrates 768 Loihi chips and amounts to 100 million neurons (Intel, 2020).

While Intel’s Pohoiki – a neuromorphic system aggregating Loihi chips – is still in the research phase and only available to researchers, its 100 million neurons make it the most advanced neuromorphic system to date (InsideHPC, 2020). It can perform specific tasks up to 1,000 times faster and 10,000 times more efficiently than conventional processors (Intel, 2020). In terms of the number of neurons inside, Intel’s Pohoiki resembles the brain of a small mammal. In addition, Intel (2020) claims that the neuromorphic system is suited not only to AI purposes, but to a wide range of computationally difficult problems.

Practical considerations

Neuromorphic chips are energy efficient, run AI models more efficiently than traditional architectures, are scalable, and reduce latency by tightly coupling processing and memory. These properties make neuromorphic chips fit to run AI models at the edge rather than in the cloud, which can be valuable for applications in (i.a.) autonomous cars, industrial (IoT) environments, smart cities, cybersecurity, embedded video and audio, and optimization problems such as minimal-risk stock portfolios (Nextplatform, 2020; Intel, 2020). In addition, the energy-efficient and compact design could enable deep learning to become embedded inside devices such as mobile phones. This could drastically improve natural language processing in day-to-day applications – just imagine Siri actually understanding your question and providing a helpful answer!

However, we are not there yet. There are still plenty of challenges, amongst which is developing the most efficient learning algorithms to be run on neuromorphic chips. Neuromorphic chips are still in their infancy, and overcoming technical hurdles will not be the only challenge (Analytics Insight, 2020); ethical concerns surrounding biomimicking hardware already exist, and should be expected to intensify as the technology gains traction and its capabilities grow.

As of now, neuromorphic hardware is not commercially viable yet, but that does not mean we should not pay attention to it.

In the face of all this exciting uncertainty, I will conclude with some food for thought. Please let me know in the comments what your opinion are on (one of) the following three questions:

  • Do you think neuromorphic chips possess potentially transformative power over the nature of work, or even our day-to-day life? Why?
  • What type of (business) applications do you see for hyper-efficient neural network processing at the edge?
  • Can you think of any problems that we have pushed forward along the uncertain and lengthy path of quantum computing research that may be solved earlier by neuromorphic computing?

References

All About Circuits. (2020) https://www.allaboutcircuits.com/news/ai-chip-strikes-down-von-neumann-bottleneck-in-memory-neural-network-processing/ [Accessed September 25, 2020]
Analytics Insight. (2020) https://www.analyticsinsight.net/neuromorphic-computing-promises-challenges/ [Accessed September 28, 2020]
Britannica. (n.d.) https://www.britannica.com/technology/Moores-law/ [Accessed September 25, 2020]
CACM. (2020) https://cacm.acm.org/magazines/2020/8/246356-neuromorphic-chips-take-shape/fulltext [Accessed September 28, 2020]
IBM. (n.d.) https://www.zurich.ibm.com/sto/memory/ [Accessed September 26, 2020]
IEEE. (2017) https://spectrum.ieee.org/semiconductors/design/neuromorphic-chips-are-destined-for-deep-learningor-obscurity [Accessed September 26, 2020]
InsideHPC. (2020) https://insidehpc.com/2020/03/intel-scales-neuromorphic-system-to-100-million-neurons/ [Accessed September 28, 2020]
Intel. (2020) https://newsroom.intel.com/news/intel-scales-neuromorphic-research-system-100-million-neurons/ [Accessed September 28, 2020]
Interesting Engineering. (2018) https://interestingengineering.com/biomimicry-9-ways-engineers-have-been-inspired-by-nature [Accessed September 29, 2020]
MIT. (2020) https://news-mit-edu.eur.idm.oclc.org/2020/thousands-artificial-brain-synapses-single-chip-0608 [Accessed September 26, 2020]
Nextplatform. (2020) https://www.nextplatform.com/2020/01/30/neuromorphic-chip-maker-takes-aim-at-the-edge/ [Accessed September 28, 2020]
Psychology Today. (2019) https://www.psychologytoday.com/us/blog/the-future-brain/201902/neuromorphic-computing-breakthrough-may-disrupt-ai [Accessed September 26, 2020]
Technology Review. (2020) https://www.technologyreview.com/2020/02/24/905789/were-not-prepared-for-the-end-of-moores-law/ [Accessed September 25, 2020]


Will cloud gaming become the new streaming disruption?

28 September 2020

Over the past decade, streaming services have gained a large amount of traction, mostly in the music and movie industries. Companies like Netflix and Spotify are at the forefront of these markets, with user bases of around 193 million and 138 million respectively (Watson, 2020) (Schneider, 2020). The ease of use, as well as the attractive pricing these platforms provide for avid music listeners and movie watchers, has allowed them to revolutionize the markets in which they operate. An industry that so far has not been overtaken by streaming technology is the video game industry. However, this might very well change in the near future with the introduction of cloud-based gaming.

The concept of cloud gaming is rather simple. Normally, video games are installed by users directly on their system of choice, after which they can be played. This requires users to buy either a video game console (e.g. PlayStation 4 or Xbox One) or a computer capable of running the game they want to play. Cloud gaming, however, allows users to run video games remotely, so that they do not need high-end hardware to run their games. The game the user plays runs on a remote server, and the gameplay itself is streamed to the user’s screen (Roach, 2020). Another benefit of cloud gaming is that users do not have to update and/or download games before playing. Essentially, you could compare the service to an interactive Netflix stream, in which every action you perform (through either a controller or mouse and keyboard) is sent to a server, after which the stream is updated to show the video game you are playing (Roach, 2020).

Over the last year, a growing number of companies have entered the cloud gaming market. Google Stadia, Microsoft xCloud, Nvidia GeForce Now and, most recently, Amazon with its service Luna have all entered the market in the past year (Peters, 2020). Thus, there is an abundance of options for users to choose from, depending on their gaming interests. Currently, the subscription model of most cloud gaming platforms differs from services like Spotify and Netflix, as they do not necessarily offer a large catalogue of games for a fixed monthly fee. Google Stadia, for example, allows users either to use the platform for free, if they purchase games individually, or to pay a monthly fee to get a selection of games every month (Henderson, 2020). PlayStation Now, on the other hand, does allow users to access a library of over 500 titles for a fixed monthly fee, but most are games from Sony’s older consoles (Pino & Leger, 2020).

Now you might be wondering why these technologies have not become the norm when it comes to playing video games. This is not solely due to the fact that there is not yet a fixed-fee platform with a large array of games to choose from. The major problem that still inhibits the adoption of this technology is related to both internet speed and latency. Latency refers to the time between a user giving a command and the system reacting to that command. While playing video games, especially those that require the user to react quickly to their surroundings, low latency is highly important for the quality of service (Lampe et al., 2014). Thus, it is essential that users have a high-speed internet connection, so that the latency between them and the server stays limited. Also, while buffering of movies or songs is not ideal, it does not damage the consumer experience enormously. For video games, however, even one second of buffering can mean the difference between life and death (digitally speaking, of course).
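To see why latency is such a hard constraint, a rough back-of-the-envelope budget helps. The numbers below are illustrative assumptions, not measurements of any particular service.

```python
# Back-of-the-envelope latency budget for cloud gaming.
# All values are illustrative assumptions, not measurements of a real service.
network_rtt_ms = 30      # round trip between player and data centre
server_render_ms = 16    # time to render one frame on the remote GPU
encode_ms = 8            # server-side video encoding of the rendered frame
decode_ms = 5            # client-side decoding and display

total_ms = network_rtt_ms + server_render_ms + encode_ms + decode_ms
local_baseline_ms = 16   # roughly one frame at 60 fps on local hardware

print(f"Cloud input-to-display delay: ~{total_ms} ms "
      f"(vs. ~{local_baseline_ms} ms locally)")
# With these assumptions the cloud adds roughly 43 ms on top of local play,
# which is why fast, stable connections matter far more here than for
# ordinary video streaming, where a buffer can hide network hiccups.
```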

The aforementioned shortcomings, however, could very well be circumvented in the near future. Advances in wireless internet speed, for example 5G, could dramatically expand the potential user base for cloud gaming services (Arkenberg, 2020). Also, it is only a matter of time until current (or new) cloud gaming providers increase their available content and offer a large array of video games for a low fee, similar to what Netflix and Spotify have done over the past decade. Thus, the future of cloud gaming seems promising, and it could very well lead to a revolutionized video game industry.

References:

Arkenberg, C. (2020). Can 5G unleash next-generation digital experiences in the home?. Deloitte. Accessed September 24 on: https://www2.deloitte.com/us/en/insights/industry/technology/5g-cloud-gaming.html

Henderson, R. (2020). Google Stadia pricing, free trial, availability, games list, compatible devices and how it works. Pocket-lint. Accessed September 25 on:
https://www.pocket-lint.com/games/news/google/143589

Lampe, U., Wu, Q., Dargutev, S., Hans, R., Miede, A. and Steinmetz, R. (2014). Assessing Latency in Cloud Gaming. Cloud Computing and Services Science. Springer International Publishing, vol. CCNS, 453, ch.4, pp.52-68.

Peters, J. (2020). How Amazon’s Luna cloud gaming service compares to Stadia, xCloud, and GeForce Now. The Verge. Accessed September 23 on: https://www.theverge.com/2020/9/25/21454917

Pino, N. and Leger, H. S. (2020). PlayStation Now review. Techradar. Accessed September 21 on: https://www.techradar.com/reviews/gaming/playstation-now-1213666/review

Roach, J. (2020). How Does Cloud Gaming Work? A Guide for 2020. Cloudwards. Accessed September 22 on: https://www.cloudwards.net/how-does-cloud-gaming-work/

Schneider, M. (2020). Spotify Posts 138 Million Paid Subscribers, Big Operating Loss in First Earning Entirely During Pandemic. Billboard. Accessed September 22 on: https://www.billboard.com/articles/business/9426290/spotify-earnings-2020-q2-subscribers-revenue-forecast-covid-19

Watson, A. (2020). Number of Netflix paying streaming subscribers worldwide from 3rd quarter 2011 to 2nd quarter 2020. Statista. Accessed September 24 on: https://www.statista.com/statistics/250934

Europe and the 5G Challenge

22 September 2020

In September 2020, the European Round Table for Industry published a report on the EU-27’s advancements in 5G technologies. This article briefly explains the findings of this report and the causes behind such results.

With competition over the development of 5G networks increasing every day, companies all around the world have been playing a tense chess game for leadership of this game-changing technology. However, as the chairman of the European Round Table for Industry (ERT), Carl-Henric Svanberg, said in an interview with the Financial Times, it seems that Europe has been left far behind in this race for 5G technology, with an approach that could well end in failure and drive investment down.

On September 18th 2020, the ERT published a report in which the 27 Member States of the European Union and their advancements in both 5G and 4G were analysed and assessed. The report identifies a gap between the European Union and other powerful economies around the globe. For instance, it points out that both the US and South Korea have had commercial 5G services available for a year, with South Korea counting 1,500 base stations per million inhabitants, whereas the majority of Member States have not even launched commercial 5G services and, in total, have only ten 5G base stations deployed per million inhabitants.

The contrast between these economies’ progress in 5G networks can in great part be explained by the diversity of countries within the European Union and the differences among them. Each Member State has its own particular political and economic situation, on top of the shared political and economic framework that binds the European Union together as a single economic power. It is therefore hard to coordinate across the high and inconsistent costs, and the varying returns on investment, throughout the various Member States.

Despite Europe’s potential in digital innovation, which has driven the emergence of various start-up hubs such as Amsterdam, Berlin and Lisbon, the region seems to be left behind in the roll-out of 5G networks. A key factor hampering this progress is spectrum availability and spectrum licensing. With many European telecom operators allocated narrower bandwidths and spectrum licensing being especially costly for some countries, the roll-out of 5G faces a complicated and uncertain environment, which results in several restrictions on innovation, investment, and network deployment.

Moreover, while China’s technology and networking company Huawei progresses in its development of 5G networks, the US Government is moving quickly to stop the internationalisation of those advancements. This has driven European economies into a further state of confusion and blockage. Outside the European Union, the United Kingdom has sided with the US: in July 2020 it banned the purchase of new Huawei 5G equipment, resulting in both a two-to-three-year delay to the roll-out of 5G phone networks and an increase in costs of £2bn. This example paints a clearer picture of the potentially self-sabotaging and slow advancement of Europe as a whole.

All these factors combined result in the currently slow evolution of 5G networks in Europe compared to the advancements of other powerful economies such as China, South Korea, and the US. It is now crucial for the European Union to think about strategies to overcome the obstacles it faces, both internally and externally, in order to avoid further economic turmoil and to boost its own technological strengths for the development of 5G.

References

ERT, 2020. Assessment of 5G Deployment Status in Europe. Available at: https://ert.eu/wp-content/uploads/2020/09/ERT-Assessment-of-5G-Deployment-Status-in-Europe_September-2020.pdf [Accessed September 22, 2020].

Lemstra, W., 2018. Leadership with 5G in Europe: Two contrasting images of the future, with policy and regulatory implications.

How blockchain could disrupt the education system

17 October 2019

In 2017, it was quite possible that even your local baker or butcher advised you to invest in cryptocurrencies. The hype seems to be over and the dust seems to have settled somewhat. Now that most people no longer see the technology solely as a medium of exchange, it is time to bring the real potential of blockchain to the masses. The founder of Ethereum describes blockchain as “a decentralized system that contains shared memory” (Buterin, 2017). The technology therefore offers a solution to any environment that desires decentralization and transparency. Due to the peer-to-peer nature of the technology, the middleman becomes redundant. The first industry that comes to mind for most people is banking, with Bitcoin in the back of their minds. However, blockchain offers a solution to many more industries and markets.

An interesting domain that blockchain could possibly disrupt is the education system. The way we facilitate learning has been around since the 19th century (Rose, 2012). In most countries, there is a four-year university degree model in which education often fails to align with the needs of students and employers. Students learn a set of skills during their curriculum that leaves them not fully prepared for the job market, which is why many employers offer traineeships so that new hires can acquire additional skills. Additionally, on average there are five intermediaries between the educator and the students, all of whom take a percentage of the tuition fee (Raffo, 2018). This is one of the big reasons why tuition fees in the US are so high.

A platform connecting professors, students and employers could be created to solve these two problems. With blockchain, educators are no longer chained to these old institutions and can instead offer a curriculum that fits the wishes of employers directly to students. Students can communicate directly with the professors, so that both parties get what they want. This makes education more affordable, as it removes expensive intermediaries. Certificates received on the public blockchain after taking a class are accepted by employers within the network; a simplified sketch of how such a certificate could be verified follows below. To disrupt an old, rigid model that has been around for centuries, like the education system, bootstrapping and expanding the community of the platform is pivotal. However, once the community matures, it could potentially overrule the current education system. Is this just a utopian idea, or could it become reality in the next few decades?
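As a purely illustrative sketch of the certificate idea, the snippet below computes a tamper-evident fingerprint (hash) of a course certificate; in a real system that fingerprint, rather than the certificate itself, would be written to a public blockchain so that any employer can verify the document they are handed. All names and values are hypothetical.

```python
import hashlib
import json

# Toy illustration: the diploma itself stays off-chain, while a tamper-evident
# fingerprint (hash) of it would be published on a public ledger.
def fingerprint(certificate: dict) -> str:
    """Deterministically hash a certificate record."""
    canonical = json.dumps(certificate, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

certificate = {
    "student": "Jane Doe",
    "course": "Applied Data Analytics",
    "issuer": "Prof. A. Smith",
    "grade": "8.5",
    "issued": "2019-10-17",
}

published_hash = fingerprint(certificate)   # what would be written on-chain

# Later, an employer receives the certificate from the applicant and checks it
# against the published hash: any alteration (e.g. a bumped grade) is detected.
tampered = dict(certificate, grade="9.5")
print(fingerprint(certificate) == published_hash)  # True  -> authentic
print(fingerprint(tampered) == published_hash)     # False -> tampered
```

The blockchain’s role here is simply to make the published fingerprint impossible to quietly rewrite, which is exactly the property a decentralized credential network would rely on.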

Sources:
Buterin, V. (2017, September 18). Decentralizing Everything. Personal Interview with N. Ravikant.
Raffo, E. (2018, February 15). BlockchainTalks – Decentralized Education Marketplace.
Rose, J. (2012). How to Break Free of Our 19th-Century Factory-Model Education System, The Atlantic.
