Bridging the Gap Between AR, AI and the Real World: A Glimpse Into the Future of Smart Technology

12 September 2024

Apple’s recent keynote showcased new products, including the iPhone’s groundbreaking AI integration. Break it down, however, and what Apple has really done is combine several existing technologies, integrate them seamlessly, and present the result as revolutionary. This sparked my imagination about what could already be possible with today’s technology, and what our future might look like.

Apple introduced advanced visual intelligence, allowing users to take a picture of a restaurant, shop, or even a dog, and instantly access a wealth of information. Whether it’s reviews, operating hours, event details, or identifying objects like vehicles or pets, this technology uses AI to analyze visual data and provide real-time insights, bridging the gap between the physical and digital worlds. Tools like Google Image Search and ChatGPT have been available for some time, but Apple has taken these capabilities and seamlessly integrated them into its ecosystem, making them easily accessible and more user-friendly [1]. The Apple Vision Pro merges AR and VR, controlled by moving your eyes and pinching your fingers [2]. I’ve tried it myself, and it was incredibly easy to navigate, with digital content perfectly overlaying the physical world. Now imagine the possibilities if Apple integrated the iPhone’s visual intelligence into the Vision Pro. This headset wouldn’t just be for entertainment or increasing work productivity; it could become an everyday wearable, a powerful tool for real-time interaction with your surroundings.

Picture walking through a city wearing the Vision Pro. By simply looking at a restaurant and pinching your fingers, you could instantly pull up reviews, check the menu, or even make a reservation. Or, if you see someone wearing a piece of clothing you like, you could instantly check online where to buy it, without needing to stop. With these capabilities, the Vision Pro could bring the physical and digital worlds closer together than ever before, allowing users to interact with their environment in ways we’re only beginning to imagine.

Do you think the existing technologies can already do this? Do you think this is what the future would look like? I’m curious to hear your thoughts.

Sources:

[0] All images generated by DALL-E, a GPT inside ChatGPT.

[1] https://www.youtube.com/watch?v=uarNiSl_uh4&t=1744s

[2] https://www.apple.com/newsroom/2024/01/apple-vision-pro-available-in-the-us-on-february-2/


Help! My car can read my mind!

28 September 2021

When thinking about futuristic developments in the automotive industry, people usually picture autonomous cars run by machine-learning systems. Thinking a step further, one could even envision flying vehicles. But what if I told you that we are not that far away from cars reading our minds? The thought can be rather scary: mind-reading cars remind us of sci-fi movies and make such technologies appear to be invasive tools for violating human privacy.

Let’s take a step back and look at this phenomenon in detail, starting with the technology that enables it in the first place. You may have heard of brain-computer interfaces (BCIs) before. Most people believe this technology to be very futuristic and immediately think of chips being inserted into our brains. However, this is not the case. BCIs merely measure brain activity, extract certain information from it, and convert that information into outputs. These outputs allow the BCI to replace, restore, enhance, supplement, or improve human functions. The current state of the technology has not reached implants yet; it works with wearables instead. In more detail, the wearable contains electrodes that measure neuronal activity, which does not sound too ‘sci-fi’ at all. Experts in the field describe unfamiliarity with, and false marketing of, BCI technology as the industry’s greatest obstacle. The devices are not reading our minds or invading our privacy at all; they merely measure, through our brain waves, what we are focusing on.

Coming back to the initial topic of cars reading our minds, we can now better understand the underlying process. Mercedes-Benz introduced the Mercedes Vision AVTR last year during a conference in Las Vegas. The self-driving car has no steering wheel and is reminiscent of a spaceship rather than an average car. AVTR stands for ‘Advanced Vehicle Transformation’ but also alludes to the movie AVATAR, since the vehicle is a collaboration between Mercedes-Benz and Disney. If you have watched the movie, you are probably familiar with the Avatars connecting with nature through their nerve endings. During the recent IAA Mobility 2021, Mercedes showed how the vehicle can be controlled through thought and minor touch. The BCI sits in a wearable that can be calibrated to the driver in around 30-40 seconds. The driver can further calibrate their body to the car by placing a palm on a control pad, which recognizes the driver through their heart rate. Mercedes explained that proper focus is needed to use the BCI, so the car will not simply drive a certain way after a fleeting thought. Mercedes also explained that the wearable is only necessary given the current state of the technology; in the future, the BCI may be built into the car’s headrest or something similar. Researchers are meanwhile working on chips the size of a grain of sand that could be surgically inserted into the skull, making a wearable redundant.

BCIs are emerging not only in the automotive sector but also in healthcare and many other industries. This promising technology might, among other things, enable paralyzed people to draw or type. Research is also investigating the use of BCIs in the military sector.

Personally, I believe BCIs to be a promising technology. They have the potential to facilitate multiple aspects of life and to offer people with disabilities opportunities to do things that are currently out of reach. Certainly, we are still far from seeing BCIs in our daily lives; however, I believe we will get there in the future. Would you get a BCI implant if the technology reaches that state? Let me know in the comments!

Sources:

Daimler. (2021, September 6). Mercedes-Benz VISION AVTR: operating the user interface with the power of thought – Daimler Global Media Site. MarsMediaSite. https://media.daimler.com/marsMediaSite/en/instance/ko/Mercedes-Benz-VISION-AVTR-operating-the-user-interface-with-the-power-of-thought.xhtml?oid=51228086

Krames, E., Peckham, H. P., & Rezai, A. R. (2018). Neuromodulation: Comprehensive Textbook of Principles, Technologies, and Therapies (2nd ed.). Academic Press.

Neurable. (2021, August 9). From Brain Chips To Wearables: The State Of BCI Technology Today. https://neurable.com/blog/from-brain-chips-to-wearables

Norris, M. (2020, August 27). Brain-Computer Interfaces Are Coming. Will We Be Ready? RAND. https://www.rand.org/blog/articles/2020/08/brain-computer-interfaces-are-coming-will-we-be-ready.html


Are we living in a simulation?

22 September 2021

Have you ever wondered whether what you perceive as reality is real life? Who decides what counts as real and what does not? Most people would probably classify the thought of living in a simulation as rather surreal. Nevertheless, there are several reasons why it might be true and not such an absurd thought at all. Elon Musk is currently one of the most famous public figures advocating the pro-simulation stance. His explanation is simple: several decades ago, video games were as simple as two rectangles and a dot, marketed as a game called Pong. Today we have virtual reality, augmented reality, and photorealistic 3D simulations, among others. These technologies bring video games so close to reality that it becomes difficult to tell the difference, and they let many players play lifelike games simultaneously and collectively. At the speed technology has advanced in recent years, we will eventually have video games indistinguishable from reality. Even if the pace of advancement slows significantly, humanity will reach the point of creating such games at some time; that could take thousands of years and still count as revolutionary, since such a span is rather insignificant on an evolutionary scale.

Musk takes this as the primary argument for us living in a simulation: we cannot know whether this advancement has already been made in the past. If it has, we already live in a game so close to reality that we can no longer tell the difference. Through artificial intelligence, we advance within the game and will eventually reach the point where we create further simulations inside the simulation we already inhabit. Musk rates our chance of not living in a simulation at one in billions. And he is not alone; several other professionals in the field support this theory. Another famous advocate, Nick Bostrom, a philosopher at the University of Oxford, formulates the simulation hypothesis as follows: our experiences and lives are the result of an advanced civilization running millions of computers hosting many simulations, with our reality being one of them. Below you can find the link to the full two-hour interview in which he explains this theory further, as well as a link to a website collecting all the information that could hint at us living in a simulation. Even though there is no hands-on evidence, several of the arguments sound very convincing. What do you believe? Is this world ‘real’? Leave a comment below and let me know how you feel about this!

Sources and additional information

Joe Rogan & Elon Musk – Are We in a Simulated Reality?

Why Elon Musk says we’re living in a simulation
Nick Bostrom – The Simulation Argument (Full)

Additional readings

https://www.nbcnews.com/mach/science/what-simulation-hypothesis-why-some-think-life-simulated-reality-ncna913926

https://www.simulation-argument.com


BIM, Meet Gertrude!

6 October 2020

Gertrude enjoying a well-deserved drink during her performance.

In August 2020, famous tech entrepreneur Elon Musk revealed his latest technological project: a pig called Gertrude. At first sight, Gertrude looks like an ordinary pig. She seems healthy, curious, and eager to taste some delicious snacks. Looking at her, it is hard to imagine how she managed to get one of the world’s most radical and well-known tech entrepreneurs so excited. Gertrude just seems normal.

This is exactly the point!


Elon Musk “Gotcha”

Gertrude is no ordinary pig. She has been surgically implanted with a brain-monitoring chip, Link V0.9, created by one of Elon Musk’s latest start-ups named Neuralink.

Neuralink was founded in 2016 by Elon Musk and several neuroscientists. The short-term goal of the company is to create devices that treat serious brain diseases and overcome damaged nervous systems. Our brain is made up of 86 billion neurons: nerve cells that send and receive information through electrical signals. According to Neuralink, your brain is like electrical wiring. Rather than having neurons send electrical signals, these signals could be sent and received by a wireless Neuralink chip.

To simplify: Link is a Fitbit in your skull with tiny wires

The presentation in August was intended to show that the current version of the Link chip works and has no visible side effects for its user. The user, in this case Gertrude, behaves and acts as she would without it. The chip is designed to be implanted directly into the brain by a surgical robot; getting a Link would be a same-day surgery that could take less than an hour. This clears the way for Neuralink’s next stage: the first human implantation. Elon Musk expressed that the company is preparing for this step, which will take place after further safety testing and receipt of the required approvals.

The long-term goal of Neuralink is even more ambitious: human enhancement through merging the human brain with AI. The system could help people store memories, or download their minds into robotic bodies. It is an almost science-fictional idea, fuelled by Elon Musk’s fear of artificial intelligence (AI). Already in 2014, Musk called AI “the biggest existential threat to humanity”. He fears that, at the current rate of development, AI will soon reach the singularity: the point where AI has achieved intelligence levels substantially greater than the human brain’s and technological growth has become uncontrollable and irreversible, with unforeseeable effects on human civilization. Hollywood has given us examples of this in The Matrix and Terminator. With a strategy of “if you cannot beat them, join them”, Elon Musk sees Neuralink’s innovation as an answer to this (hypothetical) catastrophic point in time. By allowing human brains to merge with AI, he wants to vastly increase the capabilities of humankind and prevent human extinction.

Man versus Machine

So, will we all soon have Link-like chips in our brains while we await the AI apocalypse?

Probably not. Currently, the Link V0.9 only collects data from a small number of neurons in a coin-sized part of the cortex. For Gertrude, Neuralink’s pig whom we met earlier in this article, this means being able to wirelessly monitor her brain activity in a region linked to the nerves in her snout. When Gertrude’s snout is touched, the Neuralink system can register the spikes produced by neurons firing electrical signals. In contrast, major human functions typically involve millions of neurons from different parts of the brain. To make the device capable of helping patients with brain diseases or damaged nervous systems, it will need to collect far larger quantities of data from multiple areas of the brain.
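To make "registering neural spikes" a little more concrete: the simplest conceptual version of this step is threshold crossing, where a raw voltage trace is turned into discrete "a neuron fired" events. The sketch below is purely illustrative and not Neuralink's actual signal pipeline; the function name and toy data are made up.

```python
# Illustrative sketch (NOT Neuralink's real pipeline): the simplest way a
# BCI can convert raw voltage samples into discrete spike events is to
# record every upward crossing of an amplitude threshold.

def detect_spikes(samples, threshold):
    """Return the indices where the signal crosses the threshold upward."""
    spikes = []
    for i in range(1, len(samples)):
        # A spike event: previous sample below threshold, current at/above it.
        if samples[i - 1] < threshold <= samples[i]:
            spikes.append(i)
    return spikes

# Toy voltage trace: two clear spikes riding on low-amplitude noise.
trace = [0.1, 0.0, 0.2, 1.5, 0.3, 0.1, -0.1, 1.8, 0.2, 0.0]
print(detect_spikes(trace, threshold=1.0))  # -> [3, 7]
```

Real systems add filtering and per-electrode calibration on top of this idea, but the threshold step captures why "measuring what neurons do" is far less exotic than "reading minds".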

On top of that, brain research has not yet achieved a complete understanding of the human brain. There are many functions and connections that are not yet understood. It appears that the ambitions of both Elon Musk and Neuralink are ahead of current scientific understanding.

So, what next?

Neuralink has received a Breakthrough Device designation from the US Food and Drug Administration (FDA), the organisation that regulates the quality of medical products. This gives Neuralink the opportunity to interact with the FDA’s experts during the premarket development phase and opens the path towards human testing. The first clinical trials will be done on a small group of patients with severe spinal cord injuries, to see whether they can regain motor functions through thought alone. For now, that is a medical goal with potentially life-changing outcomes, while we wait for science to catch up with Elon Musk’s ambitions.


Thank you for reading. Did this article spark your interest?
For more information, I recommend you to check out Neuralink’s website https://neuralink.com/

Curious how Gertrude is doing?
Neuralink often posts updates on their Instagram page https://www.instagram.com/neura.link/?hl=en

Want to read more BIM-articles like this?
Check out related articles created by other BIM students in 2020:

Sources used for this article:



Amazon Explore – Are Virtual 1-On-1 Live-streams the Future?

6 October 2020

Bored at home? Wanderlust due to the COVID-19 travel restrictions? Or just keen to explore or learn something new? Did you answer yes to one of those questions? If you did, I might have found something interesting for you: ‘Amazon Explore’.

Amazon (2020) just launched the beta and describes its new service as an interactive live-streaming platform offering 1-on-1 experiences around the world. These so-called ‘experiences’ are 30-60-minute 1-on-1 live sessions with a host, the person offering a specific experience. The experiences span the globe and are split into three categories: ‘culture & landmarks’, ‘learning & creativity’, and ‘shopping’. They start at around $10 and can get quite expensive (the most expensive one I found was a tour through Prague’s Old Town for $210). And the best thing is… you can become a host yourself.

Okay, all jokes aside… and sorry for the ad. Now back to business: why do you think Amazon launched a service like that? I would say the reasons are quite obvious. I am sure all of you have watched a live-stream before. And if you have, you may be part of the 63% of the population aged 18-34 who watch live-stream content daily, making it (unsurprisingly) one of the most popular types of online content (Stanimirovic, 2020). Facebook, Instagram, YouTube, Twitch (also owned by Amazon), TikTok… you name it: live-streams can be found (almost) everywhere nowadays. Given the current COVID-19 pandemic, I think their popularity will only increase. During the first COVID-19 lockdown, for example, all the aforementioned providers experienced exponential growth in their view counts (Stephen, 2020). That is probably why it is hardly surprising that the live-streaming market is estimated to grow to almost $70 billion by the end of next year (Stanimirovic, 2020).

A new market? Whereas normal live-stream providers only offer one-to-many experiences, Amazon sees potential in personalized 1-on-1 experiences. And it is not the first ‘big’ company that wants to take advantage of small-scale virtual experiences: due to COVID-19 and its restrictions earlier this year, Airbnb and ClassPass (a fitness company) had to rethink their business models and launched similar services. Airbnb launched ‘virtual travel experiences’ and ClassPass personal ‘online classes’ (Porter, 2020).

I think the service has potential and can certainly benefit from network effects (more users = more value). And I am convinced that there are people out there attracted to those personalized 1-on-1 virtual experiences (just check the stats on regular live-streaming). However, I will NOT try this service, as I simply prefer real-world experiences. What about you? Would you try ‘Amazon Explore’?

 

References:

Amazon, (2020). [online] Available at: <https://www.amazon.com/b?ots=1&slotNum=2&imprToken=ecba6fd0-a6fc-ca8b-5b2&node=19424628011&ref=srk_stf_hro_lrn&tag=theverge02-20&ascsubtag=%5B%5Dvg%5Be%5D21259036%5Bt%5Dw%5Bd%5DD> [Accessed 6 October 2020].

Porter, J., (2020). Amazon Starts Offering Virtual Classes And Sightseeing Tours Via New Explore Platform. [online] The Verge. Available at: <https://www.theverge.com/2020/9/30/21494995/amazon-explore-virtual-classes-sightseeing-shopping-online-experiences> [Accessed 6 October 2020].

Stanimirovic, (2020). The Impact Of Live Streaming On Today’s Growingly Digital World. [online] BRIDTV. Available at: <https://www.brid.tv/how-live-streaming-is-changing-the-world-as-we-know-it/> [Accessed 6 October 2020].

Stephen, B., (2020). The Lockdown Live-Streaming Numbers Are Out, And They’re Huge. [online] The Verge. Available at: <https://www.theverge.com/2020/5/13/21257227/coronavirus-streamelements-arsenalgg-twitch-youtube-livestream-numbers> [Accessed 6 October 2020].



Using AI to Build Smart Cities

6 October 2019

According to data presented by the UN, the world population is estimated to grow to approximately 9.7 billion people by 2050. We are also seeing increasing movement towards cities, and it is estimated that almost 70% of the population will be living in urban areas (Medium, 2019). Cities must therefore be able to host a large number of inhabitants plus additional commuters. They need to provide energy and resources to all these people while also removing waste and wastewater. Traffic is another issue. Furthermore, it is anticipated that these cities, many of which will house 10 million people, will comprise mixed nationalities, cultures, and backgrounds (Medium, 2019). Administration and management are therefore also focus areas for creating peaceful, prospering cities.

Many of these problems can be tackled using AI. This blog post will present some ideas discussed by Medium (2019) that might help battle the challenges presented by the large crowds of future cities.

Smart Traffic Management: Smart traffic solutions can be used to control traffic flow and, consequently, avoid congestion. They can consist of road-surface sensors and cameras that collect data in real time, plus a data system that analyzes this data and offers recommendations to commuters to limit congestion.

Smart Parking: Again, road sensors collect data and notify users of available parking spots nearby. Imagine finding a parking spot in your app and reserving it before you leave for your destination, instead of circling the city for hours, wasting time and producing emissions every minute.

Smart Waste Management: Waste collection and disposal is an increasingly difficult challenge for cities. Not only are they faced with more trash, but there is also growing public concern about proper disposal and recycling as the majority of people become more aware of climate issues. An example of a city at the forefront of smart waste management is Barcelona, where sensors fitted on trash bins notify the collection trucks when the bins are filling up. AI can also be used to design smarter routes for trash collection, or even to automate the process with robots.

Smart Policing: This is a rather controversial topic: cities could use data-driven strategies to predict and catch criminal activity. This has already been implemented in Singapore, where a network of cameras and sensors monitors the streets and notifies the authorities when crimes take place. It might be difficult to implement in certain cities, as many populations are more skeptical of surveillance and place a larger focus on privacy. The idea is still interesting, though.
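The smart-parking idea above boils down to a simple matching step: sensors report which spots are free, and the app returns the free spot closest to the driver's destination. Here is a minimal sketch of that step; the data format, coordinates, and names are all made up for illustration.

```python
# Hypothetical sketch of the matching step behind a smart-parking app.
# Road sensors report each spot as (id, x, y, is_free); the app picks the
# free spot nearest the driver's destination.
import math

def nearest_free_spot(spots, destination):
    """Return the id of the closest free spot, or None if all are taken."""
    free = [s for s in spots if s[3]]
    if not free:
        return None
    dx, dy = destination
    # Pick the free spot with the smallest straight-line distance.
    return min(free, key=lambda s: math.hypot(s[1] - dx, s[2] - dy))[0]

sensors = [("A1", 0.0, 0.0, False), ("B2", 1.0, 1.0, True), ("C3", 5.0, 5.0, True)]
print(nearest_free_spot(sensors, (0.0, 0.0)))  # -> B2
```

A production system would of course use road distance rather than straight-line distance and handle reservations concurrently, but the core matching logic is this simple.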

As most people will find themselves living in cities in the future, city authorities will be extremely important in the development of our world. City politics may in many cases be more significant than countrywide politics. Cities should cooperate, share their smart solutions with one another, and create a positive loop that contributes to a better world for humans and the planet.

Could you think of other smart initiatives that can help cities be more sustainable and liveable?


References:

Medium. (2019). Artificial Intelligence for Smart Cities. [online] Available at: https://becominghuman.ai/artificial-intelligence-for-smart-cities-64e6774808f8 [Accessed 6 Oct. 2019].


How Sustainable is Technology Really?

6 October 2019

Sustainability is nowadays a main focus of attention in society, as it is the greatest challenge of our time. Global problems, from pollution to poverty, starvation, and climate change, have to be solved to create a sustainable world to live in. Technologies have created some of the problems we face, but they are also able to solve several of them.

 

It is clear that the fashion industry, for example, lacks ethical and environmental standards, but far less attention is devoted to such standards in the technology sector. People know little about tech supply chains. For instance, the main components of a smartphone are cobalt, gold, silver, palladium, and tin, and these minerals are to a large extent mined in developing countries under poor regulatory frameworks that violate human rights. Such smartphones have a life of around three years before they become obsolete, an outcome engineered deliberately and without much regard for human or environmental effects. Consequently, around 50 million metric tons of e-waste are now produced per year (George, 2019).

 

One example of a company introducing ethical smartphones is Fairphone, a Dutch company that produces ethically made phones in small quantities: phones that are meant to be durable and are made from fair-trade minerals. However, the company is still small, and it is questionable whether this business model will disrupt the technology market (George, 2019).

 

This is only one example of a technology that is not working towards the sustainable development goals. Luckily, there are alternatives to such technologies; however, these need to be developed further to disrupt the market. There are also many sustainable technologies trying to solve the world’s sustainability issues, such as homes that generate their own electricity rather than drawing it from fossil-fuel-burning power plants, and a smog-scrubbing tower (Wang, 2015).

 

George, K. (2019). The tech industry has a serious sustainability problem. Retrieved 6 October 2019, from https://www.huckmag.com/art-and-culture/tech/the-tech-industry-has-a-serious-sustainability-problem/

 

Wang, U. (2015). Top five sustainable technology trends of 2015. Retrieved 6 October 2019, from https://www.theguardian.com/sustainable-business/2015/dec/31/top-5-sustainable-technology-trends-of-2015


How will automation affect our jobs in the future?

10 September 2019


(Medium.com ©) Time to read: 4 min

 

As Adam McCulloch describes in his article “Automation and AI: how it will actually affect the workplace”, there are very split opinions about whether automation or Artificial Intelligence in a broader sense will either create or destroy job opportunities.

The latter, more antagonistic side of the argument claims that the use of AI for automating job routines is going to entirely replace the need for human employees. In contrast, the counter-argument to this posits the idea of job opportunities and the shift from routine labour to more meaningful jobs that cannot be replaced by machines at all.

Personally, I believe we will see both sides materialize to some extent as we continue to develop technologies and machines that aim to mimic both physical and mental human activities. At the risk of stating the obvious, one reason I believe AI and process automation will create, rather than destroy, job opportunities in the near future is that there are more forces fuelling the demand for automation than opposing it.

Industry and government bodies are realizing the gain in productivity that can be achieved by automating routinized tasks and are therefore unlocking large amounts of money to be dedicated to the development of automation technologies. This will most certainly create job opportunities as the supply of engineers and managers with experience in this field is currently drastically behind the demand for such technologies and business models.

Forces opposing the development of automation technologies nevertheless do exist, urging for policies and regulations that safeguard the human workforce. A good example of a player seeking to oppose automation is labour unions, which act on the fear that humans and machines will compete against each other rather than work together in a symbiotic relationship.

Blue-collar automation requires state-of-the-art technology which, at this point in time, remains expensive for companies to implement. For this reason alone, I believe the fear that automation will destroy blue-collar positions is not yet justifiable on a global scale, as many countries lack the economic resources and/or incentives to adopt the required technology. Even more developed countries remain heavily reliant on a cheap human workforce and keep outsourcing blue-collar work to less developed countries rather than acquiring robots.

White-collar automation, or robotic process automation (RPA), refers to the automation of routine desk-job tasks that are highly standardized within a white-collar worker’s set of responsibilities. It is perhaps easier and less costly to implement than blue-collar automation, as it does not require developing and deploying physical mechanical robots (anyone with a basic grasp of programming can, for example, write programs that automate their Excel tasks). In this scenario, I believe automation will free up white-collar workers’ time and energy for different, more thought-intensive tasks.
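To make the spreadsheet-automation point concrete, here is a deliberately tiny example of the kind of routine desk task meant above: totalling a column across an exported report. CSV is used so the sketch needs only the standard library; the file contents and column names are invented for illustration.

```python
# A trivially small taste of white-collar automation: summing the "amount"
# column of a CSV report instead of doing it by hand in a spreadsheet.
import csv
import io

def total_column(csv_text, column):
    """Sum a numeric column of CSV data given as a string."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(float(row[column]) for row in reader)

report = "invoice,amount\nA-001,120.50\nA-002,79.50\n"
print(total_column(report, "amount"))  # -> 200.0
```

Chaining a few steps like this (read file, aggregate, write summary, email it) is essentially what commercial RPA tools package up behind a point-and-click interface.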

I believe that much of the economic and sociological research on the first and second industrial revolutions applies equally to what is now often referred to as the third and fourth industrial revolutions. John Maynard Keynes, for example, already expected the long-run impact of industrialization on society to be a drastically shortened work week. Today we can observe that this theory has in fact not (yet?) materialized.

To conclude, and again at the risk of stating the obvious, it is we humans who are the source of automation, and we seem to be in a period of technological breakthroughs (AI, blockchain, quantum computing, IoT, etc.) that will impact many more people than are currently developing, and hence deeply understanding, them. As more people realize they will be affected by such breakthroughs, a bandwagon effect of decision-making involving a highly diverse set of stakeholders will develop to steer the direction of this new industrial revolution. Yes, I believe the potential for replacing our jobs in the very long term exists; however, whether that will happen depends on how we and our decision-makers want to spend our time.


What do you think?


References

 

Bessen, J. and Kossuth, J. (2019). Research: Automation Affects High-Skill Workers More Often, but Low-Skill Workers More Deeply. [online] Harvard Business Review. Available at: https://hbr.org/2019/02/research-automation-affects-high-skill-workers-more-often-but-low-skill-workers-more-deeply [Accessed 10 Sep. 2019].

 

Book, A. (2018). Should I Panic About Automation Now Or Later?. [online] Hackernoon.com. Available at: https://hackernoon.com/should-i-panic-about-automation-now-or-later-82a4323f1dc7 [Accessed 10 Sep. 2019].

 

Chui, M., Lund, S. and Gumbel, P. (2019). How will automation affect jobs, skills, and wages?. [online] McKinsey & Company. Available at: https://www.mckinsey.com/featured-insights/future-of-work/how-will-automation-affect-jobs-skills-and-wages [Accessed 10 Sep. 2019].

 

McCulloch, A. (2019). Automation and AI: how it will actually affect the workplace – Personnel Today. [online] Personnel Today. Available at: https://www.personneltoday.com/hr/analysis-ai-automation-impact-on-jobs-hr-employment/ [Accessed 10 Sep. 2019].

 

Sivertsen, R. (2018). The Fourth Industrial Revolution – Where Are You Going With This? – Ross Sivertsen – Systems Sherpa. [online] Ross Sivertsen – Systems Sherpa. Available at: https://ross-sivertsen.com/the-fourth-industrial-revolution-where-are-you-going-with-this/ [Accessed 10 Sep. 2019].



Can Ethics Catch Up To The Onward March Of Artificial Intelligence?

11 September 2018

Artificial intelligence is currently experiencing great technological advancements, but can the field of ethics keep up before it’s too late? Can we prevent disaster and enter a new golden age?

Dear god, I desperately hope so!

Isaac Asimov devised the Three Laws of Robotics in his 1942 short story "Runaround": laws that governed robot behavior as a safety feature for mankind. Much of his subsequent work on robots tested the boundaries of those three laws to see where they would break down or create unanticipated behavior. His work implies that no set of rules can account for every possible circumstance.1

1942 was a long time ago, when artificial intelligence was but a twinkle in the eyes of computer scientists, programmers and nerds. While we still have a ways to go before we reach the singularity,2 the point at which AI achieves greater general intelligence than humans, we can't deny that AI research and application have come a long way. Programs like IBM Watson, which successfully diagnosed leukemia in a patient when doctors couldn't3 and beat human opponents on the game show Jeopardy!,4 and the advent of self-driving cars reinforce that fact.

However, Nick Bostrom argues in his paper "Ethical Issues in Advanced Artificial Intelligence" that artificial intelligence has the capability to bring about human extinction. He claims that a general superintelligence would be capable of independent initiative as an autonomous agent. It would be up to the designers of the superintelligence to code ethical and moral motivations into it to prevent unintended consequences.5 Sadly, the sheer complexity and variety of human beliefs and values make it very difficult to make an AI's motivations human-friendly.6

Unless we can come up with a near-perfect ethical theory before AIs reach the singularity, an AI's decisions could allow for many potentially harmful scenarios that technically adhere to the given ethical framework but disregard common sense.

Many of the large tech companies have teamed up with academia to address the issue, conducting research and organizing discussions, but it is still uncertain whether they'll achieve their goals before somebody lets the genie out of the bottle. I remain hopeful, but just in case:

I, for one, welcome our new robot overlords.

1 Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 0-553-38256-X.

2 Scientists Worry Machines May Outsmart Man By JOHN MARKOFF, NY Times, July 26, 2009.

3 Ng, Alfred (7 August 2016). "IBM's Watson gives proper diagnosis after doctors were stumped". NY Daily News. Archived from the original on 22 September 2017.

4 Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times. Archived from the original on 22 September 2017.

5 Bostrom, Nick. 2003. “Ethical Issues in Advanced Artificial Intelligence”. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.

6 Muehlhauser, Luke, and Louie Helm. 2012. “Intelligence Explosion and Machine Ethics”. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.


Will your clothes replace your personal trainer?

15

October

2017

A lot of industries are being disrupted by new companies, products and services. Some of the most well-known examples are Airbnb and the hotel industry, Uber versus taxi companies, and Netflix as opposed to video rental businesses. However, other industries are experiencing this as well. One example is the sports and health industry. It is a bit different from the examples above, because there is not one company disrupting or taking over the entire industry, but there certainly is a shift here.

Apps
This shift was first driven by food, health and sports apps, which help you track your calorie intake and macronutrients, give insight into your sleep cycle, count your steps, plan your workouts and check your progress. Examples of these apps are Runtastic, Endomondo, MyFitnessPal, Google Fit and Apple Health. These apps can also be connected with wearables like the Apple Watch Sport and Fitbit. However, one of the newest technologies in the sports and health department is another type of wearable: clothes! Not normal ones, but smart clothing!

Smart clothing
A lot of companies are developing smart clothing that can help you track a whole range of things. Not only can these garments track the things your smartwatch can, like your heart rate and step count, they can do much more. For example, there are shirts that measure your body temperature, workout intensity, recovery and fatigue levels, but also air quality and UV exposure. Or running shorts that not only count steps but also measure cadence, ground contact time, pelvic rotation and stride length, which can help you improve your running form and reduce the chance of injury. There are also shirts that detect which of your muscles are working and transfer this workout data to a smartphone, so you can see whether you are favoring one side over the other and whether you are activating the right muscles with your exercises. There are even yoga pants that pulse at the hips, knees and ankles to encourage you to move and/or hold positions, and give you additional feedback through an app afterwards. With all these metrics, who still needs a personal trainer?
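To give a feel for how little is needed to turn raw sensor events into the running metrics above, here is a minimal sketch of how cadence and stride length could be derived from footstrike timestamps plus a distance reading. This is purely illustrative: the function name, inputs and units are my own assumptions, not the API of any actual smart-clothing product.

```python
# Hypothetical sketch: deriving running-form metrics from raw sensor events.
# Assumed inputs: a list of footstrike timestamps (seconds) detected by the
# garment, and the distance covered over that interval (e.g. from GPS).

def running_metrics(footstrike_times, distance_m):
    """Return (cadence in steps/minute, stride length in metres/step)."""
    steps = len(footstrike_times) - 1  # number of step intervals
    duration_min = (footstrike_times[-1] - footstrike_times[0]) / 60.0
    cadence_spm = steps / duration_min      # steps per minute
    stride_length_m = distance_m / steps    # metres per step
    return cadence_spm, stride_length_m

# Example: 7 footstrikes over 2 seconds, covering 7.2 metres
cadence, stride = running_metrics([0.0, 0.33, 0.66, 1.0, 1.33, 1.66, 2.0], 7.2)
# cadence -> 180 steps/min, stride -> 1.2 m
```

A real garment would of course do far more signal processing (filtering accelerometer noise, detecting footstrikes in the first place), but the metrics themselves reduce to simple arithmetic like this.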


Sources:
https://www.digitaltrends.com/health-fitness/athos-smart-clothes/
https://www.wareable.com/smart-clothing/best-smart-clothing
https://www.digitaltrends.com/wearables/smart-clothing-is-the-future-of-wearables/
