An AI driven apocalypse?! Responsible and ethical programming

25 September 2020


Movies have often depicted doomsday scenarios in which Artificial Intelligence (AI) driven robotics pose a threat to humanity. A classic example is the movie Terminator 2: Judgment Day, where Skynet attempts to rule Earth using AI driven killer robots. These depictions have fueled debates in which optimists and pessimists argue over whether embracing AI is beneficial for society. Pessimists would argue that a brief software malfunction could turn a robot against its owner. However, perhaps it is too simplistic to blame human-built AI robotics for their malfunctions.

A critical element to consider is the risk of individuals exploiting AI technology for harmful ends. Several global leaders in AI, such as Google, recognize the dire need for preventive measures and have therefore developed defenses and tools such as Adversarial Logit Pairing (ALP) and CleverHans. Nevertheless, the fear remains that individuals with harmful intentions will find ways to cause disaster. To minimize this risk, sophisticated correction mechanisms must be put in place. Several institutions have attempted to establish a responsible framework for the creation and usage of AI. For example, the Institute for Ethical AI and Machine Learning emphasizes the importance of conscious human augmentation (assessing the impact of one's input upon the output created by AI).
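ALP and CleverHans both deal with adversarial examples: inputs nudged just enough that a model flips its prediction. As a rough illustration (not taken from any of the sources above, and using a deliberately toy logistic classifier rather than a real vision model), the classic Fast Gradient Sign Method shows how a small, bounded perturbation can overturn a confident verdict:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic classifier.

    For cross-entropy loss, the gradient w.r.t. input feature x_i
    is (sigmoid(w.x + b) - y) * w_i; the attack nudges every
    feature by eps in the direction of that gradient's sign.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    grad_scale = sigmoid(z) - y  # dLoss/dz
    return [xi + eps * sign(grad_scale * wi) for wi, xi in zip(w, x)]

# Toy model and an input it classifies as positive with ~99% confidence.
w, b = [1.5, -2.0], 0.0
x, y = [2.0, -1.0], 1
z = sum(wi * xi for wi, xi in zip(w, x)) + b
print(round(sigmoid(z), 3))   # ~0.993

# After one bounded FGSM nudge, the same model flips its verdict.
x_adv = fgsm_perturb(x, w, b, y, eps=1.5)
z_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(round(sigmoid(z_adv), 3))   # ~0.438
```

Defenses like ALP work by training the model so that clean and perturbed inputs produce similar outputs, blunting exactly this kind of attack.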

Besides explicit harmful intentions, harm caused by AI can also arise from human error embedded in the code. As software developers note, the likelihood of AI malfunctions rises as software grows in complexity, making perfectly functioning software nearly impossible. An example is a test deployment of Tesla's autonomous vehicle which resulted in a collision. The AI was unable to distinguish an object from its environment, and this ability depends on whether the developers have provided sufficient instructions to the system. Unfortunately, the collision is physical evidence of embedded human error.

To conclude, Hollywood's depiction of AI driven robot killers appears misleading. Rather, the possible "threat" posed by AI likely stems from humans with harmful intentions or from accidental human error. Nevertheless, it remains important for organizations seeking to integrate AI into society to prevent human misconduct and to identify current and future human errors inherited by AI.

Sources:

https://ai.google/static/documents/responsible-development-of-ai.pdf

https://ethical.institute/principles.html

https://humanerrorsolutions.com/problems-and-solutions-ai-and-human-error/

https://www.information-age.com/artificial-intelligence-set-fix-human-error-123466675/

https://thenextweb.com/neural/2020/09/18/a-beginners-guide-to-the-ai-apocalypse-killer-robots/


Excessive gambling or running ahead? Tech automotive ventures and disruption hype.

9 September 2020


The traditional automotive industry is dominated by large, rigid incumbents such as Toyota, Ford and Daimler, commonly characterized by large-scale manufacturing, bureaucracy, and fossil fuel emissions. With global communities demanding change, ventures such as Nikola, Polestar and NIO have emerged that use agility and disruptive technologies to challenge the status quo of the traditional incumbents. Disruptive trends carried by these ventures include autonomous driving, the sharing economy and connectivity. However, a fundamental concern in this new age of the automotive industry is that the excitement created by these ventures has sent their valuations surging to unprecedented heights. For example, Nikola is valued similarly to Ford on the stock market, even though Nikola has not yet sold a single vehicle.

These excessive valuations have cast doubt on whether technology automotive ventures can sustain their growth in the face of the hurdles ahead. For example, ventures developing autonomous vehicles face fundamental difficulties such as:
1. Establishing a common notion of training, testing, and validating machine learning.
2. Developing reliable sensors. The sensors currently deployed in autonomous vehicles are not sufficiently reliable or safe in all environments (e.g. rain and dust).
3. Establishing regulation. No rules or regulations are yet in place for autonomous systems, and they are unlikely to be settled in the near future.
These hurdles may delay commercial viability in the short run. If commercialization remains a distant prospect, the excessive valuations driven by hype and expectation may wear off and steer technology automotive ventures towards failure.

While a few (notably Elon Musk) claim autonomous driving will arrive by the end of 2020, the hurdles blocking further progress may tumble valuations and limit future funding availability. Hence, should we risk tech automotive ventures becoming over-valued and collapsing under the pressure? Or relieve that pressure by treating them as "normal" automotive firms, regardless of society's demands for futuristic transportation?
References:
https://thenextweb.com/cars/2020/08/19/autonomous-cars-5-reasons-they-still-arent-on-our-roads-syndication/
https://thenextweb.com/shift/2020/09/08/gm-says-its-going-to-build-nikola-badger-hydrogen-ev-truck-tesla/
