Regulating Artificial Intelligence

8 October 2016


Artificial Intelligence, a term originally coined in the 1950s, has received increased attention over the last year due to technical progress in machine learning and related technologies, and a growing number of practical applications built on them.


This technical development has raised many ethical concerns.

Some derive from current issues in software development, such as life-or-death decisions made by autonomous vehicles, the adoption of stereotypes through biased training data, or the threat of rising unemployment through automation.

Other concerns are more theoretical or futuristic in nature and deal with scenarios of machines taking over, as depicted in science fiction movies like Terminator or the Matrix trilogy. Another widely discussed idea is the "technological singularity", a term describing a future in which artificial intelligence surpasses human-level cognitive capabilities.


Whether for current or future issues, there seems to be a need for guidelines or rules that address these questions. A few political voices have already raised the topic, leading the government of the United States to currently investigate potential needs for regulation.


But private entities are addressing these moral concerns as well.


Elon Musk, CEO of Tesla and SpaceX, and Sam Altman of Y Combinator recently founded the non-profit artificial intelligence research company OpenAI in order to "build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible". Since the launch in April they have been publishing their work in the form of research articles and blog posts, and released OpenAI Gym, a toolkit for developing and comparing reinforcement learning algorithms. The long-term goal of the non-profit is to develop safe, open-source algorithms that lower the risk of disasters caused by mistakes in AI creation and prevent knowledge monopolies from forming around a few big companies.
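Gym's main contribution is a common interface: every environment exposes a reset() and a step() method, so any reinforcement learning algorithm can be benchmarked against any environment. As a rough sketch of that interaction loop, the toy "GuessEnv" below is a hypothetical stand-in written from scratch, not part of Gym itself:

```python
import random

class GuessEnv:
    """Toy environment: the agent earns a reward by guessing 0 or 1."""

    def reset(self):
        # Start a new episode and return the initial observation.
        self.secret = random.randint(0, 1)
        return 0  # dummy observation

    def step(self, action):
        # Apply an action and return (observation, reward, done, info),
        # mirroring the 4-tuple contract of Gym's step() at the time.
        reward = 1.0 if action == self.secret else 0.0
        return 0, reward, True, {}

def run_episode(env):
    """Run one episode with a random policy and return the total reward."""
    env.reset()
    total, done = 0.0, False
    while not done:
        action = random.randint(0, 1)  # stand-in for action_space.sample()
        _, reward, done, _ = env.step(action)
        total += reward
    return total

total_reward = run_episode(GuessEnv())
```

Swapping GuessEnv for any Gym environment leaves the loop unchanged, which is precisely what makes side-by-side comparison of algorithms practical.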


These very big companies, however, recently came together to create the "Partnership on AI". Facebook, Amazon, Google, IBM and Microsoft, in an apparent act of self-governance, formed the group to exchange knowledge, create best practices and publish research, especially in the fields of ethics, inclusivity and privacy. To forestall concerns or conspiracy theories, the partnership plans to make discussions and minutes from its meetings publicly available.


It is currently too early to judge the output of these organizations, but it will certainly be interesting to keep an eye on them and track whether their efforts lead to human-friendly decisions and algorithms, or to new problems in the future.


Sources:

US Government

https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligence


OpenAI

https://openai.com/

https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/


Partnership on AI

http://www.partnershiponai.org/

https://techcrunch.com/2016/09/28/facebook-amazon-google-ibm-and-microsoft-come-together-to-create-historic-partnership-on-ai/
