Artificial Intelligence in warfare – threat or opportunity?

29 September 2019


The US Air Force published a picture of the future of warfare. It depicts satellites gathering big data so that one side can stay a step ahead of its opponent.

“Guns do not kill, people do.” It is an argument heard many times from pro-gun activists, but in the future it may hold even less truth than it does today.

You might have heard of AI-infused drones that autonomously decide whom to target and whom to spare. Claims like these can fuel the view that AI should not be integrated into weapons or modern warfare equipment at all.

This black-and-white setting turns grey, however, if AI can also be used to increase the accuracy of weapons, reduce the amount of explosives used, and avoid civilian casualties.

Many would agree that AI-infused systems should not be able to initiate a deadly strike on their own, and that humans should remain in the decision-making loop. At the same time, many would also like to exclude humans from the loop, as people tend to slow the process down.

Fully autonomous weapons would be dangerous, as machine learning neural networks resemble the human brain: systems that are difficult or even impossible for humans to understand.

People also tend to resist new technologies even when the advantages outweigh the disadvantages. Just think of self-driving cars, which face far stricter regulation than human drivers. On the other hand, AI is not suitable for every application, as the margin of error must be zero, especially with weapons of mass destruction.

Talking about drones and terminators is trendy and generates clickbait, but the real advantage of AI and digitalization lies in processing big data. Faster processing of information improves situational awareness and speeds up decision-making. Think of a modern aircraft, which gathers and processes enormous amounts of information; the pilot could not make sense of it all without the machine's help. Even so, decision-making remains the responsibility of a human.

On the other hand, AI and digitalization may introduce new vulnerabilities and weaken cybersecurity. The drawback of information is that it can be leaked or hacked. Machine learning in particular is exposed to manipulation, because the machine cannot explain how it reached a given conclusion; with people, at least, you can ask for their reasoning. Fortunately, it is relatively easy to modify systems in ways that make them less vulnerable.
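To make that manipulation risk concrete, here is a minimal, purely illustrative sketch in Python. It assumes only NumPy and uses a toy linear "threat classifier" as a stand-in for a real model: an attacker who can probe the model adds a small, targeted perturbation to an input and flips the decision, while the model offers no account of why its answer changed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy example: a "sensor reading" with 10 features
# and the weight vector of an already-trained linear classifier.
weights = rng.normal(size=10)
reading = rng.normal(size=10)

def classify(x):
    """Label the reading 'threat' if the linear score is positive, else 'no threat'."""
    return "threat" if weights @ x > 0 else "no threat"

original = classify(reading)

# An attacker who can probe the model nudges each feature slightly in the
# direction that pushes the score across the decision boundary.
score = weights @ reading
epsilon = abs(score) / np.abs(weights).sum() + 0.01  # just enough to flip the sign
direction = -np.sign(weights) if score > 0 else np.sign(weights)
perturbed = reading + epsilon * direction

print(original, "->", classify(perturbed))
# A small, targeted change flips the output, and the model itself cannot explain why.
```

Real attacks on deep networks follow the same principle at a much larger scale, which is why opaque models handling hostile input deserve extra scrutiny.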

 

What do you think the role of information will be in the future? Will terrorists and other hostile actors try to leverage these new technologies? What kinds of problems could we solve with AI and machine learning?
