Algotrading – what happens if it all breaks down?

15 October 2018


It has been a little over a year since I worked on the trading floor in the Investment Banking division of a bank right in the heart of London. You can picture it the way it is portrayed in movies: salespeople and traders running around, shouting numbers and cheering across the whole floor over big trades they closed and the large sums of money they shifted between financial parties. A glance into the future shows a dramatic shift in that picture and promises empty rows on the trading floor, with nothing of the kind described above.

One example my former boss gave me when I had just started: by 2016, already 99.7% of Forex (foreign exchange) trading was executed by algorithms, without human beings interfering anymore. To be completely honest, FX trading is one of the easiest financial products to write an algorithm for. However, more and more of the highly sophisticated trades currently placed by human beings will be substituted by code as well. So, welcome to algotrading.

Algotrading is the practice of feeding buy and sell decisions into different models, which then trigger orders based on a set of predefined objectives encoded in the algorithm behind each model. One of the most common methods in algotrading is High-Frequency Trading (HFT), in which a supercomputer receives electronic information and initiates an order before human traders can even start processing that information. Methods such as these are the main reason why future trading floors will be almost empty. The only exception would be traders working on highly sophisticated financial products that involve human interaction with customers.
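To make that idea concrete, here is a minimal sketch, in Python, of what such a rule-based strategy can look like: a simple moving-average crossover that buys when short-term momentum rises above the long-term trend and sells when it falls below. The price series and window sizes are purely hypothetical, and a real HFT system would of course run on live exchange feeds at microsecond latency rather than on a static list.

```python
# Minimal sketch of a rule-based trading algorithm: a moving-average
# crossover strategy. Purely illustrative -- real algotrading and HFT
# systems operate on live market data with far lower latency.

def moving_average(prices, window):
    """Average of the last `window` prices."""
    return sum(prices[-window:]) / window

def decide(prices, short_window=5, long_window=20):
    """Return 'BUY', 'SELL', or 'HOLD' based on a predefined objective:
    buy when short-term momentum rises above the long-term trend,
    sell when it falls below."""
    if len(prices) < long_window:
        return "HOLD"  # not enough price history yet
    short_ma = moving_average(prices, short_window)
    long_ma = moving_average(prices, long_window)
    if short_ma > long_ma:
        return "BUY"
    if short_ma < long_ma:
        return "SELL"
    return "HOLD"

# Hypothetical EUR/USD quote history; a real system would stream
# quotes from an exchange and act within microseconds.
price_history = [1.1612, 1.1618, 1.1623, 1.1620, 1.1631,
                 1.1640, 1.1644, 1.1638, 1.1650, 1.1657,
                 1.1661, 1.1655, 1.1668, 1.1672, 1.1680,
                 1.1685, 1.1679, 1.1690, 1.1695, 1.1701]

print(decide(price_history))  # prints 'BUY' for this rising series
```

The point is not this specific rule but the structure: once the objective is written down as code, the decision to trade no longer requires a human at all.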

A look further into the future shows barely any human-to-human interaction in trading at all. The algorithms and lines of code written by well-skilled coders will have almost full control over the worldwide financial market. But what happens if our economy is hit by an economic shock like the one we experienced in the financial crisis of 2007? Will the implications be even more disastrous when hardly any human interaction is involved anymore? Is there a way to align the concepts of algotrading and HFT with sustainable, risk-preventing finance? In my opinion, a lot still needs to be taken into consideration. If not, keep your cash at home, because the next money crisis will be coming…

 

GSTAR.AI. (2018, May 27). The impact of algorithmic trading on the financial markets. Medium.

DebD. (2010, August 25). Algorithmic Trading – Taking the Human out of the Equation. Dazeinfo.


Autonomous “killer” drones as war weapons – What about ethics?

23 September 2018


We have all heard the question about one of our newest technologies before: in today’s society, is Artificial Intelligence a blessing or a curse? Many would say it makes life easier and more convenient, looking at Siri or Google for example, and that it will keep on improving and taking over difficult tasks with an efficiency that human beings cannot compete with. However, some people tend to be skeptical about AI and what it might be able to do in the future. And I do not mean the fact that it will replace millions of people’s jobs, no. I mean the fact that AI will be able to kill people and function as a real weapon of war.

Using their own decision-making, autonomous AI-driven “killer” drones will be able to target the humans they decide to kill. Would that be a blessing or a curse for us as human beings? The intuitive answer is that this is a horrible scenario. However, it is well known that warriors and soldiers suffer psychological damage after they return from war. One huge factor behind that damage is the fact that they have to harm and eventually kill people. Even if it is not actually them pulling the trigger but giving the command to a drone, the sole knowledge that they gave permission to kill someone causes psychological distress. If autonomous “killer” drones took over the part of killing armed opponents by deciding themselves whom to target, would that diminish the post-traumatic effects of war on fighters? And even if a drone could distinguish between “enemies” and “non-enemies”, would it be able to know which attack is appropriate and which one is not?

One of the most important ethical questions to ask here is: who is responsible for a killing when the execution was carried out by an AI-driven “killer” drone? From my point of view, it does not make much sense to put drones behind bars… What do you think?

 

Autonomous military drones: No longer science fiction. (2017, July 28). NATO Review. Retrieved from https://www.nato.int/docu/review/2017/also-in-2017/autonomous-military-drones-no-longer-science-fiction/EN/index.htm

Stroud, M. (2018, April 12). The Pentagon is getting serious about AI weapons. The Verge. Retrieved September 23, 2018, from https://www.theverge.com/2018/4/12/17229150/pentagon-project-maven-ai-google-war-military
