The Terrifying Rise of Deep-Fake Content

17 October 2018


Earlier this year the famous actress Drew Barrymore had to deal with some bizarre fabrications of her life that were published in Egyptair's in-flight magazine “HORUS”. Celebrities often have to deal with stories about their lives that are based on half-truths or even lies, but the lengths to which this interviewer went are pretty scary. They photoshopped an original photo of Barrymore holding the magazine “Nisf Al-Donia”, swapped in their own magazine, and unashamedly published the result in Egyptair’s magazine. The only reason the fabrication was discovered is that the content, Barrymore’s forged answers, did not reflect her life at all. But what if the interviewer had been a bit more clever? Then this magazine article would have gone unnoticed and been passed on to passengers without any remarks.

This brings me to the topic I want to address in this blog post, something I have been worrying about for quite some time now: deep-fake content. Deep-fake technology is an AI-based human image synthesis technique that combines existing and fake images, audio or video with source material, creating fake content that looks like reality. Deep-fakes are mostly used to create fake celebrity or revenge pornography [1]. Of course, there are also less harmful use cases of deep-fake content, for instance comedy sketches known as “derpfakes” (see below).
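The post does not go into the mechanics, but the classic face-swap approach behind many deep-fakes is a pair of autoencoders that share one encoder: the encoder learns a generic face representation, and each identity gets its own decoder. The sketch below is only a toy illustration of that idea in PyTorch; the layer sizes, the 64x64 crops and the random “face” tensors are my own placeholder assumptions, not the code of any real deep-fake tool.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deep-fakes. All shapes and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),               # latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

# One shared encoder learns a generic face representation;
# each identity (A and B) gets its own decoder.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (sketch): reconstruct faces of A with decoder_a and faces of B
# with decoder_b, both passing through the *same* encoder.
faces_a = torch.rand(8, 3, 64, 64)  # placeholders for aligned face crops
faces_b = torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)

# The "swap": encode a face of A but decode it with B's decoder, so A's
# pose and expression come out rendered with B's identity.
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

Convincing results in practice require thousands of aligned face frames and long GPU training runs, but the underlying trick is no more than this shared representation.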

From a technological point of view, deep-fake techniques are a marvel of engineering, pushing the boundaries of what can be done with graphics processors and algorithms. However, deep-fake technology is sadly used mostly in either pornography or, even worse, revenge pornography [2]. The latter is really important because celebrities are often well protected due to their popularity, but regular people like you and me will find themselves in a much trickier situation. In the UK creating harmful deep-fake material is considered a crime, but in other EU member states this is not the case. Recently the United States Department of Defense developed a tool that is designed to catch deep-fakes [3]. But governments are still hesitant to make deep-fake abuse a specific type of crime. The public is not sufficiently aware of the rising technological possibilities of deep-fakes, and thus governments do not make it a priority either. With this blog post I hope to give you some insight into this topic, make you aware of the dangers, and convince you that this should explicitly be made punishable.

For the women reading this post: please be careful with what you post on social media and how accessible your content is. Numerous studies have shown that women are more often victims of deep-fake content than men.

1: “What Are Deepfakes & Why the Future of Porn is Terrifying”. Highsnobiety, 2018-02-20. Retrieved 2018-02-20.
2: https://tweakers.net/nieuws/134449/vervangen-van-gezicht-in-pornovideos-met-ai-neemt-grote-vlucht-door-tool.html
3: https://www.technologyreview.com/s/611726/the-defense-department-has-produced-the-first-tools-for-catching-deepfakes/


Changes In The Dealing Room, AI Is Coming!

12 September 2018


In 2017 UBS, a Swiss investment bank, sold off its trading room in Stamford, Connecticut. The trading room was among the largest in the world and at its peak housed more than 5,000 traders [1]. Some of you might correctly point out that the subprime mortgage crisis happened between 2006 and 2016, and you’d be right. However, that is not the only reason why dealing rooms are becoming less crowded. What eventually happened to UBS’s jewel illustrates another movement at dealing rooms around the world: the rise of algorithmic trading.

When I reach under my desk I find a remnant of the past: an old black phone through which orders were once manually placed on global markets. Looking over my desk, I imagine what this room would have looked like 15 years ago; in reality, however, I find myself surrounded by algorithmic trading platforms and automated order systems.

Looking at other big financials, we see that algorithmic trading is becoming more and more standardised in an attempt to increase margins on trades as regulations grow ever more complex. Last year JPMorgan, the world’s largest investment bank by market capitalisation [2], successfully launched its trade-execution robot: LOXM.

LOXM is one of the first fully implemented machine-learning-based bots for executing trades. JPM’s bot uses historical trading data in its decision-making process and continues to learn with every trade [3], making it a unique trading tool in the industry. LOXM was designed to beat the industry’s benchmarks by leaving rudimentary algorithms behind and shifting towards deep reinforcement learning, which is capable of executing more complex trades [4].
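JPMorgan has not published how LOXM works beyond the descriptions in [3] and [4], but the general idea of reinforcement-learning-driven execution can be illustrated with a toy example: an agent learns how to slice a large parent order over a few time slices so it avoids the market impact of trading everything at once. Everything in the Python sketch below, the environment, the quadratic impact cost and the tabular Q-learning (a simple stand-in for the deep reinforcement learning mentioned above), is an invented illustration of the concept, not LOXM.

```python
# Toy illustration of reinforcement-learning-based order execution.
# The environment, reward and all parameters are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

N_STEPS = 5             # time slices left to finish the parent order
N_INVENTORY = 11        # remaining units, discretised 0..10
ACTIONS = [0, 1, 2, 3]  # how many units to sell in the current slice

def step(inventory, t, action):
    """Sell `action` units; larger child orders incur more market impact."""
    sell = min(action, inventory)
    impact_cost = 0.1 * sell ** 2          # convex penalty for trading fast
    inventory -= sell
    t += 1
    done = (t == N_STEPS) or (inventory == 0)
    # Any inventory left at the deadline is dumped at a heavy penalty.
    reward = -impact_cost - (2.0 * inventory if done and t == N_STEPS else 0.0)
    return inventory, t, reward, done

# Tabular Q-learning over (inventory, time-slice) states.
Q = np.zeros((N_INVENTORY, N_STEPS, len(ACTIONS)))
alpha, gamma, eps = 0.1, 1.0, 0.1

for episode in range(20000):
    inv, t, done = 10, 0, False
    while not done:
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[inv, t]))
        new_inv, new_t, r, done = step(inv, t, ACTIONS[a])
        target = r if done else r + gamma * np.max(Q[new_inv, new_t])
        Q[inv, t, a] += alpha * (target - Q[inv, t, a])
        inv, t = new_inv, new_t

# The learned policy spreads the order out instead of trading all at once.
inv, t, done, schedule = 10, 0, False, []
while not done:
    a = int(np.argmax(Q[inv, t]))
    schedule.append(ACTIONS[a])
    inv, t, _, done = step(inv, t, ACTIONS[a])
print("learned execution schedule:", schedule)
```

Real execution agents replace the toy table with deep networks and a far richer state (order-book depth, volatility, time of day), but the core loop of acting, observing a cost, and updating the policy is the same.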

Still, I am a bit sceptical of these bots. You have probably seen the video in which two Alexas (Amazon’s equivalent of Siri) communicate with each other [5]; see the video below.

I wonder what kind of situations would arise in the global capital markets if banks and other financial institutions decide to shift completely towards AI bots to track markets and execute trades. Will we see Alexa-like situations?

1: https://nypost.com/2017/04/19/ubs-has-officially-ditched-its-massive-trading-floor/
2: http://banksdaily.com/topbanks/World/market-cap-2018.html
3: https://www.ft.com/content/16b8ffb6-7161-11e7-aca6-c6bd07df1a3c
4: https://knect365.com/quantminds/article/10d3b420-fe65-4269-b1da-ab555a509958/the-latest-in-loxm-and-why-we-shouldnt-be-using-single-stock-algos
5: https://www.youtube.com/watch?v=8zR0Hxojce4&t=65s&frags=pl%2Cwn
