Deepfake Fraud – The Other Side of Artificial Intelligence

8 October 2021

Dangers of AI: How deepfakes through Artificial Intelligence could be used for fraud, scams and cybercrime.

Together with Machine Learning, Artificial Intelligence (AI) can be considered one of the hottest emerging technologies of our time (Duggal, 2021). AI entails the ability of a computer or machine to ‘think for itself’, as it strives to mimic human intelligence rather than simply executing actions it was programmed to carry out. Using algorithms and historical data, AI applies Machine Learning to recognize patterns and determine how to respond to certain actions, thus creating ‘a mind of its own’ (Andersen, n.d.).

History

Even though the initial days of Artificial Intelligence research date back to the 1950s, the technology has only recently been introduced to the general public on a wider scale. The science behind the technology is complex, but AI is becoming more widely known and used on a day-to-day basis. This is because computers have become much faster and the data for AI to learn from has become more accessible (Kaplan & Haenlein, 2020). This makes AI more effective, to the point where it has already been implemented in everyday devices such as our smartphones. Do you use speech or facial recognition to unlock your phone? Do you use Siri, Alexa or Google Assistant? Ever felt like advertisements on social media resonate a bit too much with your actual interests? Whether you believe it or not, it is highly likely that both you and I come into contact with AI on a daily basis.

AI in a nutshell: How it connects to Machine/Deep Learning

That’s good… right?

Although the possibilities for applying AI positively seem endless, one of the more recent phenomena that shocked the world about the dangers of AI is ‘deepfaking’. Here, AI uses a Deep Learning algorithm to replace a person in a photo or video with someone else, creating seemingly (!) authentic visuals of that person. As one can imagine, this results in media in which people appear to be doing things they never actually did. Although people fear the usage of this deepfake technology against celebrities or high-status individuals, it can – and actually does – happen to regular people, possibly you and me.

Cybercrime

Just last month, scammers from all over the world were reported to have been creatively using this cybercrime ‘technique’ to defraud, scam or blackmail ordinary people (Pashaeva, 2021). From posing as a wealthy bank owner to extract money from investors, to blackmailing people with videos of them seemingly engaging in a sexual act… as mentioned before, the possibilities for exploiting AI seem endless, and deepfakes are just another illustration of this fact. I simply hope that, in time, the positives of AI will outweigh the negatives. I would love to hear your perspective on this matter.

Discussion: Deepfake singularity

For example, would you believe this was actually Morgan Freeman if you did not know about Artificial Intelligence and deepfakes? What could this technology cause in the long term, once AI develops into a much more believable state? Will we always be able to spot the fakes? And what could this lead to in terms of scamming or blackmailing if, for example, Morgan Freeman were made to say other things…?

References

Duggal, N. (2021). Top 9 New Technology Trends for 2021. Available at: https://www.simplilearn.com/top-technology-trends-and-jobs-article

Andersen, I. (n.d.). What Is AI and How Does It Work? Available at: https://www.revlocal.com/resources/library/blog/what-is-ai-and-how-does-it-work

Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1). https://doi.org/10.1016/j.bushor.2019.09.003

Pashaeva, Y. (2021). Scammers Are Using Deepfake Videos Now. Available at: https://slate.com/technology/2021/09/deepfake-video-scams.html

Author: Roël van der Valk

MSc Business Information Management student at RSM Erasmus University - Student number: 483426 TA BM01BIM Information Strategy 2022

What is all this data doing in my protest?

8 October 2020

Over the last few years there have been a plethora of protests throughout the world, from the strikes in Bolivia to the ongoing Hong Kong extradition protests and the Black Lives Matter protests across the globe. Although the people in these protests are not directly connected with regard to their missions, there is one thing all of them are subject to: data collection. To anyone who has been following data collection practices over the past decade, it should come as no surprise that data from protests, too, is being used, monitored, evaluated, and profited from by a variety of parties. But who exactly are these parties, and what do they gain from analyzing the data gathered during protests?

Probably the most obvious parties that track data during protests are news and research firms, which use data collection to provide insights into such events. For example, the company MobileWalla, which usually does not publish its data collection results, has provided an in-depth demographic (and more) overview of Black Lives Matter protesters in multiple U.S. cities (Doffman, 2020). By tracking the phones of individuals, it was able to provide insights not only into demographic factors (like race and gender), but also into whether protesters came from inside or outside the cities in which they protested.

Another group able to profit from data collection practices during protests are political movements. One example is Vestige Strategies, which aims to promote the election of African-Americans to government office. It used geofencing during the George Floyd protests to target specific audiences with voter registration promotions (Mann, 2020).

The two aforementioned parties do not necessarily influence protesters negatively (apart from possible privacy concerns). However, what happens when the party opposing the protesters makes use of their data? The CCP, for example, has been tracking Hong Kong protesters using a variety of data- and AI-related practices. Facial recognition is used to target protest leaders, leading a growing number of protesters to try to conceal their faces (Mozur, 2019). In the US, law enforcement has also been implementing data practices to track protests. The company Dataminr has been providing local law enforcement with Twitter (meta)data under the guise of “delivering breaking news alerts” (Biddle, 2020). This allows them, for example, to track protest locations.

Now it might seem that protesters themselves have nothing to gain from the data that becomes available during protests. However, even protesters have started implementing data-driven practices. The crowdsourcing app HKmap.live was used by Hong Kong protesters to track police activity (He, 2019). Thus, even protesters are able to use data to their advantage.

This blog post was not written to criticize the usage of data collected during protests. Its aim is merely to shed light on how data has become important even in areas where one might not expect it. With the large number of protests happening these days, those who get involved should be aware of how their data might be used.

Sources:

Biddle, S. (2020). Police Surveilled George Floyd Protests With Help From Twitter-Affiliated Startup Dataminr. The Intercept_. Retrieved from: https://theintercept.com/2020/07/09/twitter-dataminr-police-spy-surveillance-black-lives-matter-protests/

Doffman, Z. (2020). Black Lives Matter: U.S. Protesters Tracked By Secretive Phone Location Technology. Forbes. Retrieved from: https://www.forbes.com/sites/zakdoffman/2020/06/26/secretive-phone-tracking-company-publishes-location-data-on-black-lives-matter-protesters/#1b9ab67c4a1e

He, L. (2019). Apple removes app used by Hong Kong protesters to track police movements. CNN Business. Retrieved from: https://edition.cnn.com/2019/10/10/tech/apple-china-hkmap-app/index.html

Mann, S. (2020). Political groups use the cellphone data of protestors to better reach their target audiences. Just the News. Retrieved from: https://justthenews.com/politics-policy/privacy/political-groups-use-cellphone-data-protestors-better-reach-their-target

Mozur, P. (2019). In Hong Kong Protests, Faces Become Weapons. The New York Times. Retrieved from: https://www-nytimes-com.eur.idm.oclc.org/2019/07/26/technology/hong-kong-protests-facial-recognition-surveillance.html

Differential privacy – A sustainable way of anonymizing data?

5 October 2020

Since a lot of blog contributions mention the increase of data collection, data analytics, and the potential threats to privacy, I thought it would make sense to introduce the technique of differential privacy, which is currently on the rise in the US. Apart from the US Census Bureau, Apple and Facebook are at the forefront of exploring the capabilities and potential of this technique.

What does differential privacy mean?
Differential privacy is a technique for quantifying, and limiting, how much information about any single individual can be learned from the aggregate statistics released about a data set.
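Formally (this is the standard textbook definition, which the post itself does not spell out): a randomized algorithm M is ε-differentially private if, for any two data sets D and D′ differing in one individual’s record, and for any set S of possible outputs,

```latex
\Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon} \cdot \Pr[\,M(D') \in S\,]
```

A smaller ε forces the two output distributions to be more similar, meaning an observer learns less about any one individual, i.e. stronger privacy.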

 

Differential privacy in action
In 2020, the US government is facing a big challenge: it needs to collect data on all of the country’s 330 million residents while keeping all of their identities private. By law, the government must ensure that the data collected cannot be traced back to any individual within the data set. The data the US government collects is released in statistical tables for academics and policymakers to analyze when conducting research or writing legislation.

To solve the need for privacy, the US Census Bureau presented a technique that alters the collected data, making it impossible to trace the data back to an individual without changing the overall information provided by the data set. It is a mathematical technique that injects inaccuracies, or ‘noise’, into the data. That way, some individuals within the data might become younger or older, or change ethnicity or religious beliefs, while the total number of individuals in each group (i.e. age/sex/ethnicity) stays the same. The more noise is injected into the data set, the harder it becomes to de-anonymize the individuals.
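The noise-injection idea can be sketched with the Laplace mechanism, the textbook way to make counting queries differentially private. Note that this is a generic illustration, not the Census Bureau’s actual algorithm, and the age-group counts below are made up:

```python
import numpy as np

def laplace_counts(true_counts, epsilon, sensitivity=1.0):
    """Return noisy counts: Laplace noise with scale sensitivity/epsilon.

    For a histogram, adding or removing one person changes each
    count by at most 1, so the sensitivity is 1.
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=len(true_counts))
    return np.asarray(true_counts, dtype=float) + noise

# Hypothetical age-group counts from a census-style table.
true_counts = [1200, 3400, 2900, 800]
noisy = laplace_counts(true_counts, epsilon=0.5)
```

Each published count is off by only a few units on average, so aggregate statistics stay usable, yet no exact count can be attributed to the underlying individuals.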

This mathematical technique is also used by Apple and Facebook to collect aggregated data without identifying particular users of their products and services.

However, this approach also poses some challenges. Injecting too many inaccuracies can render the data useless: a study of the differentially private data set of the 2010 Census showed households that supposedly had 90 people, which cannot be true. Since the owner of a data set can decide how much ‘noise’ is injected, that challenge shouldn’t pose too much of a problem. Still, the more noise is included, the harder it gets to see correlations between data attributes and specific characteristics of individuals.
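This privacy/utility trade-off can be made concrete: for Laplace noise, the expected absolute error equals the noise scale, sensitivity/ε, so shrinking the privacy budget ε tenfold makes every answer roughly ten times noisier. A small simulation (illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_abs_error(epsilon, n_trials=20_000, sensitivity=1.0):
    """Average absolute Laplace noise added to one count at budget epsilon."""
    noise = rng.laplace(0.0, sensitivity / epsilon, size=n_trials)
    return float(np.mean(np.abs(noise)))

err_loose = mean_abs_error(epsilon=1.0)  # weaker privacy: error close to 1
err_tight = mean_abs_error(epsilon=0.1)  # stronger privacy: roughly 10x the error
```

Whoever releases the data has to pick a point on this curve, which is exactly the choice the data-set owner faces above.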

If further analysis of differentially private data sets proves that the technique ensures the required privacy, especially for governmentally created data sets, it is likely that other federal agencies and countries will adopt the methodology as well.

From my point of view, differential privacy as used for governmentally created data sets seems to be a big step towards getting a clearer view of the status quo of a country: increased privacy should lead to increased trust among residents and, in turn, to higher participation in the process of data collection.

However, based on the complexity of the technique, it seems unlikely to me that differential privacy will be used widely within companies for the moment. Losing the ability to analyze data in detail, because increased privacy for the user hides correlations within the data set, is a trade-off I do not think many companies are willing to accept, especially since a lot of smaller companies are only just starting to analyze the data they collect.
Right now, research suggests that only big multinationals with high R&D budgets are able to sustainably increase privacy through differential privacy without losing too many of the insights derived from the data collected.

What do you think?
Can differential privacy be a step in the right direction? Or should governments limit companies in the collection, aggregation, and analysis of data in order to increase privacy for their customers?

Sources:
https://aircloak.com/de/wie-funktioniert-differential-privacy/
https://hci.iwr.uni-heidelberg.de/system/files/private/downloads/182992120/boehme_differential-privacy-report.pdf
https://www.technologyreview.com/10-breakthrough-technologies/2020/#differential-privacy
https://towardsdatascience.com/understanding-differential-privacy-85ce191e198a?gi=9d3ad94ea2e4
