Nobody likes to lose power

15 October 2019


On October 3, 2019 the European Court of Justice ruled that national judges can order Facebook to remove content internationally. With the ruling, individual countries gained the power to extend the reach of their online-related rulings across their physical borders, setting a new benchmark. It applies to content that is defamatory or illegal, and the decision cannot be appealed (Satariano, 2019).

In a statement, Facebook raised concerns about “questions around freedom of expression”, as well as “the role that internet companies should play in monitoring, interpreting and removing speech that might be illegal…” (Satariano, 2019). There have been ongoing debates about regulating content on social media: who should be responsible, and which laws such regulation should adhere to.

The ruling is clearly a setback for the current self-regulatory approach of social media platforms. While Facebook argues that it is not responsible for the content on its platform, which is created by billions of users, it has increasingly started to remove extremist content and content classified as hate speech. Other social networks, such as YouTube, also follow a self-regulating approach, which has been criticized for its lack of transparency. From April to June 2019 alone, YouTube removed over 100,000 videos, 17,000 channels and over 500 million comments for violating its hate-speech rules (Yurieff, 2019).

Platform owners having the power to decide which points of view are broadcast and which are “deplatformed” has ironically been subject to the same critique now raised against the new ruling: free speech.

The ruling comes just a week after the European Court of Justice limited the geographical reach of the European online privacy law (Kelion, 2019). In my opinion, the ruling in the recent Facebook case will not be practically applicable outside of exceptional cases. Imposing one country's speech laws on another is not feasible to implement and leads to contradictions.

 

References

Kelion, L. (2019). Google wins landmark right to be forgotten case. BBC News. Retrieved from https://www.bbc.com/news/technology-49808208

Satariano, A. (2019). Facebook Can Be Forced to Delete Content Worldwide, E.U.’s Top Court Rules. The New York Times. Retrieved from https://www.nytimes.com/2019/10/03/technology/facebook-europe.html

Yurieff, K. (2019). Google CEO on fixing YouTube’s hate and harassment problem. CNN Business. Retrieved October 15, 2019, from https://edition.cnn.com/2019/09/03/tech/youtube-hate-speech/index.html

 


Powerful and unreleased – Learnings from GPT-2

15 October 2019


Whenever you read an article, blog post or book, you assume that another human sat down, took the time and wrote the piece. The next generation will no longer be able to take this for granted.

In February 2019 OpenAI, a non-profit organization researching AI, announced that it had created a language model able to generate realistic text in various forms, including news articles and fiction, in an unprecedented way. Based on an input text, the model generates a page or more of coherent text, adapted to the writing style of the input (an example can be found here). Concerned about possible malicious use, OpenAI decided not to publish the full model with its 1.5 billion parameters and released only a much smaller version instead (Radford, 2019).
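The smaller checkpoint that OpenAI did release can be tried out directly. Below is a minimal sketch of prompting it to continue a text in the prompt's style; it assumes the Hugging Face transformers library and its "gpt2" checkpoint, which are not part of OpenAI's original release code, so the specifics are an assumption on my part.

```python
# Minimal sketch: continue a prompt with the publicly released small GPT-2.
# Assumes the Hugging Face "transformers" library (pip install transformers torch).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small public checkpoint
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "In a shocking finding, scientists discovered"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation that adapts to the style of the prompt.
output = model.generate(
    input_ids,
    max_length=100,                       # total length in tokens, prompt included
    do_sample=True,                       # sample instead of greedy decoding
    top_k=40,                             # restrict sampling to the 40 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```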

Natural Language Processing (NLP) models are trained on large amounts of (online) text, in the case of GPT-2 about 40 GB. During training, the model learns to predict the next word in a sentence. With better network architectures, larger amounts of data and improved training methods, the accuracy of such models steadily increases (Radford, 2019). A deliberately tiny illustration of this next-word prediction objective is sketched below.
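To make the objective concrete, here is a self-contained toy sketch: a bigram model that estimates the probability of each word following the previous one from a small corpus. GPT-2 uses a neural network over far more context and 40 GB of text, but the underlying task, predicting the next word given what came before, is the same; the corpus and function names here are purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the ~40 GB of web text used for GPT-2 (illustrative only).
corpus = "the model learns to predict the next word the model improves with more data".split()

# Count how often each word follows each preceding word: a bigram model,
# the simplest possible form of "predict the next word".
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev):
    """Return P(next word | previous word) as estimated from the corpus."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution("the"))
# {'model': 0.67, 'next': 0.33} (approximately): the predicted next-word probabilities
```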

Six months after the announcement, OpenAI posted a follow-up. In the meantime, two larger versions of the GPT-2 model, with more parameters, had been released. In the statement, OpenAI revealed that it had spoken to five other teams that were able to replicate the model. It also presented studies suggesting that humans can be convinced by AI-generated, synthetic text, and admitted that detecting such synthetic text is very difficult (Clark, 2019).

OpenAI’s cautious release strategy has not prevented others from replicating the model or even training more complex ones. In August, NVIDIA announced the training of a language model with 8.3 billion parameters, much larger than Google AI’s BERT or GPT-2 (NVIDIA, 2019).

Given the incredibly fast development of the last two years and the astonishing increase in the size and performance of current language models, I expect that the deployment of synthetically generated text, with its upsides and downsides, will become highly relevant within the next three years. The possible implications are huge, the public debate is only just starting and, as the case of GPT-2 shows, further progress is hard to delay.

 

References:

Clark, J. (2019). GPT-2: 6-Month Follow-Up. Retrieved October 14, 2019, from OpenAI website: https://openai.com/blog/gpt-2-6-month-follow-up/

NVIDIA Newsroom. (2019). NVIDIA Achieves Breakthroughs in Language Understanding to Enable Real-Time Conversational AI. Retrieved October 14, 2019, from the NVIDIA Newsroom website: https://nvidianews.nvidia.com/news/nvidia-achieves-breakthroughs-in-language-understandingto-enable-real-time-conversational-ai

Radford, A. (2019). Better Language Models and Their Implications. Retrieved October 14, 2019, from OpenAI website: https://openai.com/blog/better-language-models/
