Down the YouTube Rabbit Hole

7 October 2020


Over the past few weeks, a lot has been said (including on this blog) about how social media has been impacting the offline world in a negative way. After watching “The Social Dilemma”, which launched on Netflix last September, we started to think about how these platforms sell our attention as a commodity and lead to an increasingly polarized society, harming democracies around the world. Some people decided to take it one step further and deleted accounts, turned off notifications and stopped clicking on recommended content – just as suggested in the documentary by the whistleblowers who helped create these platforms. I was one of those people – until I wasn’t anymore!

Interestingly enough, shortly after watching the documentary I started to receive tons of recommendations for content addressing the same issues, especially on YouTube and Facebook. Isn’t it funny how the algorithm can work against itself? At first, I was determined not to click on any of the suggested videos, even though the content seemed quite interesting. Instead, I decided to do my own research on topics such as data privacy, surveillance capitalism and the ethics of designing technology. However, the more research I did, the more recommendations I got – unexpected, huh?

So, one lazy Sunday afternoon I gave in to temptation and clicked on a video that YouTube recommended to me – a really interesting TED Talk by techno-sociologist Zeynep Tufekci, which dug a little deeper into some of the questions raised in “The Social Dilemma”. Needless to say, one hour later I had already watched five more TED Talks – I admit it, I fell into the YouTube rabbit hole!

However, I cannot say that I regret my decision, as I gained really interesting insights from these recommendations. After all, that’s how this recommendation system is supposed to work, right? In particular, I was able to find some answers to a question that had been on my mind for a while: “What can we do to stop the negative effects of social media while still valuing freedom of speech as a pillar of the internet?”

Even though a lot has been said about the threats arising from the widespread use of social media, I hadn’t come across tangible solutions to this issue. Sure, we can turn notifications off, but that won’t tackle the problem at its core! But in two very enlightening TED Talks by Claire Wardle (a misinformation expert) and Yasmin Green (research director at a unit of Alphabet focused on solving global security challenges through technology) I found some clarity. According to them, there are three areas we can act upon to create a better digital and physical world:

  • Tech Companies – first of all, if any advances are going to be made, we need technology platforms to be on board. As an eternal optimist, I do believe that tech leaders are aware of the challenges they face and are certainly trying to find solutions. As Yasmin Green explains, Google already successfully developed what they called the “Redirect Method”, which targeted people who made searches related to joining terrorist groups. For example, when a Google search for extremist content was made, the first result would be an ad inviting the user to watch more moderate content. Furthermore, the targeting was based not on the user’s profile, but on the specific question that was asked. What if we could use the “Redirect Method” to stop the spread of conspiracy theories or misinformation about climate change? It would be great for society, although probably not so profitable for the tech giants.
  • Governments – although tech companies have their fair share of responsibility, at the moment they are “grading their own homework” and regulating themselves, making it impossible for us to know whether interventions are working. That’s where governments come into play. But a challenge this big doesn’t simply call for local or even national regulators. What we really need is a global response to regulate the information ecosystem. Or, as Brad Smith (Microsoft’s President) puts it, we need a “Digital Geneva Convention” that holds tech platforms accountable and prevents coordinated social attacks on democracy.
  • We the People – while we would love to place our hopes on governments to solve this situation for us, it is undeniable that most lawmakers are struggling to keep up with a rapidly changing digital world. From time to time, a US Senate committee investigating tech companies will spawn a few memes, as lawmakers visibly struggle to understand what they’re talking about – I will leave you my favorite down below! That’s why we need to take the matter into our own hands, and one way to do it is, as Claire Wardle puts it, to “donate our social data to science”. Millions of data points about us are already collected by social media platforms anyway, but what if we could use them to build a centralized open repository of anonymized data, designed around privacy and ethical concerns? This would create transparency and allow technologists, journalists, academics and society as a whole to better understand the implications of our digital lives.
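To make the first of these ideas a little more concrete: the defining design choice of the Redirect Method is that it keys on the search query itself, never on a user profile. Here is a minimal sketch of that targeting logic in Python – the keyword patterns and the counter-content link are hypothetical placeholders for illustration, not the actual system:

```python
from typing import Optional

# Hypothetical sketch of query-based redirection: match the search query
# itself (never a user profile) against risky patterns and, on a hit,
# surface counter-narrative content as the top result.
RISKY_PATTERNS = ["how to join isis", "extremist recruitment"]  # placeholders
COUNTER_CONTENT = "https://example.org/voices-of-former-extremists"  # placeholder

def redirect_result(query: str) -> Optional[str]:
    """Return a counter-content link if the query matches a risky
    pattern; otherwise None. Targeting uses only the query text."""
    normalized = query.lower().strip()
    if any(pattern in normalized for pattern in RISKY_PATTERNS):
        return COUNTER_CONTENT
    return None
```

The point of the sketch is the input: because the decision depends only on the question asked, an anonymous user typing the same query gets the same redirect – no behavioral profile required.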

Overall, I recognize that these solutions are not perfect or complete. But I do believe that they provide a starting point to “build technology as human as the problems we want to solve”.


Sources

Smith, B., 2017. The Need for a Digital Geneva Convention. [online] Microsoft on the Issues. Available at: www.blogs.microsoft.com [Accessed 6 October 2020].

Shead, S., 2020. Netflix Documentary ‘The Social Dilemma’ Prompts Social Media Users to Rethink Facebook, Instagram and Others. [online] CNBC. Available at: www.cnbc.com [Accessed 6 October 2020].

Green, Y., 2018. Transcript of “How technology can fight extremism and online harassment”. [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].

Wardle, C., 2019. Transcript of “How you can help transform the internet into a place of trust”. [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].

Tufekci, Z., 2017. Transcript of “We’re building a dystopia just to make people click on ads”. [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].


Can we ensure privacy in the era of big data? – Great power, great responsibility.

14 October 2018


In the age of social media and online profiles, maintaining privacy is already a tricky problem. Companies collect more and more data about their customers through the internet, and with the help of AI, analyzing our data gets faster and more sophisticated – making it a commodity for companies and a liability for us.

There are numerous small examples of questionable data use, most of them harmless. But what happens when governments or potential employers can piece together seemingly innocent and useless information to uncover your most intimate secrets – like health issues even you didn’t know about yet? Furthermore, a lot of people are unaware of the value of their data, exposing themselves to identity theft and data fraud. People use all kinds of digital products, and most of the time they sign up without reading the terms and conditions stating how their private information will be used. It looks like, without meaningful data literacy, people will keep sharing their private information online while remaining oblivious to the impact of their data being made available in this way.

Various scientists and professors have already voiced their concern about the loss of privacy, stating that now is the time to insist on the ability to control our own data.

The rules and regulations for data protection tend to be very lax in a lot of countries. Most companies do not invest enough in protecting their users, since there are no real consequences for the mishandling of private or personal information. A further dilemma is the regulation, collection, storage and trading of data when companies operate across multiple continents and jurisdictions.

In conclusion, many challenges remain in how best to use these massive datasets while ensuring data security and privacy. It is important that all parties – companies, individuals and governments – take responsibility and try to solve this problem before the consequences can no longer be contained.
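One small, concrete building block for using datasets while limiting privacy risk is pseudonymization: replacing direct identifiers with salted hashes before data is shared, so records can still be linked for analysis without exposing who is behind them. A minimal sketch in Python – the field name `user_id` and the salt handling are illustrative assumptions, not a complete anonymization scheme:

```python
import hashlib

def pseudonymize(record: dict, salt: str) -> dict:
    """Return a copy of the record with the direct identifier replaced
    by a salted SHA-256 hash. Records for the same user still match
    across datasets, but the raw identity never leaves the data holder."""
    out = dict(record)
    token = (salt + out["user_id"]).encode("utf-8")
    out["user_id"] = hashlib.sha256(token).hexdigest()
    return out
```

Note that pseudonymization alone is not full anonymization – combinations of the remaining attributes can still re-identify people, which is exactly why technical measures need to be paired with regulation.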

What are your thoughts about this topic? What does privacy mean to you? How important do you find it to have control over your data? Do we need new laws or corporate policies? How can we ensure our data does not get used for nefarious purposes?

 

Sources

Conn, A., 2017. Can We Ensure Privacy in the Era of Big Data? [online] Future of Life Institute. Available at: https://futureoflife.org/2017/02/10/can-ensure-privacy-era-big-data/?cn-reloaded=1 [Accessed 3 October 2018].

Kwamboka, L., 2017. Privacy in the Era of Big Data. [online] Medium. Available at: https://medium.com/read-write-participate/privacy-in-the-era-of-big-data-45d5eb1cea75 [Accessed 3 October 2018].

Porter, C., 2014. Big data and privacy: every click you make. [online] The Guardian. Available at: https://www.theguardian.com/technology/2014/jun/20/little-privacy-in-the-age-of-big-data [Accessed 3 October 2018].

Schmitt, C., 2018. Security and Privacy in the Era of Big Data. [online] Renci.org. Available at: https://www.renci.org/wp-content/uploads/2014/02/0313WhitePaper-iRODS.pdf [Accessed 3 October 2018].
