The Perils of Technology

5 October 2021


This blog post shines a light on the troubling status quo we now face with regard to technology, specifically mobile phones and social media. First, an overview is given of the current state of affairs. Subsequently, a more ideal state is suggested and described, including macro- and micro-level steps to get there.

Recently a new documentary was released called The Social Dilemma. It elucidates how big tech firms willfully and systematically mislead and manipulate their users; for the most part, this happens completely unbeknownst to them. Despite its revelations, The Social Dilemma does not seem to have sparked much debate about the role and clout of big tech and social media in society. The prime goal of such firms is to maximise revenue, which is understandable given the capitalist system they reside in. The ideal way of maximising revenue is by maximising our time on site, that is, the time we spend on a particular app or phone. One explanation is that more time on site means more data can be collected about the user, and that data can be sold: the more data collected, the more can be sold, the more revenue. How do such firms aim to maximise our time on site? Consider the following two examples:

  • By creating phones and developing applications that work like slot machines (Harris, 2017; TED, 2016). For example, notice that when you open Facebook there is a short delay before the newsfeed appears; this delays the dopamine reward, thereby making the application more addictive (Harris, 2017).
  • By using well-documented and extensively researched principles of persuasion as a means to keep our attention where they want it to be (Harris, 2017). Consider for example the principle of reciprocity (Cialdini, 2007), which states that we feel more inclined to return favours if we have received them from others. The streaks that Snapchat users accumulate are a case in point: we might not think this consciously, but unconsciously such streaks trigger reciprocity (Harris, 2017), thereby increasing our time on site.

On a macro level, it seems that a radical paradigm shift is needed for the aforementioned situation to change in any significant way, for example by altering the way big firms like Facebook and Samsung design their phones and applications (TED, 2016). Frankly, it seems unrealistic to expect any such changes soon; hence it falls to us to modify our relationship to our phones and applications. In addition to cultivating an awareness of the addictive potential of both, we can consciously decide to spend one day per week without our phones. We can encourage our friends at gatherings, dinners, and parties to refrain from using smartphones; to truly be connected to each other without being connected to the internet.

References

Cialdini, R. B. (2007). Influence: The Psychology of Persuasion. Collins.

Harris, S. (2017, April 14). Making Sense Podcast #71 — What is Technology Doing to Us? [Audio podcast episode]. https://samharris.org/podcasts/71-technology-us/

TED. (2016). How better tech could protect us from distraction | Tristan Harris [Video]. YouTube. https://www.youtube.com/watch?v=D55ctBYF3AY


Artificial Intelligence and the future of the human condition

16 September 2021


The rise and development of artificial intelligence (hereafter AI) has been observed with both excitement and anxiety. There appears to be a strong divide between hardcore proponents of AI and those who are seriously worried that we will develop forms of AI with the capacity to eradicate humanity. This blog post takes the latter stance and aims to shine some light on the inherent dangers of AI and our misguided intuitions about it.

Let us start by outlining a major concern regarding AI: one day we will build intelligent machines that are smarter than us humans (Bostrom, 2017). A number of objections are typically raised against this claim. Some say that it is unlikely to happen, yet this dismissal is patently, and dangerously, mistaken. Only three assumptions are needed to arrive at the conclusion that superhuman forms of AI will eventually exist (Harris, 2016).

  1. Intelligence occurs as a result of information-processing in a physical system (e.g. in a computer);
  2. The improvements we make to our machines will continue, not least because the incentive to create better machines is extraordinary (e.g. consider how much more value can be extracted from data sets with current computers compared to computers ten years ago; the difference is vast);
  3. Humanity has not yet reached the peak of intelligence (e.g. there are problems that are in desperate need of being solved, such as climate change or diseases).

Unless one can find a problem with any of these statements, the inevitable conclusion is that there will come a day when our intelligent machines outsmart us. Importantly, no serious scientist has been identified who disagrees with any of these assumptions (Harris, 2016). Another common response is simply: so what? Such objections attest to an alarming degree of naivety and a failure to seriously contemplate the possible outcomes.

A variety of problems have been anticipated in regard to this scenario (for a tough yet thorough read on them, consider picking up Nick Bostrom's book Superintelligence). For example, there is the well-known alignment problem, in which the goals, values, and motivations of the superintelligent machine are not aligned with those of us earthlings (Bostrom, 2017). Consider giving the AI a task like "solve the climate change problem". How can we know that the AI will not decide to destroy humanity, given that our species heavily contributes to this problem?

Common sense cannot be assumed; a sense of right and wrong cannot be assumed; alignment between the AI's strategy and the best interests of humanity cannot be blindly assumed (Harris, 2016). How will we program human values into the AI? And what kind of values, exactly? Can we trust that benign motivations, goals, and values will be installed in the AI if a foreign power such as Russia or Afghanistan were to program them?

These are all deep questions, and the answers are neither straightforward nor exhaustive. It is time for us to wake up and open our eyes to both the avoidable dangers and the potentialities of such god-like machines.

References

Bostrom, N. (2017). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Harris, S. (2016). Can we build AI without losing control over it? | Sam Harris [Video]. YouTube. https://www.youtube.com/watch?v=8nt3edWLgIg
