It’s not uncommon to see Elon Musk’s name attached to an ethically controversial topic, and he kicked off 2018 in similar fashion. Musk presented his company Neuralink and its vision of ensuring that humans do not become obsolete in the face of the exponential advancement of AI. He stressed that AI will soon surpass humans on almost every cognitive front, and that only adopting the lifestyle of a cyborg will keep us in the race. How does he want to do this? He wants to engineer a direct link between our brains and computers running AI.
Through the daily use of our phones and computers, we are already far more capable than humans were only 100 years ago. In Musk’s view, we are already part cyborg, but we can go beyond that and develop our brains to superhuman levels.
Cognitively enhancing ourselves will obviously come with numerous ethical implications and significant dangers. Creating these “superhumans” would quite rapidly spill over into the military, where soldiers could detect objects we currently aren’t able to identify. Our thoughts would be enhanced in so many different ways that it raises the question: are we still ourselves once such technologies are implanted? And if not, are we still ourselves with the access to technology we already have? Where would the line be drawn?
People with disabilities and medical conditions would benefit significantly from such inventions. Brain-controlled robotic arms already allow amputees to use limbs they thought were permanently lost.
While this would need to be heavily regulated, I believe the opportunities significantly outweigh the possible drawbacks. Our development would accelerate so rapidly that we could function on superhuman levels and address many of our current crises. Disparities between different communities would, however, increase. Finding the right balance is thus the ultimate question, with the technological capabilities already at our doorstep.
Sources:
https://www.theguardian.com/technology/2018/jan/01/elon-musk-neurotechnology-human-enhancement-brain-computer-interfaces
https://www.indy100.com/article/elon-musk-products-ideas-superhuman-technology-neuralink-invention-8540021
Hey Alexander, thank you for your article. I, too, am very interested in neurotechnology. The ethical issues you bring up remind me of the science fiction TV series Black Mirror, which showcases the societal implications of biotechnology and artificial intelligence in the near future. Although it may seem trivial to take science fiction seriously, the genre more often than not accurately reflects ongoing cultural anxieties. Rather than predicting what the future will look like, science fiction encourages viewers to reflect on how technology can lead to political, economic, and societal change. It is an invitation to a very serious debate about our future, one far more stimulating than a textbook or a politician’s speech.
Equipping ourselves with biotechnology seems unavoidable, and reaching the singularity (the hypothesis that the breakthrough of Artificial Super Intelligence will revolutionize human civilization) by 2045 is not that far-fetched. In the meantime, we should think carefully about the real consequences this kind of technology could bring, and take into account the warning signs science fiction has laid out for us.
https://www.nytimes.com/roomfordebate/2014/07/29/will-fiction-influence-how-we-react-to-climate-change/science-fiction-reflects-our-anxieties
https://www.wired.com/2018/09/geeks-guide-yuval-noah-harari/
http://content.time.com/time/interactive/0,31813,2048601,00.html