Roboethics: Are robots like Tesla Optimus a threat to humanity?

6 October 2022


One of the most ingenious people on this earth, Elon Musk, announced this week that a Tesla robot will be on the market in three to five years. This AI-driven robot will be called Tesla Optimus and should cost around $20,000. The purpose of the robot is to help with everyday tasks, such as delivering parcels or watering plants (McCallum, 2022).
That Tesla is coming out with an AI-driven robot seems strange, as Elon Musk has often spoken out about the dangers of Artificial Intelligence, saying, for example, that robots will one day be smarter than humans. He has even called AI humanity’s “biggest existential threat” (BBC News, 2017). Yet he says the Tesla Optimus will not be a danger to humanity because Tesla is adding safeguards, such as a stop button (McCallum, 2022). It is therefore worth thinking about where the boundaries lie when it comes to designing humanoid robots.

Although robots have only started to become truly realistic in recent years, Isaac Asimov (1941) formulated ‘The Three Laws of Robotics’ over 80 years ago:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

The EPSRC later added the following five principles of robotics (Bryson, 2017):

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

These laws and principles make clear that robots exist to help people, not to harm them. In addition, humans should always retain power over robots and not the other way around. This seems logical, but with the rapid rise of AI, robots may one day become smarter than humans. Therefore, I think now is the time to establish strict and clear laws around the design of robots. Robots should always be constrained so that they can never become smarter than humans.
If proper regulations are put in place, I think robots can be of great value to humanity. Think, for example, of humanoid robots in healthcare: these robots could ensure that more people receive good-quality care at the same time. I am curious to see how AI-driven robots will evolve in the coming years. At the very least, we can say that robots are no longer the future; they are the present!

Bryson, J. J. (2017, April 3). The meaning of the EPSRC principles of robotics. Connection Science, 29(2), 130–136. https://doi.org/10.1080/09540091.2017.1313817

Asimov, I. (1941). Three laws of robotics. In Runaround.

McCallum, S. (2022, October 1). Tesla boss Elon Musk presents humanoid robot Optimus. BBC News. Retrieved October 6, 2022, from https://www.bbc.com/news/technology-63100636

BBC News. (2017, August 21). Musk warns of “killer robot” arms race. Retrieved October 6, 2022, from https://www.bbc.com/news/business-40996009
