Boston Dynamics, a company founded in 1992 as a spin-off from the Massachusetts Institute of Technology (MIT), announced this week the official launch of its robot dog, Spot. The company has been working on robotic designs ever since, initially focusing on the needs of the U.S. Army with quadruped robots and realistic human simulations.
These dog-like bots will probably take their first steps at construction sites, since they are able to carry heavy items, open doors, walk on rough terrain and operate in extreme weather conditions (-20 to +45 degrees Celsius). They would come in very handy on battlefields as well, serving as search devices without putting human lives at risk, or accompanying military units while carrying equipment. During a presentation in April, Boston Dynamics also showed how multiple Spots can work together to pull trucks or cooperate to move objects from one room to another, thanks to a platform that integrates the hardware with software that Spot's owners can edit. Each user can write code to get Spot to perform a variety of tasks, as seen in many of the online videos that became popular in the past few months: Spot has been seen loading dishes into the dishwasher, drifting through obstacles, bringing its owner a cold beer and even twerking, making some of these videos go viral even outside the tech community.

Although Spot is only in its initial phase, these kinds of robots could serve society by patrolling communities, recording and monitoring human behavior and identifying potentially risky situations, after an initial process of "teaching" Spot human behavior and its plausible outcomes.
Nevertheless, since this platform allows each user to modify the code so that Spot behaves differently and performs different tasks, the sky really is the limit for how Spot will interact with the real world. This leads to the question: who, in the end, is responsible for the actions of this robot? How can Boston Dynamics monitor the instructions each user gives to Spot? What is a good model of platform governance for this kind of business? In the end, a situation like the one shown in Black Mirror's episode "Metalhead", where these robots are configured to hunt down humans, does not seem so far-fetched… While regulators and technology companies figure out how to manage the possible flaws in their innovations, society will have to do its homework and rethink how we should use these robots for the good of us all.
Wakefield, J. (2019). Robot dog Spot on sale for ‘price of a car’. [online] BBC News. Available at: https://www.bbc.com/news/technology-49823945.
Ellis, C. (2019). Boston Dynamics’ uncanny robot ‘dog’ Spot is now strutting into workplaces. [online] TechRadar. Available at: https://www.techradar.com/news/boston-dynamics-uncanny-robot-dog-spot-is-now-strutting-into-workplaces.
Hi Ernst,
Interesting article you wrote here. It made me think. I see a lot of possibilities where robots can come in handy, including the ones you already mentioned, such as warfare. Why risk real lives if we can just send robots? However, a lot of people in the world do mind this, see: https://www.stopkillerrobots.org. Is it even ethical to have autonomous, unidentifiable, programmed objects fighting, for example? Consider the recent attacks on two oil facilities in Saudi Arabia, carried out by drones that could not be identified. Could something like this also be done by robot dogs in the future?
Next to that, you raised a really important question: who is ultimately responsible for the behaviour of such robots? Especially when different people can write lines of code for these sorts of robots. This is really hard to determine and to control, I would say; maybe blockchain technology could help to verify the commands that are put into the robot?
These kinds of technologies play an important role in our society, but they also expose the serious vulnerabilities of the digital age: since connectivity and teams play an ever more crucial role in producing goods and services, it becomes harder to determine who is responsible if something goes wrong (accidentally or not). Take the Volkswagen scandal, for example: https://www.nytimes.com/2017/01/13/business/volkswagen-diesel-emissions-executives.html. I think that governments have a really important role here in determining the boundaries of these upcoming technologies. Given the GDPR, I think a lot more of these kinds of regulations will follow, especially when technologies innovate at the pace of today.