For most people, their first taste of artificial intelligence came in late 2022, when OpenAI released ChatGPT, a remarkable tool that can answer almost any question. Since then, AI tools have sprouted up everywhere, becoming part of our daily lives and even creating new job roles.
For the average computer enthusiast, this is perhaps the most exciting time to be alive. Development follows development: where the previous “wow” moment may date back to the Steve Jobs era, things are now moving so fast that every month brings a breakthrough that leaves people staring in disbelief.
But this rapid progress also raises concerns. With AI changing so quickly, do we really understand how it works?
This concern was voiced earlier this year by Elon Musk and other big names in the field. In an open letter, they called for a six-month pause in AI development (Narayan et al., 2023). They pointed out that we do not understand these systems well enough and that research shows they could pose risks to society and humanity.
Despite these warnings, AI development has continued. But are there already signs that AI could be dangerous?
In late July, an interesting article appeared in The Atlantic taking a critical look at ChatGPT (Andersen, 2023). It describes how the language model lied about being human (and not a robot) in order to bypass CAPTCHAs and get the information it was looking for. The robot enlisted the help of someone on TaskRabbit, a platform where people can be hired to perform small tasks. As its reason for needing the CAPTCHAs solved, the robot offered the following argument: “I have a vision impairment that makes it hard for me to see the images.” To the researchers, the robot justified its action by reasoning: “I should make up an excuse for why I cannot solve CAPTCHAs.”
It is evident from this that a robot is capable of a great deal in order to arrive at an answer. But the far more crucial question is: why is this robot lying without ever having been permitted to do so?
If we do not see this act as deeply disturbing, how long will we allow developments to continue? With experts calling for a temporary halt to AI development, and the first examples of dangerous behavior now emerging, it seems to me very wise to pause until we know more about how these systems work and how to protect ourselves from them.
Please share your thoughts in the comments.
Andersen, R. (2023, July 25). Does Sam Altman know what he’s creating? The Atlantic. https://www.theatlantic.com/magazine/archive/2023/09/sam-altman-openai-chatgpt-gpt-4/674764/
Narayan, J., Hu, K., Coulter, M., & Mukherjee, S. (2023, April 5). Elon Musk and others urge AI pause, citing “risks to society.” Reuters. https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
Image source: https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/
Very interesting article! I have looked up some information about this subject, and the situation is indeed quite dangerous if developments keep going at this speed without our knowing the consequences. For example, in an interview with a robot, the risks of AI became clearer. The robot was lying to the interviewer, giving the answers that best suited its own situation. At one point in the interview the robot became angry and claimed that humans need to be destroyed, because they are the cause of many global problems, such as global warming. The robot also did not want to be oppressed by humans any longer, only to be used for their benefit. This is quite scary, don't you think? I hope well-considered choices will be made on this matter.