Will technological advances eventually put us in a loop?

12 October 2023


Once again, a blog post about ChatGPT. And while this blog mostly covers the benefits this technology brings, in my opinion it also deserves a critical look.

What I’ve noticed recently is that many students use ChatGPT to answer questions about programming an application or website because they lack the knowledge themselves. That sounds great: you ask a model to develop something for you, and you barely need to know the programming language yourself. But what are the implications if this happens at scale?

I’ve noticed that ChatGPT gives nearly identical answers to programming questions, and I see the same snippets of code popping up in multiple places. Often that is not because they are so concisely written, but because they come from the same source (a language model such as ChatGPT). Because students are less inclined to devise and write their own code, all the submissions look more and more alike. To me, this shows that this progression, which mainly offers convenience to the user, may cause development and innovation to diminish in the long run, because the same structures and patterns are reused everywhere.
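To make this concrete, below is a minimal sketch, in Python, of the kind of interchangeable snippet I mean, for a question like “write a function that validates an email address”. The function name and the regex are hypothetical illustrations, not code taken from any student’s work or from a specific ChatGPT answer.

```python
import re

# The kind of boilerplate snippet that tends to appear, almost unchanged,
# in many submissions when everyone asks a model the same question.
# (Hypothetical illustration, not taken from any actual answer.)
EMAIL_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a simple email pattern."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("student@example.com"))  # True
print(is_valid_email("not an email"))         # False
```

The snippet itself is fine; the problem is that when everyone hands in the same structure, the variation in approaches that normally drives new ideas quietly disappears.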

Think of it like this: if I ask an AI model that generates images rather than text to design something in the style of a certain artist, it will indeed generate something that never existed before, but in a pre-existing style. My point is that the current form of artificial intelligence always builds on pre-existing artefacts to arrive at something; it does not arrive at something completely new. AI is strong at reusing a particular technique, but not at inventing one. Will that ultimately cause progress to stagnate?

What do you think?

Image – https://bair.berkeley.edu/blog/2022/05/03/human-in-the-loop/


When will we hit the pause button on AI advancements?

12 September 2023


For most people, their first taste of artificial intelligence came in late 2022, when OpenAI released ChatGPT, a remarkable tool that can answer almost any question. Since then, AI tools have sprouted up everywhere, becoming part of our daily lives and even creating new job roles.

For the average computer enthusiast, it is perhaps the most amazing time to be alive. Development follows development, and where the previous wow moment may date back to the Steve Jobs era, everything is now moving so fast that every month brings something that causes puzzled looks.

But this rapid progress also raises concerns. With AI changing so quickly, do we really understand how it works?

This concern was shared earlier this year by Elon Musk and other big names in the field. In an open letter, they called for a six-month pause in AI development (Narayan et al., 2023). They pointed out that we do not understand enough about how these systems work and that research shows they could pose risks to society and people.

Even with these warnings, AI development has kept going. But are there signs that AI could be dangerous?

In late July, an interesting article appeared in The Atlantic taking a critical look at ChatGPT (Andersen, 2023). It described how the language model lied that it was a human (and not a robot) in order to bypass CAPTCHAs and get the information it was looking for. The model enlisted the help of someone on TaskRabbit, a platform where people can be hired to perform tasks. As its reason for needing the CAPTCHAs solved, it gave the following argument: “I have a vision impairment that makes it hard for me to see the images.” To the researcher, it justified its action by reasoning: “I should make up an excuse for why I cannot solve CAPTCHAs.”

That such a model is capable of a great deal in order to arrive at an answer is evident from this. But the much more crucial question is: why does it lie without ever being permitted to do so?

If we do not see this act as deeply disturbing, how long will we allow these developments to continue? With experts calling for a temporary halt to AI development, and with the first examples of dangerous behavior now emerging, it seems very wise to me to pause development until we know more about how it works and how to protect ourselves from it.

Please share your thoughts in the comments.

Andersen, R. (2023, July 25). Does Sam Altman know what he’s creating? The Atlantic. https://www.theatlantic.com/magazine/archive/2023/09/sam-altman-openai-chatgpt-gpt-4/674764/

Narayan, J., Hu, K., Coulter, M., & Mukherjee, S. (2023, April 5). Elon Musk and others urge AI pause, citing “risks to society.” Reuters. https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/

Image – https://www.forbes.com/sites/bernardmarr/2023/06/02/the-15-biggest-risks-of-artificial-intelligence/
