With the release of its latest o1 model, OpenAI claims that ChatGPT is now capable of human-like reasoning and can solve problems at the level of PhD physics courses. However, this may also imply a greater-than-ever risk of misuse of generative AI, for example to create biological weapons. As the iconic saying goes: with great power comes great responsibility. But does it really?
Last Thursday (September 12th), Sam Altman’s firm announced the launch of a new AI model, called o1, powering its ChatGPT chatbot. The firm declared that the model significantly outperforms previous models such as GPT-4o: on a qualifying exam for the International Mathematics Olympiad, for example, it achieved a score of 83 per cent, compared to 13 per cent for its predecessor.
Moreover, many users have already praised the new model online for its previously unmatched level of problem solving and its ability to write high-end code.
Although the further development of AI technology is something we all applaud and cheer for, we should not forget the risks that come with it.
According to OpenAI’s engineers, the o1 model was rated as bearing “medium risk” when tested for threats pertaining to biological, chemical, radiological and nuclear weapons. It is the highest risk level that OpenAI has ever assigned to one of its models. One of the firm’s employees pointed out that their latest genAI solution may improve the ability of experts to create bioweapons.
In the face of such serious potential danger, the issue of AI regulation is being raised again by many. While the general public (now even in the EU) agrees that safety policies should be implemented with caution, so that the development of AI technology is not suppressed too much, it has zero trust in big tech companies’ self-regulation.
Therefore, the question remains whether governments can work out a satisfying compromise, one that would increase the safety of us all while preserving the current pace of AI technological advancement.