Just a week ago, OpenAI, the developer of the well-known generative AI chatbot ChatGPT, released the chatbot's newest feature: it can now access the internet and therefore provide real-time information to its users. Prior to this capability, ChatGPT's knowledge was limited to information up to September 2021 (Vleugels & van Wijnen, 2023).
Users can now ask about current events, have the chatbot consult news websites, or request help with technical research. Not only will the output be more relevant and current, but it will also be accompanied by direct links to its sources. For now, this feature is only available to ChatGPT Plus subscribers, but it will soon be accessible to every user. Although this sounds very promising, there are some dangers involved, as is the case with almost every new AI update. What happens, for example, when you post something online but remove it moments later? It is unclear how OpenAI processes such data and whether its practices are in accordance with the law. Questions like these arise when tech companies are not fully transparent about their processes (Vleugels & van Wijnen, 2023).
Besides this recent update, OpenAI has also released voice and image capabilities in ChatGPT. These provide a more intuitive interface and give users more ways to utilize ChatGPT. You can, for example, take a picture of what is in your fridge and ask it to plan a meal, or let it help you with homework. With the voice capabilities, it is possible to have an actual conversation on the go, have it tell you a bedtime story, or propose a debate topic (OpenAI, 2023). Again, these updates come with risks. Can you rely on ChatGPT's interpretation of an image when the stakes are high? How can OpenAI prevent malicious use of the voice technology, such as people trying to impersonate others?
OpenAI is aware of these risks, which is why it makes its tools available gradually. This allows the company to make improvements and mitigate risks along the way. However, in my opinion, it should only release such updates when it is one hundred percent sure that they work correctly and cannot be used maliciously; I think it is always better to be safe than sorry. On the other hand, OpenAI can only fix these problems once it has been made aware of the pitfalls, and that only happens when the tools are actually used. What are your thoughts on this? Do you support OpenAI's approach of making its tools gradually available?
References
OpenAI. (2023, September 25). ChatGPT can now see, hear, and speak. https://openai.com/blog/chatgpt-can-now-see-hear-and-speak
Vleugels, A., & van Wijnen, J. F. (2023, September 28). ChatGPT geeft nu actuele antwoorden, maar niet op de prangendste vraag [ChatGPT now gives up-to-date answers, but not to the most pressing question]. FD.nl. https://fd.nl/tech-en-innovatie/1491136/chatgpt-geeft-nu-actuele-antwoorden-maar-niet-op-de-prangendste-vraag?utm_medium=social&utm_source=app&utm_campaign=earned&utm_content=20230929&utm_term=app-ios&gift=1RO9r
I really like your post! I like how you gave a short news update on ChatGPT and what OpenAI is currently developing and rolling out; it provides a nice insight into current developments. In my opinion, the road they are taking with image recognition and voice capabilities is quite scary. As we have already seen multiple times, not only through awareness posts about deepfakes but also through malicious deepfakes that are extremely difficult to distinguish from reality, voice capabilities can pose quite a threat.
On the other hand, it is impossible to deny the enormous positive possibilities that voice and image capabilities in generative AI offer, but I think it is rather difficult to strike the right balance. Moreover, as OpenAI is nowhere near open source, I find it hard to support their way of working, even if gradually rolling out updates is probably the only way to go, as you mentioned.
Thus, I think that until we find this balance between potential malicious use and beneficial use, whether through legislation or otherwise, and until OpenAI becomes open source and provides full access to its algorithms, we should prohibit any potentially dangerous updates and innovations in the field of generative AI.
Thanks for your post, and I think it opens an interesting discussion.