One weird recipe at a time.
Until a month ago, I had not used generative AI in any meaningful way in my life. Sure, I had played around with new apps like ChatGPT and Claude when they were released (I wanted to see how much fun talking to these notoriously lying, flirty, and joking AI chatbots could really be). I challenged Midjourney to make visual representations of my really dumb and nonsensical dreams as a fun exercise. And, as a music lover, I was curious about how good Google’s MusicLM was at creating a melody that was not only realistic but maybe even beautiful (the output was quite pleasant, to my surprise). But I refused to use any of these products in any meaningful way.
My big concerns were about their actual usefulness. Yes, whatever information these models provided was excellent at “passing” as correct (both factually and contextually), but we all knew it was far from 100% accurate. Even ChatGPT’s latest reasoning model, o1, has a “unique capacity to ‘scheme’ or ‘fake alignment,’” according to the independent AI safety research firm Apollo Research. For me, using AI always felt like working with a lying, sociopathic co-worker. Rather than living in paranoia about where the AI might be lying to me and going through its output with a fine-toothed comb, it seemed easier to do the work myself. Just as importantly, the fact that these models are trained on data the companies did not legally obtain meant that their output is basically stolen.
As someone without an artistic bone in my body, I am in awe of what creatives in every field are able to bring to this world. Their work being used without their permission upset me as much as it would if someone were to steal my assignments (and my assignments are not even that good!). But a few things recently changed that finally started me on my journey to coming to terms with, and even starting to like, generative AI. Firstly, I wanted to start eating healthy.
The balance of my diet has never concerned me in any way, but my friends recently made a big fuss about my daily waffle-and-Nutella consumption (I still argue it’s not that unhealthy), and with my generally heightened concern about health since the pandemic, I decided to make a change. Still, I did not care enough about the issue to put in the work to understand what carbs are, how much protein is too little or too much, and what I need to do about calories. This is when my friend suggested I simply ask Gemini to develop my meal plan. The stakes being as low as they were, I gave it a try, and it won me over. It was simple, it was easy, and it was unimportant! I am sure a nutritionist would have found problems with some of the suggestions in my meal plan, and I don’t know whether the recipe I had Gemini create, of mussels with chicken stock, broccoli, and Bolognese, is an actual dish humans anywhere in this world eat, or just Gemini going, “Sure, put it all in the pan, what could go wrong!” Either way, it has had a practical impact on my life and has almost become part of my weekly routine.
The next big incident in my journey to AI adoption was my need to learn R in RStudio. I had never learned coding, and it quickly became apparent that the traditional way of learning to code, through books, articles, or YouTube videos, would take me too long. One evening, too tired after class and not at all in the mood for another instructional video, I asked ChatGPT to help me code something. I got the code, ran it in R, and it did not work; the same error kept popping up. This is where generative AI really surprised me. When I asked ChatGPT why I was getting that error, it not only pinpointed the exact mistake I was making when loading the library in R, but also explained why it was a mistake in the first place! Moreover, when I asked how the code worked, ChatGPT broke down every-single-field-and-bracket to explain what each element was meant to do, and how I could tinker with and alter those elements to play around with the output. It was the closest I have come since high school, when my teachers would take time to sit down with me one-on-one and explain difficult concepts I was struggling with. For software to imitate even a little of some of the best teachers I have had is a really impressive feat.
However, the code it provided was still trained on the work of coders who were never asked for permission or compensated, so using it should still have felt ethically unacceptable, yet I did not feel as bad about it. I wondered why I was more comfortable using generative AI for coding, even though I would never use it to create art to publish under my name. Is it that I do not ascribe the same value to coding as I do to art? Or is it that my coding assignment does not hold any real value in the grand scheme of things, and so I excuse using generative AI in this instance? After all, I am not using the code to compete with actual coders. But here is the thing: I could. Instead of hiring an app developer for my future start-up idea, I could try to code it using generative AI, thus making obsolete the jobs of the very coders whose output was used to train these models.
The code used in the training only exists because people spent their personal time and energy writing and publishing code online, whether to help their colleagues, to show off their skills, for practice, or for any other reason. Their effort holds as much value as that of any painter, musician, or writer.
This is where regulation has to come into play. I see the value of generative AI, both personally and professionally. I intend to “copy” my friend’s idea of creating a storybook for her niece’s first birthday using Midjourney, and I intend to continue learning to code with AI. But we have to answer the question of how these tools are developed: whether by setting rules on compensation for training data, or on how the output of these tools can be monetized, or through some other solution that we all have to find together, with the consensus of all stakeholders, and not unilaterally by big corporations. I do see the value of these tools and am going to use them increasingly in the coming days, weeks, and months. But we must make sure that, in the excitement over what is possible with these tools and their convenience, we do not ignore our responsibilities towards our fellow humans, their work, and their rights.
References
Robison, K. (2024, September 17). OpenAI’s new model is better at reasoning and, occasionally, deceiving. The Verge. https://www.theverge.com/2024/9/17/24243884/openai-o1-model-research-safety-alignment
Milmo, D. (2024, January 8). ‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says. The Guardian. https://www.theguardian.com/technology/2024/jan/08/ai-tools-chatgpt-copyrighted-material-openai
Metz, C., & Grant, N. (2024, July 19). The Push to Develop Generative A.I. Without All the Lawsuits. The New York Times. https://www.nytimes.com/2024/07/19/technology/generative-ai-getty-shutterstock.html