In recent years, significant advances in machine learning have led to the rise of Generative AI (GenAI). A variety of GenAI tools, such as ChatGPT, DALL-E, and Gemini, have emerged as part of this technological wave, and ChatGPT has become one of the most widely used among them. I have found ChatGPT to be an invaluable asset: integrating it into my daily workflow has greatly enhanced both my efficiency and productivity.
Last week, I missed an important lecture due to unforeseen circumstances. Determined to catch up, I used ChatGPT to work through the missed material. I uploaded the lecture slides and asked it to explain the key concepts. It broke down complex topics into simple parts and offered clear insights, helping me grasp material that would have taken much longer on my own. I was genuinely impressed by how quickly and accurately it provided the information I needed.
However, when I attempted to use ChatGPT to transcribe the audio recording of the lecture, I encountered a limitation: it was unable to process voice recordings directly or generate a transcript. Instead, it suggested specialized transcription tools like Otter.ai, which helped me complete the task.
Beyond basic text generation, ChatGPT has also proven instrumental in my programming work. As a university student working with R, I often face challenges with complex code. ChatGPT helps me generate and debug code, offering useful explanations and optimized solutions. Although I still need to review and refine the code myself, it makes the process much easier and deepens my understanding of programming concepts, which has improved both my learning and my productivity.
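To give a concrete sense of the kind of debugging help I mean, here is a hypothetical, minimal R example (not from an actual assignment): a summary statistic that silently returns NA because missing values were not handled, which is exactly the sort of issue I would paste into ChatGPT and ask it to explain.

# Hypothetical example: averaging survey scores that contain a missing value
scores <- c(78, 85, NA, 92, 88)

# Original attempt: returns NA because mean() propagates missing values
mean(scores)

# Suggested fix: drop NAs explicitly before averaging
mean(scores, na.rm = TRUE)  # 85.75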
Generative AI has also brought many other improvements to my daily life, supporting my learning, creativity, and problem-solving in ways that weren't possible before. That said, AI still has its limitations. Interactions with tools like ChatGPT are mainly text-based and lack the personal touch that handwritten notes offer, and privacy is another concern when sharing sensitive information with AI platforms.
In summary, my experiences align with the broader perspective that AI serves as a valuable tool to augment, rather than replace, human capabilities (De Cremer & Kasparov, 2021). As AI technology continues to advance, I believe it will become an even more supportive force in our lives.
Reference
De Cremer, D., & Kasparov, G. (2021, March 18). AI should augment human intelligence, not replace it. Harvard Business Review. https://hbr.org/2021/03/ai-should-augment-human-intelligence-not-replace-it
Very nice post, Hao Feng. I enjoyed reading how you incorporate GenAI into your school workflow; it gave me useful insight into how you use it to your advantage for your schoolwork, and I will probably adopt some of your approaches myself. I also like that you mentioned the current limitations of the available tools and some of the concerns you have with them. I wonder whether, in the coming months or years, we will see more advanced GenAI tools created specifically for students (e.g., for making note summaries and transcribing audio to text). I myself have used ChatGPT to help me make a study plan for exams; it worked very well for me, and it might be something to look into if you feel it would elevate your workflow.
Thank you for sharing your experience!
I had the same experience with RStudio. Having AI tools that can review your code and spot the errors is time-efficient and helpful.
However, I also think it can be challenging, since AI tools are sometimes unreliable, especially when it comes to information given by the professor, personal opinions, or other context. Regarding programming, it can also feel unsafe, as you mentioned, because data is uploaded to an external platform. So I agree with the risks and benefits of AI tools you describe. Do you think there could be solutions for improving privacy and security?
Great reflection, and my experience is very similar to yours. Early on, ChatGPT did carry a real risk of privacy leakage. In March 2023, Samsung Electronics employees in the semiconductor division accidentally leaked confidential information, including source code, by entering it into ChatGPT. GenAI has also disclosed financial data that had not yet been made public, with serious consequences. The underlying reason is that ChatGPT can “remember” personal information from questions and answers and use it for model training and content generation.
In response, OpenAI explicitly states that, by default, data provided by users through the API will not be used to train its models, which alleviates these concerns to some extent. In daily life, we can also prevent the disclosure of confidential information to GenAI at both the organizational and individual levels: companies can set rules specifying what information may not be uploaded and within what time frame, and individuals can share only partial or redacted versions of confidential information to protect their privacy. I hope that the relevant regulations will become more comprehensive in the future, so that GenAI can maximize people's productivity!