Nowadays, Generative AI has quickly become a close companion for a lot of students, with ChatGPT and Claude at the forefront of the most used tools in universities. I’ve only recently started using Claude, and while both tools seem quite similar at face value (e.g., they generate text, help brainstorm ideas, or summarize articles), in practice some differences are noticeable. To me, ChatGPT feels a lot more structured and precise, somewhat formal and to the point, whereas Claude comes from a more qualitative angle, with responses that feel more conversational and creative. Using and switching between these two made me realize that these systems don’t just provide answers; they also influence and reshape the way I study and approach academic work.
Yet, using these tools in a university setting isn’t always straightforward. Every assignment now comes with the awareness of AI-detection software. Even when I use Claude just to brainstorm different angles or ChatGPT to help me rewrite a sentence, I wonder whether it will be flagged, which creates an odd tension around using these tools. While the use of AI is often celebrated as innovative and productive, in academia it can be treated as something that needs to be hidden.
This experience makes me think about what the future of academia, including GenAI tools, will look like. Their presence seems unavoidable, so transparency about how AI is used could be encouraged moving forward. Just as we cite books or journal articles, students could note how AI supported their work in an assignment. For example, a short note explaining that ChatGPT was used to refine structure or that Claude was consulted for brainstorming would keep academic integrity intact and would allow students to learn how to work with AI critically and openly. Universities should perhaps also teach courses on AI literacy, equipping us to navigate a future where collaboration with these tools is likely the norm.