Should we be taught how to use AI at university?

26 September 2025

Generative AI has quickly become a go-to companion for many students, with ChatGPT and Claude at the forefront of the most widely used tools in universities. I’ve only recently started using Claude, and while both tools seem similar at face value (e.g., they generate text, help brainstorm ideas, or summarize articles), in practice some differences are noticeable. To me, ChatGPT feels more structured and precise, somewhat formal and to the point, whereas Claude takes a more qualitative angle, with responses that feel more conversational and creative. Using and switching between the two made me realize that these systems don’t just provide answers; they also influence and reshape the way I study and approach academic work.

Yet using these tools in a university setting isn’t so straightforward. Every assignment now comes with an awareness of AI-detection software. Even when I use Claude just to brainstorm different angles, or ChatGPT to help me rewrite a sentence, I wonder whether it will be flagged, which creates an odd tension around these tools. While the use of AI is often celebrated as innovative and productive, in academia it can be treated as something that needs to be hidden.

This experience makes me wonder what a future of academia that includes GenAI tools will look like. Their presence is unavoidable, so transparency about AI use should be encouraged moving forward. Just as we cite books or journal articles, students could note how AI supported their work in an assignment. For example, a notation explaining that ChatGPT was used to refine structure, or that Claude was consulted for brainstorming, would keep academic integrity intact and would allow students to learn how to work with AI critically and openly. Universities should perhaps teach courses on AI literacy, equipping us to navigate a future where collaboration with these tools is likely the norm.

4 thoughts on “Should we be taught how to use AI at university?”

  1. Very interesting take! I can follow your thoughts and concerns. Also, I really like your proposed idea of adding notations about how AI tools were used to improve structure or for brainstorming. I think this could potentially even increase the quality of research and writing. But do you think that there should be limits to AI usage? Should some things not be allowed at all, even if documented correctly?

    1. Thanks for your comment and insights! I definitely think limits make sense. For example, using AI to generate entire essays or research papers probably crosses the line, even if it’s documented, because it defeats the purpose of learning.
      Maybe the key is to distinguish between AI as an assistant and AI as a substitute. As long as students remain the primary thinkers and creators, and AI is used transparently to enhance their work rather than replace it, it can be a healthy balance.

  2. While many institutions encourage transparent and fair use of AI for academic work, many students choose to ignore this. Students would rather run the risk of getting caught for blatantly copying work done by AI than actually take the time to do the work themselves. I agree that AI literacy courses will help students gain an extremely useful skill, but I believe the issue runs much deeper. Students need to regain the desire to learn and to think critically. So, to what extent will AI influence the curriculum and infrastructure set by established academic institutions?

  3. Very interesting topic, Jill! As a student, I get your concerns, especially with the AI-detection software. I 100% agree with your idea about universities teaching AI topics. It will be unavoidable in the future, so students can be better prepared rather than having to learn to work with these tools themselves!
