My AI study buddy: How generative AI helps me train smarter

4 October 2025


When I started using generative AI, I thought of it only as a fancy writing assistant: it worked great for writing emails, brainstorming ideas, or helping me decide what to eat for dinner. I certainly did not think of it as my favourite study buddy, but that's exactly what it has become.

Instead of reading and rewriting my lecture notes over and over again, I now use AI to turn them into something interactive. For instance, I use Quizgecko or Quizlet's AI-powered flashcard tool to instantly create study cards and quizzes. These tools take my raw notes and transform them into multiple-choice questions, quick tests, or fill-in-the-blank exercises. This way, passive reading becomes interactive practice that actually exercises my brain.

Another way I learn is with ChatGPT. It can role-play as a tutor, answering my follow-up questions or explaining tricky concepts in simpler terms. When I get a question wrong, I can ask it to create new practice problems on the same topic until I have mastered it. It feels like having a patient teacher who never gets tired of repeating things. It also helps me prepare for presentations and class discussions: I can ask ChatGPT to quiz me orally or act as a debate partner so I can practise answering questions under pressure.

Of course, this way of studying is not perfect. Sometimes the questions are way too easy, or the explanations are a bit shallow, which is why I always double-check. Studying could sometimes feel rather monotonous, especially in purely theoretical subjects, but overall it has become more engaging, with a little more of a twist.

Looking back, I think the coolest part is how these tools make learning more personal: they adapt to my weaknesses and my pace.


Can we trust AI with our data? Privacy in the age of algorithms

15 September 2025


In the past few years, artificial intelligence has moved from science fiction into our daily routines. We stream series and films based on personalized recommendations, unlock our phones with facial recognition, and ask ChatGPT what we should eat for dinner. But there is a hidden side to all these conveniences: these systems rely on enormous amounts of data, much of it personal. The question that keeps coming back is: can we trust AI with our data?

The risks are plain to see. Many AI models are trained on data scraped from the internet: social media posts, blogs, and images uploaded over the years. Most internet users never gave explicit permission for this, but once you put something online, it is most likely never going to disappear. Fragments of our digital lives are training and shaping the behaviour of large-scale commercial systems (Stanford HAI, 2024). Even when data is anonymized, AI models can infer sensitive information such as health conditions, political preferences, or religious beliefs from patterns hidden in seemingly harmless details (F5, 2024). In some cases, weak security can cause private information to leak. These risks go beyond the power of any individual to manage.

Europe has tried to address these challenges through regulation. The General Data Protection Regulation (GDPR) has already set the standard for how organizations should handle personal information, requiring transparency, consent, and data minimization (IBM, 2024). More recently, the EU Artificial Intelligence Act, which entered into force in August 2024, has gone a step further by classifying AI systems according to risk. High-risk systems, particularly those with a direct impact on safety and fundamental rights, will now face stricter rules around documentation, data governance, and transparency (European Parliament, 2024). Together, these two frameworks aim to create a regulatory environment where AI innovation can coexist with strong protection of individual rights (INTA, 2024).

But even with these developments, there will remain a constant tension between convenience and protection. Every time we embrace a smarter, faster, more personalized service, we implicitly decide how much privacy we are willing to trade. For me, the most important question for the future is not simply whether we can trust AI with our data, but what kind of society we want AI to help create. If we choose carefully, privacy and progress can grow together. If not, the price of convenience may turn out to be far higher than we expect.

References
European Parliament. (2024). The EU AI Act: First regulation on artificial intelligence. Retrieved on 14 September 2025, from https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
F5. (2024). Top AI and Data Privacy Concerns. Retrieved on 14 September 2025, from https://www.f5.com/company/blog/top-ai-and-data-privacy-concerns
IBM. (2024). Exploring privacy issues in the age of AI. IBM Think Insights. Retrieved on 14 September 2025, from https://www.ibm.com/think/insights/ai-privacy
INTA. (2024). How the EU AI Act Supplements GDPR in the Protection of Personal Data. Retrieved on 14 September 2025, from https://www.inta.org/perspectives/features/how-the-eu-ai-act-supplements-gdpr-in-the-protection-of-personal-data/
Stanford HAI. (2024). Privacy in an AI Era: How Do We Protect Our Personal Information? Retrieved on 14 September 2025, from https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information
