Generative AI as a Co-pilot?

10 October 2025


My first real use of generative AI was to plan a trivia night for my friends. Not only did ChatGPT save me hours of work, but it also gave me personalized questions based on the information I provided. I even used it to make a PowerPoint presentation in a Jeopardy-style game show format with over 30 slides. It gave me questions with varying difficulty, themes, tones, and styles to keep the game night fun but not too easy. I have also used DALL·E to help a friend design marketing graphics for their bagel start-up. I am not a very artistic person, but it helped turn my ideas into finished visuals. A few short prompts and a few seconds later, I had multiple mockups. This made me realize that generative AI tools have helped me do more with less.

The most helpful generative AI tool for me has been NotebookLM. I attach my lecture notes, readings, and personal notes to convert them into podcasts. It is quick, and it helps me review concepts even while commuting. I believe that I learn better while listening than while reading. So, NotebookLM has definitely been my saving grace while studying for exams.

Yet, I wonder whether I am learning or just consuming what the AI tool decides is important. Across these experiences, I have felt both smarter and more dependent. I believe that generative AI improves my ideas and productivity, but it also crowds out the naturally human side of creativity: brainstorming, reflecting, discovering, writing, and so on. I realize the solution is not rejecting AI completely, but rather using it consciously. Hence, I use generative AI tools as my co-pilot, helping me navigate the process more efficiently, rather than as my captain.


Building Applications Has Become Easier Than Ever

10 October 2025


A couple of months ago, a client at my student consultancy job asked us to automate a document anonymization process at their real estate agency. Due to data protection requirements, the processing had to be done locally or within their Microsoft environment.

After some unfruitful experimenting with Power Automate, we decided to try building our own tool with Claude (an LLM by Anthropic). With a bachelor’s degree in international business, I had no coding knowledge whatsoever. The results were amazing: within a few hours, we had some basic capabilities established.

After a while, I wanted to make the process more efficient than copy-pasting Claude’s changes into my project files. A new tool, Claude Code, had been released, which lets the AI work in your project files directly. I had to watch a few tutorials and do some error-fixing with ChatGPT to get it to work in a container (a sealed-off environment on my laptop). After about two hours we were ready to go.

The result was a developer working at lightning speed. It could code, test, readjust and retest until it worked, all in one go. You see it break down the task into sub-tasks and tackle them one by one. Alternatively, you can put it in plan mode so it will brainstorm about what you want, come up with multiple alternatives with pros and cons, and execute one when you give the word. While it is executing that piece, you can open a second, third or fourth window to work on a different issue. You can quite literally run an entire team of coders at the same time, while you only manage them.

However, it’s not perfect. Fixing more complex bugs, especially, can be an issue. Sometimes, after showing it the problem and asking for a solution several times, it still cannot fix it. Since I do not know anything about code myself, I had to be creative.

Firstly, working modularly helps you pinpoint the issue to a specific module. You can then ask Claude to zoom in on that module and come up with possible causes. With just logic you can often judge its suggestions. That way you can help Claude get closer to fixing the issue.

Sometimes, it gets stuck in a certain thinking path it has gone down. In that case, it can help to get a second opinion. You open a second window or ask a different LLM (e.g. ChatGPT) to look at the issue. This way it is not biased by the context in your current conversation or by its LLM-specific knowledge. More than once this has resulted in it immediately recognizing the real issue, and in me being frustrated that I had spent half an hour trying to fix it in the initial chat.

All in all, I was really amazed by the possibilities. Getting it all set up was a bit of trial and error, and it takes quite some time to brainstorm about the implications of architectural choices. But once you have done that, it builds full-fledged applications in minutes.

New AI tools are being released faster than we can learn to use them, so adaptability seems more important than ever. Just being able to build applications is not enough either. Just as before coding became so much easier, you still need a business case for the application. All in all, I think it’s a great time to be a business student with an interest in technology.

To anyone else who has been experimenting with AI tools for coding: what tools do you use and what best practices have you discovered?


Film Photography and AI: My experience

10 October 2025


I started using ChatGPT for school, then realized it could help my photography too. With film, every click costs money and time. Having a quick second brain lowered the stress and helped me make better choices before I even loaded a roll.

Most of my shoots are low light or mixed neon. I ask for a quick plan: likely shutter speeds for Cinestill 800T or my MARIX 135 T800 and Amber T800, what EV to expect at blue hour, and how far I can push before motion blur ruins the look. It is not magic. It just gives me a sensible starting point so I do not waste half a roll testing the obvious.
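
For anyone curious, the arithmetic behind that starting point is easy to sanity-check. Below is a minimal Python sketch of the standard exposure-value relation EV = log2(N²/t); the blue-hour brightness of roughly EV 6 (referenced to ISO 100) and the f/2 aperture are illustrative assumptions, not metered values.

```python
import math

def shutter_time(ev100: float, aperture: float, iso: int) -> float:
    """Shutter time in seconds that matches a scene EV (referenced to ISO 100),
    using the standard relation EV = log2(N^2 / t)."""
    ev_at_iso = ev100 + math.log2(iso / 100)  # faster film buys extra stops
    return aperture ** 2 / 2 ** ev_at_iso

# Assumed blue-hour scene of about EV 6 (ISO 100), shot wide open at f/2 on 800-speed film.
t = shutter_time(ev100=6, aperture=2.0, iso=800)
print(f"roughly 1/{round(1 / t)} s")  # ~1/128 s, so shoot at 1/125 s
```

Whether 1/125 s is fast enough to freeze the motion you care about is exactly the kind of trade-off the chat helps reason through, but the numbers themselves are easy to verify.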

I also use it for composition practice. I describe a scene from my contact sheet, like “subject under a shop sign, bright window behind, messy foreground.” It suggests two or three framings to try next time. Step left to kill a distraction. Drop the angle to separate the subject from the background. Add a leading line from the curb. Simple ideas, but it keeps me iterating. My contact sheets feel less random and more like a series with intent.

Metering and color are where it saves me the most. If I am debating 1 stop over for skin indoors, or how much to bias exposure for tungsten under mixed LEDs, I ask for trade-offs. It reminds me what will happen to highlights on 800T and what to expect from halation. When a scan comes back with a green cast, I run a quick checklist for likely causes and fixes. It is the same with push or pull. I still note my lab’s advice, but I go in with clearer expectations.

Trust grew with results. The more useful the output, the more I tried. I still keep guardrails. I verify technical claims, write shot lists, and never paste personal data. The goal is not to outsource taste. The goal is to give my taste more chances to show up.

If you shoot film, try this next roll. Write a one paragraph brief, ask for two lighting setups and a backup plan, and make a tiny shot list. Then compare that contact sheet to your usual one. Did you see more, or just shoot faster?


Agentic AI in Customer Relationship Management (CRM)

10 October 2025


One of the most transformative developments in enterprise technology today is the emergence of agentic AI in the field of CRM. The way campaigns are designed and executed, and the way data is processed, is fundamentally shifting with the integration of agentic systems.

But how does that actually work in practice?

Unlike conversational AI tools that only assist users through predictive analytics or content generation, agentic AI actively makes decisions and executes tasks without constant human intervention or the need for prompts. In today’s practice, this means CRM systems are evolving from static databases into intelligent ecosystems where AI agents autonomously manage lead follow-ups, orchestrate personalized customer journeys and, interestingly, also initiate retention campaigns when churn risk is detected. When companies decide to implement these agentic capabilities in their CRM, the implications for efficiency and scalability are profound. Such companies can engage customers continuously and react to behavioural changes in real time. Most importantly, a level of personalization for the individual consumer can be achieved that was previously impossible at scale.

How can this be implemented in existing workflows?

Many firms overestimate their technology readiness, meaning that they often launch isolated pilots rather than focusing on clean data, orchestration frameworks, or proper human oversight. To implement this technology successfully, companies need to follow a balanced approach that combines bottom-up and top-down elements. Only when employees are enabled and empowered to identify areas where agentic AI can help will the implementation work out. In CRM especially, it is important that system development begins with clear process mapping, well-defined guardrails, and incremental deployment. That way, firms can expand the system’s autonomy as trust in it grows. If agentic AI in CRM is implemented right, the CRM moves from a reporting tool for campaign success, customer churn, or CLV into a living, learning collaborator that augments every stage of the customer lifecycle.

What does the agentic workflow look like in practice?

First, the agent sets a budget and goal for a campaign (for example, increasing subscription conversion by 15%). Then it accumulates data from the CRM and other sources. Third, the agent analyses and prioritizes the given data, deciding who to contact, when, on which channel and with which offer. The fourth step is asset generation, such as creating personalized text or visuals; the highlight here is that the agentic AI personalizes every client contact based on the data accumulated in the previous steps. Next, the agent optimizes the output and pushes the campaign out to the client base. The final step is crucial for the agentic system, since feedback loops and learning capabilities come into play: the agent interprets the performance of previous campaigns and adjusts where necessary.
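
To make that loop concrete, here is a minimal Python sketch of what one campaign cycle could look like. Everything in it is hypothetical: the `crm` object and its methods (fetch_customers, generate_asset, and so on) stand in for whatever data and delivery layer a real agent would be allowed to call, not an actual CRM API.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    goal: str      # e.g. "increase subscription conversion by 15%"
    budget: float

def run_campaign_cycle(campaign: Campaign, crm, max_rounds: int = 3) -> Campaign:
    """Hypothetical agentic loop: gather -> prioritize -> personalize -> send -> learn."""
    for _ in range(max_rounds):
        customers = crm.fetch_customers()                   # step 2: accumulate CRM and external data
        targets = sorted(customers,                         # step 3: decide who, when, channel, offer
                         key=crm.expected_uplift, reverse=True)
        for customer in targets:
            asset = crm.generate_asset(customer, campaign)  # step 4: personalized text or visuals
            crm.send(customer, asset)                       # step 5: optimize and push to the client
        results = crm.collect_performance(campaign)         # step 6: feedback loop on performance
        if results.meets(campaign.goal):
            break                                           # goal reached, stop spending budget
        campaign = crm.adjust(campaign, results)            # otherwise adjust and run another round
    return campaign
```

The point of the sketch is simply that a human defines the goal and guardrails once, while the measure-and-adjust loop keeps running without a new prompt for every step.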

Conclusion

Agentic AI in CRM is without doubt one of the biggest transformations in today’s business world. Companies are constantly searching for better ways to run client campaigns, reduce churn and increase interaction with clients in order to generate more revenue. With the integration of an agentic CRM system, questions of scalability and marginal cost become central. Companies need to focus on implementation now instead of falling behind competitors.


Trying on Clothes with Chat

10 October 2025


I feel like every day I’m introduced to a new model of AI, and I need to make a conscious effort to keep up with it, especially to get the most value out of my ChatGPT subscription. Before starting this course, I had never really thought about the differences between General AI, Generative AI, Deep Learning AI, and so on. However, I’ve become much more aware of the types of AI I use in my daily life and how I can apply them to everyday questions and tasks.

To get more hands-on and fully utilize ChatGPT’s generative capabilities, I decided to ask it to help me choose which clothing items to purchase. For a task to be considered “generative,” the AI needs to create new content (e.g., images, music, code) rather than simply analyze existing information. So, I asked it to generate images of the outfits I had in mind so I could better visualize them.

I’ve been wanting to buy a new coat for a while, but I kept putting it off because it usually takes me a long time to decide on the fit and color I want, and then even longer to find a store that sells something similar. To make this process more efficient, I first shared a link to my Pinterest board so it could learn my preferences. Then, I asked it to search for coats I might like, and it returned a list of potential options with images and store links. This part of the process was more of a general AI function, involving search and curation.

(Image: one of the coat suggestions ChatGPT gave me.)

Based on the items I liked most, I then provided ChatGPT with a screenshot of a dress I already owned and asked it to generate images of full outfits that included different coats. Before generating, I also gave my height and clothing size so that the proportions would look more realistic. This allowed me to clearly imagine how each piece might look on me, without actually having to try them on.

(Image: the generated outfit output.)

I found this use of ChatGPT incredibly helpful because it saves me a lot of time. Normally, it would take me at least 18 minutes to narrow down my options to a few stores, and then even more time to visit the shops and try things on. With generative AI, I was able to visualize my outfit options quickly and efficiently.

One way I could improve this process even further would be to provide ChatGPT with a full-body photo of myself. That way, the generated outfits could be customized to my actual body shape and features. In the future, it would be great if more clothing websites started integrating GenAI tools that allow customers to virtually try on clothes. This could completely change the online shopping experience, making it faster, more personal, and much more convenient.


It Wasn’t Easier with AI – Just Finally Possible

9 October 2025


When I first started with Power BI, I spent days trying to make sense of DAX, the formula language used to calculate and analyze data inside Power BI. I even joined a full-day training once, only to leave more confused than before. Getting one formula to work felt like winning the lottery – rare and mostly luck. Most of the time, I’d ask someone from IT for help, wait a few days, get a fix, and then forget half of it by the time I tried again. It was slow, frustrating, and I always felt dependent on others to move forward.

That changed the moment I started using generative AI tools.

Suddenly, I could ask why something didn’t work, not just copy a fix that a more experienced colleague or someone on an online forum came up with. I could try different versions of the same formula, see how each behaved, and actually understand the logic behind it. Instead of watching people like Alberto Ferrari on YouTube and trying to force their examples onto my own messy dataset, I could finally learn by doing: in my own data, at my own pace.

What surprised me most was how quickly small insights added up. A few lines of explanation from ChatGPT often cleared up what hours of tutorials couldn’t. Over time, I stopped treating DAX as a collection of tricks to memorize and started to see the structure behind it.

AI didn’t make things easy. It just made them possible. It gave me the sense that, if I stayed curious long enough, I could figure almost anything out.

When Research Says the Opposite

Funny enough, that’s not what the studies say.

An MIT Media Lab paper (Kosmyna et al., 2025) found that people using AI for writing were actually less mentally engaged. Their memory recall dropped, and they felt less ownership of their work. Another study by Becker et al. (2025) found that even professional developers got slower when using AI tools – by almost 20%. So, if people think less and produce worse results with AI, why was I having the opposite experience?

The Difference Between Copying and Learning

I think it comes down to ownership.

Those studies gave people pre-defined tasks – write this essay, fix that code. In my case, I wasn’t completing someone else’s assignment. I was solving my own problems, things that mattered to me and my work. When I use ChatGPT for DAX, I’m not outsourcing my brain. Each time I test a formula, I understand a bit more about how Power BI “thinks.” The difference is that the feedback loop is instant. I don’t have to wait days for an answer.

Before, I often hesitated before starting something new: Do I even know enough to try this?
Now it’s more like: Let’s see what happens.

The biggest shift for me isn’t technical – it’s psychological.

Maybe that’s what the research doesn’t capture. For me, AI hasn’t made me think less. It’s made me more curious, more confident, and more willing to experiment.

References

Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. MIT Media Lab. https://arxiv.org/pdf/2506.08872

Becker, J., Rush, N., Barnes, B., & Rein, D. (2025). Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity. Model Evaluation & Threat Research (METR). arXiv. https://arxiv.org/pdf/2507.09089


The Job Hunt in the Age of Generative AI

29 September 2025


One of the most interesting areas where I have used generative AI so far is job applications. I often struggled with wording when creating my resume and cover letters, such as how to appear professional without being generic or how to tailor the same experience to different professions. With the assistance of tools like ChatGPT or Perplexity, I was able to reframe my job experience more clearly. For instance, they recommended reformulating lines like “Supported business development” into “Conducted market analysis and prepared client proposals, helping to improve communication”. Although it didn’t create anything new, it did help me use stronger words to express my accomplishments.

Meanwhile, I noticed that businesses are also beginning to use AI in their hiring procedures. Some HR departments already use AI-powered applicant tracking systems (ATS) that scan resumes for keywords. Generative AI now goes one step further by autonomously creating applicant summaries, drafting job descriptions and even recommending interview questions. In theory, this may result in a quicker and more reliable hiring procedure.
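
To see why this kind of screening invites gaming, here is a deliberately simplified Python sketch of a keyword screen; real ATS software is far more sophisticated, and the keyword list and cutoff here are made up purely for illustration.

```python
# Deliberately simplified illustration of a keyword screen; real ATS software is more sophisticated.
REQUIRED_KEYWORDS = {"market analysis", "client proposals", "stakeholder management"}  # made-up list

def keyword_score(resume_text: str, keywords: set[str] = REQUIRED_KEYWORDS) -> float:
    """Fraction of required keywords that literally appear in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

resume = "Conducted market analysis and prepared client proposals for a real estate client."
print(keyword_score(resume))  # 2 of 3 keywords found -> about 0.67; below a 0.8 cutoff this resume is filtered out
```

Once both sides know the screen works like this, applicants optimize for the list and recruiters tighten it, which sets up exactly the arms race discussed next.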

However, the risks are easy to see. AI adoption by recruiters and applicants alike may turn the process into a sort of “automation arms race”: applicants use AI-generated keywords to optimise their resumes, and recruiters use AI to filter them. In such a system, the questions come up: what happens to authenticity? And who makes sure AI doesn’t reproduce biases by favouring particular educational backgrounds or language patterns?

In my view, generative AI is most useful when it increases clarity rather than when it replaces human decisions. The difficulty in hiring is finding a balance between fairness and efficiency. When applied properly, AI can help candidates express themselves more effectively and help businesses manage high application volumes. However, if it takes over the process, there is a risk of reducing humans to patterns and keywords. An algorithm cannot fully capture human characteristics like creativity, motivation and cultural fit.

The key question regarding this interesting topic remains: How can we make sure AI not only speeds up recruiting but also makes it more transparent and inclusive?

Sources:
https://www.cmu.edu/intelligentbusiness/expertise/gen-ai-in-hiring_lee_100323.pdf

https://www.nature.com/articles/s41599-023-02079-x


Double-edged glasses: Excitement and concern about the new Meta Smart Glasses

27 September 2025


Last week, Meta unveiled their new Ray-Ban Display smart glasses (Aouf, 2025). Once again, I appreciate living in a time of so many interesting technological innovations. I think this launch of augmented reality integrated into an almost normal-looking pair of glasses is a big leap into the future of personal technology. In my opinion, you can compare this launch with the first introduction of the smartphone, a product that went on to have a big influence on everyone’s lives. Imagine navigating with turn-by-turn notifications, getting real-time translations or getting direct answers from an AI assistant, all while still being able to see your surroundings and without having to pull out your smartphone.

However, my enthusiasm is dampened by my knowledge of the downsides of many of Meta’s applications, largely informed by the 2020 documentary “The Social Dilemma”. This documentary gave me a clear look at how social media platforms capture our attention and monetize it by design, a design that often has a negative influence on our mental well-being (Vashist, 2023). We cannot ignore that these glasses are a product of Meta, the parent company of Instagram and Facebook. These platforms are at the center of the documentary’s critique, and with the new glasses they will have a direct interface in our field of view.

For me this raises a critical concern that these glasses could become yet another product with a big potential to increase digital addiction. Meta has faced accusations and lawsuits for years for knowingly designing platforms with psychologically manipulative features that hook young users for profit (Li, 2024). Internal documents have also suggested that there is an awareness within the company that its platforms could harm young people, especially regarding body image and sleep disruption (Paul, 2023).

Being aware of these dangers, and actively trying to regulate my own dopamine responses and phone usage, I am concerned about the potential for a constant distraction delivered directly to our eyes. Will we be able to control this distraction and use it only to our benefit, or will the digital and physical worlds blur to an unhealthy degree? The nature of AR is to blend our reality with digital information, which could directly amplify the attention-seeking mechanisms that have made social media so problematic.

This is why I think the glasses, at a minimum, need to have focus modes or functionality similar to those already implemented in the iOS and Android operating systems. For a device like this, the focus functionality would ideally go further, for example by automatically detecting when a user is concentrating and filtering out non-essential notifications.

In the end, this responsibility should, in my opinion, not fall solely on the individual. With these powerful technologies becoming available to the general public, I do not think companies such as Meta can be trusted with the responsibility to protect their users against negative effects, so a conversation about regulation is essential. We already see legislative efforts, like New York’s SAFE For Kids Act, which aims to curb addictive social media features for minors by restricting algorithmic feeds and late-night notifications (NY State Senate Bill 2023-S7694A, n.d.). Similar principles should be applied to the field of wearable AI and AR. By enforcing ethical design principles through regulation, we can hopefully embrace the huge potential of these innovations without falling for the addictive pitfalls that have characterized the last decade of social media.

References

Aouf, R. S. (2025, September 24). Meta launches first AI smart glasses with integrated display. Dezeen. https://www.dezeen.com/2025/09/19/meta-ray-ban-display-ai-smart-glasses/

Li, C. (2024, March 2). Social Media Addiction x Meta Lawsuit. YIP Institute Technology Policy. https://yipinstitute.org/policy/social-media-addiction-x-meta-lawsuit

NY State Senate Bill 2023-S7694A. (n.d.). NYSenate.gov. https://www.nysenate.gov/legislation/bills/2023/S7694/amendment/A

Paul, K. (2023, November 29). Meta designed platforms to get children addicted, court documents allege. The Guardian. https://www.theguardian.com/technology/2023/nov/27/meta-instagram-facebook-kids-addicted-lawsuit

Vashist, S. (2023, October 16). “The Social Dilemma” Netflix Documentary: The Perils of Social Media. Medium. https://medium.com/@shashankvashist/the-social-dilemma-netflix-documentary-the-perils-of-social-media-37f601f84606


Do We Think Less When AI Thinks for Us?

25 September 2025


Generative AI: From Helper to Thought Partner?

The first time I heard of a generative AI tool was during my exchange semester in Budapest. A fellow student introduced ChatGPT to me, and it felt almost like magic. My fellow student asked ChatGPT to summarize a research article I had been struggling with, and within seconds I had a clear overview; it would have taken me at least an hour on my own. I used it for a few weeks and introduced it to some of my other friends, who were amazed as well. Since then, I have experimented with different tools: text-to-text models like ChatGPT for writing support and easy explanations of difficult exercises, and text-to-image models like DALL·E or Midjourney for creating visuals for presentations.

The most remarkable aspects are the speed and inspiration these tools offer. When preparing for group projects, for example, I used ChatGPT to brainstorm, such as asking it to give me an outline for a digital strategy for a company. The ideas weren’t perfect, but they helped my group get started much faster. In addition, image generators helped us visualize concepts that would have been difficult to explain with words alone. In this sense, GenAI acts like an assistant that can take on many tasks and complete them very quickly.

At the same time, the limitations are quite obvious. The quality of the results often varies, and the information is sometimes outdated or simply incorrect. I have also found that relying too heavily on AI can affect my own critical thinking, as I am sometimes tempted to accept the first answer rather than question it.

In the future, I would like to see improvements in two areas: first, better integration of reliable sources (imagine if ChatGPT could always generate citations in APA style correctly), and second, more transparency about how answers are generated and where the information comes from.

How about you? Do you use GenAI more as a brainstorming tool, or do you rely on it for polished results? And should universities encourage students to use these tools, or restrict them to protect independent thinking?


AI as a Personal Assistant

23 September 2025


I’d say I was quite late to start using gen AI. I was curious about it, but nothing more. I remember seeing my friends asking ChatGPT for things instead of googling them, so eventually I started doing that too. Since then, gen AI has become an important source of information for me, but also a guide for improving my productivity.

First of all, gen AI helps me with planning my days to ensure I have enough time for all my work, study and personal tasks. I use it to create study timelines for university subjects by giving it deadlines and the topics I need to study. I also use it for planning my workouts at the gym, as well as my weekly meals, to ensure I can reach my fitness goals.

Moreover, when I took a gap year between my Bachelor and Master studies to travel around Asia, AI helped me a lot with that as well. It was very useful for creating travel itineraries for different countries, taking my time frame, interests and budget into account. For example, when I went to the Philippines I wasn’t sure which places to go to, as there are over 7,500 islands. So I used gen AI to help me narrow it down. I specified my budget, the fact that I had two months in the country, that I was very interested in scuba diving and that I wanted to meet other young travellers. In a couple of seconds I had a complete itinerary of islands to visit that fit my exact criteria. In the end, the Philippines became one of my favourite destinations of the trip, and it took no time to create the perfect plan for it.

For me, AI has become something like a personal assistant. It helps me structure my day, achieve my goals, both academic and personal, and it saves me time. It allows me to focus on the things that are actually important to me.

I think AI doesn’t replace us but rather allows us to multiply our productivity and enrich our lives.
