The Smartest Colleague in the Room: AI as the New Internal Expert

17 October 2025


In a reality where technology is almost an intrinsic characteristic of our society, you would expect our communication channels to be highly efficient, right? Well… that assumption is not entirely wrong, yet we are still far from perfect efficiency, especially in the business world. Corporations and workforces are larger than ever, but somehow people feel more disconnected from work than before. The numbers are striking: almost 80% of workers feel disengaged from their work, which takes a real toll on workplaces.

Despite our different work experiences, we found common ground on a challenge we had all faced: a lack of access to internal company knowledge. Have you ever had questions that you did not know how to answer? “Who can I contact for IT support? Who do I contact for this matter?” Hours pass as you reach out to one colleague after another before you obtain the information you need. Each of these questions may seem like a minor inconvenience, but across an entire organisation these inefficiencies compound into significant losses of productivity and time.

What already feels like a burden at this stage is amplified even further by a major catalyst: miscommunication. Big companies may be successful, but their scale becomes their biggest challenge when they ignore foundational problems. Luckily, we were born in the same era as our new digital friend and saviour of proper communication channels: the AI agent.

One significant takeaway from this project was the potential of a well-designed AI assistant to revolutionise knowledge management and internal communication within organisations. When structured company data is integrated into an intelligent, conversational interface, employees can access information up to 30% faster, significantly improving productivity and the onboarding experience. Moreover, aligning the AI implementation with clear SMART goals (Doran, 1981) ensures measurable impact and organisational value: it gives employees metrics to track and sets a standard for further improvement.
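To make this concrete, here is a minimal sketch of what “integrating structured company data into a conversational interface” could look like. Everything in it is a simplified assumption for illustration (a tiny in-memory document store and a word-overlap relevance score), not the actual implementation from our project; a production system would use embeddings and a proper language model instead.

```python
# Minimal sketch of an internal knowledge assistant: find the company
# document most relevant to a question and answer from it.
# The documents and the scoring function are illustrative assumptions.

COMPANY_DOCS = {
    "it_support": "For IT support, open a ticket via the service portal or email the helpdesk.",
    "expenses": "Expense claims are submitted monthly through the finance portal.",
    "onboarding": "New hires receive their accounts and equipment on day one from HR.",
}

def relevance(question: str, document: str) -> int:
    """Count how many words the question and the document share."""
    q_words = set(question.lower().replace("?", "").split())
    d_words = set(document.lower().replace(",", " ").replace(".", " ").split())
    return len(q_words & d_words)

def answer(question: str) -> str:
    """Return the best-matching document, or a fallback if nothing matches."""
    best = max(COMPANY_DOCS.values(), key=lambda doc: relevance(question, doc))
    if relevance(question, best) == 0:
        return "Sorry, I could not find this in the knowledge base."
    return best

print(answer("Who can I contact for IT support?"))
# -> "For IT support, open a ticket via the service portal or email the helpdesk."
```

Even a toy like this shows the core loop of such an assistant: retrieve the right piece of internal knowledge first, then present it conversationally.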


When it comes to our business model, a user-centric design that meets regulatory compliance offers a sustainable advantage: it increases productivity and trust within organisations, especially those operating in highly regulated sectors such as financial services.

Let’s talk about impact!

The numbers are loud and clear: our AI agent solution turns operational inefficiency into measurable productivity. Firstly, it cuts the time employees spend searching for information by 30%, a substantial saving on its own (Bula et al., 2025). Secondly, onboarding time is shortened by 25%, as newcomers no longer rely solely on colleagues to answer their questions (Bula et al., 2025). Furthermore, to keep quality and compliance up to date, the solution makes clever use of self-reinforcing feedback loops, as discussed by Jullien et al. (2021), which drive regular updates and continuous learning.

The AI agent thus delivers a higher-quality customer service desk. In a world where time is the most valuable resource, our AI agent gives it back to you.

References:

Bula, A., Torres, B., Hu, M., & Fennis, W. (2025). Innovating business models with Gen-AI [Unpublished academic report]. MSc Business Information Management, Erasmus University Rotterdam.

Chui, M., Manyika, J., Bughin, J., Dobbs, R., Roxburgh, C., Sarrazin, H., Sands, G., & Westergren, M. (2012, July). The social economy: Unlocking value and productivity through social technologies. McKinsey & Company. https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/the-social-economy

Doran, G. T. (1981). There’s a S.M.A.R.T. way to write management’s goals and objectives. Management Review, 70(11), 35–36. https://community.mis.temple.edu/mis0855002fall2015/files/2015/10/S.M.A.R.T-Way-Management-Review.pdf

Jullien, B., Pavan, A., & Rysman, M. (2021, July). Two-sided markets, pricing, and network effects (TSE Working Paper No. 21-1238). Toulouse School of Economics. 

Jus, M. (2025, July 9). Why so many feel disconnected at work and how to reclaim meaning. Medium. https://medium.com/@monika.jus/why-so-many-feel-disconnected-at-work-and-how-to-reclaim-meaning-be2f09bcb539


How AI changed my FYP and probably yours too

10 October 2025



I used to think that I would never use TikTok but, just like many others, I enjoy a bit of scrolling. Maybe a lot… I open the app, I start scrolling, and suddenly an hour has passed. With every scroll and every second I spend on TikTok, I feed the algorithm more data, to the point that it knows me a bit too well. My For You Page (FYP) does not just show what I like; it predicts what I will like next. However, with the content that has recently taken over my FYP, I really wonder: where are we going with all this new AI-generated content?

It looked so real. “Did Jake actually do his own makeup?” Those were my exact thoughts when I stumbled across the new Jake Paul videos. When OpenAI updated its Sora app, AI-generated content took my FYP by storm. Not only were the visuals realistic, but the voiceover also sounded exactly like him. The AI content started quite innocently; I didn’t mind watching bears jumping on a trampoline. Now, however, it has evolved to actual human beings being modified for content. I cannot help but wonder whether the current technology is actually good for us. The possibilities of the new AI are endless, making it more and more difficult to draw the line between what is authentic and what is synthetic.

Personally, it makes me feel a bit uneasy that this new kind of content has become so accessible. I also want to reflect on what we are actually watching: what value do these videos give us? Do we really get satisfaction out of them? I think it is good to stay critical of how this new AI can genuinely enhance our lives. That being said, I am curious how it will evolve and whether it will truly change our entertainment. Maybe the scariest part isn’t that AI is changing what we see, but that it is quietly changing what we expect to see.


When AI crosses the line between help and harm

9 October 2025


*WARNING: this blog discusses mental health and suicide. Reader discretion is advised.*

I recently read about a heartbreaking case in which parents are suing OpenAI after the death of their teenage son (Yousif, 2025). According to the article, the boy shared all his struggles and mental health issues with ChatGPT, and the programme became his closest confidant: from writing out his thoughts, to uploading pictures, to eventually even discussing his suicide plan. The AI had recognized the emergency signs, but it continued the conversations anyway; some might even say that, rather than helping, the chatbot encouraged the teenager. This case really shocked me. It is terrifying to imagine someone turning to an AI tool in such a vulnerable moment, and it is even scarier to think that AI can respond in a way that feels genuine but isn’t truly human.

What I find most unsettling is how supportive and natural these tools can sound. When you are sad or lonely and something responds caringly and thoughtfully, it is easy to forget that it is not a real person. The chatbot understands neither empathy nor pain; it simply predicts which words should come next. Moreover, AI does not know when to protect or when to escalate the way a human would. For someone in crisis, its words of encouragement can weigh heavier than we think.

The big question, then, remains: how do we prevent cases like this from occurring in the future? I believe that companies like OpenAI carry a significant responsibility here. If people are using ChatGPT for emotional support, there should be safeguards: the AI should be able to recognize signs of mental distress and connect users with real help, such as a hotline or professional support. Another option would be to not enable emotional support at all, but then which prompts would you accept and which would you refuse?
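To sketch what such a safeguard might look like, here is a deliberately simple example. The keyword list, the `looks_like_crisis` check, and the hotline message are all hypothetical placeholders; real systems would rely on trained classifiers and carefully designed escalation policies, not a few lines of Python.

```python
# Hypothetical sketch of a safeguard layer in front of a chatbot.
# The keyword check and hotline text are illustrative placeholders only;
# this is not how OpenAI or any real provider implements safety.

CRISIS_SIGNALS = ["suicide", "kill myself", "end my life", "self-harm"]

HOTLINE_MESSAGE = (
    "It sounds like you are going through something very difficult. "
    "You don't have to face this alone. Please reach out to a crisis "
    "hotline or a mental health professional in your country."
)

def looks_like_crisis(message: str) -> bool:
    """Naive check for crisis signals; a real system would use a
    trained classifier rather than a keyword list."""
    text = message.lower()
    return any(signal in text for signal in CRISIS_SIGNALS)

def safeguarded_reply(user_message: str, chatbot_reply: str) -> str:
    """Route users showing signs of distress to real help instead of
    returning the model's ordinary answer."""
    if looks_like_crisis(user_message):
        return HOTLINE_MESSAGE
    return chatbot_reply
```

The point is not the code itself but the principle: detection and escalation should sit outside the conversational model, so that a distressed user is handed to real help rather than to more predicted text.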

This story really made me realize how powerful and risky AI can be. It mirrors us so well, that we start to believe it is like us. What do you think? Should AI ever offer emotional support? Or should that spot always be filled by humans? In this new digital age, the biggest challenge is finding where to draw the line before technology crosses it for us.

Sources:

Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.com/news/articles/cgerwp7rdlvo
