The slippery slope of AI

10 October 2025


The idea of a technology that can talk to us, understand us, and help with tasks the way a human would has fascinated humanity for decades. From HAL in 2001: A Space Odyssey more than 50 years ago to JARVIS in Iron Man, our fiction has long been filled with intelligent machines assisting or even surpassing us. Today, that idea is closer to science fact than science fiction. While AI is (thankfully) not sentient like in the movies, tools such as ChatGPT have given us something very similar: software that can converse with us and create and refine solutions across different fields of study, enhancing or even entirely doing our work.

I was still a university student when ChatGPT first rose to prominence, and I witnessed its impact firsthand. It transformed how students worked almost overnight. Assignments that once took weeks could now be completed in a day with trivial ease, often by feeding lecture notes into the AI and asking it to generate full essays. While this made workflows faster and more efficient, it came at a cost. These assignments were originally meant to reinforce understanding of the subject, but now students could submit polished work without truly learning the material. The consequences showed in exams: many who relied on AI struggled in closed-book tests and couldn't recall even the basic concepts of their courses.

This raises an uncomfortable question: what happens to the value of higher education if students graduate without genuine mastery of their subjects? I began to notice this among my friends and realized that the problem wasn't the technology itself but how it was being used, with many of them treating AI as a part of themselves rather than as an assistant. The most rational approach, I found, was to use AI not to replace my learning but to supplement it: as a tool to fill gaps in my knowledge and as an intellectual sparring partner with which I could discuss the content.

The advent of AI should push educational institutions to rethink how they teach. Instead of focusing on assignments that simply test recall, educators could design tasks that demand creative reasoning, problem-solving, and critical thinking: areas where collaboration with AI becomes an extension of human intellect rather than a shortcut around it. The arrival of ChatGPT will not mark the end of authentic learning, but the beginning of its evolution, one that rewards understanding and the ability to apply knowledge creatively over raw output.


The hidden risks of productivity suites

19 September 2025

Generative AI is now being widely embedded into the productivity suites we use every day. While this has made completing tasks more intuitive and efficient than ever, the trend carries a worrying implication: are we being trapped inside these productivity ecosystems?

This process is commonly referred to as platform envelopment, and the integration of AI copilots into Microsoft's Office 365 suite is a textbook example: Microsoft is not just enhancing its software, it is bundling its AI into an already dominant platform. According to platform theory, envelopment occurs when a provider leverages components of its products and overlapping user bases to subsume adjacent markets. Microsoft does exactly this by tying its AI assistant, Copilot, tightly to the world's most widely used enterprise suite, Office 365.

The strategic implication for Microsoft is obvious: leaving Office now means giving up not only familiar productivity tools but also the AI workflows embedded in them. This raises switching costs and deepens dependency, positioning Microsoft as the default provider of enterprise AI. For the firms that adopt these platforms, however, the risk of lock-in is evident: once employees restructure their workflows around Copilot, alternatives (e.g., Google Workspace) become harder to adopt, even if they offer better functionality.

The dilemma, then, is whether integration makes companies more efficient or more vulnerable. By embracing Copilot, firms gain productivity enhancements and other synergies, but they risk tying their fate to Microsoft, locked into its pricing, its data governance, and whatever direction it takes its products next. For example, firms are concerned about Copilot potentially "oversharing" within an organization, which can expose confidential files if access permissions are not properly managed (Concentric AI, 2024). Salesforce CEO Marc Benioff criticized Copilot as "disappointing", citing issues around oversharing and security (Business Insider, 2024).

Google's handling of AI shows envelopment playing out in a similar way. AI features such as Gemini are now bundled directly into Business and Enterprise plans, meaning firms can no longer easily opt in or out of AI functionality: it has become part of the base product. Such bundling has simplified some processes, but it has also drawn complaints about price hikes, and the path to opting out remains unclear (PPC Land, 2024).

With platform envelopment now supercharged by AI, firms are left with two questions going forward: do the productivity and integration benefits outweigh the vulnerability these platforms create, and is there a viable route to maintaining autonomy while still reaping the benefits?

References:
Business Insider. (2024). Marc Benioff says Microsoft’s Copilot is disappointing and raises security concerns. Retrieved from https://www.businessinsider.com/marc-benioff-salesforce-microsoft-copilot-clippy-2024-9
Concentric AI. (2024). Microsoft Copilot Data Risks Explained. Retrieved from https://concentric.ai/too-much-access-microsoft-copilot-data-risks-explained/
PPC Land. (2024). Google forces Gemini AI on Workspace users amid pricing complaints. Retrieved from https://ppc.land/google-forces-gemini-ai-on-workspace-users-amid-pricing-complaints/
