I would like to reflect on my use of GenAI in the corporate space. During my gap year, I participated in two internships, and I feel like I got to experience the full range of AI adoption in companies. In my first internship, Copilot was barely accessible because you first had to complete a 3-hour training session before you could use it. AI was treated as something you had to handle with kid gloves, otherwise it might cause huge, irreparable damage. On the other hand, in my second internship, I was told that I could use any GenAI, with the only rule being “Just don’t put any sensitive information in there”.
That difference was striking to me. Although the two companies operated in completely different industries, it really felt like Company 1 was artificially slowing itself down. At Company 2, however, I was very much encouraged to use GenAI, and my colleagues were open about how they used it as well.
I noticed big productivity gains for myself and the work I produced when I was able to use GenAI freely. Although it was usually smaller things the AI helped with, such as reformulating a sentence, these efficiencies added up and made my work noticeably easier.
One example that comes to mind is the note-taking function during meetings. During a rather technical meeting where a colleague explained the inner workings of a system, I was able to concentrate on what was being said rather than trying to jot everything down as quickly as possible so as not to miss anything. Afterwards, I could go back through the notes for details and things I might have missed.
So, from my own experience, I would say that companies need to make sure they don’t lose themselves in bureaucracy and rules. Yes, it is important to keep sensitive information safe, but being overly cautious can also lead to productivity losses for the company and its employees.
Have you had any similar experiences during your own work? I’m looking forward to reading about them!
I really liked how you compared the two different approaches the companies had toward GenAI. It makes the difference super clear, and I think it would be really interesting to see how this plays out in the long run. Like, does the strict company actually protect itself better, or are they just slowing themselves down compared to the more open one? It feels like there should be a way to measure the value these approaches bring, but I guess there isn’t really a tool for that yet.
I also wanted to ask about the sensitive info part. In your second internship, where the rule was basically “don’t put sensitive info in there,” did people actually know what counts as sensitive? And did they stick to those rules? I feel like in practice that could be a bit blurry, so I’d be curious to hear how that worked in your experience.
Thank you for your thoughtful comment!
I agree that it will be very interesting to see how these companies tackle AI challenges in the future.
To answer your question: I think most people were quite aware of what sensitive information entails. Since we also worked with client information, my colleagues were quite attuned to the issue and stuck to the rules. For example, for simple, non-sensitive tasks we mostly used ChatGPT, while for tasks involving sensitive data we used the company-internal GenAI chatbot to keep the data safe.
Nice paper; thank you for sharing your experiences of how AI functioned within each company and how it differed. The contrast is astonishing and shows how differently companies have progressed in adopting AI into their business processes. What is especially surprising to me is your experience at your first internship. While a short training course seems logical to me to make sure that employees use AI productively and responsibly, I am surprised that the company then does not embrace this new software to boost productivity and growth within its workforce. Sooner or later, all companies will have to adopt such technologies, as avoiding them will leave firms falling behind. So shouldn’t firms instead find ways to configure AI to match their needs and security policies?
During my career thus far, I have done three internships, all of which embraced the use of AI, although some embraced it more freely than others. For example, one of my internships took place in a start-up/scale-up. In that environment, nearly all employees used AI on a daily basis, and the company embraced it as well. On the other hand, another internship of mine took place in a larger corporation, one which evidently handles far more sensitive information. Nevertheless, the company signaled that it understood the importance of AI usage in its workforce. As a result, it designed its own private AI, which avoids the leakage of sensitive information by keeping everything internal. That brings me to my final thought on your paper: while there are always ways a firm can tailor AI to fit its needs, should all firms head in this direction?
Interesting reflections! After reading about your experiences, I’m left wondering whether it all comes down to company culture. In the first example, with its 3-hour trainings and strict rules, it comes across as if the company sees its workers as risks, whereas the second company seems to trust its workers much more. So it might just be how the employer views its workers: on the one hand as risks, on the other as capable professionals. Is a company’s AI policy a predictor of how it views and implements innovation?
Interesting points made! I have had a similar experience with the use of AI in different companies. My current company supports the use of AI. We have our own GPT, which we can use for all our projects and which is also integrated into the company’s SharePoint. Furthermore, we can even use ChatGPT, with one limitation: just don’t put any company or client data into it. We are even experimenting with GenAI for designing slides. This use of AI has certainly helped me become more efficient and better at my job.
At the same time, on the client side, especially in big corporations, I see AI being treated like porcelain rather than as a value-adding tool. Some clients are now trying to incorporate AI, but it is a long and cumbersome process.
I would agree that companies are currently losing themselves in bureaucracy and rules. Rules and guidelines on the use of AI are important, but once too many are in place, potential added value is left behind.