PolicyPal

17 October 2025


Building PolicyPal: what we made, why it matters (and what is next)

If you have ever tried to read the GDPR on a Friday afternoon, you know the feeling. We kept hearing the same story from small businesses: we are drowning in rules, we do not have a lawyer, and Google is not cutting it. That is the seed of PolicyPal: a lightweight, GenAI-powered helper that turns legalese into plain, cited answers you can actually use.

The problem

We mapped the SME reality: they are 99% of EU businesses, yet they carry a huge chunk of the compliance burden. Most teams either outsource (expensive), copy templates (risky), or simply wait and risk the consequences. Add to that a flood of new EU regulations (GDPR, CSRD, the AI Act), each written in dense legal language and updated regularly. For small companies without in-house lawyers, figuring out what actually applies to them can take hours, sometimes days. The result? Missed deadlines, mounting costs, and a constant low-level anxiety about getting something wrong.

Some facts we found out

  • 99% of all EU businesses are SMEs — employing over two-thirds of the workforce.
  • SMEs carry ~90% of total EU administrative compliance costs (≈ €200 billion/year).
  • 55% of SMEs say regulation is their biggest barrier to growth.
  • 43% have delayed expansion or digitalisation because of legal complexity.
  • The global RegTech market is expected to grow from $16 billion (2024) to over $33 billion (2029).
  • 49% of surveyed SMEs already use technology for 11 or more compliance activities, showing both awareness and room for smarter tools.

What we built (so far)

Our prototype is a simple chat-style Q&A. You ask normal questions (“Do we need a DPO?”), and PolicyPal pulls from official Dutch GDPR texts, then drafts a short, plain-language answer with citations. Under the hood, we use retrieval-augmented generation (RAG): retrieve first, generate second, show your work. It is not wired to a live backend yet, but the experience, from question to cited answer to quick export, is there.
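The retrieve-then-generate loop can be sketched in a few lines of Python. Everything here is illustrative: the two-snippet corpus, the keyword-overlap scoring, and the templated answer are stand-ins (a real RAG system would embed the documents and call an LLM to draft the answer), but the shape is the same: retrieve first, generate second, cite your sources.

```python
# Toy sketch of a retrieve-then-generate (RAG) pipeline.
# The corpus entries and scoring below are illustrative, not real legal text.

CORPUS = [
    {"id": "GDPR Art. 37",
     "text": "a data protection officer is required when core activities involve large scale monitoring of individuals"},
    {"id": "GDPR Art. 30",
     "text": "records of processing activities must be maintained by the controller"},
]

def retrieve(question, corpus, k=1):
    """Rank snippets by naive keyword overlap with the question (stand-in for embedding search)."""
    q_words = {w.strip("?.,!") for w in question.lower().split()}
    ranked = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc["text"].split())),
                    reverse=True)
    return ranked[:k]

def answer(question, corpus):
    """Retrieve first, generate second, and always cite the source."""
    sources = retrieve(question, corpus)
    cited = ", ".join(doc["id"] for doc in sources)
    # A real system would pass the retrieved snippets to an LLM; here we just template.
    return f"Based on {cited}: {sources[0]['text']}."

print(answer("Do we need a data protection officer?", CORPUS))
```

The point of the pattern is that the answer is grounded in retrieved text the user can check, rather than generated from the model's memory alone.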

Who it is for

We are starting with Dutch SMEs, SaaS/IT, e-commerce, agencies, accountancies, and small clinics. The primary users are IT and HR managers who need fast, paste-ready answers with sources for tickets, policies, and stakeholder updates. Freemium lets teams try basic answers; upgrades unlock things like memo exports and audit trails.

What surprised us

Even without direct SME testing, a few things stood out while building and discussing the concept. First, the gap between regulatory language and what small business owners actually understand is wider than we expected. Second, the number of existing “compliance tools” that claim to simplify things but end up just moving the complexity somewhere else. And third, how many SMEs openly admit they rely on guesswork or outdated templates to stay “compliant enough.” It confirmed our hunch that the real problem is not access to information; it is access to clarity.

The rough edges

It is a front-end demo for now. The risks we are designing around are hallucinations, privacy, and over-reliance. We are aligning with ISO/IEC 27001 practices and the NIST AI RMF. For edge cases, we may add an optional human review layer; however, that is outside the main scope.

What we learned

Working on PolicyPal showed us that solving a regulatory problem is not just about adding AI; it is about understanding how people experience complexity. Our research made clear that SMEs do not suffer from a lack of information, but from a lack of structure and confidence in using it. Regulations are written for legal professionals, not small business owners, and technology alone cannot fix that gap without thoughtful design.

We also learned that transparency matters as much as accuracy. Features like citations, disclaimers, and visible sources are not just “nice to have”; they determine whether users trust the answer at all. In that sense, PolicyPal became less about automation and more about building digital trust in an area where mistakes are costly.

Finally, this project helped us understand how business models and technology choices shape each other. RAG-based systems, freemium adoption, and compliance verification are not just technical or commercial decisions; they reflect how a tool positions itself between accessibility and accountability. For us, that intersection of design, trust, and value creation is the most important insight we are taking from the work.


Copilot Can Code for you, But It Can’t Learn for You

9 October 2025


When the coding assignments came during my bachelor’s, I was already stuck in the “why won’t this code run” phase. I spent hours trying to fix a simple Python loop and couldn’t figure it out. So I decided to turn on Copilot.

At first, it felt kind of magical. I would start typing something, and Copilot would just finish it for me. The code looked neat, and it even added comments. It was nice, like having an assistant who actually knew what was going on.

After a while, it started saving me a lot of time. I could ask for a small function, and it would write something that worked. I didn’t have to search Stack Overflow every five minutes anymore. It felt easier to focus on what I wanted the program to do. Sometimes, I’d finish an entire exercise in a fraction of the time it used to take me. I started to feel confident, maybe too confident. It was easy to forget that I hadn’t really written most of the logic myself.

But then I noticed something. I was finishing faster, but I wasn’t really learning. I’d look back at my code later and couldn’t explain why it worked. It felt like someone else had written it. During group projects, when teammates asked how I solved something, I didn’t always have an answer. That’s when it hit me: I wasn’t stuck anymore, but I wasn’t improving either.

Copilot makes the code run, but doing your own projects makes you understand why it runs. The small experiments, the bugs, the late nights, that’s what builds the skill, not just working code.

Copilot is great for the basics (loops, syntax, and formatting), but it doesn’t really teach you how to solve problems on your own. When I got to a harder project, I realized I couldn’t just let it do everything.

That’s when I figured out that Copilot helps you go faster, but it doesn’t make you better. You still have to learn the hard parts yourself — the thinking, the debugging, and the moments of frustration that actually make you understand what you’re doing.


ChatGPT, Privacy, and Data Security: What You Need to Know

1 October 2025


Since its November 2022 launch, ChatGPT has become one of the fastest-growing digital platforms in history, surpassing 700 million weekly users (OpenAI, 2025a). This powerful language model can respond to almost any question or request in natural, human-like text. But behind the convenience lies a seriously overlooked concern: privacy and data security.

Every day, ChatGPT and other AI tools process millions of prompts. These tools learn from and store user interactions, sometimes including personal information. This raises questions about how that data is collected and used. This blog post covers the main privacy issues with ChatGPT, explains how OpenAI uses user data, and gives some advice on how you can keep your own data safe.

How ChatGPT Collects and Uses Data

According to OpenAI’s Privacy Policy, as of October 2025, the company collects several types of personal data when you use its services (OpenAI, 2025b). These include:

  • Account details such as your name, email address, and any third-party accounts you connect to your subscription.
  • Prompt content, meaning anything you type or upload (text, files, and/or images).
  • Technical data like your IP address, browser and device type, operating system, and location (based on your IP).

OpenAI uses this information to provide, maintain, and improve its services, detect fraud or misuse, comply with legal requirements, and develop new features (OpenAI, 2025b). While these uses are common in many online platforms, the sensitive nature of ChatGPT interactions (often involving creative ideas, business data, or personal details) makes data handling particularly delicate.

Sharing personal information (such as names, contact details, or medical data) or confidential business information (such as client records, financial data, or proprietary ideas) through platforms like ChatGPT without proper authorisation can amount to a confidentiality breach and, in many cases, a personal data breach under data protection law (Autoriteit Persoonsgegevens, 2025; ICO, n.d.).

Key Privacy Concerns Around ChatGPT

Here are five of the main risks users should be aware of:

1. Your content may have been used to train ChatGPT

ChatGPT was trained on a vast amount of open text from the internet, including websites, articles, forums, open data sets, and books (Heaven, 2023). There is no individual consent process for collecting publicly available online text, which raises legal and ethical questions. Internet users should always be aware of what they put online and make public.

2. ChatGPT collects extensive user data

To use ChatGPT, you must create an account. That means OpenAI receives identifying information before you even start using the tool. The combination of IP tracking, cookies, and content logs signals a high risk of profiling.

3. Your chats can be used for model training

OpenAI has stated that user conversations may be reviewed to help improve model performance. In practice, this means that something you type, even unintentionally, could become part of future model training data. This has led to cases like Samsung’s, where employees accidentally leaked sensitive source code while using ChatGPT to debug errors and summarise meeting transcripts (Ray, 2023).

4. ChatGPT may share data with third parties

OpenAI will share personal data with service providers, affiliates, or legal authorities in certain circumstances, for example, during audits, investigations, or when required by law (OpenAI, 2025b). While this is standard practice in big tech, it highlights how once data leaves the chat environment, users lose direct control over where it goes and who may process it.

5. Data leaks can happen

In March 2023, OpenAI experienced a data breach due to a bug in its system. Some users were able to see others’ chat histories and partial payment details (OpenAI, 2023). The breach was resolved within nine hours and affected roughly 1.2% of ChatGPT Plus users (OpenAI, 2023).

Privacy Issues for European Users

For users in the European Economic Area (EEA), the UK, and Switzerland, OpenAI provides a specific privacy policy to comply with the General Data Protection Regulation (GDPR). This policy grants you rights such as (GDPR, n.d.):

  • Accessing your personal data (Art. 15)
  • Requesting corrections or deletion (Art. 16–17)
  • Limiting or objecting to processing (Art. 18, 19, and 21)
  • Requesting data portability (Art. 20)

These rights are designed to give users more control over their personal information, so people can check what data is held about them and ensure that it is accurate and used fairly. They help ensure transparency and fairness when AI tools like ChatGPT process user data.

To exercise these rights, you can contact OpenAI via their Data Rights Portal or by emailing dsar@openai.com.

How to Protect Your Privacy When Using ChatGPT

While ChatGPT takes measures to secure user data, you can also take steps to reduce your own risk:

  1. Avoid sharing personal or sensitive information. Don’t include unnecessary personal details, and never any confidential work details, in prompts.
  2. Request data deletion. If you want to remove your data, fill out OpenAI’s deletion request form through their privacy portal.
  3. Limit consent. You can ask OpenAI not to use your chats for model training.

Settings > Data controls > Improve the model for everyone > Off

  4. Use a VPN (Virtual Private Network). A VPN hides your IP address, preventing ChatGPT from identifying your location.

Final Thoughts

ChatGPT is an amazing and widely used tool, but it is not risk-free. Using it responsibly means staying aware of how your data may be collected, stored, and shared.

AI thrives on information – make sure you decide which information it gets.

Sources:

Autoriteit Persoonsgegevens. (2025). What is a data breach? https://www.autoriteitpersoonsgegevens.nl/en/themes/security/data-breaches/what-is-a-data-breach

General Data Protection Regulation (GDPR). (n.d.). General Data Protection Regulation (GDPR) – legal text. https://gdpr-info.eu/

Heaven, W. D. (2023). The inside story of how ChatGPT was built from the people who made it. MIT Technology Review. https://www.technologyreview.com/2023/03/03/1069311/inside-story-oral-history-how-chatgpt-built-openai/

ICO. (n.d.). Personal data breaches: a guide. https://ico.org.uk/for-organisations/report-a-breach/personal-data-breach/personal-data-breaches-a-guide/#whatisa

OpenAI. (2023). March 20 ChatGPT outage: Here’s what happened. https://openai.com/index/march-20-chatgpt-outage/

OpenAI. (2025a). How people are using ChatGPT. https://openai.com/index/how-people-are-using-chatgpt/

OpenAI. (2025b). Privacy statement. https://openai.com/nl-NL/policies/row-privacy-policy/

Ray, S. (2023). Samsung bans ChatGPT among employees after sensitive code leak. Forbes. https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/


AI and the Threat to Gen Z in the Job Market

18 September 2025


Artificial intelligence is moving really fast, and people are worried about how it is going to change jobs. One of the biggest problems is that it’s not just taking away repetitive work but also hitting entry-level white-collar jobs. These are the jobs younger people, like Gen Z, usually start with to gain experience. If those jobs disappear, there is no clear path into senior positions.

Dario Amodei, the CEO of Anthropic, warned that AI could remove as many as half of all entry-level white-collar jobs (VandeHei and Allen, 2025; Duffy, 2025). He even said unemployment could go up to 10 to 20 percent in the next five years. This is a massive change in a very short time, and it will hurt younger workers the most.

The technology is improving so quickly that it is hard to keep up. A few years ago, people said AI was like a high school student; now it’s already more like a college student (Coursera, 2025). If it keeps improving at this speed, it will soon be able to do even more complex jobs.

Examples already illustrate the tension. Klarna replaced its customer service with AI, only to roll back after major failures, yet in other contexts, AI integrated into workflows has boosted human productivity dramatically. For instance, an MIT Sloan study found that AI tools increased call center (technical support) productivity by 14 percent, showing that when applied effectively, the technology can deliver measurable gains (Global Desk, 2025; Mangelsdorf, 2024). For individual workers, this duality (jobs disappearing on one hand, productivity gains on the other) creates uncertainty and anxiety.

Critics caution that pausing AI progress could allow rival nations to surge ahead and gain leverage. Yet focusing only on being first ignores the immediate risks, and speed alone cannot be the true measure of success.

However, there is also reason for hope. If AI really makes production cheaper and more efficient, it could lower costs across industries, allowing people to maintain a good quality of life with less. Used wisely, AI does not have to mark the end of opportunity. It can expand what humans are capable of, if its benefits are shared and its risks competently managed.

References

Coursera. “The History of AI: A Timeline of Artificial Intelligence.” Coursera, 2025, www.coursera.org/articles/history-of-ai.
Duffy, Clare. “Why This Leading AI CEO Is Warning the Tech Could Cause Mass Unemployment.” CNN, 2025, edition.cnn.com/2025/05/29/tech/ai-anthropic-ceo-dario-amodei-unemployment.
Global Desk. “Company That Sacked 700 Workers with AI Now Regrets It — Scrambles to Rehire as Automation Goes Horribly Wrong.” The Economic Times, 2025, economictimes.indiatimes.com/news/international/us/company-that-sacked-700-workers-with-ai-now-regrets-it-scrambles-to-rehire-as-automation-goes-horribly-wrong/articleshow/121732999.cms?from=mdr.
Mangelsdorf, Martha. “Generative AI and Worker Productivity | MIT Sloan.” MIT Sloan, 2024, mitsloan.mit.edu/centers-initiatives/institute-work-and-employment-research/generative-ai-and-worker-productivity.
VandeHei, Jim, and Mike Allen. “Behind the Curtain: A White-Collar Bloodbath.” Axios, 2025, www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic.
