Building Applications Has Become Easier Than Ever

10 October 2025


A couple of months ago, a client at my student consultancy job asked us to automate a document anonymization process at their real estate agency. Due to data protection requirements, the processing had to be done locally or within their Microsoft environment.

After some unfruitful experimenting with Power Automate, we decided to try building our own tool with Claude (an LLM by Anthropic). With a bachelor’s degree in international business, I had no coding knowledge whatsoever. The results were amazing: within a few hours, we had some basic capabilities established.

After a while, I wanted a more efficient process than copy-pasting Claude’s changes into my project files. A new tool, Claude Code, had just been released; it lets the AI work in your project files directly. I had to watch a few tutorials and do some error-fixing with ChatGPT to get it running in a container (a sealed-off environment on my laptop). After about two hours we were ready to go.

The result was a developer working at lightning speed. It could code, test, readjust and retest until everything worked, all in one go. You watch it break down the task into sub-tasks and tackle them one by one. Alternatively, you can put it in plan mode, so it brainstorms about what you want, comes up with multiple alternatives with pros and cons, and executes one when you give the word. While it is executing that piece, you can open a second, third or fourth window to work on a different issue. You can quite literally run an entire team of coders at the same time, while you only manage them.

However, it’s not perfect. Fixing more complex bugs, especially, can be an issue. Sometimes, after showing it the problem and asking for a solution several times, it still won’t be able to fix it. Since I do not know anything about code myself, I had to get creative.

Firstly, working modularly helps you pinpoint the issue to a specific module. You can then ask Claude to zoom in on that module and come up with possible causes. With plain logic you can often judge its suggestions, and that way you help Claude get closer to fixing the issue.

Sometimes, it gets stuck in a certain thinking path it has gone down. In that case, it can help to get a second opinion: you open a second window or ask a different LLM (e.g. ChatGPT) to look at the issue. This way it is not biased by the context in your current conversation or its LLM-specific knowledge. More than once this resulted in it immediately recognizing the real issue, and me being frustrated that I had spent half an hour trying to fix it in the initial chat.

All in all, I was amazed by the possibilities. Getting it all set up was a bit of trial and error, and it takes quite some time to think through the implications of architectural choices. But once you have done that, it builds full-fledged applications in minutes.

New AI tools are being released quicker than we can learn to use them, so adaptability seems more important than ever. Just being able to build applications is not enough either: just as before coding became so much easier, you still need a business case for the application. All in all, I think it’s a great time to be a business student with an interest in technology.

To anyone else who has been experimenting with AI tools for coding: what tools do you use and what best practices have you discovered?


“Any last words?” – How AI is stealing our voices.

27 September 2025

What if artificial intelligence begins to control the actions we take on a daily basis? This question haunts me more often than I would like to admit. I feel both fascination and worry about the rapid progress of linguistic AI applications like ChatGPT, which are subtly working their way deeper into our private spaces. They are no longer distant tools; they are slowly creeping into our thoughts, shaping how we search, write, and even think. I must confess that I am guilty too. Whether it is looking up a fancy recipe, fixing a tire, or condensing hours of lecture notes into something manageable, I turn to AI. For these purposes, I actually encourage it. These are the fruits of technology, ripe for us to taste, but do they still taste as sweet as before?

The problem lies elsewhere: it is when we begin outsourcing not just our tasks, but our very words. Through the shapes of our essays, stories, and expressions, I fear we are losing something irreplaceable, which is the art and power of human language. Words have always been the essence of what makes us human. They can move nations, comfort the broken, and ignite revolutions. Take Martin Luther King Jr.’s immortal statement, “I have a dream”. Or the centuries-old anthem, “God save the Queen”. Or the enchantment at the start of every age-old bedtime story, “Once upon a time…”. Even a cartoon character, like Uncle Iroh from Avatar, reminds us of the tenderness words can carry: “Leaves from the vine, falling so slow.”

We have always had a weakness for words; they work magic on the human body, which is why putting them together is called spelling. The right words at the right moment have altered the course of history: Rosa Parks memorably stating, “Stand for something or fall for nothing”, or the collective anthem “We Are the World” moving millions into action. These words came from human souls, and that is precisely why they pierced so deeply into human hearts.

These days, however, this magic is being hollowed out. When AI generates our speeches, our essays, our job applications, or even our declarations of love, are we not letting our uniqueness slip away? We disguise ourselves and escape from ourselves in borrowed voices. In doing so, where we should have become stronger, we are becoming weaker. This removes us further from the very humanity that gave birth to language in the first place.

Everywhere we look, AI-generated words slip past our control: from the billboards of your local bottega, to the tweets of political figures who polarize and inflame, to the job applications we mail. If we do not take this matter back into our own hands, heads, and hearts, we risk losing a part of ourselves. The very chance of losing the raw, imperfect, deeply human character that once defined us has become reality.

So I ask myself, and now I ask you: what will become of us when we confess too much to AI, when we feed it not only our knowledge but our very voices? Will we recognize ourselves in the mirror when the words on our lips no longer belong to us? Or will we become echoes of an intelligence that was never meant to replace, but only to “assist”?

The challenge is not to reject such AI applications but to redefine our relationship with them. AI should, as originally intended, be a tool to enhance our understanding, not a substitute for our consciousness and character. It should sharpen our thinking, not dull our capacity for expression. If we allow it to replace our words, we are allowing it to replace our humanity. But when used wisely, by keeping our voices, our characters, and our stories intact, then perhaps we can ensure that the art of words, the very magic of being human, is not lost but renewed.

Because in the end, words still belong to us. And we have to make sure they always will.


Russia’s Cyberwarfare on Europe: Why Cybersecurity Improvements Are Imperative

17 September 2025


Russian hackers breached a Norwegian dam earlier this April, taking control of its operations for over four hours before detection. They opened a floodgate, releasing water at over 500 liters per second (Bryant, 2025).

Even though the damage was limited, this cyberattack, like many others, serves as a tool to spread fear and chaos among populations. These aggressive operations have expanded beyond espionage and political coercion to vital infrastructure across industries.

Norway’s dam was not energy-producing; it was used for fish farming. This matters because Europe’s lifeline infrastructure rests on dams, hydropower stations, and energy systems. By manipulating even a small dam, Russia exposed a weakness, signaling: ‘We can reach your energy systems too.’

Just yesterday, hackers targeted hospitals and urban water supplies in one of Poland’s largest cities. Dariusz Standerski, deputy minister for digital affairs, confirmed that the government is allocating €80mn this month to strengthen the cyber defenses of water management systems (Milne, 2025).

Beyond physical damage, Russian cyberattacks also aim at eroding trust in government. Liverpool City Council has revealed that, for the past two years, its IT infrastructure has been under relentless attack from the Russian state-funded group Noname057(16). Several other UK councils have faced similar assaults during the same period. (Waddington, 2025)

These incidents highlight a broader truth: cyberwarfare represents digital disruption in its most dangerous form (Weill & Woerner, 2015). Europe’s safety is now threatened by its digital vulnerabilities, and thus the bloc needs a swift response. AI-driven monitoring and anomaly detection offer ways to anticipate and neutralize attacks in real time (Zhao et al., 2023; Li, 2023). Moreover, as Furr & Shipilov (2019) argue, building resilience does not require disruption; it can come from incremental adaptation. Europe should add layers of protective systems over its old infrastructure without crippling operations (Birkinshaw & Lancefield, 2023). 
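
To make that concrete, below is a minimal sketch of the anomaly-detection idea, assuming hypothetical flow telemetry and scikit-learn’s IsolationForest; a real system would monitor far richer signals (valve states, network traffic, operator logins) in real time.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical telemetry: outflow readings (liters/second) under normal operation
rng = np.random.default_rng(42)
normal_flow = rng.normal(loc=120.0, scale=10.0, size=(1000, 1))

# Fit the detector on the normal baseline only
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_flow)

# Score incoming readings; a sudden 500 L/s release stands out immediately
incoming = np.array([[118.0], [124.0], [500.0]])
print(detector.predict(incoming))  # 1 = normal, -1 = anomaly

The point is not the ten lines of code but the architecture: a protective layer like this can sit on top of legacy infrastructure without replacing it, exactly the kind of incremental adaptation Furr & Shipilov describe.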

In practice, Europe must move past reactive spending and focus on building a reliable, AI-integrated cybersecurity strategy across vital infrastructure. The battleground is no longer just physical or near the Russian border. It is increasingly digital and affects everyday lives across the continent. 

This raises the question: Should cybersecurity be treated as a matter of national defense, or as an EU-wide responsibility shared across borders?

Sources:

  • Bryant, M. (2025, August 15). Russian hackers seized control of Norwegian dam, spy chief says. The Guardian. https://www.theguardian.com/world/2025/aug/14/russian-hackers-control-norwegian-dam-norway
  • Birkinshaw, J., & Lancefield, D. (2023). How professional services firms dodged disruption. MIT Sloan Management Review, 64(4), 34–39. 
  • Furr, N., & Shipilov, A. (2019). Digital doesn’t have to be disruptive: The best results can come from adaptation rather than reinvention. Harvard Business Review, 97(4), 94–104. 
  • Milne, R. (2025, September 12). Russian hackers target Polish hospitals and city water supply. Financial Times. https://www.ft.com/content/3e7c7a96-09e7-407f-98d7-a29310743d28 
  • Waddington, M. (2025, September 17). Liverpool City Council under “increasing” Russian hack bot attack. BBC News. https://www.bbc.com/news/articles/cgj18z99dx0o
  • Weill, P., & Woerner, S. L. (2015). Thriving in an increasingly digital ecosystem. MIT Sloan Management Review, 56(4), 27–34. 
  • Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., & Du, Y. (2023). A survey of large language models. arXiv preprint arXiv:2303.18223. https://doi.org/10.48550/arXiv.2303.18223


Smile to Pay: How Biometrics Is Changing Payments in Russia

13 September 2025

Imagine not needing a wallet or a phone to make a purchase. Imagine that all you need is your smile. Sounds unrealistic, strange, scary… Probably all of it. But also pretty cool!

Nowadays, we’re used to mobile banking, contactless payments and online shopping. But Sberbank (Russia’s largest bank) has gone further: you can now pay with just your smile. The system is called “Smile to Pay” and it uses facial recognition to process payments, with a smile confirming the payment. It’s fast and very convenient for customers (РБК, 2025).

Launched in 2023 as a response to Apple Pay and Google Pay leaving the Russian market, the technology connects a unique ID to a customer’s biometric data. This ID is also linked to the customer’s bank account (Bikker, 2025). At first, only Sberbank’s customers could use Smile to Pay, but it is now available to all banks across Russia. Like many technologies, it first rolled out in major Russian cities such as Moscow, Saint Petersburg, Yekaterinburg and Novosibirsk. Now, in 2025, it has expanded to most regions of Russia (GFCN, 2025). It has gained popularity among customers because of its convenience, but also because Sberbank introduced loyalty perks such as cashback. The benefits for businesses include faster processing times, shorter queues and fewer customers leaving without purchases due to a forgotten phone or wallet (РБК, 2025).

Some IT experts, however, advise against using this technology because facial biometric data is easier to fake with just a photo and some gen AI; it is less secure than fingerprints or retina scans (Бояршина, 2023). Additionally, it’s hard to know how far we can trust the Russian government with its citizens’ biometrics. Even if the bank states that the data is stored securely and cannot be stolen, the government still has access to it. In a country where basically everything is controlled, it’s easy to imagine your smile becoming more than just a payment method: a tool for surveillance.

Personally, I’m not comfortable using it right now because I don’t want to give the Russian government more ways to track where and when I go. However, this technology is slowly being adopted in other countries too; China and the US, for example, are both bringing smile-to-pay features into their services (Bikker, 2025; Morgan, 2024). Does that mean it is the future and will inevitably become a standard payment method across the world? I don’t know…

In the end, Smile to Pay goes beyond a simple innovation. It offers a vision of the future where payments are easy and high-tech. But it’s also a reminder that even exciting new technologies can have a dark side.

SOURCES:
Bikker, G. (2025). Sberbank ‘Smile-to-Pay’ service goes viral as biometric payment installations pass one million milestone. TechBullion. https://techbullion.com/sberbank-smile-to-pay-service-goes-viral-as-biometric-payment-installations-pass-one-million-milestone/
Global Fact-Checking Network (GFCN). (2025, May 5). Smile-to-pay service in Russia — Fake or real? Global Fact-Checking Network. https://globalfactchecking.com/smile-to-pay-service-in-russia-fake-or-real/
Morgan, J. (2024). Pay by smile: In-store biometric payments in the U.S. J.P. Morgan. https://www.jpmorgan.com/insights/payments/payment-trends/in-store-biometric-payments
Бояршина, А. (2023). Хотите расплачиваться на кассе улыбкой? IT-эксперт не советует так делать [Want to pay at the checkout with a smile? An IT expert advises against it]. Secret Firmy. https://secretmag.ru/zhizn/khotite-rasplachivatsya-na-kasse-ulybkoi-it-ekspert-ne-sovetuet-tak-delat-07-08-2023.htm
Что такое «Оплата улыбкой»: как работает и как ее подключить [What “Pay with a smile” is: how it works and how to enable it]. (n.d.). РБК Инвестиции. https://www.rbc.ru/quote/news/article/67bb37ff9a79475c2cd1acc8


My CLI frustrations and how ChatGPT solved them.

4 October 2024


About a year ago, I started delving into the world of self-hosting services: things such as game servers, cloud storage and Netflix alternatives. The idea was to be less dependent on SaaS providers, and since I had a spare laptop lying around anyway, why not give it a go? So the first thing I did was install Proxmox, a hypervisor to separate out the different services I was planning to set up.

This is where my struggles started. As you might be aware, most servers run on a Linux machine without a GUI, and I soon discovered that Proxmox also primarily uses a command line interface. For those not aware, a CLI is where you type text commands to make your computer do anything at all; for example, “cd /usr/home” takes you to that folder.

While I got a grasp on the basics relatively quickly, the complexity of what I wanted to achieve increased just as fast. This is where ChatGPT came to save the day: with GPT-4o it could actively search the internet and scan through documentation to create the specific command I required. Instead of needing to write in computer language, I could explain to ChatGPT what I was trying to do, and it would generate the exact commands I needed.

● myservice.service – My Custom Service
   Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Mon 2023-10-02 12:34:56 UTC; 5s ago
  Process: 1234 ExecStart=/usr/bin/myservice (code=exited, status=1/FAILURE)

It helped with reading these kinds of error messages as well; anyone familiar with them knows they are completely unreadable if you don’t know all the documentation.

While you still need to be relatively tech-savvy to set up your own services, I believe that as gen AI develops further, it will only get easier.

You may wonder what the advantages are of going through all this hassle instead of simply using Netflix, Google Drive, and OneDrive. As we all know, a couple of tech giants have monopolized many of the daily services we use. They collect our data in massive quantities, creating serious privacy concerns; furthermore, they suppress innovation within the field. Hosting your own services ensures that you minimize the amount of data you put on the internet.

Furthermore, many SMEs use several services for which they pay massive licensing and hosting fees each year. If these new tools help SMEs set up their own servers, they become less dependent on third-party pricing and can save costs.

All in all, I believe the support LLMs provide in setting up your own services democratizes the internet and reduces the power of the tech monopolies, something that should be celebrated by anyone who supports free markets.

Sources:

https://www.proxmox.com/en

https://pixabay.com/vectors/command-shell-terminal-dos-input-97893


Law & Order & AI – How California’s Bill SB 1047 will impact AI development in the USA

27 September 2024


The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect where the USA is now following the EU: AI regulation. In this blog post I will discuss the Californian bill “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, which currently awaits the signature of the Governor of California (California Legislative Information, 2024).

With the Artificial Intelligence Act (AI Act), the EU has created one of the most far-reaching efforts in the world to regulate AI, even though it is not yet fully in force. As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the bill would only become law in California. Nonetheless, this more or less means nationwide application, since most affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I think are most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers will have to have their models tested, assessed, and regularly audited (Gibson Dunn, 2024).

Like the AI Act, SB 1047 targets high-risk, high-impact AI models, while focusing on the safety and security of the people affected by AI.

But why would you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted about his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!

Screenshot of Musk’s tweet supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked while still using it for vital parts of our daily life? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving big systems unchecked, because they were deemed “too complex” or because we trusted the people running them to act in the public’s best interest, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or maybe lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their life is (beneficially) impacted by AI currently, I don’t want anyone to ever experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Tweet]. Twitter. https://x.com/elonmusk/status/1828205685386936567

Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The featured image was generated by ChatGPT.


Toxic Code: How Poisoning Attacks Are Undermining AI Systems

16 September 2024


In the rapidly evolving world of artificial intelligence (AI), not all advancements are aimed at making systems smarter. Some are designed to make them fail. Enter poisoning attacks, a new form of sabotage that can turn intelligent systems against themselves. But how do they work, and should we really care?

What Are Poisoning Attacks?

Imagine teaching a student a mix of good and false information. If you sprinkle enough false information into the lessons, even the brightest student will come to some incorrect conclusions. In AI, poisoning attacks work similarly: an attacker corrupts the data used to train the model, with the intent of causing errors once the AI is deployed (Shafahi et al., 2018). For example, consider a self-driving car trained on images of road signs. If an attacker can poison the system with even a small number of false images that mislabel a “stop sign”, the car could misunderstand traffic rules and become dangerous not only to the people in the car, but to everyone on the street (Wang et al., 2023).
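
To get a feel for the mechanics, here is a minimal sketch of the simplest variant, label flipping, assuming a toy scikit-learn dataset; real attacks such as the clean-label poisoning studied by Shafahi et al. (2018) are far subtler.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for a real training set
X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The "attack": silently flip 10% of the training labels
rng = np.random.default_rng(0)
flip = rng.choice(len(y_tr), size=len(y_tr) // 10, replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_te, y_te))
print("poisoned test accuracy:", poisoned_model.score(X_te, y_te))

Note that the test data is untouched; only the training labels were tampered with, yet the model’s behavior degrades, and the gap widens as the poisoning fraction grows.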

[Figure: understanding types of AI attacks (Dash & Bosch AIShield, 2023)]

Real-World Impact: Why Should You Care?

Poisoning attacks aren’t just a theoretical risk; they are a real threat to AI systems today. Take, for example, GitHub Copilot, an AI-powered code-completion system that helps developers autocomplete their code in real time (GitHub, 2023). An attacker could poison Copilot’s training data and steer it towards generating vulnerable code with security defects (Improta, 2024). While this seems like a problem that only impacts coders, it can cause problems for other people as well: vulnerable code can result in everyday people losing their private data, as in the recent Social Security Number breach in the USA (Chin, 2024). Another example of how poisoning attacks can affect your everyday life is social media. Algorithms could be altered to determine what goes viral or to spread misinformation by pushing fake news to a large number of users. This is a scary thought, as news is increasingly filtered by AI.

Defending Against Poisoning: A Losing Battle?

Defenses against poisoning attacks are evolving every day, although attackers often seem to be one step ahead. Anomaly detection systems are being integrated into AI pipelines, but the question is: how much of the data needs to be poisoned before it no longer registers as an anomaly (Huang et al., 2022)? As Kurakin et al. (2016) highlight in “Adversarial Machine Learning at Scale”, vulnerabilities are being exploited by attackers in real time, creating a race between “poison” and “antidote”. Still, continuous advancements in AI security and collaboration among researchers are making defenses smarter, aiming to outpace attackers and making the future look promising for AI-based systems.

Conclusion: Can We Trust AI?

AI holds a great deal of potential but is only as good as the data we feed it. The reality is that this is just the beginning of a fight to secure data and, by extension, AI itself. The future of technology is being shaped by these poisoning attacks, so stay tuned and keep your eyes out for misinformation. And don’t forget: data is the driving force behind everything!

References

Kurakin, A., Goodfellow, I. J., & Bengio, S. (2016, November 4). Adversarial machine learning at scale. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1611.01236

ChatGPT. (2024, September 16). A Hacker Injecting Poison into an AI Brain Using a Syringe, in a Panoramic Style

Chin, K. (2024, February 20). Biggest Data Breaches in US History. UpGuard. https://www.upguard.com/blog/biggest-data-breaches-us

Dash, M., & Bosch AIShield. (2023, May 9). Understanding Types of AI Attacks. AI Infrastructure Alliance. https://ai-infrastructure.org/understanding-types-of-ai-attacks/

GitHub. (2023). GitHub Copilot · Your AI pair programmer. GitHub. https://github.com/features/copilot

Huang, S., Bai, Y., Wang, Z., & Liu, P. (2022, March 1). Defending against Poisoning Attack in Federated Learning Using Isolated Forest. IEEE Xplore. https://doi.org/10.1109/ICCCR54399.2022.9790094

Improta, C. (2024). Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code. https://arxiv.org/pdf/2403.06675

Shafahi, A., Huang, W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., & Goldstein, T. (2018). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/pdf/1804.00792

Wang, S., Li, Q., Cui, Z., Hou, J., & Huang, C. (2023). Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Systems with Applications, 227, 120295–120295. https://doi.org/10.1016/j.eswa.2023.120295


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

ChatGPT then went on to sum up various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked ChatGPT to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI? Evidently, the answer I got was: “I can’t help you with that”. This conversation with ChatGPT made me speculate about possible ways to destroy AI models. Let us explore this field and see if it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

When you feed GoogLeNet, one of the best image classification systems, a picture that is clearly a panda, it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’, invisible to humans but a great hindrance to deep learning models.

This is a textbook example of adversarial machine learning. The noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can we use it to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).

So, where training a model involves minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and the exact values can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models? Or is it just a ‘party trick’?
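
In code, that ‘exact opposite’ is a one-line change: instead of stepping the weights down the gradient of the loss, you step the input up the gradient. Below is a minimal sketch of the fast gradient sign method, the technique behind the panda-to-gibbon example, assuming a PyTorch image classifier; it computes x_adv = x + eps * sign(∇x loss).

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Take the gradient of the loss with respect to the *input*, not the weights
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel by eps in the direction that increases the loss,
    # then clamp back to the valid pixel range
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

Because every pixel moves by at most eps, the perturbation stays invisible to humans while still pushing the model across a decision boundary.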

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques used to manipulate machine learning models by adding imperceptible noise to input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification, and by injecting carefully crafted noise in vast amounts, the training data of AI models can be modified. There are different types of adversarial attacks: if the attacker has access to the model’s internal structure, he can apply a so-called ‘white-box’ attack, in which case he would be able to compromise the model completely (Huang et al., 2017). This would pose serious threats to AI models used in, for example, self-driving cars, but luckily, access to the internal structure is very hard to gain.

So say computers were to take over from humans in the future, as the science fiction movies predict: could we use attacks like these to bring those evil AI computers down? In theory, we could, though practically speaking there is little evidence, as there have not been major adversarial attacks yet. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is: will that potential be exploited in a good way, keeping it as a method of control over AI models, or will it be used as a means of cyber-attack, justifying ChatGPT’s negative tone when explaining it?

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247–278.


Snapchat’s My AI – A Youthful Playground or a Privacy Nightmare?

19 October 2023


A post on this very blog site from 2018 called Snapchat a platform in decline, and I agree with that statement. Not since my high school years have I regularly used Snapchat to communicate with someone. After a long period of inactivity and countless notifications piling up, I decided to open the app some months back and was met with a notification about updates to their Privacy Policy. At that moment I did not give it much attention; I just agreed to the terms and went to the user interface. A new feature at the top of the Chat function caught my eye: My AI.

My AI is a customizable, user-friendly, engaging AI chatbot and one of the many actions Snapchat has undertaken to regain its popularity. Remember those times when you opened Snapchat and disappointedly closed it, no new notifications and no one to talk to? My AI solves that issue, keeping you constant company in the form of information and entertainment, designed to better understand and cater to your preferences. It is effectively your AI best friend, but less transactional than other AIs.

I don’t know if it was curiosity or boredom, but my mind immediately raced back to the updated Privacy Policy and I decided to give the whole thing a read. As of 15 August 2023, the new Privacy Policy contains some important changes. A major one is expanding the amount and type of data Snapchat stores, most recently including conversations with My AI. This is on top of all the information Snapchat already amasses from its users, such as usage, content, device, and location information. “But every social media platform personalizes its user experience and employs targeted advertising”, you might say. Point noted, which is why I moved on to how this data is being used by their affiliate companies. The screenshot below is the only information I could find, and clicking on the link would only lead me into an endless loop within the Privacy Policy statement.

If I still haven’t been able to make you raise your eyebrows, I urge you to recognize Snapchat’s target group: teenagers.
Did your fourteen-year-old self have the same level of digital maturity and behavior that you currently possess? Did you truly understand the extent to which your data is collected, let alone the fact that this data determines the content you interact with on a platform? And finally, consider the rationale of using Snapchat: why send pictures or texts that are deleted after being opened, unless you do not want them to be saved? Other than by Snapchat, of course.

Attached below is the help my AI best friend on Snapchat provided me about a ‘common’ problem for teenagers. Make of that what you will.


AI-Powered Learning: My Adventure with TutorAI

16 October 2023


