Building Applications Has Become Easier Than Ever

10 October 2025


A couple of months ago, a client at my student consultancy job asked us to automate a document anonymization process at their real estate agency. Due to data protection requirements, the processing had to be done locally or within their Microsoft environment.

After some fruitless experimenting with Power Automate, we decided to try building our own tool with Claude (an LLM by Anthropic). With a bachelor’s degree in international business, I had no coding knowledge whatsoever. The results were amazing: within a few hours, we had some basic capabilities up and running.

After a while, I wanted something more efficient than copy-pasting Claude’s changes into my project files by hand. A new tool, Claude Code, had just been released, which lets the AI work in your project files directly. I had to watch a few tutorials and do some error-fixing with ChatGPT to get it running in a container (a sealed-off environment on my laptop). After about two hours, we were ready to go.

The result was a developer working at lightning speed. It could code, test, readjust and retest until things worked, all in one go. You see it break the task down into sub-tasks and tackle them one by one. Alternatively, you can put it in plan mode, where it brainstorms about what you want, comes up with multiple alternatives with pros and cons, and executes one when you give the word. While it is executing that piece, you can open a second, third or fourth window to work on a different issue. You can quite literally run an entire team of coders at the same time, while you only manage them.

However, it’s not perfect. Fixing more complex bugs in particular can be an issue. Sometimes, after being shown the problem and asked for a solution several times, it still can’t fix it. Since I do not know anything about code myself, I had to get creative.

Firstly, working modularly helps you pinpoint an issue to a specific module. You can then ask Claude to zoom in on that module and come up with possible causes. Plain logic is often enough to judge its suggestions, and that way you can help Claude get closer to the fix.

Sometimes it gets stuck on a thinking path it has gone down. In that case, it helps to get a second opinion: open a second window, or ask a different LLM (e.g. ChatGPT) to look at the issue. That way the second model is not biased by the context of your current conversation or by its model-specific knowledge. More than once this resulted in it immediately recognizing the real issue, and in me being frustrated that I had spent half an hour trying to fix it in the initial chat.

All in all, I was amazed by the possibilities. Getting everything set up was a bit of trial and error, and it takes quite some time to think through the implications of architectural choices. But once you have done that, it builds full-fledged applications in minutes.

New AI tools are being released quicker than we can learn to use them, so adaptability seems more important than ever. Being able to build applications is not enough by itself either: just as before coding became this easy, you still need a business case for the application. Altogether, I think it’s a great time to be a business student with an interest in technology.

To anyone else who has been experimenting with AI tools for coding: what tools do you use and what best practices have you discovered?


“Any last words?” – How AI is stealing our voices.

27 September 2025

What if artificial intelligence begins to control the actions we take on a daily basis? This question haunts me more often than I would like to admit. I feel both fascination and worry at the rapid progress of linguistic AI applications like ChatGPT, which are subtly working their way deeper into our private spaces. They are no longer distant tools; they are slowly creeping into our thoughts, shaping how we search, write, and even think. I must confess that I am guilty too. Whether it is looking up a fancy recipe, fixing a tire, or condensing hours of lecture notes into something manageable, I turn to AI. For these purposes, I actually encourage it. These are the fruits of technology, ripe for us to taste. But do they still taste as sweet as before?

The problem lies elsewhere: in the moment we begin outsourcing not just our tasks, but our very words. Through the shape of our essays, stories, and expressions, I fear we are losing something irreplaceable: the art and power of human language. Words have always been the essence of what makes us human. They can move nations, comfort the broken, and ignite revolutions. Take Martin Luther King Jr.’s immortal statement, “I have a dream”. Or the centuries-old anthem, “God Save the Queen”. Or the enchantment at the start of every age-old bedtime story: “Once upon a time…”. Even a cartoon character like Uncle Iroh from Avatar reminds us of the tenderness words can carry: “Leaves from the vine, falling so slow.”

We have always had a weakness for words; they work magic on the human body, which is why putting them together is called spelling. The right words at the right moment have altered the course of history: Rosa Parks declaring, “Stand for something or fall for nothing”, or the collective anthem “We Are the World” moving millions into action. These words came from human souls, and that is precisely why they pierced so deeply into human hearts.

These days, however, this magic is being hollowed out. When AI generates our speeches, our essays, our job applications, or even our declarations of love, are we not letting our uniqueness slip away? We disguise ourselves and hide in borrowed voices. In doing so, where we should have become stronger, we become weaker, drifting further from the very humanity that gave birth to language in the first place.

Everywhere we look, AI-generated words slip past our control: from the billboards of your local bottega, to the tweets of political figures who polarize and inflame, to the job applications we send. If we do not take this matter back into our own hands, heads, and hearts, we risk losing a part of ourselves. The danger of losing the raw, imperfect, deeply human character that once defined us has become real.

So I ask myself, and now I ask you: what will become of us when we confess too much to AI, when we feed it not only our knowledge but our very voices? Will we recognize ourselves in the mirror when the words on our lips no longer belong to us? Or will we become echoes of an intelligence that was never meant to replace, but only to “assist”?

The challenge is not to reject such AI applications but to redefine our relationship with them. AI should, as originally intended, be a tool that enhances our understanding, not a substitute for our consciousness and individuality. It should sharpen our thinking, not dull our capacity for expression. If we allow it to replace our words, we are allowing it to replace our humanity. But when used wisely, with our voices, our characters, and our stories kept intact, perhaps we can ensure that the art of words, the very magic of being human, is not lost but renewed.

Because in the end, words still belong to us. And we have to make sure they always will.


My CLI frustrations and how ChatGPT solved them.

4 October 2024


About a year ago, I started delving into the world of self-hosting services: things such as game servers, cloud storage and Netflix alternatives. The idea was to be less dependent on SaaS providers, and since I had a spare laptop lying around anyway, why not give it a go? So the first thing I did was install Proxmox, a hypervisor that separates out the different services I was planning to set up.

This is where my struggles started. As you might be aware, most servers run on a Linux machine without a GUI, and I soon discovered that Proxmox also primarily uses a command-line interface. For those not familiar: a CLI is where you type text commands to make your computer do things. For example, “cd /usr/home” takes you to that folder.

While I got a grasp of the basics relatively quickly, the complexity of what I wanted to achieve increased just as fast. This is where ChatGPT came to save the day: with GPT-4o it could actively search the internet and scan through documentation to create exactly the command I required. Instead of needing to write in computer language, I could explain to ChatGPT what I was trying to do, and it would generate the exact commands I needed. Take, for example, this kind of systemd status output:

myservice.service – My Custom Service
Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-10-02 12:34:56 UTC; 5s ago
Process: 1234 ExecStart=/usr/bin/myservice (code=exited, status=1/FAILURE)

It helped with reading these kinds of error messages as well; anyone familiar with them knows they are nearly unreadable if you don’t know all the documentation.

While you still need to be relatively tech-savvy to set up your own services, I believe that with the pace of development in generative AI, it will only get easier.

You may wonder what the advantages are of going through all this hassle instead of simply using Netflix, Google Drive, and OneDrive. As we all know, a couple of tech giants have monopolized many of the daily services we use. They collect our data in massive quantities, creating serious privacy concerns, and they suppress innovation within the field. Hosting your own services minimizes the amount of data you put on the internet.

Furthermore, many SMEs use several services for which they pay massive licensing and hosting fees each year. If these new tools help SMEs set up their own servers, they become less dependent on third-party pricing and can save costs.

All in all, I believe that the support LLMs provide in setting up your own services democratizes the internet and reduces the power of the tech monopolies, and that should be celebrated by anyone who supports free markets.

Sources:

https://www.proxmox.com/en

https://pixabay.com/vectors/command-shell-terminal-dos-input-97893


Toxic Code: How Poisoning Attacks Are Undermining AI Systems

16 September 2024


In the rapidly evolving world of artificial intelligence (AI), not all advancements are aimed at making systems smarter; some are designed to make them fail. Enter poisoning attacks, a form of sabotage that can turn intelligent systems against themselves. But how do these attacks work, and should we really care about them?

What Are Poisoning Attacks?

Imagine teaching a student a mix of good and false information. If you sprinkle enough false information into the lessons, even the brightest student will come to some incorrect conclusions. In AI, poisoning attacks work similarly: the data used to train the AI model is corrupted by an attacker with the intent to cause errors once the AI is operating (Shafahi et al., 2018). For example, consider a self-driving car trained on images of road signs. If an attacker can poison the training set with even a small number of falsely labeled stop-sign images, the car could misread traffic rules and become dangerous not only to the people in the car, but to everyone on the street (Wang et al., 2023).

(Dash & Bosch AIShield, 2023)
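To make the mechanism concrete, here is a minimal sketch of the simplest flavour of poisoning, label flipping, on a toy dataset. It is an illustration rather than a real attack: the dataset, the model and the 5% poison rate are my own assumptions, not taken from the papers cited here.

```python
# Sketch of a label-flipping poisoning attack on a toy classifier.
# The dataset, model and 5% poison rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of a small fraction of training points.
rng = np.random.default_rng(0)
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.05 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]  # flip class 0 <-> 1

# Model trained on the corrupted data.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Scale the same idea up to mislabeled stop signs or code snippets and you arrive at the scenarios discussed below.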

Real-World Impact: Why Should You Care?

Poisoning attacks aren’t just a theoretical risk; they are a real threat to AI systems today. Take GitHub Copilot, an AI-powered code completion system that helps developers autocomplete their code in real time (GitHub, 2023). An attacker could poison Copilot’s training data and steer it towards generating vulnerable code riddled with security defects (Improta, 2024). While this seems like a problem that only affects coders, it can cause problems for everyone else too: vulnerable code can result in ordinary people losing their private data, as in the recent Social Security Number breach in the USA (Chin, 2024). Another example of how poisoning attacks could affect your everyday life is social media. Algorithms could be altered to determine what goes viral, or to spread misinformation by pushing fake news to large numbers of users. That is a scary thought, as news is increasingly filtered by AI.

Defending Against Poisoning: A Losing Battle?

Defenses against poisoning attacks are evolving every day, although attackers often seem to be one step ahead. Anomaly detection systems are being integrated into AI pipelines, but the question remains: how much of the data needs to be infected before the poison no longer registers as an anomaly (Huang et al., 2022)? As Kurakin et al. (2016) highlight in “Adversarial Machine Learning at Scale”, vulnerabilities are being exploited by attackers in real time, creating a race between “poison” and “antidote”. Still, the poison is being treated: with continuous advances in AI security and collaboration among researchers, defenses are growing smarter and aim to outpace attackers, making the future look promising for AI-based systems.
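For a sense of what such an anomaly-detection defense can look like, here is a minimal sketch in the spirit of the Isolation Forest approach studied by Huang et al. (2022). They work in a federated learning setting; this simplified, centralized version and its 5% contamination guess are my own illustrative assumptions.

```python
# Sketch of an anomaly-detection defense: drop the training points that
# an Isolation Forest flags as outliers before fitting the model.
# The contamination rate is a guess about how much poison to expect.
from sklearn.ensemble import IsolationForest

def filter_suspected_poison(X, y, contamination=0.05):
    """Return only the training points the detector considers inliers."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    inlier_mask = detector.fit_predict(X) == 1  # +1 = inlier, -1 = outlier
    return X[inlier_mask], y[inlier_mask]

# Usage: X_clean, y_clean = filter_suspected_poison(X_train, y_poisoned)
```

Note how the defense itself encodes the open question above: the contamination parameter is a guess about how much poison to expect, and an attacker who stays under that threshold, or whose points do not look unusual, is simply never flagged.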

Conclusion: Can We Trust AI?

AI holds a great deal of potential, but it is only as good as the data we feed it. The reality is that this is just the beginning of the fight to secure data and, by extension, AI itself. The future of technology is being shaped by these poisoning attacks, so stay tuned and keep your eyes out for misinformation. And don’t forget: data is the driving force behind everything!

References

Kurakin, A., Goodfellow, I. J., & Bengio, S. (2016, November 4). Adversarial Machine Learning at Scale. ArXiv (Cornell University). https://doi.org/10.48550/arxiv.1611.01236

ChatGPT. (2024, September 16). A Hacker Injecting Poison into an AI Brain Using a Syringe, in a Panoramic Style

Chin, K. (2024, February 20). Biggest Data Breaches in US History. UpGuard. https://www.upguard.com/blog/biggest-data-breaches-us

Dash, M., & Bosch AIShield. (2023, May 9). Understanding Types of AI Attacks. AI Infrastructure Alliance. https://ai-infrastructure.org/understanding-types-of-ai-attacks/

GitHub. (2023). GitHub Copilot · Your AI pair programmer. GitHub. https://github.com/features/copilot

Huang, S., Bai, Y., Wang, Z., & Liu, P. (2022, March 1). Defending against Poisoning Attack in Federated Learning Using Isolated Forest. IEEE Xplore. https://doi.org/10.1109/ICCCR54399.2022.9790094

Improta, C. (2024). Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code. https://arxiv.org/pdf/2403.06675

Shafahi, A., Huang, W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., & Goldstein, T. (2018). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/pdf/1804.00792

Wang, S., Li, Q., Cui, Z., Hou, J., & Huang, C. (2023). Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Systems with Applications, 227, 120295. https://doi.org/10.1016/j.eswa.2023.120295


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

And ChatGPT went on to sum up various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked ChatGPT to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI? Predictably, the answer I got was: “I can’t help you with that”. This conversation made me speculate about possible ways to destroy AI models. Let us explore this field and see if it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

When you feed GoogLeNet, one of the best image classification systems, a picture that is clearly a panda, it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’: invisible to humans, but a great hindrance to deep learning models.

This is a textbook example of adversarial machine learning. The noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can we use it to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).

So while training a model involves minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and the right values can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models? Or is it just a ‘party trick’?
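The classic recipe for crafting this noise is the fast gradient sign method (FGSM) of Goodfellow and colleagues, the technique behind the panda-to-gibbon example. Here is a minimal sketch in PyTorch; the generic classifier, the batched input and the epsilon value are placeholder assumptions of mine, not GoogLeNet itself.

```python
# Minimal FGSM sketch: take one gradient step that *increases* the loss
# with respect to the input, instead of decreasing it with respect to
# the weights. `model` is any image classifier, `image` a batched
# tensor with values in [0, 1], `label` the true class indices.
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.007):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The "noise": a tiny step in the direction that maximizes the loss.
    noise = epsilon * image.grad.sign()
    return (image + noise).detach().clamp(0.0, 1.0)
```

One such step is often enough to flip the prediction, while the change remains invisible to the human eye.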

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques used to manipulate machine learning models by adding imperceptible noise to large amounts of input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification, and by injecting carefully crafted noise in vast quantities, the training data of AI models can be modified. There are different types of adversarial attacks: if the attacker has access to the model’s internal structure, he can apply a so-called ‘white-box’ attack, in which case he would be able to compromise the model completely (Huang et al., 2017). This would pose serious threats to AI models used in, for example, self-driving cars, but luckily, access to a model’s internal structure is very hard to gain.

So, if computers were to take over from humans in the future, as the science fiction movies predict, could we use attacks like these to bring those evil AI computers down? In theory, we could, though practically speaking there is little evidence, as there have not been major adversarial attacks yet. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is: will that potential be used for good, as a method of keeping control over AI models, or as a means of cyber-attack, justifying ChatGPT’s negative tone when explaining it?

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247-278.


Using Midjourney to recreate lost memories

18 October 2023


Midjourney is a generative AI program which can convert simple natural language prompts into high-quality images. If you have an idea which you can pen (or rather type) down for the program, it will visualize it for you.

Right around the time the hype for this newly launched AI was building, I was finishing my exchange semester in Madrid, and like any other exchange student I made some stupid mistakes. My first mistake was dropping my phone from a fourth-floor balcony on New Year’s Eve. My second was not making sure all my photos were backed up to the cloud before taking the phone to the repair store the next morning, still half dizzy. It was mere coincidence that during the two days my phone was kept at the store, I was bombarded with AI-generated pictures in photography communities online. Upon further research, I found out that these were created by feeding prompts to Midjourney. All you needed was a Discord account.

Thus, when I received my freshly formatted phone back only to realize that all my pictures from the past six months of exchange had vanished, I decided to give Midjourney a try. Crestfallen at having lost so many memories, I wanted the recreated images to be as realistic as possible. The free version gives you 25 prompt tries, so I researched the science behind these text prompts to make the most of them. You enter “/imagine” into the text field and voilà, you can describe your image.

Midjourney prompt text field

Using a bit of trial and error and building upon what I read on the Internet, here are some general ideas which helped me recreate the images of my choice:

  • The more detailed the description, the better your image results usually are.
  • Make use of commas; they act as soft breaks in your image description.
  • Adding weights to your words (such as 0.5) or setting the aspect ratio (such as “--ar 16:9”) can enhance the results.
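Putting those ideas together, a prompt might look something like this (an invented illustration, not one of my actual 25 tries):

/imagine a sunlit rooftop terrace in Madrid at golden hour, friends laughing around a table::1.5, candid travel photo, 35mm film look --ar 16:9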

Example of a typical Midjourney prompt

You can find the results of my journey with Midjourney below; I believe they are quite impressive. The only aspect Midjourney struggled with back when I made these pictures was recreating realistic human features, and this is continuously being improved. Whether AI-generated images pose a threat to professionals in the field is a matter of consumer demand, and I have no opinion on that, because the creative industry seems like an irrational vortex to me. However, I can definitely see photographers, film studios, and creatives using such programs for conceptualization, innovation, and maximizing their creative potential.

What do you think?

AI recreation of my lost 2022 camera roll



Weapons of mass destruction – why Uncle Sam wants you.

14 October 2023


The Second World War was the cradle of national and geopolitical information wars, with both sides firing rapid rounds of propaganda at each other. Because of the lack of connectivity (no internet), simple pamphlets had the power to plant theories in entire civilizations. In today’s digital age, where everything and everyone is connected, the influence of artificial intelligence on political propaganda cannot be underestimated. This raises concern because, unlike in the Second World War, the information wars being fought today extend to national politics in almost every first-world country.

Let us take a look at the world’s most popular political battlefield: the US elections. In 2016, a handful of tweets containing false claims led to a shooting in a pizza shop (NOS, 2016). These tweets had no research backing the information they transmitted, but fired at the right audience they had significant power. Individuals have immediate access to (mis)information, and that is a major opportunity for political powers wanting to gain support by polarising their battlefield.

Probably nothing I have said up to this point is new to you, so shouldn’t you just stop reading this blog and switch to social media to give your dopamine levels a boost? If you did, misinformation would come your way six times faster than truthful information, and you would contribute to this lovely statistic (Langin, 2018). This is exactly the essence of the matter, as it is estimated that by 2026, 90% of social media content will be AI-generated (Facing Reality?, 2022). Combine the presence of AI on social media with the power of fake news, bundle these into propaganda, add a grim conflict like the ones taking place in Eastern Europe or the Middle East right now, and you have got yourself the modern-day weapon of mass destruction. Congratulations! But of course, you have no business in all this, so why bother to interfere? Well, there is a big chance that you will share misinformation yourself when transmitting information online (Statista, 2023). Whether you want it or not, Uncle Sam already has you, and you will be part of the problem.

Artificial intelligence is about to play a significant role in geopolitics, and in times of war its power is even greater. Luckily, the full potential of these powers has not been reached yet, but it is inevitable that it soon will be. Therefore, it is essential that we open the discussion, not about preventing the use of artificial intelligence in creating conflict and polarising civilisations, but about using artificial intelligence to repair the damage it does: to counter the false information it can generate, to solve the conflicts it helps create, and to unite the groups of people it initially divides. What is the best way for us to be not part of the problem but part of the solution?

References

Facing reality?: Law Enforcement and the Challenge of Deepfakes : an Observatory Report from the Europol Innovation Lab. (2022).

Fake news shared on social media U.S. | Statista. (2023, March 21). Statista. https://www.statista.com/statistics/657111/fake-news-sharing-online/

Langin, K. (2018). Fake news spreads faster than true news on Twitter—thanks to people, not bots. Science. https://doi.org/10.1126/science.aat5350

NOS. (2016, December 5). Nepnieuws leidt tot schietpartij in restaurant VS [Fake news leads to shooting in US restaurant]. NOS. https://nos.nl/artikel/2146586-nepnieuws-leidt-tot-schietpartij-in-restaurant-vs


Can AI help me get a job?

10 October 2023


I am searching for a new job: one I can combine with my studies and which pays enough to afford my shoebox-sized apartment. But to get there, one often needs to write long motivation letters to various organisations and sift through countless job postings. Fortunately, the new age offers many opportunities to generate motivation letters automatically and adapt them to each and every company.

In this search, I tested four AI-powered tools: ChatGPT, Kickresume, LazyApply and Rezi.

Kickresume, LazyApply and Rezi all provide a free trial of the algorithm that formulates extensive motivation letters, and they all offer an easy user experience. These three websites also give the user prompts, like “paste the job description” and “paste your CV”, which can tie one’s abilities to the required skills in great detail. The given prompts can also be skipped or modified where deemed unnecessary. Therefore, a complementary document can be readily made if one has a CV.

With more mainstream, general-purpose AI language models like ChatGPT, one needs to insert a significant number of self-created prompts to create a document of even roughly similar quality. It can be a helpful tool for people with more background knowledge of HR. For others, it can actually complicate the creative process, since Farrokhnia et al. (2023) find that AI language tools, if not used right, can significantly hinder one’s productivity and creativity.
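To give an idea, getting ChatGPT anywhere near that quality took a stacked, self-written prompt along these lines (an illustrative example, not a guaranteed recipe): “Act as an experienced recruiter. Below are a job description and my CV. Write a one-page motivation letter that links my experience to the three most important requirements, in a confident but not arrogant tone.”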

Baert and Verhaest (2019) also emphasize that overqualification in the application process does not lower one’s chances of receiving the job, and even increases the chances of employment for temporary jobs. Therefore, the additional effort can do no harm.

Overall, all platforms provide similar-level content and are an excellent tool for creating a personalized motivation letter. Sadly, the lack of layout options persists, but it can be easily tackled by using other platforms.

Last but not least, these AI language models are built upon similar documents; therefore, their originality only reaches so far. The generated letters can come out too generic when applied to highly sought-after positions. So, as helpful as these websites can be, they cannot replace well-thought-out, personal material.

Baert, S., & Verhaest, D. (2019). Unemployment or overeducation: which is a worse signal to employers? De Economist, 167(1), 1-21.

Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 1-15.


Could AI be contributing to the disappearance of language diversity?

18 September 2023


Almost in no time, AI-powered large language models (LLMs) such as ChatGPT, Bing AI Chat and Google Bard have gained popularity with the mainstream part of society. However, I have noticed increasing social media attention, specifically among Latvian speakers, to these models’ lack of applicability and their oftentimes outright comedic outputs.

I tested this observation by typing ‘write a poem’ in multiple languages into ChatGPT’s dialogue interface: English, Russian, French, Arabic, Hindi, Dutch, Latvian, Estonian and Lithuanian. Interestingly, although ChatGPT can produce more or less coherent text in all these languages, the latter three, i.e., those of the Baltic countries, stand out with incoherent meanings and even grammar and style inconsistencies. Bang et al. (2023) argue that these are low-resource languages, i.e., languages with relatively few speakers. Not surprisingly, Latvian is spoken by only about 1.5 million native inhabitants (Latvian Presidency, 2015), and the AI model has not received the data input necessary to produce grammatically or stylistically coherent sentences (see picture).

So, how can this be an issue?

In the Baltic countries, approximately 95% of individuals speak at least two languages (Latvian Presidency, 2015). The second language is most often Russian or English, i.e., a high-resource language.

This is a worry, as many native speakers might stick to high-resource languages while browsing or creating content. This, in turn, perpetuates the poor usability of low-resource languages and exacerbates language polarization. Low-resource languages are therefore at risk unless new measures are implemented to improve LLM training, e.g., training AI with ‘small data’, as suggested by Ogueji, Zhu and Lin (2021), or feeding AI new data resources.

Of course, the future of a language is not as one-dimensional and depends on many factors, but in these times of language globalization, the mainstream AI tools have been no help at all!

Bang, Y., Cahyawijaya, S., Lee, N., Dai, W., Su, D., Wilie, B., … & Fung, P. (2023). A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023.

Ogueji, K., Zhu, Y., & Lin, J. (2021). Small data? No problem! Exploring the viability of pretrained multilingual language models for low-resourced languages. In Proceedings of the 1st Workshop on Multilingual Representation Learning.

Latvian Presidency. (2015). Language. Latvian Presidency of the Council of the European Union. https://eu2015.lv/latvia-en/discover-latvia/language 
