Through The Looking Glass of Languages: The Bridge Between AI and Linguistics

9 October 2025


AI does not have to come across as complex to be transformative. Take Google Translate, for example. First launched in 2006, it has remained free to use throughout its life, doing exactly what its name states (Burman, 2023). A decade later, it integrated Neural Machine Translation, which revolutionized the service by increasing accuracy, adding contextual awareness, enabling real-time rendering, improving voice and image interpretation, and allowing translations to keep improving the more the tool is used (Schäferhoff, 2024). Perhaps one of the most revolutionary additions to Google Translate is the image-to-text feature, which translates text from photos and images or live through the camera (Harsha, 2024). The underlying technology, Optical Character Recognition (OCR), extracts the text from the image and analyzes it before it is translated into the language of choice (Uk, 2024). Google Translate has made the experience of understanding another language more accessible and inclusive, transforming human communication and travel globally.
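For readers curious about what happens under the hood, here is a minimal sketch of that OCR-then-translate pipeline in Python. It is an illustration only, not Google's actual implementation: the open-source pytesseract and deep-translator packages, the Dutch language codes, and the file name are assumptions chosen for the example.

```python
# Minimal OCR-then-translate sketch (illustrative only; not Google's actual pipeline).
# Assumes the Tesseract OCR engine is installed locally, plus the Python packages
# pytesseract, Pillow, and deep-translator (stand-ins chosen for this example).
from PIL import Image
import pytesseract
from deep_translator import GoogleTranslator

def translate_image(path: str, source: str = "nl", target: str = "en") -> str:
    """Extract text from an image and translate it, e.g. a Dutch menu photo into English."""
    # Step 1: OCR - recognise the characters in the photo ('nld' = Tesseract's Dutch model).
    extracted = pytesseract.image_to_string(Image.open(path), lang="nld")
    # Step 2: machine translation of the recognised text.
    return GoogleTranslator(source=source, target=target).translate(extracted)

if __name__ == "__main__":
    # Hypothetical file name for illustration.
    print(translate_image("menu_photo.jpg"))
```

The two steps mirror what the app does when you hover your camera over a menu: recognise the characters first, then translate the recognised string.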

While the technology itself is fascinating, its true impact is best reflected through lived experience. Although the feature doesn’t generate images or essays, it generates connection and understanding, transforming captured visual information into intelligible meaning. When I first moved to the Netherlands in 2022, I did not understand a word of Dutch. Grocery shopping was difficult because I could not read the ingredients, ordering from a menu was a challenge if the dishes were described in Dutch, and even finding the exit at Amsterdam Centraal was an obstacle. I would manually type phrases into the web version of Google Translate (quite a tedious task) until I discovered the image-to-text feature in the Google Translate app. All I had to do was take a photo or hover my phone camera over a product’s packaging, a label, or a menu to see the words instantly transform into a language I could make sense of. This feature enabled me to slowly pick up the language, even learning frequently recurring words and sentences over the course of a few weeks. This simple yet powerful AI tool helped me feel independent, included, and confident as a young woman in a foreign country.

Using this tool every day helped me realize how intuitive and democratized AI can be, which later sparked my interest in exploring ChatGPT’s potential for language learning. When I was enrolled in Dutch courses, I would frequently use ChatGPT in Dutch to better grasp the language’s grammatical structure and variety of vocabulary. Thanks to this convenience, I was able to obtain the A2 certification within nine weeks. ChatGPT’s ability to follow specific instructions, such as the complexity of the language level, the kind of feedback to give, and tone adjustments, allowed me to tailor every interaction to match my exact learning pace and needs. The learning process felt adaptive, customized, and personal rather than robotic.
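To give a concrete picture of what those tailored instructions looked like, below is a hedged sketch of how such a tutoring prompt could be scripted against the OpenAI API. The model name, prompt wording, and example sentence are illustrative assumptions; in practice I simply typed similar instructions into the ChatGPT interface.

```python
# Sketch of a tailored Dutch-practice prompt via the OpenAI API (illustrative only;
# the model name and wording are assumptions, and a plain ChatGPT chat works the same way).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a Dutch tutor. Keep all Dutch at CEFR level A2, "
    "correct my grammar mistakes explicitly, and use an informal, encouraging tone."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name for this example
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Kun je mijn zin verbeteren: 'Ik heb gisteren naar de winkel gegaan'?"},
    ],
)
print(response.choices[0].message.content)
```

Pinning the level, feedback style, and tone in the system message is what made each exchange feel adapted to my pace rather than generic.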

Despite the usefulness and strength of both tools, they come with their own limitations. Google Translate struggles with a lack of contextual awareness, often resulting in errors when translating idioms, slang, cultural references, and specialized phrasing (Raiano, 2025). It also does not generate new content; it merely maps existing input from the linguistic structures of one language onto another. ChatGPT, on the other hand, tends to ‘hallucinate’, occasionally responding with false statements (OpenAI, 2025). If your source of learning gives you inaccurate information, can you really trust it to teach you the correct concept? Such moments reminded me that while AI can streamline learning and simplify communication, it cannot match the human understanding, intuition, and cultural empathy that bring magic into a language’s meaning.

AI is only as effective as the data it is trained on. Its struggle to detect cultural nuance can be addressed by exposing models to regional expressions and context-specific datasets, minimizing the biases that lead to literal but contextually inaccurate translations. These models can also be taught to identify tonal variation (formal versus informal) so that users receive a variety of natural responses rather than one-dimensional output. It is equally important to consider the different dialects within a single language: if models can recognize and distinguish dialects, there is room for more authentic and culturally accurate translations.

Through this looking glass of languages, I have come to realize that the innovative efforts behind AI do not need to be grand; they can simply serve as a lens on everyday life. Humans crave connection, understanding, and belonging (Koehler, 2024). It is this humanity that AI must empathize with to successfully cement its place in society.

References:

Burman, A. (2023, September 26). Google Translate – 12 years on. Business Language Services. https://businesslanguageservices.co.uk/google-translate-10-years/

Harsha. (2024, August 3). How to use Google Translate for images: A comprehensive guide. ImageTranslate Blog. https://imagetranslate.com/blog/how-to-use-google-translate-for-images/

Koehler, J. (2024, September 12). Exploring the psychological forces that shape our need for belonging. Psychology Today. https://www.psychologytoday.com/us/blog/beyond-school-walls/202408/why-we-crave-connection-and-why-some-of-us-dont

OpenAI. (2025, September 5). Why language models hallucinate. https://openai.com/index/why-language-models-hallucinate/

Raiano, A. (2025, February 21). Google Translate: Accuracy & Alternatives. https://www.locize.com/blog/google-translate-accuracy

Schäferhoff, N. (2024, July 9). The history of Google Translate (2004–today): A detailed analysis. TranslatePress. https://translatepress.com/history-of-google-translate/

Uk, A. (2024, April 5). Translating the text of images: Zoom in on this intelligent translation technology. Alphatrad. https://www.alphatrad.co.uk/news/how-translate-text-from-image


AR Filters And Self-Perception: Fun or Fallacy?

17 September 2025


AR filters were first introduced to social media in September 2015 by Snapchat, which called them “Snapchat Lenses” (Inde, 2023). They became an especially big hit among netizens when Lens Studio was released in late 2017, allowing users to create custom AR filters (Inde, 2023). The pioneering technology took social media by storm, with other platforms soon following suit. For the first time, you could not just do what you wanted on these platforms; you could be whoever you wanted to be. At the touch of a fingertip, users can perfect their skin, change their hair or eye colour, or even shift into an animated character. Although the filters are marketed as fun, researchers and psychologists ask: do AR filters distort our perception of ourselves? (Javornik et al., 2021)

Studies show that they can. Arata (2016) highlights the discussions surrounding Snapchat filter usage and its effects on self-esteem. Psychologists have even nicknamed this phenomenon “Snapchat dysmorphia”, wherein people undergo cosmetic surgery to look as close as possible to the filtered versions of themselves (Brucculieri, 2018). These effects are seen most prominently in adolescents (Habib et al., 2022), who are still shaping their identity, leaving them more susceptible to the pressures of unrealistic beauty standards.

Social media companies have responded with mixed approaches. In 2019, Instagram’s parent company (then Facebook, now Meta) re-evaluated its policies and banned filters that promote plastic surgery (BBC News, 2019). Snapchat, on the other hand, launched Snap’s Council for Digital Well-Being, made up of teenagers (Snap Council for Digital Wellbeing, n.d.). According to its website, the initiative was launched to receive direct feedback from younger users around the world about their experiences online. Still, filters remain a key driver of user engagement, creating tension between user well-being and corporate goals.

In my opinion, AR filters in themselves are not the problem. When used as a means of self-expression, such as background effects or funny distortions, they expand digital creativity. It is when they enforce an unrealistic and unattainable paragon of beauty that they can quickly turn psychologically harmful. The GDPR, in the context of EU data privacy law, adds a compelling thought: while these strong policies limit data exploitation (Steindl, 2023), they do not address the impact of immersive technologies on users’ psychological well-being. Perhaps AR filters play a role deeper than entertainment, acting as a product with noticeable effects on mental health. This raises the question: should AR beauty filters be regulated to protect young users, or does the responsibility lie in the hands of individuals to use them mindfully?

References:

Arata, E. (2016, August 1). The unexpected reason Snapchat’s “pretty” filters hurt your self-esteem. Elite Daily. https://www.elitedaily.com/wellness/snapchat-filters-self-esteem/1570236

BBC News. (2019, October 23). Instagram bans “cosmetic surgery” filters. https://www.bbc.com/news/business-50152053

Brucculieri, J. (2018, February 22). “Snapchat dysmorphia” points to a troubling new trend in plastic surgery. HuffPost. https://www.huffpost.com/entry/snapchat-dysmorphia_n_5a8d8168e4b0273053a680f6

Habib, A., Ali, T., Nazir, Z., & Mahfooz, A. (2022). Snapchat filters changing young women’s attitudes. Annals of Medicine and Surgery, 82. https://doi.org/10.1016/j.amsu.2022.104668

Inde, T. (2023, May 26). The brief history of social media augmented reality filters. INDE – The Leading Augmented Reality Agency. https://www.indestry.com/blog/the-brief-history-of-social-media-ar-filters

Javornik, A., Marder, B., Pizzetti, M., & Warlop, L. (2021, December 22). Research: How AR filters impact people’s self-image. Harvard Business Review. https://hbr.org/2021/12/research-how-ar-filters-impact-peoples-self-image

Snap Council for Digital Wellbeing. (n.d.). https://values.snap.com/safety/cdwb

Steindl, E. (2023). Safeguarding privacy and efficacy in e-mental health: Policy options in the EU and Australia. International Data Privacy Law, 13(3), 207–224. https://doi.org/10.1093/idpl/ipad009
