EA Sports and the Future of Gaming: Dynamic AI-Powered Voiceovers in EA FC

18 October 2024


Imagine this: it’s the last minute of a Weekend League final in EA FC. The clock is ticking. Emotions are through the roof. The tension is unbearable. And as you score the winning goal, the commentators go “Quite a lovely goal, wasn’t it?” “Hmm, yes, quite…”. No excitement, no praise, no passion. Somewhat disappointing, isn’t it?

Our AI voiceover solution ensures that the commentators erupt in a thrilling, game-relevant, personalized call that feels as if they had been watching you the whole time. This is the future of gaming, and it gives EA Sports the opportunity to lead the charge with a revolutionary voiceover system. Forget repetitive, boring, and irrelevant commentary. Welcome to the world of dynamic voiceovers, where in-game events trigger real-time reactions grounded in what just happened on the pitch.

The Game-Changing Tech: GenAI-Powered Dynamic Voiceovers 

Our dynamic voiceover system relies on generative AI, seamlessly integrated with ElevenLabs’ Voice Lab, a leader in AI audio. This combination enables high-quality, real-time voice synthesis that adapts to any scenario. Using natural language processing and sentiment analysis, the system processes real-time gameplay data and generates lifelike commentary that matches the game’s intensity. ElevenLabs’ text-to-speech API offers ultra-low latency, so commentary can stay in sync with in-game events. Its voice cloning technology also opens new possibilities for customizable, recognizable voices, offering a more personalized experience. The platform supports multiple languages and lets players tweak voice characteristics such as pitch and tone.
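
To make the integration more tangible, here is a minimal sketch of how a single match event could be turned into a spoken line via ElevenLabs’ REST text-to-speech endpoint. The endpoint path, the xi-api-key header, and the voice_settings fields follow ElevenLabs’ public documentation, but the model name, voice ID, and the event-to-text mapping are illustrative assumptions on our part, not EA’s actual implementation.

```python
# Illustrative sketch: turn a match event into a commentary line and synthesize it.
# Endpoint and payload follow ElevenLabs' documented TTS API; the voice ID, model
# name, and event-to-text logic are placeholders, not a real EA integration.
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"     # placeholder - set your own key
VOICE_ID = "your-commentator-voice-id"  # placeholder - a voice designed in Voice Lab

def commentary_for(event: dict) -> tuple[str, float]:
    """Toy 'sentiment' step: choose wording and an excitement level for the event."""
    if event["type"] == "goal" and event["minute"] >= 85:
        return f"Unbelievable! {event['player']} wins it in the dying minutes!", 0.9
    if event["type"] == "goal":
        return f"{event['player']} finds the net - what a finish!", 0.7
    return f"{event['player']} with a tidy piece of play.", 0.3

def synthesize(text: str, excitement: float) -> bytes:
    """Request speech for the line; lower 'stability' roughly means a more animated read."""
    response = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
        json={
            "text": text,
            "model_id": "eleven_turbo_v2",  # a low-latency model (name assumed)
            "voice_settings": {"stability": 1.0 - excitement, "similarity_boost": 0.8},
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.content  # raw audio bytes to feed into the game's audio mixer

if __name__ == "__main__":
    line, excitement = commentary_for({"type": "goal", "player": "Haaland", "minute": 90})
    audio = synthesize(line, excitement)
    print(f"{len(audio)} bytes of commentary audio for: {line}")
```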

The flexibility of ElevenLabs’ platform lets EA FC implement this solution without reinventing the wheel, ensuring integration into their existing ecosystem while delivering high-quality commentary that evolves with the game. By leveraging existing tools like ElevenLabs, EA Sports can minimize development time and focus on enhancing player immersion, driving in-game spending, and elevating the gaming experience.

Why EA FC Needs This 

EA FC is known for its unmatched realism, but this dynamic voiceover system takes things to a new level. With players seeking more engagement, adding real-time voiceovers will enhance replayability, making each match feel unique, even after hours of play. This level of customization doesn’t just keep things exciting – it also drives very desirable in-game spending. With options to purchase new voice profiles or unlock additional commentary, the potential for downloadable content is huge. Imagine a special edition featuring legendary football commentators or even guest appearances from football stars. EA Sports can open up an entirely new revenue stream while enhancing player satisfaction.

The Future of EA Sports Gaming 

The next step would be to gradually introduce the proposed solution across EA’s diverse portfolio. Starting with EA FC, the system can be expanded to other fast-paced games like UFC, NHL, and F1, where real-time reactions to in-game moments will amplify immersion. Beyond sports, franchises like Battlefield could benefit from this tech, bringing adaptive voiceovers and sound effects that enhance player engagement and create a unique experience with every mission.

This AI-driven system isn’t just a cool feature – it’s the future of gaming. Personalized voiceovers, background noise and music, as well as NPC interactions, have huge potential to make every playthrough an exciting experience. By investing in GenAI-powered voiceovers, EA Sports is giving fans an experience that goes far beyond the screen.

How to Disable Commentators in EA Sports FC 25 (Bowen et al., 2024)


Voices of the Future: How AI is Revolutionizing Sound in Video Games, EA FIFA Case Study

19 September 2024


Video game audio has come a long way since the early days of simple beeps and 8-bit soundtracks in the late ’80s (Drake, 2019). Today, sound and voice overs play a crucial role in immersing players in rich, dynamic worlds and game scenarios (Gallacher, 2013; Bormann & Greitemeyer, 2015; Stingel-Voigt, 2020; Cesário et al., 2023). And another revolution is now underway – one powered by artificial intelligence. AI-generated sounds and voice overs are rapidly transforming how game developers approach audio design, offering a completely new dimension of customization, scalability, and immersion (Filipović, 2023). This blog post dives into the emerging world of AI-generated sound and voice overs, highlighting companies that already provide such solutions, and presents a case study on how major gaming companies such as Electronic Arts (EA) could harness this technology to enhance flagship franchises like FIFA (Nelva, 2024).

(Drake, 2019)

“And now of course we’re in the era of generative AI which is the most exciting yet by a fairly wide margin and something that we’re embracing deeply. We think about it in three core vectors: efficiency, expansion, and transformation.” – Andrew Wilson, CEO of EA.

Speaking of efficiency, EA CEO Andrew Wilson noted that the company’s business involves an incredibly iterative development cycle: pressing a button in a game doesn’t just need to trigger the desired effect on screen, it also needs to be fun. As a result, game development is very time-consuming, and new games still take a couple of years to fully develop (Morgan Stanley, 2024).

What is AI-Generated Sound and Voice Over? 

At its core, AI-generated sound and voiceover technology relies on deep learning models, trained on vast datasets, that synthesize speech and sound effects. These models can be taught to replicate human voices, create new ones, or generate contextual soundscapes in real time. The results can then be applied to in-game NPCs, background audio such as stadium chants, or interactions with other users (Replica Studios, 2024). AI voice models are particularly powerful because they scale endlessly – enabling developers to generate unique character voices or sound environments that evolve with player actions and game progression. This technology is already being adopted by various companies, making sound design faster, cheaper, and more flexible than ever before (Filipović, 2023). I hope to see it implemented on a larger scale soon, as it brings a lot of new functionality and increased accessibility for all users.

A feasibility study across all of EA’s game development processes showed that about 60% of them have “high feasibility to be positively impacted by generative AI.” (Morgan Stanley, 2024).

To give a more concrete example: in the past, building a stadium for a sports game such as FIFA took six months. In the past year, it took six weeks, and it is not unreasonable to think that very soon it will take less than six days. Wilson believes that extending this concept to every aspect of development could drive meaningful efficiency for EA (Nelva, 2024).

EA Sports uses many advanced technologies powered by revolutionary AI systems such as HyperMotion and AI Mimic (Molina, 2024).

Companies Leading the Way 

  1. Replica Studios

Replica Studios is at the forefront of AI-generated voice overs, offering both text-to-speech and speech-to-speech solutions in multiple languages. They provide game developers with a library of AI voices that can deliver dialogue, narration, and character voices at scale. Replica’s platform allows developers to generate voice lines in minutes, a massive leap in efficiency compared to traditional voiceover production. They recently introduced a very interesting plug-in for Unreal Engine called Smart NPC, which lets you talk to any NPC through your microphone and receive a custom dialogue response in real time. It adjusts the emotional tone and intensity of the response and adds NPC facial expressions based on in-game events (Replica Studios, 2024). For games like FIFA, where commentators could dynamically react to player performance or key moments, this kind of AI-driven personalization could significantly elevate player engagement.
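
To make the idea behind such a plug-in more concrete, here is a minimal, hypothetical sketch of the interaction loop a Smart-NPC-style feature implies: capture the player’s voice, transcribe it, generate a reply, and speak it back with an emotion label. All function names below (transcribe, generate_reply, speak) are placeholders for a real ASR model, a dialogue backend, and a TTS engine – this is not Replica’s actual API.

```python
# Hypothetical NPC voice-interaction loop, inspired by the Smart NPC concept.
# transcribe(), generate_reply(), and speak() are stand-ins, not real API calls.
from dataclasses import dataclass

@dataclass
class GameContext:
    location: str          # where the conversation happens
    recent_event: str      # e.g. "player just won the derby"
    npc_mood: str          # drives emotional tone and facial animation

def transcribe(audio: bytes) -> str:
    """Placeholder for a speech-to-text model."""
    return "What did you think of that last match?"

def generate_reply(player_line: str, ctx: GameContext) -> tuple[str, str]:
    """Placeholder dialogue step: returns (reply text, emotion label)."""
    if "match" in player_line.lower() and "won" in ctx.recent_event:
        return "Won it in style, didn't you? The whole city is talking about it!", "excited"
    return "Not much happens around here, honestly.", "neutral"

def speak(text: str, emotion: str) -> bytes:
    """Placeholder TTS call; a real engine would take emotion/prosody hints."""
    return f"[{emotion}] {text}".encode()

def npc_turn(mic_audio: bytes, ctx: GameContext) -> bytes:
    player_line = transcribe(mic_audio)                 # 1. hear the player
    reply, emotion = generate_reply(player_line, ctx)   # 2. decide what to say and how
    return speak(reply, emotion)                        # 3. voice it (and drive the face)

if __name__ == "__main__":
    ctx = GameContext("stadium tunnel", "player just won the derby", "cheerful")
    print(npc_turn(b"...raw microphone audio...", ctx).decode())
```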

Voice Lab: Describe your voice, or the role or character you would like the AI to portray, and dream it into existence with Voice Lab, a prompt-to-voice design feature which can create a blend of up to 5 Replica voices which all contribute their unique accents, prosody, and other vocal features to the resulting new voice (Replica Studios, 2024).

  2. Eleven Labs

Eleven Labs specializes in deep-learning models that generate highly realistic and natural-sounding speech. Their ability to clone voices and synthesize speech in multiple languages has the potential to revolutionize localization for global games (Eleven Labs, 2024). For a company like EA, which regularly releases games in dozens of languages, Eleven Labs could dramatically reduce the time and costs associated with localization. Moreover, Eleven Labs’ technology could allow players to customize their in-game avatars’ voices, adding a layer of personalization and immersion that would further enhance the user experience across EA’s game portfolio.

(Eleven Labs, 2023)

  3. Sonantic

Sonantic, recently acquired by Spotify, focuses on generating emotionally modulated voice overs. Their AI voice models can express a range of emotions, from subtle sadness to intense excitement. This level of emotional depth is essential for creating believable character interactions and narratives in games (Virtucio, 2023). For a game like FIFA, Sonantic’s technology could enable commentators or characters to convey emotions based on real-time match scenarios – turning a victory celebration or a last-minute goal into a more compelling experience for players.

Spotify to Acquire Sonantic, an AI Voice Platform (Spotify, 2022).

How does it work?

AI-generated voices and sounds work by using models that have been trained to understand and mimic human speech, or to create sounds based on learned patterns. Here’s how it works step by step, using text-to-voice and in-game-action-to-sound as examples:

1. Training the AI Model

First, developers “teach” an AI how human voices sound. This is done by feeding the AI large amounts of voice recordings paired with the text spoken in those recordings. The AI learns to recognize patterns like how certain words are pronounced, different voice tones, and even emotions. This process is called machine learning. For sound effects, the same concept applies – the AI learns how different sounds (like footsteps, explosions, or wind) should sound based on data it’s trained on (Eleven Labs, 2023) (PlayHT, 2024).
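
As a rough illustration of what such training data looks like, the snippet below pairs audio clips with their transcripts and an emotion label – the kind of (recording, text) examples a speech model learns from. The file names, labels, and manifest format are invented for this sketch.

```python
# Illustrative only: the kind of paired examples a voice model is trained on.
# File names, labels, and the manifest layout are invented for this sketch.
import csv
from dataclasses import dataclass

@dataclass
class VoiceSample:
    audio_path: str   # e.g. a .wav clip of a commentator line
    transcript: str   # the exact words spoken in the clip
    emotion: str      # a label the model can learn to reproduce ("excited", "calm", ...)

def load_dataset(manifest_csv: str) -> list[VoiceSample]:
    """Read a manifest of (audio, transcript, emotion) rows into training samples."""
    samples = []
    with open(manifest_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            samples.append(VoiceSample(row["audio"], row["text"], row["emotion"]))
    return samples

# A manifest row might look like:
# audio,text,emotion
# clips/goal_90min.wav,"What a strike! Absolutely unstoppable!",excited
# clips/kickoff.wav,"And we are under way here at the final.",calm
```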

2. Turning Text or Actions into Sound

Once the AI is trained, here’s what happens when it needs to turn text or an in-game action into sound or voice (a toy sketch of these three steps follows the list):

  • Step 1 – Input: The game sends the AI a command based on what’s happening. This command can be a piece of text (for voiceovers) or an in-game action (like a character running). When FIFA produces the text, the AI system breaks it down into phonetic components. It then synthesizes these components, piecing them together to form words and sentences (Eleven Labs, 2023).
  • Step 2 – Processing: The AI processes this input. It uses its training to understand how the text should be pronounced or what sound should be made based on the action. For example, it knows how to emphasize excitement when saying “Goal!” in a sports game. To enhance realism, some advanced AI voice generators incorporate techniques like Natural Language Processing (NLP). NLP helps the system understand and interpret the nuances of language, allowing it to modify its speech output accordingly. This includes adjusting for sarcasm, questions, or excitement, making the synthetic voice sound more natural and human-like (Eleven Labs, 2023).
  • Step 3 – Sound Generation: Using this understanding, the AI generates the actual sound. For text, it creates a voiceover that sounds natural, as if a human said it. For in-game actions, it produces the appropriate sound effect, like the crowd cheering or the sound of the ball hitting the net (PlayHT, 2024). 
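
A toy version of these three steps might look like the following. The “processing” stage here uses simple punctuation and keyword heuristics standing in for real NLP, and the “generation” stage returns a placeholder waveform instead of calling an actual synthesizer – it only illustrates the shape of the pipeline.

```python
# Toy pipeline mirroring the three steps above: input -> processing -> sound generation.
# The heuristics and the placeholder waveform are illustrative, not a real TTS engine.
import math

def process(text: str) -> dict:
    """Step 2: crude 'NLP' - decide how the line should be delivered."""
    style = {"excitement": 0.2, "is_question": text.strip().endswith("?")}
    if "!" in text or "goal" in text.lower():
        style["excitement"] = 0.9
    return style

def generate_sound(text: str, style: dict, sample_rate: int = 22050) -> list[float]:
    """Step 3: placeholder synthesis - returns a waveform-shaped list of floats.
    A real engine would render speech; here, more excitement means louder and faster."""
    duration_s = 0.06 * len(text) * (0.8 if style["excitement"] > 0.5 else 1.0)
    amplitude = 0.3 + 0.5 * style["excitement"]
    n = int(duration_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * 220 * t / sample_rate) for t in range(n)]

def text_to_audio(text: str) -> list[float]:
    """Step 1 is the input itself: a commentary line produced by the game."""
    return generate_sound(text, process(text))

samples = text_to_audio("GOAL! An absolute screamer in the 90th minute!")
print(f"Generated {len(samples)} audio samples")
```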

3. Real-Time Adjustments 

One of the cool things about AI-generated sounds is that they can adjust in real time. In a game, AI can react immediately to what’s happening: 

– If a player scores a goal, the AI might dynamically adjust the commentator’s tone to match the excitement of the moment.

– For sound effects, the AI might adjust the intensity or volume of crowd noises based on the importance of the goal. 

In the end, all of this happens quickly and seamlessly in the background. The AI takes text or actions, interprets what they mean, and instantly turns them into sound or voiceover, creating a more immersive experience for the player.
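
As an equally hypothetical sketch, real-time adjustment can be as simple as re-computing a couple of audio parameters every time an event arrives – here, a crowd-noise gain and a commentary “intensity” derived from how important a goal is. The thresholds and the dB mapping are made up for illustration.

```python
# Hypothetical real-time adjustment: map an event's importance to audio parameters.
def adjust_audio(minute: int, score_diff_before: int, is_final: bool) -> dict:
    """Return mixing/delivery parameters for a goal just scored."""
    importance = 0.4
    if abs(score_diff_before) <= 1:      # goal changes or levels the result
        importance += 0.3
    if minute >= 85:                     # late drama
        importance += 0.2
    if is_final:                         # cup-final stakes
        importance += 0.1
    importance = min(importance, 1.0)
    return {
        "crowd_gain_db": -12 + 12 * importance,   # louder crowd for bigger moments
        "commentary_intensity": importance,       # fed to the voice engine's settings
    }

print(adjust_audio(minute=90, score_diff_before=0, is_final=True))
# -> {'crowd_gain_db': 0.0, 'commentary_intensity': 1.0}
```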

What are the differences between text-to-speech and AI voice generation?

| Feature | Text-to-Speech (TTS) | AI Voice Generation |
|---|---|---|
| Technology | Uses synthesized speech from text using basic digital voices. | Employs advanced machine learning algorithms to generate more natural-sounding voices. |
| Customization | Limited to pre-set voices and basic adjustments in pitch and speed. | Offers extensive customization, including voice cloning and nuanced emotional tones. |
| Realism | Often sounds robotic and less natural. | Produces highly realistic and human-like speech. |
| Application | Widely used for reading text aloud in a straightforward manner. | Used for creating dynamic and engaging audio content, mimicking human speech patterns more accurately. |
| Flexibility | Generally offers a one-size-fits-all approach. | Allows for creating unique voices tailored to specific needs or characters. |
| User Interaction | Primarily unidirectional; reads text as-is. | Can interact more fluidly in conversational AI, adapting tone and style contextually. |
| Development | Based on simpler speech synthesis technology. | Involves complex AI models like neural networks for voice generation. |
| Use Cases | Useful in accessibility tools, GPS navigation, and basic voice assistants. | Ideal for high-quality voiceovers, virtual assistants, gaming, and personalized customer interactions. |
(Eleven Labs, 2023)

How EA and FIFA Could Leverage AI-Generated Voiceovers: Case Study

EA Sports is already a giant in the gaming industry, with its FIFA franchise being one of the best-selling games of all time. By integrating AI-generated voiceovers and sounds, EA could unlock several strategic advantages (Nelva, 2024).

EA Sports FC 24 has hundreds of run cycles for its players built by generative AI (Nelva, 2024).

  1. Enhanced Player Engagement with Dynamic Voiceovers

One of the most exciting applications of AI-generated voices for EA would be the introduction of dynamic, real-time voice overs in FIFA. Currently, FIFA’s commentators are pre-recorded, with a fixed number of responses to match events. With AI-generated voice overs, commentators could react dynamically to player actions, offering new commentary each time a similar event occurs. For example, in a high-stakes match, the AI commentator could offer unique insights based on the players’ performance history or their current standing in a tournament. This level of customization could lead to increased immersion, keeping players more engaged and extending the life cycle of each game. Replayability would also improve as players receive fresh commentary in every match.

  2. Cost-Effective Localization and Multi-Language Support

FIFA games are released in numerous languages, requiring extensive voice recording for each localized version. With tools like Eleven Labs, EA could significantly reduce the cost and time associated with this process. AI voice synthesis could generate high-quality localized commentary and dialogue for global markets more quickly and at a fraction of the traditional cost. This scalability would also allow EA to release more language options simultaneously, expanding its market reach and improving its presence in regions that currently have limited localization support.

  3. Monetization Opportunities Through Custom Voice Packs

AI-generated voices open up a new avenue for monetization through downloadable content (DLC). EA could sell custom voice packs – allowing players to download unique commentators, player voice overs, or even region-specific packs that provide a more personalized gaming experience. For example, fans could purchase special voice packs for their favorite leagues or teams, or even retro-style commentators from past FIFA games. This type of microtransaction could drive revenue while providing additional value to players.

Risks and Ethical Considerations 

Despite the clear advantages, implementing AI-generated voices and sounds is not without risks. One concern is the potential displacement of voice actors, as AI-generated voices reduce the need for human talent. Companies like EA will need to balance innovation with the preservation of creative jobs, potentially by using AI voices as a supplement to human actors rather than a full replacement. Another ethical concern is the misuse of AI-generated voices, particularly when it comes to voice cloning. Companies must ensure that AI models are used transparently and ethically to avoid issues like deepfakes or unauthorized voice replication. In the case of EA, clear policies on voice data and AI usage will be necessary to maintain player trust.

Strategic Implications for EA 

By adopting AI-generated voice overs, EA could further solidify its leadership in the gaming industry while enhancing its ability to innovate and scale. Key strategic benefits include:
 

  • Faster Development Cycles: With AI handling more repetitive voiceover tasks, EA could release games and updates more quickly, maintaining its competitive edge. 
  • Expanded Market Reach: Efficient localization would allow EA to target more global markets, increasing the international appeal of its FIFA franchise. 
  • New Revenue Streams: Custom voice packs and AI-enhanced features could create additional microtransaction opportunities and further drive this classic’s popularity, ensuring EA’s continued financial success.

Conclusion

In this new era of game audio, AI-generated sounds and voice overs represent a major step forward in how games are developed and experienced. Companies like Replica Studios and Eleven Labs are pushing the boundaries of what’s possible in the video game world, and large developers like EA stand to benefit immensely from these advancements. By embracing this technology, EA can not only improve its games but also shape the future of audio in gaming – creating richer, more personalized, and more immersive experiences for players all around the world. I’m excited to see how such concepts become reality in the near future.

References

Bormann, D., & Greitemeyer, T. (2015). Immersed in Virtual Worlds and Minds. Social Psychological and Personality Science. https://www.semanticscholar.org/paper/Immersed-in-Virtual-Worlds-and-Minds-Bormann-Greitemeyer/cd705ccbcb2d3316e8645ec05bf08e22974fbbce

Cesário, V., Ribeiro, M., & Coelho, A. (2023). Design Recommendations for Improving Immersion in Role-Playing Video Games. A Focus on Storytelling and Localisation. Interaction Design & Architecture(s) Journal. https://doi.org/10.55612/s-5002-058-009

Drake, J. (2019, August 11). The 10 Best Soundtracks From The 8-Bit Generation. TheGamer. Retrieved September 19, 2024, from https://www.thegamer.com/best-soundtracks-retro-games-8-bit/

Eleven Labs. (2023, January 11). This Voice Doesn’t Exist – Generative Voice AI. ElevenLabs. Retrieved September 19, 2024, from https://elevenlabs.io/blog/enter-the-new-year-with-a-bang

Eleven Labs. (2023, December 3). What is an AI voice generator and how does it work? ElevenLabs. https://elevenlabs.io/blog/what-is-an-ai-voice-generator

Eleven Labs. (2024). AI Dubbing: Free Online Video Translator. ElevenLabs. https://elevenlabs.io/dubbing

Filipović, A. (2023). THE ROLE OF ARTIFICIAL INTELLIGENCE IN VIDEO GAME DEVELOPMENT. Kultura Polisa. https://www.ceeol.com/search/article-detail?id=1201751

Gallacher, N. (2013). Game audio — an investigation into the effect of audio on player immersion. The Computer Games Journal. https://link.springer.com/article/10.1007/BF03392342

Molina, D. (2024, March 11). The Dawning of a New Era: AI Takes the Field in EA Sports FC. FIFA Infinity. Retrieved September 19, 2024, from https://www.fifa-infinity.com/ea-sports-fc/the-dawning-of-a-new-era-ai-takes-the-field-in-ea-sports-fc/

Morgan Stanley. (2024). Tech, Media & Telecom 2024: The State of Generative AI. Morgan Stanley. https://www.morganstanley.com/Themes/tech-media-telecom-trends-insights-outlook

Nelva, G. (2024). EA Hopes to Use Generative AI to Drive Monetization and Make Development 30% More Efficient. TechRaptor. https://techraptor.net/gaming/news/ea-hopes-to-use-generative-ai-to-drive-more-monetization-and-make-development-30-more

PlayHT. (2024). What is an AI Voice Generator? PlayHT. https://play.ht/blog/what-is-an-ai-voice-generator/#:~:text=AI%20voice%20generators%20convert%20text,structure%20and%20generate%20corresponding%20audio.

Replica Studios. (2024). Smart NPCs | Ethical AI. Replica Studios. https://www.replicastudios.com/products/smart-npcs

Replica Studios. (2024). Voice Lab. Replica Studios. https://www.replicastudios.com/products/voice-lab

Spotify. (2022, June 13). Spotify to Acquire Sonantic, an AI Voice Platform — Spotify. Spotify Newsroom. Retrieved September 19, 2024, from https://newsroom.spotify.com/2022-06-13/spotify-to-acquire-sonantic-an-ai-voice-platform/

Stingel-Voigt, Y. (2020). Functions and Meanings of Vocal Sound in Video Games. Journal of Sound and Music in Games. https://online.ucpress.edu/jsmg/article-abstract/1/2/25/106828/Functions-and-Meanings-of-Vocal-Sound-in-Video?redirectedFrom=fulltext

Virtucio, M. (2023, January 20). Sonantic AI Voice Generator: Detailed. Softlist.io. Retrieved September 19, 2024, from https://www.softlist.io/sonantic-ai-voice-generator-detailed/


Thirsty AI

16 September 2024


Artificial intelligence (AI) is revolutionizing our world, from helping us choose what to cook for dinner to enabling advanced data analysis. For us students, AI has become part of the academic toolkit, whether for writing assistance, article and lecture summaries, or access to more personalized learning resources. However, what many don’t realize is that our growing reliance on AI comes at a hidden cost – one that is largely invisible yet increasingly significant: water consumption. AI’s environmental impact is usually discussed in terms of energy usage and carbon emissions, but few of us realize that water plays a major role in keeping AI running.

Where does the water go?

When thinking of AI’s environmental cost, water might not be the first thing that comes to mind. However, it plays a critical role in both the direct and indirect operations of AI systems, primarily through data centers, as well as in processes throughout the supply chain such as the production of the semiconductors and microchips used in AI models. Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy-intensive, requiring massive server farms to train and run these powerful programs (DeGeurin et al., 2023).

1. Direct Water Usage:

Data centers – the backbone of AI – require immense cooling systems to prevent overheating. These centers house thousands of servers that generate tremendous amounts of heat while running (Clancy, 2022). Water is commonly used in cooling systems to regulate the temperature of these servers, as the optimal range to prevent the equipment from malfunctioning is typically between 10 and 25 degrees Celsius (DeGeurin et al., 2023). Cooling mechanisms vary, but one of the most popular methods is evaporative cooling, which directly consumes significant quantities of water (Digital Realty, 2023). Researchers estimate that around a gallon of water is consumed for every kilowatt-hour used in an average data center (Farfan & Lohrmann, 2023). Not just any water will do, either: data centers draw on clean, freshwater sources to avoid the corrosion and bacterial growth that can come with seawater (DeGeurin et al., 2023).

(Li et al., 2023)

2. Indirect Water Usage:

The electricity that powers AI also has a water footprint, especially when it comes from thermoelectric power plants, which rely on water for steam generation and cooling (Petrakopoulou, 2021; Torcellini et al., 2023). Even when data centers run on renewable energy, the construction and operation of the renewable infrastructure can still have a water impact. And all of that comes on top of other, often omitted factors such as the water embodied in supply chains (e.g., water used for chip manufacturing) (Li et al., 2023). To illustrate: an average chip manufacturing facility today can use up to 10 million gallons of ultrapure water per day – as much water as is used by 33,000 US households daily (James, 2024). Need more examples? Globally, semiconductor factories already consume as much water as Hong Kong, a city of 7.5 million (Robinson, 2024).

(James, 2024)

How thirsty is AI?

Just how much water does AI consume? The numbers are staggering: in 2021, Google’s US data centers alone consumed 16.3 billion liters of water, including 12.7 billion liters of freshwater (Clancy, 2022; Li et al., 2023) – roughly the annual consumption of a mid-sized city. According to data published in 2023, a single conversation with ChatGPT (spanning 20 to 50 interactions) consumes the equivalent of a 500 ml bottle of water (DeGeurin et al., 2023). While this may not seem significant on an individual scale, ChatGPT currently has over 200 million active users, many engaging in multiple conversations daily (Singh, 2024). GPT-3, an AI model developed by OpenAI, reportedly consumed approximately 700,000 liters of water during its training phase alone (Li et al., 2023). Scaled up to all functioning and developing AI models and their data centers, this amounts to billions of liters of water consumed for cooling alone. However, not all AI models are equal in their water demands. Smaller models require less computational power, and thus less water for cooling, while larger, more advanced models like GPT-4 demand significantly more resources. And of course, as AI models become more sophisticated and more widely used, they also become more resource-intensive, in terms of both energy and water.

(Cruchet & MacDiarmid, 2023)
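
To put these figures in perspective, here is a quick back-of-envelope calculation based only on the numbers cited above (roughly 500 ml per 20–50 interaction conversation, 200 million users, about a gallon of cooling water per kWh). The usage assumption of one such conversation per user per day is mine, purely for illustration.

```python
# Back-of-envelope estimate using the figures cited in this post.
# Assumption for illustration only: each active user has one 20-50 interaction
# conversation per day. Real usage patterns differ.
LITERS_PER_CONVERSATION = 0.5   # ~500 ml per 20-50 interactions (DeGeurin et al., 2023)
ACTIVE_USERS = 200_000_000      # ChatGPT active users (Singh, 2024)
LITERS_PER_KWH = 3.785          # ~1 US gallon of water per kWh (Farfan & Lohrmann, 2023)

daily_liters = LITERS_PER_CONVERSATION * ACTIVE_USERS
yearly_liters = daily_liters * 365

print(f"Water per day:  {daily_liters / 1e6:,.0f} million liters")
print(f"Water per year: {yearly_liters / 1e9:,.1f} billion liters")
# -> Water per day:  100 million liters
# -> Water per year: 36.5 billion liters

# Equivalently, via the gallon-per-kWh rule of thumb, a data center drawing
# 1 MWh of electricity consumes roughly:
print(f"{1000 * LITERS_PER_KWH:,.0f} liters of cooling water per MWh")
# -> 3,785 liters of cooling water per MWh
```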

AI’s Water Crisis: Implications 

The high water consumption of AI systems and data centers has significant environmental and societal consequences, particularly in water-scarce regions and less developed countries. 

  1. Escalating Water Scarcity: In regions where water is already scarce, data centers add to the problem. A clear example is Google’s data center in South Carolina, which raised alarms over its massive water withdrawals in an area often hit by droughts (Moss, 2017). As AI’s growth drives up demand for these centers, we’re likely to see more conflicts between tech giants and local communities fighting for the same limited resources.
  2. Strain on Ecosystems: Data centers don’t just impact human communities; they affect nature too. When large amounts of water are diverted for industrial use, natural ecosystems suffer. Less water means habitat loss for animals and severe disruptions to the local environment, throwing entire ecosystems out of balance (Balova & Kolbas, 2023).
  3. Widening the Digital Divide: The high water and energy demands of AI data centers often mean they are built in regions with abundant resources, leaving less developed areas at a disadvantage. Such centers are typically sited in resource-rich regions close to users, to reduce latency and cut data transmission costs. That makes sense from a business perspective – faster data, lower costs. But what happens to the areas that lack water, energy, and infrastructure? They get left behind, further widening the existing digital divide.

Drying Out AI: Smart Solutions for Water Use

While the current water consumption rates may seem unsustainable, there are solutions – though their plausibility and long-term impact vary. 

1. Water-Efficient Cooling Technologies: One promising solution is the adoption of more water-efficient cooling technologies. Some companies are experimenting with air cooling or liquid cooling systems that don’t rely on water. For example, Google’s data center in Finland introduced the first ever system using cold seawater for cooling, drastically reducing freshwater consumption (Miller, 2011). However, not all data centers can be located near natural water sources that can be sustainably tapped. 

2. Renewable Energy Transitions: While much of AI’s water footprint comes from electricity generation, transitioning data centers to renewable energy sources like wind and solar could reduce the indirect water use associated with thermoelectric plants (Arts, 2024). 

(Lenovo StoryHub, 2024)

3. Transparency and Accountability: One of the most plausible and immediately impactful steps is for tech companies to be more transparent about their water usage. Publicly reporting on their water consumption and environmental impact could put pressure on companies to adopt more sustainable practices. Microsoft and Google have already pledged to become “water positive” by 2030, meaning they aim to replenish more water than they consume (Clancy, 2021). While this goal is ambitious, its success will depend on innovations in both technology and infrastructure.

Other specialists have proposed relocating data centers to Nordic countries like Iceland or Sweden in a bid to use ambient cool air to minimize the carbon footprint, a technique called “free cooling” (Monserrate, 2022). However, network latency issues make this dream of a haven for green data centers largely untenable for meeting the computing and data storage demands of the wider world.

Will AI ever be sustainable?

AI’s water footprint is a pressing environmental issue that must be addressed alongside energy and carbon concerns. Though constant advancements are being made, there is still much to explore regarding AI’s water consumption. Further research is needed in areas such as:

  • investigation of the environmental trade-offs of AI usage;
  • exploration of alternative cooling methods for data centers;
  • assessment of the feasibility of building AI systems that are less resource-intensive;
  • analysis of the scalability of current solutions like seawater cooling or closed-loop cooling systems,

to ensure the long-term sustainability of AI technologies.

As students and future innovators, understanding these invisible costs is the first step toward making informed and conscious choices. Whether by adjusting our daily digital habits, supporting companies with sustainable practices, or advocating for responsible AI development, we all have a role to play in ensuring that AI can thrive without draining the planet’s resources. By demanding more transparency from the tech industry and pushing for the adoption of more water-efficient technologies, we can help steer the future of AI toward a more sustainable and equitable path.

References

Arts, M. (2024). Designing green energy data centres. Royal HaskoningDHV. https://www.royalhaskoningdhv.com/en/newsroom/blogs/2023/designing-green-energy-data-centres

Balova, A., & Kolbas, N. (2023, August 20). Biodiversity and Data Centers: What’s the connection? Ramboll. https://www.ramboll.com/galago/biodiversity-and-data-centers-what-s-the-connection

Clancy, H. (2021). Diving into ‘water positive’ pledges by Facebook, Google. Trellis. https://trellis.net/article/diving-water-positive-pledges-facebook-google/

Clancy, H. (2022, November 22). Sip or guzzle? Here’s how Google’s data centers use water – Trellis. GreenBiz. Retrieved September 15, 2024, from https://trellis.net/article/sip-or-guzzle-heres-how-googles-data-centers-use-water/

Cruchet, N., & MacDiarmid, A. (2023, November 21). Datacenter Water Usage: Where Does It All Go? Submer. Retrieved September 16, 2024, from https://submer.com/blog/datacenter-water-usage/

DeGeurin, M., Ropek, L., Gault, M., Feathers, T., & Barr, K. (2023). ‘Thirsty’ AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor’s Cooling Tower, Study Finds. Gizmodo. https://gizmodo.com/chatgpt-ai-water-185000-gallons-training-nuclear-1850324249

Digital Realty. (2023). The Future of Data Center Cooling: Innovations for Sustainability. Digital Realty. https://www.digitalrealty.com/resources/articles/future-of-data-center-cooling

Farfan, J., & Lohrmann, A. (2023). Gone with the clouds: Estimating the electricity and water footprint of digital data services in Europe. Energy Conversion and Management. https://www.sciencedirect.com/science/article/pii/S019689042300571X

James, K. (2024, July 19). Semiconductor manufacturing and big tech’s water challenge | World Economic Forum. The World Economic Forum. Retrieved September 16, 2024, from https://www.weforum.org/agenda/2024/07/the-water-challenge-for-semiconductor-manufacturing-and-big-tech-what-needs-to-be-done/

Ziegler, M. T. (2024, March). The world’s AI generators: Rethinking water usage in data centers to build a more sustainable future. Lenovo StoryHub. https://news.lenovo.com/data-centers-worlds-ai-generators-water-usage/

Li, P., Ren, S., Yang, J., & Islam, M. (2023, October 29). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv. http://arxiv.org/pdf/2304.03271

Miller, R. (2011). Google Using Sea Water to Cool Finland Project – Google Using Sea Water to Cool Finland Project. Data Center Knowledge. https://www.datacenterknowledge.com/hyperscalers/google-using-sea-water-to-cool-finland-project

Monserrate, S. G. (2022, February 14). The staggering ecological impacts of computation and the cloud. MIT Schwarzman College of Computing. Retrieved September 16, 2024, from https://computing.mit.edu/news/the-staggering-ecological-impacts-of-computation-and-the-cloud/

Moss, S. (2017). Google’s plan to use aquifer for cooling in South Carolina raises concerns. Data Center Dynamics. https://www.datacenterdynamics.com/en/news/googles-plan-to-use-aquifer-for-cooling-in-south-carolina-raises-concerns/

Petrakopoulou, F. (2021). Defining the cost of water impact for thermoelectric power generation. Energy Reports. https://www.sciencedirect.com/science/article/pii/S2352484721002158

Robinson, D. (2024, February 29). Growing water use a concern for chip industry and AI models. The Register. Retrieved September 16, 2024, from https://www.theregister.com/2024/02/29/growing_water_use_ai_semis_concern/

Singh, S. (2024). ChatGPT Statistics (SEP. 2024) – 200 Million Active Users. DemandSage. Retrieved September 15, 2024, from https://www.demandsage.com/chatgpt-statistics/

Torcellini, P., Long, N., & Judkoff, R. (2023). Consumptive Water Use for U.S. Power Production. NREL. https://www.nrel.gov/docs/fy04osti/33905.pdf
