Reflections from the Age of Co-Creation: My Experience with Generative AI
10 October 2025
Generative AI has quietly shifted from being a futuristic concept to a daily companion. Over the past year, I’ve used tools like ChatGPT for writing, Midjourney for visuals, and Runway for short video concepts. At first, these tools felt like “assistants.” Now, they often feel more like co-creators.
I first realized their potential while creating marketing content for my previous job. That is where I was taught, and learned firsthand, how to use AI daily to optimize my workflow. What used to take hours (writing ad copy, structuring blog posts and captions, experimenting with brand messaging) could suddenly be done in minutes. Since the company also covered the premium subscription, I could make full use of ChatGPT’s advanced features. I wasn’t just speeding up my work; I was expanding my creative and critical thinking. It offered multiple directions at once, forcing me to reflect on why I preferred one version over another. Instead of replacing creativity, it amplified it, giving me a creative mirror to think through ideas faster.
Yet every marketer who uses chat agents like ChatGPT has likely noticed the same limitation: a narrowness of perspective. The model reflects what is statistically common, not what is contextually insightful. When generating campaign ideas or headlines, it tends to default to safe, universal tropes rather than niche or counterintuitive angles that truly capture attention. In other words, AI can reproduce creativity, but it struggles to originate it. This limitation becomes especially visible when working in branding, where differentiation and emotional subtlety are key. ChatGPT might suggest a clever slogan, but it rarely surprises – it gives you what the internet already thinks is good. True creative insight still requires human judgment, intuition, and cultural sensitivity – elements that can’t be reduced to patterns of probability.
Then came visual tools. While I haven’t used AI image generators in my professional work, I have used AI for inspiration on certain visual elements and on the layout of a final project. For my previous blog post, for example, I described an idea (a split world between traditional aviation and virtual travel) and within seconds had a hyperrealistic visual that perfectly matched the concept. That moment captured what makes generative AI so transformative: it compresses imagination-to-reality time from hours to seconds.
Again, it’s not without flaws. AI often delivers polished but “safe” answers. Creativity, by nature, thrives on unpredictability and imperfection, two things AI still struggles with. I sometimes notice how text outputs sound formulaic, or how visuals look too idealized, repetitive, and almost too perfect, lacking the human quirks that make content memorable. There’s also a growing concern about over-dependence: when the tool becomes too good, do we stop exploring ideas ourselves?
One improvement stands out to me, especially after writing this post: a “co-creation mode”, an interface where the AI explains why it made certain creative choices and lets users steer tone, emotion, or intent interactively, almost like a conversation with a creative partner rather than a tool.
Generative AI has taught me that creativity isn’t dying – it’s evolving. The next leap won’t be about machines creating for us, but about humans learning to create with them.
So I’ll end with a question for you: when your next big idea comes along, will you brainstorm it alone, or with an AI sitting right beside you? (I suspect it will be the latter.)
From print("Hello") to Data Analysis: My Thesis with AI-Assisted Coding
9 October 2025
When I started my thesis, I barely remembered how Python worked. I knew what a dataset was and how to print a line or write a simple loop, but that was about it. The idea of building an entire data-science workflow seemed far beyond what I could do on my own. Yet, a few months later, I had written a full pipeline to analyze hybrid work patterns using behavioral logs, location data, and daily surveys. What made that possible was Generative AI.
ChatGPT quickly became my silent collaborator. Whenever I got stuck, I simply described what I needed: filtering AWT data by time, merging JSON files by date, or running a Mann-Whitney U-test. Within seconds, it generated structured and readable code that actually worked. It helped me clean and merge datasets, calculate metrics like active work time and task switches, and even combine GPS data with behavioral data to label each day as home or office. Suddenly, something that felt completely out of reach became manageable.
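The kind of glue code ChatGPT generated for me can be sketched like this. The column names, coordinates, and home/office labeling rule below are illustrative stand-ins, not my actual pipeline:

```python
# Minimal sketch of the thesis pipeline: merge daily behavioral logs with GPS
# summaries, label each day home/office, and compare active work time (AWT)
# with a Mann-Whitney U-test. All data and thresholds are made up.
import pandas as pd
from scipy.stats import mannwhitneyu

def label_location(lat, lon, home=(52.37, 4.90), tol=0.01):
    """Label a day 'home' if the median GPS fix falls near the home coordinates."""
    return "home" if abs(lat - home[0]) < tol and abs(lon - home[1]) < tol else "office"

# Behavioral logs: active work time in minutes per day
awt = pd.DataFrame({
    "date": ["2025-03-01", "2025-03-02", "2025-03-03", "2025-03-04"],
    "awt_minutes": [310, 415, 295, 430],
})
# Daily GPS summaries (median fix per day)
gps = pd.DataFrame({
    "date": ["2025-03-01", "2025-03-02", "2025-03-03", "2025-03-04"],
    "lat": [52.37, 51.92, 52.37, 51.92],
    "lon": [4.90, 4.48, 4.90, 4.48],
})

# Merge the two sources by date and label each day
df = awt.merge(gps, on="date")
df["location"] = [label_location(la, lo) for la, lo in zip(df["lat"], df["lon"])]

# Compare AWT between home days and office days
home = df.loc[df["location"] == "home", "awt_minutes"]
office = df.loc[df["location"] == "office", "awt_minutes"]
stat, p = mannwhitneyu(home, office, alternative="two-sided")
print(df[["date", "location", "awt_minutes"]])
print(f"U = {stat}, p = {p:.3f}")
```

With only four days of toy data the test is of course not meaningful; the point is the shape of the workflow, which is exactly the sort of thing the AI could produce in seconds from a plain-language description.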
Of course, the process was not perfect. I often had to debug the AI’s mistakes, rewrite lines of code, and verify that the logic fit my data. Sometimes ChatGPT used outdated Pandas functions or made assumptions that didn’t make sense. But those moments taught me more than any tutorial could. I started to understand not just what the code was doing but why it worked that way.
Looking back, Generative AI didn’t write my thesis for me; it expanded what I was capable of. It turned Python from something intimidating into a tool I could actually use. For me, that is the real power of AI. It doesn’t make you less of a coder; it makes you more confident to learn, experiment, and create things you once thought were impossible.
Should you be using AI tools at all?
7 October 2025
Over the last few days, I have read many people’s blog posts on here discussing their dislike towards the current AI landscape and reflecting on their own AI usage. The consensus seems to be that we currently use AI for too many things and that our brains are atrophying because of this. This has led me to reflect on my AI usage as well.
The first AI tool I ever used was ChatGPT, in 2022. Back then, I used it simply to find articles on a certain topic or to tell me where I could improve the writing in my bachelor’s thesis. But much has changed since then. Nowadays, I let ChatGPT create my workout plan for the gym, give me recipes when I want to cook something, create a marathon training plan for me, or even plan whole vacations.
All of these were things that I used to do by myself. Things that I researched on my own, because I liked to spend time understanding them. In a time before AI, I used to not only get a workout plan for the gym, but also the principles on how to plan your training. I used to not only get recipes, but also the knowledge on why a recipe works and how you can change it up.
So, I have come to the same conclusion as many people here: I want to use AI tools less. But if I stop using AI for processes that involve problem-solving and thinking, should I be using AI tools at all? I could always learn something more by doing it myself! But to me, like many others, that is not feasible; the productivity gains from using AI tools are simply too great to ignore.
Therefore, I have decided not to forbid myself from using AI tools altogether. While there are certain things that I do care about and want to do myself, such as finding new recipes or learning the principles of cooking, there are also things I do not care about at all, and that I would never have the time for anyway if I did not use AI tools.
You cannot put into everything the time and effort it would take to produce results that AI delivers in seconds. So maybe next time, just ask yourself: “Is this actually something I would like to know, or do I just need the result?”
Innovating Learning with Canv-AI: A GenAI Solution for Canvas LMS
17 October 2024
In today’s educational landscape, generative AI (GenAI) is reshaping how students and instructors interact with learning platforms. A promising example is Canv-AI, an AI-powered tool designed to integrate into the widely used Canvas Learning Management System (LMS). This tool aims to transform both student learning and faculty workload by leveraging advanced AI features to provide personalized, real-time support.
The integration of Canv-AI focuses on two primary groups: students and professors. For students, the key feature is a chatbot that can answer course-specific questions, provide personalized feedback, and generate practice quizzes or mock exams. These features are designed to enhance active learning, where students actively engage with course material, improving their understanding and retention. Instead of navigating dense course content alone, students have instant access to interactive support tailored to their learning needs.
Professors benefit from Canv-AI through a dashboard that tracks student performance and identifies areas where students struggle the most. This insight allows instructors to adjust their teaching strategies in real-time, offering targeted support without waiting for students to seek help. Additionally, the chatbot can help reduce the faculty workload by answering common questions about lecture notes or deadlines, allowing professors to focus more on core teaching tasks.
From a business perspective, Canv-AI aligns with Canvas’s existing subscription-based revenue model. It is offered as an add-on package, giving universities access to AI-driven tools for improving educational outcomes. The pricing strategy is competitive, with a projected $2,000 annual fee for universities already using Canvas. The integration also brings the potential for a significant return on investment, with an estimated 29.7% ROI after the first year. By attracting 15% of Canvas’s current university customers, Canv-AI is expected to generate over $700,000 in profit during its first year.
The technological backbone of Canv-AI relies on large language models (LLMs) and retrieval-augmented generation (RAG). These technologies allow the system to understand and respond to complex queries based on course materials, ensuring students receive relevant and accurate information. The system is designed to be scalable, using Amazon Web Services (AWS) to handle real-time AI interactions efficiently.
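The RAG flow described above can be sketched in miniature. Here, toy course snippets and simple word-overlap scoring stand in for real embeddings and an actual LLM call; every document name and snippet is illustrative, not part of the Canv-AI design:

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve the most relevant
# course snippet for a question, then build an augmented prompt for the LLM.
# A real system would use vector embeddings and call a hosted model.
course_material = {
    "syllabus": "The final exam covers weeks 1-8 and counts for 60 percent of the grade.",
    "lecture3": "Normalization reduces redundancy in relational database design.",
    "lecture5": "A primary key uniquely identifies each row in a table.",
}

def retrieve(question, docs, k=1):
    """Rank documents by word overlap with the question; return the top k texts."""
    q_words = set(question.lower().split())
    scored = sorted(docs.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(question, docs):
    """Augment the question with retrieved context before it reaches the LLM."""
    context = " ".join(retrieve(question, docs))
    return f"Context: {context}\nQuestion: {question}"

prompt = build_prompt("What does the final exam cover?", course_material)
print(prompt)
```

The grounding step is what keeps answers "relevant and accurate": the model is asked to respond from the retrieved course text rather than from its general training data.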
However, the integration of GenAI into educational systems does come with challenges. One concern is data security, especially the protection of student information. To address this, Canv-AI proposes the use of Role-Based Access Control (RBAC), ensuring that sensitive data is only accessible to authorized users. Another challenge is AI accuracy. To avoid misinformation, Canv-AI offers options for professors to review and customize the chatbot’s responses, ensuring alignment with course content.
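The essence of RBAC is a mapping from roles to permitted actions, checked on every request. The roles and permission names below are illustrative assumptions, not the actual Canv-AI access model:

```python
# Minimal role-based access control (RBAC) sketch: each role maps to a set of
# permitted actions, and every request is checked against that set.
PERMISSIONS = {
    "student": {"ask_chatbot", "view_own_grades"},
    "professor": {"ask_chatbot", "view_own_grades",
                  "view_class_analytics", "edit_chatbot_responses"},
    "admin": {"ask_chatbot", "view_own_grades", "view_class_analytics",
              "edit_chatbot_responses", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access only if the role's permission set contains the action."""
    return action in PERMISSIONS.get(role, set())

print(is_allowed("student", "view_class_analytics"))    # students cannot see class analytics
print(is_allowed("professor", "edit_chatbot_responses"))  # professors can curate responses
```

Unknown roles default to an empty permission set, so anything not explicitly granted is denied — the deny-by-default posture that keeps student data restricted to authorized users.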
In conclusion, Canv-AI offers a transformative solution for Canvas LMS by enhancing the learning experience for students and reducing the workload for professors. By integrating GenAI, Canvas can stay competitive in the educational technology market, delivering personalized, data-driven learning solutions. With the right safeguards in place, Canv-AI represents a promising step forward for digital education.
Authors: Team 50
John Albin Bergström (563470jb)
Oryna Malchenko (592143om)
Yasin Elkattan (593972yk)
Daniel Fejes (605931fd)
My Experience with GenAI: Improving Efficiency or Becoming Stupid?
9 October 2024
I work as a part-time data analyst at a software company, where I analyze sales data. My 9-to-5 mainly consists of writing code, specifically SQL in Google BigQuery, and creating dashboards in Power BI. I love using GenAI to help me write queries that would otherwise have taken me a long time to compose by myself. Additionally, I am a student and use GenAI to better understand course content or for inspiration on what to write about in assignments. Generally, I would say that GenAI benefits my life, as I can get more done in less time; however, from time to time I start to question whether I am not just becoming lazy.
I use GenAI on a daily (almost hourly) basis and rely on it in many ways. I mainly use ChatGPT, falling back on GPT-3.5 when GPT-4o’s free limit has been reached, and on Gemini when ChatGPT is down. Based on my own experience, I can say that being good at ‘AI prompting’ is a real skill in the field of data analytics, as it can drastically improve the efficiency with which you write queries, and therefore the speed with which you finish tasks. My manager recently even held a knowledge-sharing meeting in which he discussed best practices for data analysts interacting with ChatGPT. Using GenAI has become a real thing in the field of data analytics, and it is not something to be ashamed of.
However, I cannot help but sometimes be slightly embarrassed when I read back the questions I’ve asked ChatGPT. It seems that with any task that requires a little bit of effort or critical thinking, I automatically open the ChatGPT tab in my browser to help me come up with the right approach to solve the task at hand. I don’t even try to solve things by myself anymore, which makes me question: is this something to be desired?
The image presents an interaction with ChatGPT regarding the risk of using GenAI on human intelligence.
As explained by ChatGPT in the image, using GenAI indeed frees up more brain space for things that are important. If I can use less time to get more work done, this improves my work efficiency and also gives me more time for things that I find more valuable, such as spending time with family or friends. Right now, it is still too soon to be able to determine the impact that using GenAI will have on our own (human) intelligence. In the meantime, we should just continue using it for repetitive tasks that would normally take much of our valuable time and hope that it is not ChatGPT’s plan to stupidify humanity before it can take over the world.
My love-hate relationship with ChatGPT: Trust issues exposed
8 October 2024
In a world where life without technology is unimaginable, artificial intelligence like ChatGPT has become a big part of our everyday lives. My experience with this AI has turned into a complicated love-hate relationship filled with enthusiasm, confusion, and frustration.
Building trust
When I first started using ChatGPT, I was excited. It felt like having an assistant always near me, ready to help with my questions, schoolwork, recipes and even emails. It was even better than Google at some points. I could ask questions and get clear answers almost immediately. At first I thought it was fantastic and that I could rely on it for anything. The AI provided explanations, helped me brainstorm ideas and suggested solutions to problems I was struggling with. In those early days it felt like I was forming a solid partnership.
Doubts start to appear
However, the excitement did not last long. When I started asking more straightforward school-related questions, like “Is this right?”, to check if I was on the right track with my homework, I found myself getting different responses each time. I expected a confirmation, but instead I received answers that did not match what I was looking for.
I once intentionally gave a wrong answer to a question and asked if it was right, just to see how ChatGPT would react. When it told me my answer was right, I asked, “Are you sure?” It replied, “I apologize for the mistake. Let me provide the correct information.” That left me more confused than ever. How could it change the answer so quickly? It was hard to trust it when it seemed so inconsistent.
Growing trust issues
As I used it more often, my trust issues increased. I found myself repeating questions, hoping for a better answer. There were moments when I spent more time discussing things with ChatGPT than it would have taken to just do the task myself. I would get frustrated and start typing in all caps. I felt like I was talking to someone who did not even want to understand me. Instead of feeling helped, I felt like I was only arguing back and forth, and it was exhausting.
Realising that my frustration was only increasing, I knew I had to change the way I asked my questions. I started double-checking answers and using other sources to confirm information. I realized that while ChatGPT could be a helpful tool, it was important to verify what it gave me. I learned to ask more specific questions and provide additional context, which led to better results.
Lessons learned
I learned an important lesson about trust, not just with AI but in all areas of life. Trust takes time and clear communication. It is important to realise that even advanced technology can make mistakes. My relationship with ChatGPT changed from blind trust to a more cautious partnership. I learned to appreciate its strengths while acknowledging its limitations.
Looking back on my experience with ChatGPT, I realised how unstable technology can be. While my experience has had its conflicts, I still appreciate the value it brings to my learning process. Have you ever felt frustrated using AI? You are not alone, let’s share our struggles and find ways to make it work better for us!
Using GenAI as a learning tool: A personal reflection
1 October 2024
Generative AI, or more specifically ChatGPT, has become a key tool in my learning journey. I do not like to use ChatGPT to do the work for me, but rather as an instrument to make learning more fun and productive.
image from cegos.com
One of the advantages I have found is this tool’s ability to turn complex concepts into digestible, easy-to-understand explanations. Recently, when learning about UML diagrams, I was a bit confused about what exactly an object or a class was and how they differed. Instead of googling and spending time finding a trustworthy source, I could easily find the answer through ChatGPT. Of course, there are pitfalls: if you ask nuanced or vague questions, the tool can give you a different answer than you actually need, without you knowing it. So it is important to make sure the questions you ask are clear and feasible to pose to such a tool. As generative AI evolves further, a time will come when it can smoothly ask questions back and make sure it understands your question completely, reducing the risk of misinformation even more.
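The class-versus-object distinction that confused me is easiest to see in code: a class is the blueprint, and an object is a concrete instance built from it. A tiny illustrative Python example (the `Car` class is made up for this post):

```python
# A class defines what every instance has and can do; an object is one
# concrete instance created from that class.
class Car:
    def __init__(self, brand):
        self.brand = brand          # each object gets its own brand

    def describe(self):
        return f"A {self.brand} car"

my_car = Car("Toyota")              # object: one specific car
another = Car("Volvo")              # a second, independent object

print(my_car.describe())
print(another.describe())
```

In UML terms, `Car` is the class box in the diagram, while `my_car` and `another` are the objects that exist at runtime.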
Furthermore, when I am learning a new piece of software such as Notion or R, ChatGPT is the first platform I go to for simple functional questions such as “How do I create a progress bar?” or “How do you insert widgets?”. The answers have always turned out correct, and it is an easy way to find a solution.
Even existing platforms such as the famous Duolingo could gain a lot of value and productivity from generative AI, improving the language-learning experience for their users. Think of basic practice conversations that can continue from the partial information a learning student provides. And this is just one example: generative AI is not limited to text-based information. With generated pictures and videos on the rise, learning can be improved even further.
Data Privacy and GenAI
16 September 2024
When ChatGPT launched at the end of 2022, most data protection professionals had never heard of generative AI and were then certainly not aware of the potential dangers it could bring to data privacy (CEDPO AI Working Group, 2023). Now that AI platforms grow more sophisticated, so do the risks to our privacy, and therefore, it is important to discuss these risks and how to disarm them as effectively as possible.
GenAI systems are built on vast datasets, often including sensitive personal and organizational data. When users interact with these platforms, they unknowingly share information that could be stored, analyzed, and even potentially exposed to malicious actors (Torm, 2023). The AI itself could potentially reveal confidential information learned from previous interactions, leading to privacy breaches. This could have some major implications for the affected individuals or organizations if sensitive information is being shared without proper anonymization or consent.
Continuing on the topic of consent: Giving consent for generative AI platforms to use your data can be tricky, as most platforms provide vague and complex terms and conditions that are difficult for most users to fully understand. These agreements often include legal jargon and technological terminology, making it hard to know exactly what data is being collected, how it’s being used, or who it’s being shared with. This lack of transparency puts users at a disadvantage, as they may unknowingly grant permission for their personal information to be stored, analyzed, or even shared without fully understanding the risks involved.
To reduce the potential dangers of GenAI platforms, several key measures must be implemented. First, transparency should be prioritized by simplifying terms and conditions, making it easier for users to understand what data is being collected and how it is being used. Clear consent mechanisms should be enforced, requiring explicit user approval for the collection and use of personal information. Additionally, data anonymization must be a standard practice to prevent sensitive information from being traced back to individuals. Furthermore, companies should limit the amount of data they collect and retain only what is necessary for the platform’s operation. Regular audits and compliance with privacy regulations like GDPR or HIPAA are also crucial to ensure that data handling practices align with legal standards (Torm, 2023). Lastly, users should be educated on best practices for protecting their data when using GenAI, starting with being cautious about what they share on AI platforms.
In conclusion, while generative AI offers transformative potential, it also presents significant risks to data privacy. By implementing transparent consent practices, anonymizing sensitive data, and adhering to strict privacy regulations, we can minimize these dangers and ensure a safer, more responsible use of AI technologies. Both organizations and users must work together to strike a balance between innovation and security, creating a future where the benefits of GenAI are harnessed without compromising personal or organizational privacy.
References:
CEDPO AI Working Group. (2023). Generative AI: the Data protection Implications. https://cedpo.eu/wp-content/uploads/generative-ai-the-data-protection-implications-16-10-2023.pdf
Torm, N. (2023, December 11). Steps to safeguarding privacy in the Gen AI era. www.cognizant.com. https://www.cognizant.com/se/en/insights/blog/articles/steps-to-safeguarding-privacy-in-the-gen-ai-era
Thirsty AI
16 September 2024
5/5 (2)
Artificial intelligence (AI) is revolutionizing our world, from helping us choose what to cook for dinner, to enabling advanced data analysis. For us, students, AI has become part of the academic toolkit, whether it’s for writing assistance, article and lecture summaries, or accessing more personalized learning resources. However, what many don’t realize is that our growing reliance on AI comes at a hidden cost – one that is largely invisible yet increasingly significant: water consumption. AI’s environmental impact is often discussed along the topics of energy usage and carbon emissions, but not many of us realize that water plays a major role in keeping AI running.
Where does the water go?
When thinking of AI’s environmental cost, water might not be the first thing that comes to mind. However, it plays a critical role in both the direct and indirect operations of AI systems, primarily through data centers, as well as various processes throughout the supply chain such as the production of semiconductors and microchips used in AI models. Popular large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard are energy-intensive, requiring massive server farms to provide enough data to train the powerful programs (DeGeurin et al., 2023).
1. Direct Water Usage:
Data centers – the backbone of AI – require immense cooling systems to prevent overheating. These centers house thousands of servers that generate tremendous amounts of heat while running (Clancy, 2022). Water is commonly used in cooling systems to regulate the temperature of these servers, as the optimal temperature to prevent the equipment from malfunctioning is typically between 10 and 25 degrees Celsius (DeGeurin et al., 2023). Cooling mechanisms vary, but one of the most popular methods is evaporative cooling, which directly consumes significant quantities of water (Digital Realty, 2023). Researchers estimate that around a gallon of water is consumed for every kilowatt-hour expended in an average data center (Farfan & Lohrmann, 2023). Not just any type of water can be used, either. Data centers pull from clean, freshwater sources in order to avoid the corrosion or bacteria growth that can come with seawater (DeGeurin et al., 2023).
(Li et al., 2023)
2. Indirect Water Usage:
The electricity that powers AI also has a water footprint, especially when it comes from thermoelectric power plants, which rely on water for steam generation and cooling (Petrakopoulou, 2021; Torcellini et al., 2023). Even when data centers run on renewable energy, the construction and operation of the renewable infrastructure can still have a water impact. And that is on top of other, often omitted factors, such as the water embodied in supply chains (e.g., water used for chip manufacturing) (Li et al., 2023). To illustrate: an average chip manufacturing facility today can use up to 10 million gallons of ultrapure water per day, as much water as is used by 33,000 US households every day (James, 2024). Need more examples? Globally, semiconductor factories are already consuming as much water as Hong Kong, a city of 7.5 million (Robinson, 2024).
(James, 2024)
How thirsty is the AI?
Just how much water does AI consume? The numbers are staggering: in 2021, Google’s US data centers alone consumed 16.3 billion liters of water, including 12.7 billion liters of freshwater (Clancy, 2022; Li et al., 2023). That is as much as the annual consumption of a mid-sized city. According to data published in 2023, a single conversation with ChatGPT (spanning 20 to 50 interactions) consumes the equivalent of a 500ml bottle of water (DeGeurin et al., 2023). While this may not seem significant on an individual scale, ChatGPT currently has over 200 million active users, engaging in multiple conversations daily (Singh, 2024). GPT-3, an AI model developed by OpenAI, reportedly consumed approximately 700,000 liters of water during its training phase alone (Li et al., 2023). Scaled up to all functioning and developing AI models along with their data centers, this amounts to billions of liters of water consumed for cooling alone. However, not all AI models are equal in their water demands. Smaller models require less computational power, and thus less water for cooling, while larger, more advanced models like GPT-4 demand significantly more resources. And of course, as AI models become more sophisticated and popular, they also become more resource-intensive, in terms of both energy and water.
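A quick back-of-the-envelope calculation shows how fast the per-conversation figure compounds. The 500ml-per-conversation and 200-million-user numbers come from the sources cited above; the conversations-per-user-per-day rate is an assumption made up purely for illustration:

```python
# Order-of-magnitude estimate of ChatGPT's cooling-water use, built from the
# figures cited in this post. One conversation per user per day is an
# illustrative assumption, not a reported statistic.
LITERS_PER_CONVERSATION = 0.5        # ~500ml per 20-50 interactions (DeGeurin et al., 2023)
ACTIVE_USERS = 200_000_000           # active users (Singh, 2024)
CONVERSATIONS_PER_USER_PER_DAY = 1   # assumed for illustration

daily_liters = LITERS_PER_CONVERSATION * ACTIVE_USERS * CONVERSATIONS_PER_USER_PER_DAY
print(f"~{daily_liters / 1e6:.0f} million liters of cooling water per day")
print(f"~{daily_liters * 365 / 1e9:.1f} billion liters per year")
```

Even at one conversation per user per day, the total lands at roughly 100 million liters daily, which is why the per-bottle framing understates the aggregate impact.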
(Cruchet & MacDiarmid, 2023)
AI’s Water Crisis: Implications
The high water consumption of AI systems and data centers has significant environmental and societal consequences, particularly in water-scarce regions and less developed countries.
Escalating Water Scarcity: In regions where water is already scarce, data centers add to the problem. A clear example is Google’s data center in South Carolina, which raised alarms over its massive water withdrawals in an area often hit by droughts (Moss, 2017). As AI’s growth drives up demand for these centers, we’re likely to see more conflicts between tech giants and local communities fighting for the same limited resources.
Strain on Ecosystems: Data centers don’t just impact human communities; they affect nature too. When large amounts of water are diverted for industrial use, natural ecosystems suffer. Less water means habitat loss for animals and severe disruptions to the local environment, throwing entire ecosystems out of balance (Balova & Kolbas, 2023).
Widening the Digital Divide: The high water and energy demands of AI data centers often mean they are built in regions with abundant resources, leaving less developed areas at a disadvantage. These centers are often built in resource-rich regions, close to users, to reduce latency and cut down on data transmission costs. It makes sense from a business perspective—faster data, lower costs. But what happens to the areas that lack water, energy, and infrastructure? They get left behind, further widening the existing digital divide.
Drying Out AI: Smart Solutions for Water Use
While the current water consumption rates may seem unsustainable, there are solutions – though their plausibility and long-term impact vary.
1. Water-Efficient Cooling Technologies: One promising solution is the adoption of more water-efficient cooling technologies. Some companies are experimenting with air cooling or liquid cooling systems that don’t rely on water. For example, Google’s data center in Finland introduced the first ever system using cold seawater for cooling, drastically reducing freshwater consumption (Miller, 2011). However, not all data centers can be located near natural water sources that can be sustainably tapped.
2. Renewable Energy Transitions: While much of AI’s water footprint comes from electricity generation, transitioning data centers to renewable energy sources like wind and solar could reduce the indirect water use associated with thermoelectric plants (Arts, 2024).
(Lenovo StoryHub, 2024)
3. Transparency and Accountability: One of the most plausible and immediately impactful steps is for tech companies to be more transparent about their water usage. Publicly reporting on their water consumption and environmental impact could put pressure on companies to adopt more sustainable practices. Microsoft and Google have already pledged to become “water positive” by 2030, meaning they aim to replenish more water than they consume (Clancy, 2021). While this goal is ambitious, its success will depend on innovations in both technology and infrastructure.
Other specialists have proposed relocating data centers to Nordic countries like Iceland or Sweden, in a bid to utilize ambient, cool air to minimize carbon footprint, a technique called “free cooling” (Monserrate, 2022). However, network signal latency issues make this dream of a haven for green data centers largely untenable to meet the computing and data storage demands of the wider world.
Will AI ever be sustainable?
AI’s water footprint is a pressing environmental issue that must be addressed alongside energy and carbon concerns. Though constant advancements are being made, there is still much to explore regarding AI’s water consumption. Further research is needed in areas such as:
investigation of the environmental trade-offs of AI usage;
exploration of alternative cooling methods for data centers;
assessment of the feasibility of building AI systems that are less resource-intensive;
analysis of the scalability of current solutions like seawater cooling or closed-loop cooling systems,
to ensure the long-term sustainability of AI technologies.
As students and future innovators, understanding these invisible costs is the first step toward making informed and conscious choices. Whether by adjusting our daily digital habits, supporting companies with sustainable practices, or advocating for responsible AI development, we all have a role to play in ensuring that AI can thrive without draining the planet’s resources. By demanding more transparency from the tech industry and pushing for the adoption of more water-efficient technologies, we can help to navigate the future of AI toward a more sustainable and unbiased path.
References
Arts, M. (2024). Designing green energy data centres. Royal HaskoningDHV. https://www.royalhaskoningdhv.com/en/newsroom/blogs/2023/designing-green-energy-data-centres
Balova, A., & Kolbas, N. (2023, August 20). Biodiversity and Data Centers: What’s the connection? Ramboll. https://www.ramboll.com/galago/biodiversity-and-data-centers-what-s-the-connection
Clancy, H. (2021). Diving into ‘water positive’ pledges by Facebook, Google. Trellis. https://trellis.net/article/diving-water-positive-pledges-facebook-google/
Clancy, H. (2022, November 22). Sip or guzzle? Here’s how Google’s data centers use water. GreenBiz. Retrieved September 15, 2024, from https://trellis.net/article/sip-or-guzzle-heres-how-googles-data-centers-use-water/
Cruchet, N., & MacDiarmid, A. (2023, November 21). Datacenter Water Usage: Where Does It All Go? Submer. Retrieved September 16, 2024, from https://submer.com/blog/datacenter-water-usage/
DeGeurin, M., Ropek, L., Gault, M., Feathers, T., & Barr, K. (2023). ‘Thirsty’ AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor’s Cooling Tower, Study Finds. Gizmodo. https://gizmodo.com/chatgpt-ai-water-185000-gallons-training-nuclear-1850324249
Digital Realty. (2023). The Future of Data Center Cooling: Innovations for Sustainability. Digital Realty. https://www.digitalrealty.com/resources/articles/future-of-data-center-cooling
Farfan, J., & Lohrmann, A. (2023). Gone with the clouds: Estimating the electricity and water footprint of digital data services in Europe. Energy Conversion and Management. https://www.sciencedirect.com/science/article/pii/S019689042300571X
James, K. (2024, July 19). Semiconductor manufacturing and big tech’s water challenge | World Economic Forum. The World Economic Forum. Retrieved September 16, 2024, from https://www.weforum.org/agenda/2024/07/the-water-challenge-for-semiconductor-manufacturing-and-big-tech-what-needs-to-be-done/
Li, P., Ren, S., Yang, J., & Islam, M. (2023, October 29). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv. http://arxiv.org/pdf/2304.03271
Miller, R. (2011). Google Using Sea Water to Cool Finland Project. Data Center Knowledge. https://www.datacenterknowledge.com/hyperscalers/google-using-sea-water-to-cool-finland-project
Monserrate, S. G. (2022, February 14). The staggering ecological impacts of computation and the cloud. MIT Schwarzman College of Computing. Retrieved September 16, 2024, from https://computing.mit.edu/news/the-staggering-ecological-impacts-of-computation-and-the-cloud/
Moss, S. (2017). Google’s plan to use aquifer for cooling in South Carolina raises concerns. Data Center Dynamics. https://www.datacenterdynamics.com/en/news/googles-plan-to-use-aquifer-for-cooling-in-south-carolina-raises-concerns/
Petrakopoulou, F. (2021). Defining the cost of water impact for thermoelectric power generation. Energy Reports. https://www.sciencedirect.com/science/article/pii/S2352484721002158
Robinson, D. (2024, February 29). Growing water use a concern for chip industry and AI models. The Register. Retrieved September 16, 2024, from https://www.theregister.com/2024/02/29/growing_water_use_ai_semis_concern/
Singh, S. (2024). ChatGPT Statistics (SEP. 2024) – 200 Million Active Users. DemandSage. Retrieved September 15, 2024, from https://www.demandsage.com/chatgpt-statistics/
Torcellini, P., Long, N., & Judkoff, R. (2003). Consumptive Water Use for U.S. Power Production. NREL. https://www.nrel.gov/docs/fy04osti/33905.pdf
How Will Generative AI Be Used in the Future? Answer: AutoGen
21
October
2023
The generative AI tools we know today include ChatGPT, Midjourney, DALL·E 3, and many more. These tools are impressive and advanced, but they have some flaws, such as being unable to perform long iterations. Now there is something new called AutoGen. AutoGen is an open-source project from Microsoft that was released on September 19, 2023. At its core, AutoGen is a generative AI framework that works with agents; those agents work together in loops. Agents are, in essence, pre-specified workers that can become anything, so there are agents that can code well and agents that can review the generated code and give feedback. Agents can be made to do anything and become experts in any field, from marketing to healthcare.
An example of what AutoGen can do is the following: if I want to write some code to get the stock price of Tesla, I could use ChatGPT, and it will output some code. Most of the time, the code written by ChatGPT via the OpenAI website will have some errors. With AutoGen, there are two or more agents at work: one that outputs code and a second one that runs the code and tells the first agent if something is wrong. This process of generating and running the code goes on until the code works and produces the correct output. This way, the user does not have to manually run the code and ask for fixes; with AutoGen, it is done automatically.
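To make the idea concrete, here is a minimal Python sketch of that generate-run-fix loop. This is not the real AutoGen API: the “coder” returns canned drafts standing in for what an LLM would produce (the buggy and corrected drafts are my own invented examples), while the “executor” genuinely runs each draft and reports errors back, just like the reviewing agent described above.

```python
# Toy sketch of AutoGen's generate-execute-feedback loop (NOT the real
# AutoGen API). A "coder" agent proposes code, an "executor" agent runs
# it and reports any error, and the loop repeats until the code works.

def coder(feedback):
    """Propose a code draft; refine it when the executor reports an error."""
    if feedback is None:
        # First draft contains a deliberate type bug (string price).
        return "price = '250'\nresult = price * 1.05"
    # A real LLM would use the error message to fix the draft; here the
    # corrected version is hard-coded for illustration.
    return "price = 250.0\nresult = price * 1.05"

def executor(code):
    """Run the draft and return (ok, feedback), like a reviewing agent."""
    env = {}
    try:
        exec(code, env)
        return True, env["result"]
    except Exception as err:
        return False, f"{type(err).__name__}: {err}"

def agent_loop(max_rounds=5):
    """Alternate coder and executor until the code runs successfully."""
    feedback = None
    for _ in range(max_rounds):
        draft = coder(feedback)
        ok, feedback = executor(draft)
        if ok:
            return feedback  # output of the working code
    raise RuntimeError("agents failed to converge")

print(agent_loop())
```

The first draft fails with a TypeError, the executor feeds that error back, and the second draft succeeds; with AutoGen, both roles are played by LLM-backed agents rather than hard-coded functions.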
I also tried to create some code with AutoGen. I first installed all the necessary packages and got myself an API key for OpenAI’s GPT-4. Then I started working on the code and decided to create the game “Snake”. Snake is an old and easy game to create, but it might be a challenge for AutoGen. I started the process of creating the snake game, and it had a good first run: I was able to create a first, simple version of the game. I then came up with some iterations to improve it. The game now also has obstacles that end the game if the snake bumps into one. This, too, AutoGen handled without any problems. After playing around, I was really amazed at how powerful AutoGen is, and I can only imagine what else can be created with it.
AutoGen is a very promising development and could well be the future of professional code development and automation tasks. If large language models (LLMs) get more powerful, AutoGen will get more powerful too, because all the individual agents will improve with them. It will be interesting to follow this development and see whether AutoGen could one day create games that do not yet exist.