Innovating Learning with Canv-AI: A GenAI Solution for Canvas LMS
17 October 2024
In today’s educational landscape, generative AI (GenAI) is reshaping how students and instructors interact with learning platforms. A promising example is Canv-AI, an AI-powered tool designed to integrate into the widely used Canvas Learning Management System (LMS). This tool aims to transform both student learning and faculty workload by leveraging advanced AI features to provide personalized, real-time support.
The integration of Canv-AI focuses on two primary groups: students and professors. For students, the key feature is a chatbot that can answer course-specific questions, provide personalized feedback, and generate practice quizzes or mock exams. These features are designed to promote active learning, in which students engage directly with course material, improving their understanding and retention. Instead of navigating dense course content alone, students get instant, interactive support tailored to their learning needs.
Professors benefit from Canv-AI through a dashboard that tracks student performance and identifies the areas where students struggle most. This insight allows instructors to adjust their teaching strategies in real time, offering targeted support without waiting for students to seek help. Additionally, the chatbot can help reduce the faculty workload by answering common questions about lecture notes or deadlines, allowing professors to focus more on core teaching tasks.
From a business perspective, Canv-AI aligns with Canvas’s existing subscription-based revenue model. It is offered as an add-on package, giving universities access to AI-driven tools for improving educational outcomes. The pricing strategy is competitive, with a projected $2,000 annual fee for universities already using Canvas. The integration also brings the potential for a significant return on investment, with an estimated 29.7% ROI after the first year. By attracting 15% of Canvas’s current university customers, Canv-AI is expected to generate over $700,000 in profit during its first year.
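For the curious, here is a quick back-of-envelope check of how those figures hang together. The cost base and subscriber count below are not stated anywhere in the proposal; they are simply implied by the quoted fee, profit, and ROI, so treat this as an illustration rather than the actual business plan:

```python
# Illustrative sanity check: the cost base and subscriber count are DERIVED
# from the quoted profit and ROI, not taken from the Canv-AI proposal.
annual_fee = 2_000        # projected fee per university (USD)
target_profit = 700_000   # projected first-year profit (USD)
target_roi = 0.297        # projected first-year ROI

implied_cost = target_profit / target_roi        # ROI = profit / cost  -> ~$2.36M
implied_revenue = implied_cost + target_profit   # revenue = cost + profit -> ~$3.06M
implied_universities = implied_revenue / annual_fee  # ~1,530 subscribing universities

print(f"Implied cost base:   ${implied_cost:,.0f}")
print(f"Implied revenue:     ${implied_revenue:,.0f}")
print(f"Implied subscribers: {implied_universities:,.0f}")
```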
The technological backbone of Canv-AI relies on large language models (LLMs) and retrieval-augmented generation (RAG). These technologies allow the system to understand and respond to complex queries based on course materials, ensuring students receive relevant and accurate information. The system is designed to be scalable, using Amazon Web Services (AWS) to handle real-time AI interactions efficiently.
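To make the RAG idea concrete, below is a minimal sketch of the pattern, not Canv-AI's actual implementation. A real system would use vector embeddings and an LLM API; here, simple word-overlap scoring and a placeholder prompt stand in for both, and the course snippets are invented:

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant course chunk, then augment the LLM prompt with it.
from collections import Counter
import math

course_chunks = [
    "Week 3 covers normalization: 1NF, 2NF and 3NF with worked examples.",
    "The final exam is closed-book and covers weeks 1 through 10.",
    "Assignment 2 is due on Friday at 17:00 via the Canvas submission page.",
]

def score(query: str, chunk: str) -> float:
    """Cosine similarity over bag-of-words counts (a stand-in for embeddings)."""
    q, c = Counter(query.lower().split()), Counter(chunk.lower().split())
    overlap = sum(q[w] * c[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in c.values()))
    return overlap / norm if norm else 0.0

def answer(question: str) -> str:
    # 1. Retrieve the most relevant course chunk.
    best = max(course_chunks, key=lambda ch: score(question, ch))
    # 2. Augment the prompt with the retrieved context.
    prompt = f"Answer using only this course material:\n{best}\n\nQuestion: {question}"
    # 3. Generate: in a real system, this prompt would be sent to an LLM.
    return prompt

print(answer("When is assignment 2 due?"))
```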
However, the integration of GenAI into educational systems does come with challenges. One concern is data security, especially the protection of student information. To address this, Canv-AI proposes the use of Role-Based Access Control (RBAC), ensuring that sensitive data is only accessible to authorized users. Another challenge is AI accuracy. To avoid misinformation, Canv-AI offers options for professors to review and customize the chatbot’s responses, ensuring alignment with course content.
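As a sketch of what RBAC means in practice: permissions attach to roles, and every access check goes through the role, never the individual user. The role and permission names below are made up for illustration and are not taken from Canv-AI's design:

```python
# Minimal Role-Based Access Control (RBAC) sketch with hypothetical roles.
ROLE_PERMISSIONS = {
    "student":   {"view_own_grades", "chat_with_bot"},
    "professor": {"view_own_grades", "chat_with_bot",
                  "view_class_analytics", "edit_bot_responses"},
    "admin":     {"manage_roles"},
}

def can(role: str, permission: str) -> bool:
    """Return True if the given role grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("professor", "view_class_analytics")
assert not can("student", "view_class_analytics")  # students never see class-wide data
```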
In conclusion, Canv-AI offers a transformative solution for Canvas LMS by enhancing the learning experience for students and reducing the workload for professors. By integrating GenAI, Canvas can stay competitive in the educational technology market, delivering personalized, data-driven learning solutions. With the right safeguards in place, Canv-AI represents a promising step forward for digital education.
Authors: Team 50
John Albin Bergström (563470jb)
Oryna Malchenko (592143om)
Yasin Elkattan (593972yk)
Daniel Fejes (605931fd)
My Experience with GenAI: Improving Efficiency or Becoming Stupid?
9 October 2024
I work as a part-time data analyst at a software company, where I analyze sales data. My 9-to-5 mainly consists of writing code, specifically SQL in Google BigQuery, and creating dashboards in Power BI. I love using GenAI to help me write queries that would otherwise have taken me a long time to compose by myself. Additionally, I am a student and use GenAI to better understand course content or to get inspiration for what to write about in assignments. Generally, I would say that GenAI benefits my life, as I can get more done in less time. However, from time to time I start to question whether I am not just becoming lazy.
I use GenAI on a daily (almost hourly) basis and rely on it in many ways. I mainly use ChatGPT with GPT-3.5 once GPT-4o's free limit has been reached, and Gemini when ChatGPT is down. Based on my own experience, I can say that being good at 'AI prompting' is a real skill in the field of data analytics, as it can drastically improve the efficiency with which you write queries and, therefore, the speed with which you finish tasks. My manager recently even held a knowledge-sharing meeting in which he discussed best practices for data analysts interacting with ChatGPT. Using GenAI has become a real thing in the field of data analytics, and it is not something to be ashamed of.
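To make "AI prompting" a bit more concrete, this is the kind of structured prompt those best practices point toward: state the SQL dialect, the schema, and the expected output explicitly. The dataset, table, and column names here are invented purely for illustration:

```python
# A hypothetical prompt for generating a BigQuery query; `shop.sales`
# and its columns are made up for this example.
prompt = """You are a data analyst writing BigQuery Standard SQL.

Table `shop.sales`: order_id STRING, order_date DATE, region STRING, amount NUMERIC.

Task: monthly revenue per region for 2024, one row per (month, region),
sorted by month, then revenue descending. Return only the SQL, no prose."""
print(prompt)
```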
However, I cannot help but sometimes be slightly embarrassed when I read back the questions I've asked ChatGPT. It seems that with any task that requires a little effort or critical thinking, I automatically open the ChatGPT tab in my browser to help me come up with the right approach to the task at hand. I don't even try to solve things by myself anymore, which makes me question: is this desirable?
As explained by ChatGPT in the image, using GenAI indeed frees up more brain space for things that are important. If I can use less time to get more work done, this improves my work efficiency and also gives me more time for things that I find more valuable, such as spending time with family or friends. Right now, it is still too soon to determine the impact that using GenAI will have on our own (human) intelligence. In the meantime, we should just continue using it for repetitive tasks that would normally take up much of our valuable time and hope that it is not ChatGPT's plan to stupidify humanity before it can take over the world.
My love-hate relationship with ChatGPT: Trust issues exposed
8 October 2024
In a world where life without technology is unimaginable, artificial intelligence like ChatGPT has become a big part of our everyday lives. My experience with this AI has turned into a complicated love-hate relationship filled with enthusiasm, confusion and frustration.
Building trust
When I first started using ChatGPT, I was excited. It felt like having an assistant always near me, ready to help with my questions, schoolwork, recipes and even emails. It was even better than Google at times. I could ask questions and get clear answers almost immediately. At first I thought it was fantastic and that I could rely on it for anything. The AI provided explanations, helped me brainstorm ideas and suggested solutions to problems I was struggling with. In those early days it felt like I was forming a solid partnership.
Doubts start to appear
However, the excitement did not last long. When I started asking more straightforward, school-related questions, like "Is this right?", to check whether I was on the right track with my homework, I found myself getting different responses each time. I expected a confirmation, but instead I received answers that did not match what I was looking for.
I intentionally gave a wrong answer to a question and asked if it was right, just to see how ChatGPT would react. When it told me my answer was right, I asked, "Are you sure?" It replied, "I apologize for the mistake. Let me provide the correct information." That left me more confused than ever. How could it change its answer so quickly? It was hard to trust it when it seemed so inconsistent.
Growing trust issues
As I used it more often, my trust issues increased. I found myself repeating questions, hoping for a consistent answer. There were moments when I spent more time discussing things with ChatGPT than it would have taken to just do the task myself. I would get frustrated and start typing in all caps. I felt like I was talking to someone who did not even want to understand me. Instead of feeling helped, I felt like I was only arguing back and forth, and it was exhausting.
Realising that my frustration was only increasing, I knew I had to change the way I asked my questions. I started double-checking answers and used other sources to confirm information. I realised that while ChatGPT could be a helpful tool, it was important to verify the information I got. I learned to ask more specific questions and provide additional context, which led to better results.
Lessons learned
I learned an important lesson about trust, not just with AI but in all areas of life. Trust takes time and clear communication. It is important to realise that even advanced technology can make mistakes. My relationship with ChatGPT changed from blind trust to a more cautious partnership. I learned to appreciate its strengths while acknowledging its limitations.
Looking back on my experience with ChatGPT, I realised how inconsistent technology can be. While my experience has had its conflicts, I still appreciate the value it brings to my learning process. Have you ever felt frustrated using AI? You are not alone; let's share our struggles and find ways to make it work better for us!
Using GenAI as a learning tool: A personal reflection
1 October 2024
Generative AI, or more specifically ChatGPT, has become a key tool in my learning journey. I do not like to use ChatGPT to do the work for me, but rather as an instrument to make learning more fun and productive.
One of the advantages I have found is the tool's ability to turn complex concepts into digestible, easy-to-understand explanations. Recently, when learning about UML diagrams, I was a bit confused about what exactly an object or a class was and how they differed. Instead of googling and spending time finding a trustworthy source, I could easily find the answer through ChatGPT. Of course, there are pitfalls to this: if you ask nuanced or vague questions, the tool can give you a different answer than the one you actually need, without you realising it. So it is important to make sure the questions you ask are clear and reasonable to put to such a tool. As generative AI evolves further, a time will come when it can smoothly ask questions back and make sure it understands the question completely, reducing the risk of misinformation even more.
Furthermore, when I am learning a new piece of software such as Notion or R, ChatGPT is the first platform I go to for simple functional questions such as "How do I create a progress bar?" or "How do you insert widgets?". The answers have always turned out to be correct, making this an easy way to find a solution.
Even existing platforms such as the famous Duolingo could gain a lot of value and productivity by using generative AI to improve the language-learning experience for their users. Think of basic practice conversations that can continue from whatever partial sentences a learner is able to provide. And this is just one example: generative AI is not limited to text. With generated pictures and videos on the rise, learning can be improved even further.
Data Privacy and GenAI
16 September 2024
When ChatGPT launched at the end of 2022, most data protection professionals had never heard of generative AI and were certainly not aware of the potential dangers it could bring to data privacy (CEDPO AI Working Group, 2023). As AI platforms grow more sophisticated, so do the risks to our privacy, and it is therefore important to discuss these risks and how to disarm them as effectively as possible.
GenAI systems are built on vast datasets, often including sensitive personal and organizational data. When users interact with these platforms, they unknowingly share information that could be stored, analyzed, and even potentially exposed to malicious actors (Torm, 2023). The AI itself could potentially reveal confidential information learned from previous interactions, leading to privacy breaches. This could have some major implications for the affected individuals or organizations if sensitive information is being shared without proper anonymization or consent.
Continuing on the topic of consent: Giving consent for generative AI platforms to use your data can be tricky, as most platforms provide vague and complex terms and conditions that are difficult for most users to fully understand. These agreements often include legal jargon and technological terminology, making it hard to know exactly what data is being collected, how it’s being used, or who it’s being shared with. This lack of transparency puts users at a disadvantage, as they may unknowingly grant permission for their personal information to be stored, analyzed, or even shared without fully understanding the risks involved.
To reduce the potential dangers of GenAI platforms, several key measures must be implemented. First, transparency should be prioritized by simplifying terms and conditions, making it easier for users to understand what data is being collected and how it is being used. Clear consent mechanisms should be enforced, requiring explicit user approval for the collection and use of personal information. Additionally, data anonymization must be a standard practice to prevent sensitive information from being traced back to individuals. Furthermore, companies should limit the amount of data they collect and retain only what is necessary for the platform's operation. Regular audits and compliance with privacy regulations like the GDPR or HIPAA are also crucial to ensure that data handling practices align with legal standards (Torm, 2023). Lastly, users should be educated on best practices for protecting their data when using GenAI, starting with being cautious about what they share on AI platforms.
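As a small illustration of the anonymization point, the sketch below pseudonymizes a record before it is sent to a GenAI platform. The field names are hypothetical, and real anonymization requires far more than hashing (salting, removal of quasi-identifiers, and so on):

```python
# Minimal pseudonymization sketch: replace sensitive values with a
# one-way hash before the record leaves your systems. Illustrative only.
import hashlib

def pseudonymize(record: dict, sensitive_fields: set) -> dict:
    """Replace sensitive values with a stable, non-reversible token."""
    out = {}
    for key, value in record.items():
        if key in sensitive_fields:
            out[key] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            out[key] = value
    return out

user = {"name": "Jane Doe", "email": "jane@example.com", "question": "Explain GDPR"}
print(pseudonymize(user, {"name", "email"}))
```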
In conclusion, while generative AI offers transformative potential, it also presents significant risks to data privacy. By implementing transparent consent practices, anonymizing sensitive data, and adhering to strict privacy regulations, we can minimize these dangers and ensure a safer, more responsible use of AI technologies. Both organizations and users must work together to strike a balance between innovation and security, creating a future where the benefits of GenAI are harnessed without compromising personal or organizational privacy.
References:
CEDPO AI Working Group. (2023). Generative AI: the Data protection Implications. https://cedpo.eu/wp-content/uploads/generative-ai-the-data-protection-implications-16-10-2023.pdf
Torm, N. (2023, December 11). Steps to safeguarding privacy in the Gen AI era. Cognizant. https://www.cognizant.com/se/en/insights/blog/articles/steps-to-safeguarding-privacy-in-the-gen-ai-era
Thirsty AI
16 September 2024
Artificial intelligence (AI) is revolutionizing our world, from helping us choose what to cook for dinner to enabling advanced data analysis. For us students, AI has become part of the academic toolkit, whether for writing assistance, article and lecture summaries, or more personalized learning resources. However, what many don't realize is that our growing reliance on AI comes at a hidden cost – one that is largely invisible yet increasingly significant: water consumption. AI's environmental impact is usually discussed in terms of energy usage and carbon emissions, but few of us realize that water plays a major role in keeping AI running.
Where does the water go?
When thinking of AI's environmental cost, water might not be the first thing that comes to mind. However, it plays a critical role in both the direct and indirect operations of AI systems, primarily through data centers, as well as various processes throughout the supply chain, such as the production of the semiconductors and microchips used in AI models. Popular large language models (LLMs) like OpenAI's ChatGPT and Google's Bard are energy-intensive, requiring massive server farms to train and run these powerful programs (DeGeurin et al., 2023).
1. Direct Water Usage:
Data centers – the backbone of AI – require immense cooling systems to prevent overheating. These centers house thousands of servers that generate tremendous amounts of heat while running (Clancy, 2022). Water is commonly used in cooling systems to regulate the temperature of these servers, as the optimal range to prevent the equipment from malfunctioning is typically between 10 and 25 degrees Celsius (DeGeurin et al., 2023). Cooling mechanisms vary, but one of the most popular methods is evaporative cooling, which directly consumes significant quantities of water (Digital Realty, 2023). Researchers estimate that around a gallon of water is consumed for every kilowatt-hour expended in an average data center (Farfan & Lohrmann, 2023). Not just any type of water can be used, either: data centers draw on clean, freshwater sources to avoid the corrosion and bacterial growth that come with seawater (DeGeurin et al., 2023).
(Figure source: Li et al., 2023)
2. Indirect Water Usage:
The electricity that powers AI also has a water footprint, especially when it comes from thermoelectric power plants, which rely on water for steam generation and cooling (Petrakopoulou, 2021; Torcellini et al., 2003). Even when data centers run on renewable energy, the construction and operation of the renewable infrastructure can still have a water impact. And all of that comes on top of other, often omitted factors, such as the water embodied in supply chains (e.g., water used for chip manufacturing) (Li et al., 2023). To illustrate: an average chip manufacturing facility today can use up to 10 million gallons of ultrapure water per day – as much water as 33,000 US households use every day (James, 2024). Need more examples? Globally, semiconductor factories already consume as much water as Hong Kong, a city of 7.5 million (Robinson, 2024).
(Figure source: James, 2024)
How thirsty is AI?
Just how much water does AI consume? The numbers are staggering: in 2021, Google's US data centers alone consumed 16.3 billion liters of water, including 12.7 billion liters of freshwater (Clancy, 2022; Li et al., 2023) – as much as the annual consumption of a mid-sized city. According to data published in 2023, a single conversation with ChatGPT (spanning 20 to 50 interactions) consumes the equivalent of a 500 ml bottle of water (DeGeurin et al., 2023). While this may not seem significant on an individual scale, ChatGPT currently has over 200 million active users engaging in multiple conversations daily (Singh, 2024). GPT-3, an AI model developed by OpenAI, reportedly consumed approximately 700,000 liters of water during its training phase alone (Li et al., 2023). Scaled up across all functioning and developing AI models and their data centers, this means billions of liters of water consumed for cooling alone. However, not all AI models are equal in their water demands. Smaller models require less computational power, and thus less water for cooling, while larger, more advanced models like GPT-4 demand significantly more resources. And of course, as AI models become more sophisticated and popular, they also become more resource-intensive, in terms of both energy and water.
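A quick back-of-envelope calculation shows how fast the per-conversation figure scales. The water and user numbers come from the sources cited above; the assumption that each user has one such conversation per day is mine, purely for illustration:

```python
# Rough scale-up of the quoted figures; conversations per user per day
# is an ASSUMPTION for illustration, not a measured statistic.
LITERS_PER_CONVERSATION = 0.5      # ~500 ml per 20-50 interactions (DeGeurin et al., 2023)
ACTIVE_USERS = 200_000_000         # ChatGPT active users (Singh, 2024)
CONVERSATIONS_PER_USER_DAY = 1     # assumed

daily_liters = LITERS_PER_CONVERSATION * ACTIVE_USERS * CONVERSATIONS_PER_USER_DAY
print(f"{daily_liters / 1e6:.0f} million liters per day")        # 100 million liters/day
print(f"{daily_liters * 365 / 1e9:.1f} billion liters per year")  # ~36.5 billion liters/year
```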
(Figure source: Cruchet & MacDiarmid, 2023)
AI’s Water Crisis: Implications
The high water consumption of AI systems and data centers has significant environmental and societal consequences, particularly in water-scarce regions and less developed countries.
Escalating Water Scarcity: In regions where water is already scarce, data centers add to the problem. A clear example is Google’s data center in South Carolina, which raised alarms over its massive water withdrawals in an area often hit by droughts (Moss, 2017). As AI’s growth drives up demand for these centers, we’re likely to see more conflicts between tech giants and local communities fighting for the same limited resources.
Strain on Ecosystems: Data centers don’t just impact human communities; they affect nature too. When large amounts of water are diverted for industrial use, natural ecosystems suffer. Less water means habitat loss for animals and severe disruptions to the local environment, throwing entire ecosystems out of balance (Balova & Kolbas, 2023).
Widening the Digital Divide: The high water and energy demands of AI data centers mean they are usually built in resource-rich regions, close to users, to reduce latency and cut down on data transmission costs. That makes sense from a business perspective: faster data, lower costs. But what happens to the areas that lack water, energy, and infrastructure? They get left behind, further widening the existing digital divide.
Drying Out AI: Smart Solutions for Water Use
While the current water consumption rates may seem unsustainable, there are solutions – though their plausibility and long-term impact vary.
1. Water-Efficient Cooling Technologies: One promising solution is the adoption of more water-efficient cooling technologies. Some companies are experimenting with air-cooling or liquid-cooling systems that don't rely on freshwater. For example, Google's data center in Finland introduced the first-ever system using cold seawater for cooling, drastically reducing freshwater consumption (Miller, 2011). However, not all data centers can be located near natural water sources that can be sustainably tapped.
2. Renewable Energy Transitions: While much of AI’s water footprint comes from electricity generation, transitioning data centers to renewable energy sources like wind and solar could reduce the indirect water use associated with thermoelectric plants (Arts, 2024).
(Figure source: Lenovo StoryHub, 2024)
3. Transparency and Accountability: One of the most plausible and immediately impactful steps is for tech companies to be more transparent about their water usage. Publicly reporting on their water consumption and environmental impact could put pressure on companies to adopt more sustainable practices. Microsoft and Google have already pledged to become “water positive” by 2030, meaning they aim to replenish more water than they consume (Clancy, 2021). While this goal is ambitious, its success will depend on innovations in both technology and infrastructure.
Other specialists have proposed relocating data centers to Nordic countries like Iceland or Sweden to use ambient cool air, a technique called "free cooling", in a bid to minimize the carbon footprint (Monserrate, 2022). However, network signal latency issues make this vision of a haven for green data centers largely untenable for meeting the computing and data storage demands of the wider world.
Will AI ever be sustainable?
AI’s water footprint is a pressing environmental issue that must be addressed alongside energy and carbon concerns. Though constant advancements are being made, there is still much to explore regarding AI’s water consumption. Further research is needed in areas such as:
investigation of the environmental trade-offs of AI usage;
exploration of alternative cooling methods for data centers;
assessment of the feasibility of building AI systems that are less resource-intensive;
analysis of the scalability of current solutions like seawater cooling or closed-loop cooling systems,
to ensure the long-term sustainability of AI technologies.
As students and future innovators, understanding these invisible costs is the first step toward making informed and conscious choices. Whether by adjusting our daily digital habits, supporting companies with sustainable practices, or advocating for responsible AI development, we all have a role to play in ensuring that AI can thrive without draining the planet's resources. By demanding more transparency from the tech industry and pushing for the adoption of more water-efficient technologies, we can help steer the future of AI toward a more sustainable and equitable path.
References
Arts, M. (2024). Designing green energy data centres. Royal HaskoningDHV. https://www.royalhaskoningdhv.com/en/newsroom/blogs/2023/designing-green-energy-data-centres
Balova, A., & Kolbas, N. (2023, August 20). Biodiversity and Data Centers: What’s the connection? Ramboll. https://www.ramboll.com/galago/biodiversity-and-data-centers-what-s-the-connection
Clancy, H. (2021). Diving into ‘water positive’ pledges by Facebook, Google. Trellis. https://trellis.net/article/diving-water-positive-pledges-facebook-google/
Clancy, H. (2022, November 22). Sip or guzzle? Here's how Google's data centers use water. Trellis. Retrieved September 15, 2024, from https://trellis.net/article/sip-or-guzzle-heres-how-googles-data-centers-use-water/
Cruchet, N., & MacDiarmid, A. (2023, November 21). Datacenter Water Usage: Where Does It All Go? Submer. Retrieved September 16, 2024, from https://submer.com/blog/datacenter-water-usage/
DeGeurin, M., Ropek, L., Gault, M., Feathers, T., & Barr, K. (2023). ‘Thirsty’ AI: Training ChatGPT Required Enough Water to Fill a Nuclear Reactor’s Cooling Tower, Study Finds. Gizmodo. https://gizmodo.com/chatgpt-ai-water-185000-gallons-training-nuclear-1850324249
Digital Realty. (2023). The Future of Data Center Cooling: Innovations for Sustainability. Digital Realty. https://www.digitalrealty.com/resources/articles/future-of-data-center-cooling
Farfan, J., & Lohrmann, A. (2023). Gone with the clouds: Estimating the electricity and water footprint of digital data services in Europe. Energy Conversion and Management. https://www.sciencedirect.com/science/article/pii/S019689042300571X
James, K. (2024, July 19). Semiconductor manufacturing and big tech's water challenge. The World Economic Forum. Retrieved September 16, 2024, from https://www.weforum.org/agenda/2024/07/the-water-challenge-for-semiconductor-manufacturing-and-big-tech-what-needs-to-be-done/
Li, P., Ren, S., Yang, J., & Islam, M. (2023, October 29). Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. arXiv. http://arxiv.org/pdf/2304.03271
Miller, R. (2011). Google Using Sea Water to Cool Finland Project. Data Center Knowledge. https://www.datacenterknowledge.com/hyperscalers/google-using-sea-water-to-cool-finland-project
Monserrate, S. G. (2022, February 14). The staggering ecological impacts of computation and the cloud. MIT Schwarzman College of Computing. Retrieved September 16, 2024, from https://computing.mit.edu/news/the-staggering-ecological-impacts-of-computation-and-the-cloud/
Moss, S. (2017). Google’s plan to use aquifer for cooling in South Carolina raises concerns. Data Center Dynamics. https://www.datacenterdynamics.com/en/news/googles-plan-to-use-aquifer-for-cooling-in-south-carolina-raises-concerns/
Petrakopoulou, F. (2021). Defining the cost of water impact for thermoelectric power generation. Energy Reports. https://www.sciencedirect.com/science/article/pii/S2352484721002158
Robinson, D. (2024, February 29). Growing water use a concern for chip industry and AI models. The Register. Retrieved September 16, 2024, from https://www.theregister.com/2024/02/29/growing_water_use_ai_semis_concern/
Singh, S. (2024). ChatGPT Statistics (SEP. 2024) – 200 Million Active Users. DemandSage. Retrieved September 15, 2024, from https://www.demandsage.com/chatgpt-statistics/
Torcellini, P., Long, N., & Judkoff, R. (2003). Consumptive Water Use for U.S. Power Production. NREL. https://www.nrel.gov/docs/fy04osti/33905.pdf
How Will Generative AI Be Used in the Future? Answer: AutoGen
21 October 2023
The generative AI we know today includes ChatGPT, Midjourney, DALL·E 3, and many more. These tools are very capable and advanced, but they have some flaws, like not being able to perform long iterations. Now there is something new called AutoGen. AutoGen is an open-source project from Microsoft that was released on September 19, 2023. At its core, AutoGen is a generative AI framework that works with agents, and those agents work together in loops. Agents are, in essence, pre-specified workers that can become anything: there are agents that can code well and agents that can review the generated code and give feedback. Agents can be made to do anything and become experts in any field, from marketing to healthcare.
An example of what AutoGen can do is the following: if I want to write some code to get the stock price of Tesla, I could use ChatGPT, and it will output some code. Most of the time, the code written by ChatGPT via the OpenAI website will have some errors. But with AutoGen, there are two or more agents at work: one that outputs code and a second one that is able to run the code and tell the first agent if something is wrong. This process of generating and running the code goes on until the code works and produces the correct output. This way, the user does not have to manually run the code and ask for fixes; with AutoGen, it is done automatically.
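For readers who want to try it, the snippet below follows the AutoGen quickstart as it looked in late 2023 (installed via `pip install pyautogen`). Parameter names may differ in newer versions, so treat it as a sketch rather than a guaranteed-stable API:

```python
# Two-agent loop: the assistant writes code, the user proxy executes it
# locally and feeds errors back until the script runs cleanly.
import autogen

config_list = [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}]

# Agent 1: writes the code.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# Agent 2: runs the generated code and reports failures back,
# so no manual copy-paste-and-retry is needed.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",            # fully automatic loop
    max_consecutive_auto_reply=10,
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The conversation iterates until the code produces a working result.
user_proxy.initiate_chat(
    assistant,
    message="Write Python code that prints Tesla's latest stock price.",
)
```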
I also tried to create some code with AutoGen. I first installed all the necessary packages and got myself an API key for OpenAI's GPT-4. Then I started working on the code and decided to create the game Snake. Snake is an old and simple game to build, but it might be a challenge for AutoGen. I started the process of creating the Snake game, and the first run went well: AutoGen produced a first, simple version of the game. I then came up with some iterations to improve it. The game now also has obstacles that end the game if the snake bumps into one. This, too, was made by AutoGen without any problems. After playing around, I was really amazed at how powerful AutoGen is, and I can only imagine what else can be created with it.
AutoGen is a very promising development and may well be the future of professional code development and automation tasks. As large language models (LLMs) get more powerful, AutoGen will become more powerful too, because all the individual agents will improve with them. It will be interesting to follow this development and see whether AutoGen could create games that do not yet exist.
The day ChatGPT outstripped its limitations for me
20 October 2023
We have all known ChatGPT since the technological frenzy of 2022. This computer program was developed by OpenAI using the GPT-3.5 (Generative Pre-trained Transformer) architecture. It was trained on a huge dataset and can create human-like text based on the prompts it receives (OpenAI, n.d.). Many have emphasized the power and disruptive potential of such an emerging technology, whether in supporting market research and insights or in legal document drafting and analysis, both of which increase human efficiency (OpenAI, n.d.).
However, despite its widespread adoption and the potential generative AI has, there are still many limits to it that prevent us from using it to its full potential. Examples are hallucinating facts or a high dependence on prompt quality (Alkaissi & McFarlane, 2023; Smulders, 2023). The latter issue links to the main topic of this blog post.
In the past, I have asked ChatGPT, "Can you create diagrams for me?", and this was ChatGPT's response:
I have been using ChatGPT for all sorts of problems since its widespread adoption in 2022 and have had many different chats, but I always tried to keep similar topics in the same chat, thinking, "Maybe it needs to remember; maybe it needs to understand the whole topic for my questions to get a proper answer." One day, I needed help with a project for work: I had to create a certain type of diagram and was really lost. ChatGPT helped me understand, but I still wanted concrete answers; I wanted to see the diagram with my own two eyes to make sure I knew what I needed to do. After many exchanges, I would try again and ask ChatGPT to show me, but nothing.
Then one day came the answer. I provided ChatGPT with all the information I had and asked again: "Can you create a diagram with this information?" That is when, to my surprise, ChatGPT started creating an SQL interface, representing each part of the diagram one by one, with the links between them, and ending with an explanation of what it did. A part of the diagram is shown below (for work confidentiality reasons, the diagram is anonymized).
It was a success for me: I had made ChatGPT do the impossible, something ChatGPT itself said it could not provide. That day, ChatGPT outstripped its limitations for me. This is how I realized the importance of prompt quality.
This blog post shows the importance of educating the broader public and managers about technological literacy in the age of Industry 4.0, and how, with the right knowledge and skills, generative AI can be used to its full potential to enhance human capabilities.
Have you ever managed to make ChatGPT do something it said it couldn’t with the right prompt? Comment down below.
References:
Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).
Smulders, S. (2023, March 29). 15 rules for crafting effective GPT Chat prompts. Expandi. https://expandi.io/blog/chat-gpt-rules/
My new buddy ChatGPT
20 October 2023
I recently became a ChatGPT Plus user and had the opportunity to explore the new features of ChatGPT-4. OpenAI just introduced new functionalities, including voice and image capabilities. The primary voice feature is the new voice chat function, with which users can engage with ChatGPT on their mobile devices (OpenAI, 2023). So, I went ahead and tested this feature!
At first, you are prompted to select your preferred voice out of five options. I tried all of them and was immediately surprised at how natural these voices sound, especially compared to familiar voice assistants like Siri or Alexa. I chose the voice "Ember" and proceeded. A new window opens, and once the connection is established, you are ready to talk.
I initiated the conversation by asking the AI how it was doing, but only got the response that it has no feelings because it is a computer program – so far so good, and not that surprising. Then I thought about how I could test its capabilities to behave like a "friend" and came up with some topics, even serious ones, that I would typically discuss with a real friend. Those topics included day-to-day conversations about university or work, sports, and travel plans, as well as more serious subjects such as relationship problems, the sickness of a family member, or mental health struggles. I think all of us have heard stories about harmful advice that AI tools have given their users, so I was excited to see its reactions to my subjects. I always started the conversation by saying: "Imagine you are my best friend. I'm going to tell you about a topic that I would usually discuss with a friend. React like a human would."
Honestly, the conversations were surprisingly good. The AI gave insightful comments, showed compassion, and offered interesting solutions and tips. It asked follow-up questions about the details of the issue or wanted to know how I felt about the tips it offered. Overall, the conversations were obviously not as engaging as with a human, especially because of the loading times between my verbal input and the response, but the quality of the voice and the insightful answers really surprised me.
I’m excited to see how this feature progresses. Let me know your thoughts in the comments!
Why does AI struggle to create images of human hands?
18 October 2023
I was really surprised to learn during one of our Information Strategy lectures that AI generally struggles with generating pictures of human hands. First, I wanted to try that out and second, I wanted to know why.
I used two AI tools to test this anomaly: Bing Image Creator and ChatGPT-4 with the DALL·E 3 beta. I used the following prompt: "Create a realistic-looking picture of a professor in a university lecture hall. The professor is standing in front of the class and is holding a presenter in one hand and a coffee cup in the other hand." At first sight, you don't really see the issue, but when you look more closely, you see how weird some hands look. In picture 1, the woman is missing a finger on her right hand, and her left hand looks especially unnatural. The professor's presenter in picture 2 is floating in the air above his hand, and the woman in picture 3 appears to have three hands, two of which are holding the coffee cup while another holds the presenter. Overall, all these generated images look very good and natural in my opinion, except for the hands.
But why is generating images of human hands such a problem for AI? Firstly, the models are 2D image creators that do not understand the three-dimensional nature of a human hand (Hughes, 2023). Secondly, their training data mostly focused on other parts of the human body, such as the face (Hughes, 2023). Therefore, AI tools have particularly big problems creating images of hands when you provide a context in which the hands must appear, such as holding specific objects, as in my case.
I’m curious to see how this topic evolves in the future and how long it will take AI tools to get better at generating human hands. I look forward to your comments!