How Generative AI Changes Art

15 September 2023


Since OpenAI’s LLM-based chatbot ChatGPT launched at the end of 2022, public interest in generative AI tools has grown exponentially. This hype has also extended to text-to-image generators, which allow users to turn any textual prompt into an image – without any limits to creativity. DALL-E 2, which was developed by OpenAI, the company behind ChatGPT, might be the most famous of these applications. To get a better understanding of the capabilities of such text-to-image generators, I’ve been experimenting with Craiyon, a free ad-supported tool that is often described as a scaled-down version of OpenAI’s DALL-E 2. As a user, you can choose from different image styles, ranging from art to drawings to photographs. From my personal experience, Craiyon is quite good at creating non-realistic images such as cartoon-style drawings or paintings, but has difficulties creating realistic images (e.g., portraits of humans). Here, Craiyon seems to lack capabilities compared to more advanced AI tools like DALL-E 2 or Midjourney. Despite that, it is amazing how easily Craiyon can create pictures based on what you describe in words.

The potential of these text-to-image generators is almost unlimited, as they could change the way visual content is created in the future. Tools like DALL-E 2 could create realistic, immersive visual effects for films, shows, games, or other forms of entertainment (Bassey, 2023). Further, related generative AI techniques could be used for voice dubbing or for altering the physical appearance of actors (e.g., aging or de-aging) (Bassey, 2023). However, as we explore the possibilities of AI for enhancing creativity, it is crucial to also consider the potential risks of such tools.

First, with the widespread accessibility of generative AI tools that can create realistic images or even audio files, it becomes easy for anyone to manipulate videos, audio, or images and use them to spread misinformation. Such “deepfakes” could, for example, be used to influence public opinion or elections by spreading false information about political figures or events (Bassey, 2023). Although there are ongoing efforts to establish content authentication standards, it will take time until these standards are widely adopted (Lomas, 2023).

Second, it is difficult to determine the ownership of AI-generated content, since the underlying models are trained on extensive datasets. This gives rise to several questions, such as: Who owns the generated content – the user who entered the prompt, the AI, or the developers who created the AI? Can AI-generated content be copyrighted? Should artists be able to opt out of their art being used as data points for AI models? Is AI-generated content really art? Until now, there is no definite answer to these questions. A good example of this dilemma is “Edmond de Belamy,” an AI-generated portrait that was sold at a Christie’s auction for $432,000 in 2018 (Jones, 2018). The portrait was made using a Generative Adversarial Network (GAN) and resembles the work of other artists, which has led to discussions about whether it might violate the IP rights of artists who make similar portraits (Newman & Gibson, 2020).

In conclusion, AI text-to-image generators, such as Craiyon, offer exciting opportunities for creativity but also pose ethical dilemmas and potential risks. With the advancement of these technologies, we must establish ethical and legal frameworks and promote responsible use to ensure the integrity of AI-driven creativity.

References:

Bassey, S. (2023). The Rise of Deepfake Technology: Navigating the Realm of AI-Generated Images & Videos. HackerNoon. https://hackernoon.com/navigating-deepfake-technology-and-the-realm-of-ai-generated-images-and-videos

Jones, J. (2018, October 26). A portrait created by AI just sold for $432,000. But is it really art? The Guardian. https://www.theguardian.com/artanddesign/shortcuts/2018/oct/26/call-that-art-can-a-computer-be-a-painter

Lomas, M. (2023, June 6). Europe wants platforms to label AI-generated content to fight disinformation. TechCrunch. https://techcrunch.com/2023/06/06/eu-disinformation-code-generative-ai-labels/

Newman, J., & Gibson, S. M. (2020). Blurring the lines: When AI creates art is it copyrightable? Lexology. https://www.lexology.com/library/detail.aspx?g=d9168101-70e2-47df-bbb1-985b98de7c21


The Hidden Environmental Cost of Generative AI

14 September 2023



Generative AI has completely changed the way we interact with technology by enabling computers to generate human-like text, images, and more. While AI systems like ChatGPT and Google’s Bard lack a physical presence, their impact on the environment is far from negligible. These powerful AI systems are powered by networks of servers in data centers around the world, which require large amounts of energy and water to operate. Unfortunately, companies like OpenAI, Google, and Microsoft remain secretive about the water and energy usage of their AI models (Singh, 2023). As a result, the public remains in the dark about the extent of the environmental impact.

Because of their complexity, training modern AI models places astronomical computational demands on data centers. A good example is OpenAI’s GPT-3, which was launched in 2020 and has a staggering 175 billion parameters (Korngiebel & Mooney, 2021). It is estimated that training such a large AI model can emit over 626,000 pounds of carbon dioxide equivalent (Hao, 2019). That is nearly five times the lifetime emissions of an average car in the United States (Hao, 2019). And this estimate does not even include the energy required to maintain these AI models! As AI’s influence continues to expand, the demand for more powerful machine learning models will only grow, leading to increased data usage and power consumption (Gartner, 2022). If we persist with our current AI practices, it is predicted that by 2030 the energy requirements for machine learning training, data storage, and processing could account for as much as 3.5% of global electricity consumption (Gartner, 2022).
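As a quick back-of-the-envelope check, the five-cars comparison follows directly from the cited figures. The per-car number below (roughly 126,000 lbs of CO2e over an average US car’s lifetime, fuel included) is the benchmark used in Hao’s source article; both values are rough estimates, not measurements of my own:

```python
# Rough sanity check of the emissions comparison cited above (Hao, 2019).
TRAINING_EMISSIONS_LBS = 626_000      # est. CO2e to train one large model
CAR_LIFETIME_EMISSIONS_LBS = 126_000  # est. avg US car lifetime, incl. fuel

cars_equivalent = TRAINING_EMISSIONS_LBS / CAR_LIFETIME_EMISSIONS_LBS
print(f"Training one model ~ {cars_equivalent:.1f} car lifetimes of CO2e")
```

Dividing the two estimates gives a ratio of about five, matching the “nearly five times” claim.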

While energy consumption is a significant concern, the water footprint of generative AI is often overlooked. Data centers that power AI computations need substantial amounts of water for cooling and maintenance. Although exact figures remain unknown, a study suggests that training GPT-3 in Microsoft’s US data centers could have consumed 700,000 liters of fresh water (Li et al., 2023). Furthermore, to keep the servers at optimal temperatures, a water bottle’s worth of fresh water is consumed for every 20 to 50 prompts given to the AI (Gendron, 2023).
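The per-prompt figure becomes tangible at scale. A minimal sketch of the arithmetic, assuming a standard ~500 ml bottle (the bottle size is my assumption; the 20–50 prompts-per-bottle range is from Gendron, 2023):

```python
# Scale up the cited per-prompt water estimate (Gendron, 2023).
BOTTLE_LITERS = 0.5  # assumed standard water bottle (~500 ml)
PROMPTS_PER_BOTTLE_LOW, PROMPTS_PER_BOTTLE_HIGH = 20, 50

def liters_for_prompts(n_prompts, prompts_per_bottle):
    """Fresh water consumed for n_prompts at a given cooling rate."""
    return n_prompts / prompts_per_bottle * BOTTLE_LITERS

# One million prompts lands somewhere between these bounds:
low = liters_for_prompts(1_000_000, PROMPTS_PER_BOTTLE_HIGH)   # best case
high = liters_for_prompts(1_000_000, PROMPTS_PER_BOTTLE_LOW)   # worst case
print(f"~{low:,.0f} to {high:,.0f} liters per million prompts")
```

Even at the optimistic end, a million prompts would consume on the order of ten thousand liters of fresh water.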

Fortunately, the AI community is starting to embrace more sustainable practices in response to these environmental concerns. These include using specialized energy-efficient hardware, optimizing code, and applying transfer learning techniques so that models do not have to be trained from scratch (Gartner, 2022).

In conclusion, generative AI tools have undeniably unlocked unprecedented capabilities. However, their ecological footprint is an issue that cannot be ignored. To reduce the environmental impact of generative AI tools, it is crucial to embrace green computing practices. In the ever-advancing world of AI, striking a balance between innovation and environmental stewardship is a necessity. We must ensure that as AI continues to thrive, it does so sustainably.

References:

Gartner. (2022, October 18). Gartner unveils top predictions for IT organizations and users in 2023. https://www.gartner.com/en/newsroom/press-releases/2022-10-18-gartner-unveils-top-predictions-for-it-organizations-and-users-in-2023-and-beyond

Gendron, W. (2023, April 14). ChatGPT needs to “drink” a water bottle’s worth of fresh water for every 20 to 50 questions you ask, researchers say. Business Insider. https://www.businessinsider.com/chatgpt-generative-ai-water-use-environmental-impact-study-2023-4?international=true&r=US&IR=T

Hao, K. (2019, June 6). Training a single AI model can emit as much carbon as five cars in their lifetimes. MIT Technology Review. https://www.technologyreview.com/2019/06/06/239031/training-a-single-ai-model-can-emit-as-much-carbon-as-five-cars-in-their-lifetimes/

Korngiebel, D. M., & Mooney, S. D. (2021). Considering the possibilities and pitfalls of Generative Pre-trained Transformer 3 (GPT-3) in healthcare delivery. Npj Digital Medicine, 4(1). https://doi.org/10.1038/s41746-021-00464-x

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI Less “Thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2304.03271

Singh, M. (2023, June 8). As the AI industry booms, what toll will it take on the environment? The Guardian. https://www.theguardian.com/technology/2023/jun/08/artificial-intelligence-industry-boom-environment-toll
