Will AI-Powered Deepfakes be the Future of Education and Content Creation?

23 October 2023


In the field of artificial intelligence, one noteworthy area of research centers on ethical and moral considerations across domains, and a prominent example is the concept of “deepfakes.” Deepfakes have opened up a new dimension of artificial intelligence, making it possible to create metahumans or AI avatars capable of mimicking human actions, speech, and gestures.

But what if we harnessed deepfake technology to instantly enhance common educational practices, such as creating presentations? What would that look like? I recently had the opportunity to explore a generative AI web-based application called “Deep Brain AI,” which expands the horizons of AI capabilities, particularly in the realm of content creation. What does this mean in practical terms? Users can develop PowerPoint presentations, just like they always have, to convey information to an audience. The intriguing twist is that full-body animated AI avatars or metahumans can replace the human speaker. Consequently, the presenter doesn’t need to speak, as the AI avatar or metahuman can handle the task.

The web-based application allows you to create templates, insert text boxes, and upload videos and audio, just like a standard PowerPoint application. The real innovation emerges when you can create an AI avatar, either male or female, with the ability to speak in various languages and accents from different countries. For instance, you can choose between accents like U.S. English, Indian English, Saudi Arabian Arabic, Taiwanese Chinese, and German from Germany. The AI avatar can articulate the content through a text script, effectively enabling text-to-speech input.

The application offers a range of features, including control over scene speed and the ability to insert additional pauses. What’s even more fascinating is the incorporation of advanced generative AI technologies, such as ChatGPT, into the application. I found this particularly intriguing, as it recognizes the utility of ChatGPT and integrates it seamlessly into the platform.

However, there were some shortcomings when using the application, most notably the unnatural quality of the deepfake avatars. They were easily discernible as artificial, which could leave users and their audiences dissatisfied when watching or listening to the AI-narrated presentations.

Nonetheless, the age of artificial intelligence is advancing at an unprecedented pace, and my overall experience with the application has been positive. I’m keen to hear about your experiences with Deep Brain AI or deepfake technology in general.


Explore HeyGen’s video magic!

22 October 2023


Having previously faced challenges in producing professional videos, I found the introduction of a new AI tool named HeyGen to be a promising solution. HeyGen is an advanced AI video generation platform designed to simplify the creation of professional videos. The tool uses AI-driven avatars that stimulate creativity by removing the costly constraints of traditional video filming and editing. By employing advanced generative AI technology, it enables users to convert textual content into engaging video material. HeyGen also offers an API for integrating a user’s project with other tools.
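Purely as an illustration of what calling such a video-generation API from your own code might look like, here is a minimal Python sketch. The endpoint URL, header, and every field in the payload are assumptions made for illustration, not HeyGen’s documented schema, so consult the official API documentation before relying on any of it.

```python
import requests

API_KEY = "YOUR_HEYGEN_API_KEY"  # placeholder key
# Assumed endpoint for illustration only; check HeyGen's official API docs.
ENDPOINT = "https://api.heygen.com/v2/video/generate"

payload = {
    # All field names below are illustrative, not the documented schema.
    "script": "Welcome to our product demo. Let me walk you through the main features.",
    "avatar_id": "some-avatar-id",   # hypothetical avatar identifier
    "voice_id": "some-voice-id",     # hypothetical voice identifier
    "background": "office",
}

response = requests.post(
    ENDPOINT,
    json=payload,
    headers={"X-Api-Key": API_KEY},
    timeout=30,
)
response.raise_for_status()
# Such APIs typically return a job or video id that you poll until the video is rendered.
print(response.json())
```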

A user can utilize the tool to create several types of video content, such as demos and tutorials, without the need for specialized software. The tool offers different plans, including a free plan that allows a user to create videos of up to one minute.

I explored the many features of HeyGen and here is my evaluation:

My initial impression is that the tool has a user-friendly interface.

The tool gives the user the option to start creating videos with a very diverse range of templates and avatars. There is also the option to animate your photo with text.

The tool also offers different languages and even different tones that a user can utilize.

The platform is equipped with a convenient drag-and-drop functionality that allows users to seamlessly incorporate diverse elements such as background music and personalized avatars into their projects.

My overall impression is that a user does not need technical expertise to use HeyGen. It is very beginner-friendly, and the tool offers guidance as well.

At present, HeyGen does not provide the option for multiple avatars within a single scene, suggesting room for enhancement in this aspect. Additionally, the introduction of a student plan could be a valuable addition, especially considering that the most affordable plan currently starts at $24.

Source:

HeyGen. (n.d.). Explainer video maker: Create explainer videos online. https://www.heygen.com/explainer-video-maker


The Present & The Future of AI-Generated Videos – a Discussion with Synthesia

22 October 2023


Imagine a world where you can create captivating, high-quality video content at the speed of thought. AI-generated videos are now transforming this vision into a reality. These digital platforms can turn basic scripts – and generate the scripts themselves, as well – into stunning masterpieces, adapting to various languages and global audiences through diverse avatars and instant translation options. 

To delve into this cutting-edge technology, I chose to test out Synthesia, a leading AI video platform, and a unicorn with a $1 billion valuation (UCL, 2023). With features like automatic script-to-video conversion, multilingual support, and customization (Synthesia – AI Video Generation Platform, n.d.) as well as an impeccable customer list, I was intrigued.

I delved deep into researching the company and the ethics of AI-generated content, generated two demo videos, and interviewed Borys Khodan, the Head of Paid Media at Synthesia, for comprehensive research into the company’s direction and ethical considerations.

The Synthesia Experience

To start with, the overall customer experience for demo videos at the moment seems heavily targeted towards B2B. Synthesia’s client list features well-known companies, including Amazon, Zoom, and the BBC, rather than individuals. Organizations collaborating with Synthesia often use it to create how-to videos for learning and training, IT, sales, and marketing teams, and there are a ton of features available for them. What truly caught my attention is the effortless video updates: if the script of a video has to be changed, a simple script tweak yields a new video almost instantly, sparing companies the costly ordeal of re-recording content from scratch. The potential savings here are impressive and certainly make a difference in the corporate landscape. Moreover, Synthesia plans to expand its offerings for individuals (B2C), introducing advanced 3D avatars and other features for additional use cases, as confirmed by Borys.

Source: Synthesia.io

My personal experience involved choosing one of Synthesia’s demo video options, “Sales Pitch.” I edited the standard text and received the video via email within minutes. It was an impressive process, though there was a minor glitch. While the AI handled the standardized text perfectly, when I attempted to generate a sample pitch for a potential employer it struggled with the word “resume,” pronouncing the noun as the verb. Presumably, the paid version offers tools to easily adjust such details. When I tried the “compliment” option to generate another video, it worked smoothly, sending a personalized video to me directly without any hiccups. It was quite fun to watch the text that I wrote in 15 seconds come to life in a video!

AI-Generated Video: Sales Pitch

Industry Overview

In this rapidly evolving landscape, competition among AI-driven video generation platforms is fierce, leading to heightened ethical concerns. Many companies scrape the web for data, sometimes using personal data to train their algorithms and get ahead of the competition. But for Synthesia, the path is clear: no shortcuts, no compromises. The company’s founders have committed substantial resources to ensure ethical data procurement, even if that means encountering short-term delays and additional expenses. The company wants to be sure that the data used to train its models is ethically sourced, legal, and does not violate any privacy laws (Browne, 2023). This approach may shield Synthesia from potential legal repercussions stemming from unethical or illegal data sources.

Ethical Implications

This ethical lens extends to the creation of AI-generated videos, and brings us to the questions of authorship, artistic authenticity, and ownership. What if AI is requested to generate violent or inappropriate content? What happens when an AI can replicate the voice and likeness of a person without their consent? The concept of deepfakes, where AI manipulates content to deceive and misinform, also poses a clear danger, and there are many opportunities and threats in that direction (Collins, 2023). I asked some of these questions to Borys in a fruitful discussion about the future of AI-generated videos. 

“We maintain very high ethical standards and closely monitor all content generated by our customers. If the content violates any of our policies, such as featuring violence, harmful behavior, offensive language and more, the video would not be generated,” Borys said. “Our high ethical standards aren’t just nice to have; they are core to our mission and set us apart in a competitive market.”

Unfortunately, not everyone in the industry follows these standards, making the next few years crucial for determining who will succeed and what legal requirements will evolve. For now, there are no regulations requiring content to be labeled as AI-generated, but that may change in the future.

Conclusion

AI-generated videos hold immense potential for reshaping the corporate landscape. Synthesia serves as an inspiring example of this transformation, and even working with their demo product was seamless, quick, and simple. However, harnessing the technology’s potential to the fullest extent requires unwavering adherence to strict ethical standards by all the companies in the sector, reminiscent of Synthesia’s own commitment. When wielded responsibly, AI-generated videos stand as a remarkable testament to the synergy of technology and business solutions, marking an important step into the future.

AI-Generated Video: Compliment

References

Browne, R. (2023, June 13). Nvidia-backed platform that turns text into A.I.-generated avatars boosts valuation to $1 billion. CNBC. https://www.cnbc.com/2023/06/13/ai-firm-synthesia-hits-1-billion-valuation-in-nvidia-backed-series-c.html

Collins, T. (2023, April 24). The Rise of Ethical Concerns about AI Content Creation: A Call to Action. IEEE Computer Society. https://www.computer.org/publications/tech-news/trends/ethical-concerns-on-ai-content-creation

Synthesia – AI video generation platform. (n.d.). https://www.synthesia.io/

UCL. (2023, June 15). AI firm Synthesia, co-founded by UCL scientist, becomes $1bn unicorn. UCL News. https://www.ucl.ac.uk/news/2023/jun/ai-firm-synthesia-co-founded-ucl-scientist-becomes-1bn-unicorn


My Experience with DALL·E’s Creative Potential

21 October 2023


I tried DALL·E after reading so many posts about how it would revolutionize businesses, and I was very disappointed.

DALL·E is a project developed by OpenAI, the same organization behind models like GPT-3 and ChatGPT. Unlike ChatGPT, DALL·E creates images from the prompts it is given (OpenAI, n.d.). It uses deep learning techniques such as variational autoencoders (VAEs) and generative adversarial networks (GANs). VAEs represent complex data in a more compact form, while GANs aim to create images that are as realistic as possible by pitting a generator, which keeps producing candidate images, against a discriminator that discards any image it deems fake (Lawton, 2023; Blei et al., 2017). The business world and most of the LinkedIn posts I saw were idolizing this technology and explaining how it could enhance humans in several ways. One way that was relevant to me was the creation of images, signs, or pictograms to enhance the potential of PowerPoint presentations.
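For readers who would rather script this than use the web interface, here is a minimal sketch using OpenAI’s Python library as it looked in 2023; the prompt is an invented example of the kind of pictogram request mentioned above, and the library interface may have changed since.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

# Ask DALL·E for a simple, flat pictogram rather than a photorealistic scene.
response = openai.Image.create(
    prompt="Minimalist flat icon of a supply chain network, blue on a white background",
    n=1,
    size="512x512",
)

# The API returns a temporary URL pointing to the generated image.
print(response["data"][0]["url"])
```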

After writing my thesis last year, I had to create a PowerPoint presentation covering its main points. I thought it would be a great way to start using DALL·E, so I tried creating my own visuals to clearly represent what my thesis entailed. After many tries, even with the best prompts I could write, and even with the help of ChatGPT, none of the visuals it produced looked realistic or well defined; they were just abstract art that did not really represent anything.

Reflecting on that experience, I came to think that the fascination people have with groundbreaking technology sometimes clouds their view of its practical applications. I do not doubt that DALL·E can create great visuals and can be fun to play with; however, it does not always adapt seamlessly to specific creative needs.

Ultimately, using DALL·E reminded me that we should always stay critical and manage expectations when it comes to groundbreaking emerging technology. It is appealing to listen to all the promises that come with disruptive technologies, but sometimes we realize that no tool is one-size-fits-all.

References

Blei, D. M., Kucukelbir, A., & McAuliffe, J. D. (2017). Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518), 859–877.

Lawton, G. (2023). GANs vs. VAEs: What is the best generative AI approach? TechTarget. https://www.techtarget.com/searchenterpriseai/feature/GANs-vs-VAEs-What-is-the-best-generative-AI-approach

OpenAI. (n.d.). DALL·E 2. https://openai.com/dall-e-2/


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

ChatGPT then went on to sum up various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked it to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI? Unsurprisingly, the answer I got was: “I can’t help you with that.” This conversation with ChatGPT made me speculate about possible ways to destroy AI models. Let us explore this field and see if it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

When you feed GoogLeNet, one of the best-known image classification models, a picture that is clearly a panda, it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’, invisible to humans but a great hindrance to deep learning models.

This is a textbook example of adversarial machine learning: the noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can we use it to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).
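To make the forward and backward passes concrete, here is a minimal PyTorch sketch of a single training step on a toy classifier; the architecture and the random batch are invented purely for illustration.

```python
import torch
import torch.nn as nn

# Toy classifier and a fake batch, purely for illustration.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

x = torch.randn(32, 784)          # batch of inputs
y = torch.randint(0, 10, (32,))   # true labels

# Forward pass: compute predictions and the loss.
logits = model(x)
loss = loss_fn(logits, y)

# Backward pass: backpropagation computes gradients of the loss with
# respect to the weights, and the optimizer nudges the weights in the
# direction that reduces the loss.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```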

So while training a model involves minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and the exact values can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models? Or is it just a ‘party trick’?
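Before turning to full-scale attacks, here is a minimal sketch of this “update the inputs” step using the fast gradient sign method (FGSM), one standard way of crafting such perturbations, reusing the toy model from the previous sketch.

```python
# Fast Gradient Sign Method (FGSM): perturb the *input* to increase the loss.
epsilon = 0.05  # perturbation budget; small enough to be nearly invisible

x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), y)
loss.backward()

# Step each input value in the direction that increases the loss.
with torch.no_grad():
    x_adv = x_adv + epsilon * x_adv.grad.sign()

# The perturbed inputs are often misclassified even though they look unchanged.
print(model(x_adv).argmax(dim=1))
```

Note that this perturbs inputs at prediction time; poisoning a model’s training data, as discussed next, is a related but distinct kind of attack.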

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques used to manipulate machine learning models by adding imperceptible noise to large amounts of input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification, and by injecting carefully crafted noise in vast amounts, the training data of AI models can be modified. There are different types of adversarial attacks: if attackers have access to the model’s internal structure, they can apply a so-called ‘white-box’ attack, in which case they would be able to compromise the model completely (Huang et al., 2017). This would pose serious threats to AI models used in, for example, self-driving cars, but luckily, access to the internal structure is very hard to gain.

So, if computers were to take over from humans in the future, as science fiction movies predict, could we use attacks like these to bring those evil AI computers down? In theory we could, though practically speaking there is little evidence, as there have not been major adversarial attacks yet. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is: will that potential be exploited in a good way, keeping it as a method of control over AI models, or will it be used as a means of cyber-attack, justifying ChatGPT’s negative tone when explaining it?

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247–278.


How Will Generative AI Be Used in the Future? Answer: AutoGen

21 October 2023


The generative AI tools we know today include ChatGPT, Midjourney, DALL·E 3, and many more. These tools are very good and advanced, but they have some flaws, such as not being able to perform long iterations. Now there is something new called AutoGen. AutoGen is an open-source project from Microsoft that was released on September 19, 2023. At its core, AutoGen is a generative AI framework that works with agents; those agents work together in loops. Agents are, in essence, pre-specified workers that can become anything, so there are agents that can code well and agents that can review the generated code and give feedback. Agents can be made to do anything and become experts in any field, from marketing to healthcare.

An example of what AutoGen can do is the following: if I want to write some code to get the stock price of Tesla, I could use ChatGPT, and it will output some code. Most of the time, the code written by ChatGPT via the OpenAI website will have some errors. But with AutoGen, there are two or more agents at work: one that outputs code and a second one that runs the code and tells the first agent if something is wrong. This process of generating and running the code goes on until the code works and produces the correct output. This way, the user does not have to manually run the code and ask for fixes; with AutoGen, it is done automatically.
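To give a feel for what that looks like in practice, here is a minimal sketch of a two-agent AutoGen setup along the lines of the stock-price example, based on the AssistantAgent/UserProxyAgent pattern from AutoGen’s documentation; the model name and API key are placeholders, and option names may differ slightly between versions.

```python
import autogen  # pip install pyautogen

# LLM configuration; the model and key are placeholders.
config_list = [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}]

# Agent 1: writes the code.
assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)

# Agent 2: runs the code locally and reports errors back to the assistant.
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",  # fully automatic loop, no human in between
    code_execution_config={"work_dir": "coding", "use_docker": False},
)

# The two agents iterate until the script runs and prints the price.
user_proxy.initiate_chat(
    assistant,
    message="Write and run a Python script that prints the latest Tesla (TSLA) stock price.",
)
```

The user proxy executes whatever code the assistant writes and feeds any error output straight back, which is exactly the generate-run-fix loop described above.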

I also tried to create some code with AutoGen. I first installed all the necessary packages and got myself an API key for OpenAI’s GPT-4. Then I started working on the code and decided to create the game “Snake”. Snake is an old and easy game to create, but it might be a challenge for AutoGen. I started the process of creating the snake game, and it had a good first run: I was able to create a first, simple version of the game. I then came up with some iterations to improve it. The game now also has obstacles that end the game if the snake bumps into one. This was also made by AutoGen without any problems. After playing around, I was really amazed at how powerful AutoGen is, and I can only imagine what else can be created with it.

AutoGen is a very promising development and will likely be the future of professional code development and automation tasks. As large language models (LLMs) get more powerful, AutoGen will become more powerful too, because all the individual agents will be more powerful. It will be interesting to follow this development and see whether AutoGen could create games that do not yet exist.


How AI is used in the music industry

21 October 2023


Writing this during the Amsterdam Dance Event weekend, I decided to dedicate this blog to the use of Artificial Intelligence (AI) in the music industry.

AI technology is increasingly used as support in various industries. One of these is the music industry, and the opportunities are truly endless. The use of AI in the music industry can be divided into three categories. Firstly, there is instrumental and vocal reproduction, where tone transfer algorithms are used to reproduce existing music with different tones or voices. A recent trend on the social media platform TikTok is AI music covers, where the user chooses a singer (who need not be able to sing the song) and a song, and AI generates the cover using the chosen “artist” as input. For example, listen to this song where Freddie Mercury sings Skyfall by Adele. Obviously, Freddie Mercury had sadly passed away long before this song was released:

A second way of using AI in the music industry is mixing and mastering, which helps artists balance instruments or clean up the audio in a song. For example, Paul McCartney used AI this summer on an old 1978 recording of John Lennon to clean up the vocals and use them for a new song.

Third and last, AI is also used for song composition. This might be the type of use the current music industry is most afraid of. AI models are trained on the melodies, rhythms, and forms of existing music; based on that, users can give instructions on what song should be composed. AI is able to generate high-quality music that is easily modified to one’s liking. For example, Don Diablo, a Dutch DJ, nowadays refers to himself as a “digital artist” due to his use of AI. According to him, the opportunities are endless, and he does not see the further development of this technology as a risk to the music industry.

Bibliography:

Palamara, J. (2023, August 14). 3 ways AI is transforming music. The Conversation. http://theconversation.com/3-ways-ai-is-transforming-music-210598

Reid, J. (2023, June 13). Paul McCartney says A.I. got John Lennon’s voice on ‘last Beatles record’. CNBC. https://www.cnbc.com/2023/06/13/paul-mccartney-says-ai-got-john-lennons-voice-on-last-beatles-record.html


Simplify your research with Humata AI

21 October 2023


Approximately two years ago, I dedicated considerable effort to my undergraduate thesis, diligently reviewing around 10 articles daily. During this period, AI tools were not as accessible as they are today. Consequently, I did not actively pursue an AI solution to enhance my efficiency. Recently, however, I encountered Humata, which offers a potential solution for this problem. 

Humata AI, an AI assistant tool, leverages a large language model (LLM) to let users pose questions about their documents and receive relevant answers. Humata uses AI to create two kinds of value for its users: automation and better decisions. It essentially functions as a ChatGPT for PDF files, but is tailored to the analysis and comprehension of lengthy papers and various document types. It enables users to grasp data more rapidly and swiftly navigate through a PDF document (Humata, n.d.).
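Humata does not publish its internals, so the following is only an assumption-laden sketch of the general “chat with your PDF” pattern such tools tend to follow (extract text, embed chunks, retrieve the most relevant one, ask an LLM), written against the OpenAI Python library as it looked in 2023; it is not Humata’s actual pipeline.

```python
import numpy as np
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

# 1. Pretend these chunks were extracted from the uploaded PDF.
chunks = [
    "Section 2 describes the survey methodology and a sample of 300 firms...",
    "Section 4 reports that digital adoption correlates with revenue growth...",
]

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# 2. Embed the chunks once, then embed the user's question.
chunk_vecs = [embed(c) for c in chunks]
question = "What does the paper find about digital adoption?"
q_vec = embed(question)

# 3. Pick the most similar chunk (cosine similarity) as context.
sims = [float(v @ q_vec / (np.linalg.norm(v) * np.linalg.norm(q_vec))) for v in chunk_vecs]
context = chunks[int(np.argmax(sims))]

# 4. Ask the chat model to answer using only that context, so the source is traceable.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(answer["choices"][0]["message"]["content"])
```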

I decided to experiment with this tool, and here is my assessment:

Upon logging in, you have the option to upload your file directly. The file I uploaded had a size of 1129 KB, and it took me under a minute to initiate my inquiries.

Prior to posing questions, Humata will provide you with a concise overview of the article. Additionally, it will offer a selection of sample questions.

The tool not only provides you with a response but also emphasizes the paragraphs containing the answer, specifies the page(s) where the answer is located, and includes citations to enable users to trace the source of the information.

Overall, I found the tool to be highly beneficial. It could enhance research in innovative ways, and I would recommend it for quickly skimming PDF files during your studies. Nevertheless, there was some ambiguity, as a significant portion of the PDF file was highlighted when I posed my question.

However, AI tools that enhance academic research come with risks. One of them is the risk of relying too heavily on technology, impeding our capacity for critical thinking and problem-solving (Esplugas, 2023). They can also provide an unfair advantage to the more privileged members of society due to their costs (Esplugas, 2023).

Humata offers four different subscription plans, including a free plan that permits users to upload up to 60 pages and pose a total of 10 questions. There is also a student plan available for $1.99, allowing students to upload up to 200 pages and access basic chat support (Humata, n.d.).

Sources:

Humata. (n.d.). https://www.humata.ai/ 

Esplugas, M. (2023). The use of artificial intelligence (AI) to enhance academic communication, education and research: a balanced approach. Journal of Hand Surgery (European Volume), 48(8), 819-822.


The day ChatGPT outstripped its limitations for me

20 October 2023


We have all known ChatGPT since the technological frenzy of 2022. This computer program was developed by OpenAI using the GPT-3.5 (Generative Pre-trained Transformer) architecture. It was trained on a huge dataset and creates human-like text based on the prompts it receives (OpenAI, n.d.). Many have emphasized the power and disruptive potential of such emerging technology, whether in human enhancement, for example by supporting market research and insights, or in legal document drafting and analysis, all of which increase human efficiency (OpenAI, n.d.).

Hype cycle for Emerging Technologies retrieved from Gartner.

However, despite its widespread adoption and the potential generative AI has, there are still many limits to it that prevent us from using it to its full potential. Examples are hallucinating facts or a high dependence on prompt quality (Alkaissi & McFarlane, 2023; Smulders, 2023). The latter issue links to the main topic of this blog post.

In the past, I asked ChatGPT, “Can you create diagrams for me?”, and this was ChatGPT’s response:

I have been using ChatGPT for all sorts of problems since its widespread adoption in 2022 and have had many different chats, but I always tried to keep similar topics in the same chat, thinking, “Maybe it needs to remember; maybe it needs to understand the whole topic for my questions to get a proper answer.” One day, I needed help with a project at work in understanding how to create a certain type of diagram, since I was really lost. ChatGPT helped me understand, but I still wanted concrete answers: I wanted to see the diagram with my own two eyes to make sure I knew what I needed to do. After many exchanges, I would try again and ask ChatGPT to show me, but nothing came of it.

Then one day came the answer. I provided ChatGPT with all the information I had and asked again: “Can you create a diagram with this information?” That is when, to my surprise, ChatGPT started producing an SQL-style representation, rendering each part of the diagram one by one, with the links between them, and ending with an explanation of what it had done. A part of the diagram is shown below (anonymized for work confidentiality).

It was a success for me: I had made ChatGPT do the impossible, something ChatGPT itself had said it could not provide. That day, ChatGPT outstripped its limitations for me, and this is how I realized the importance of prompt quality.
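The same lesson about prompt quality can be illustrated programmatically. The sketch below contrasts a vague request with a detailed one via the OpenAI chat API as it looked in 2023; the entities in the detailed prompt are invented for illustration and unrelated to the anonymized work diagram above.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"  # placeholder

vague_prompt = "Can you create a diagram for me?"

detailed_prompt = (
    "Using the entities Customer(id, name), Order(id, customer_id, date) and "
    "OrderLine(order_id, product_id, quantity), draw an entity-relationship "
    "diagram as ASCII art, listing each table with its columns, marking the "
    "foreign-key links between them, and then briefly explain the relationships."
)

# The vague prompt tends to produce a refusal or a generic answer;
# the detailed prompt reliably yields a usable text-based diagram.
for prompt in (vague_prompt, detailed_prompt):
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply["choices"][0]["message"]["content"])
    print("-" * 40)
```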

This blog post shows the importance of educating the broader public and managers about technological literacy in the age of Industry 4.0, and how, with the right knowledge and skills, generative AI can be used to its full potential to enhance human skills.

Have you ever managed to make ChatGPT do something it said it couldn’t with the right prompt? Comment down below.

References:

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus, 15(2).

Smulders, S. (2023, March 29). 15 rules for crafting effective GPT Chat prompts. Expandi. https://expandi.io/blog/chat-gpt-rules/


The Magic of AI-Powered Design

20 October 2023


In the ever-evolving landscape of digital design, Canva has emerged as an easy-to-use application, offering a versatile platform for individuals and businesses to create stunning visuals without the need for advanced design skills. While I have used Canva Pro for multiple years now, it is the recent integration of artificial intelligence (AI) that has revolutionized the way I approach my designs.

For one, Canva integrated a text-to-image generator into its application (Canva, n.d.a), meaning it can generate images from text. Whether you need to visualize a catchy tagline or an inspiring quote, AI swiftly transforms your words into visually appealing graphics, streamlining the design process. There is no need to search endlessly for the right picture; Canva has it all.

Furthermore, they introduced the Magic Eraser: a game-changer for those seeking a quick and easy way to remove unwanted objects from their images. AI algorithms analyze the content and intelligently fill in the gaps, leaving you with a flawless composition (Canva, n.d.b). This tool has proven very useful and easy to use over the last months, as I have personally made a lot of cover pages even better with this feature.
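Canva’s Magic Eraser is proprietary, but the underlying idea of filling a masked region from its surroundings can be illustrated with classical inpainting in OpenCV; this is only a rough stand-in for whatever Canva actually does, and the file name and mask coordinates are placeholders.

```python
import cv2
import numpy as np

# Load a photo and build a mask marking the unwanted object (white = remove).
image = cv2.imread("photo.jpg")
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (120, 80), (220, 200), 255, -1)  # example region to erase

# Classical inpainting fills the masked area from the surrounding pixels.
result = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("photo_cleaned.jpg", result)
```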

Lastly, Magic Design. This feature maximizes the AI-driven design functionalities of the platform: one can simply select a preferred color profile, mood, and a few additional options, and Canva’s AI takes over, crafting a design that most of the time aligns very well with your input (Canva, n.d.c).

In conclusion, Canva’s forays into the world of AI have undoubtedly elevated its usability for design enthusiasts, students, and professionals alike. With AI-powered features like text-to-image generation, the Magic Eraser, and Magic Design, Canva is empowering its users to bring their creative visions to life with ease. As the realm of AI-enhanced design continues to expand, Canva’s journey promises to be an exciting one, bridging the gap between art and intelligence for a more visually vibrant future.

Bibliography:

Canva. (n.d.b). Magic Eraser: Remove objects from photos with one click. https://www.canva.com/features/magic-eraser/

Canva. (n.d.a). Using Text to Image – Canva Help Center. https://www.canva.com/help/text-to-image/

Canva. (n.d.c). Visualize your ideas with Magic Design AI: Magic presentations. https://www.canva.com/designschool/tutorials/new-features/magic-design/
