Google Bard versus ChatGPT: Which is better?

12 October 2023


One of the most well-known forms of generative AI is the AI-powered language model. Language models such as ChatGPT played a big role in AI's surge in popularity, putting impressive new models on the market for people to use for free. Bigger companies quickly caught on to the hype around this emerging technology and started developing their own AI-powered language models. One of the biggest to do so was Google, with its model Bard.

The premise of this post was my curiosity about what an AI language model from another tech giant like Google had to offer compared to ChatGPT. I therefore decided to test different aspects of both Bard and ChatGPT and compare the results. For this comparison I used the newest online version of Google Bard as of October 2023 and ChatGPT (GPT-3.5).

In my testing of both models, I looked at three main scenarios in which these language models may be used. The three categories of prompts were: writing an email, solving a math problem, and retrieving information from a text. These scenarios capture different abilities of the models, so it is interesting to see how each model reacts in each situation.

Starting with writing an email, I gave both Bard and ChatGPT the prompt “Write a short sick email to my teacher”. The results can be seen in the figures below, with Figure 1 showing the response from Bard and Figure 2 the response from ChatGPT. From these figures it is clear that the two models took different approaches to fulfilling my request. Both emails communicated the correct message and were complete, but the email ChatGPT produced was considerably longer than Bard's while communicating essentially the same message. Since the prompt asked for a short email, Bard performed better here: its email was concise while ChatGPT's was not. Bard also gave tips on potential additions or adjustments that could be made to the email to better suit the user's needs. Overall, I would say Bard gave the better outcome for the “short sick email” that was requested, thanks to those extra tips and the fact that ChatGPT's email was relatively long.

The second aspect compared was how well each model could solve a math problem. I gave both models differentiation questions from Khan Academy and examined how they approached them. One difficulty was entering the questions into the models' text boxes because of the mathematical notation. In the end, with some adjustments, I managed to phrase the questions so that they were properly understood in the majority of cases. The first question asked for the value of a basic derivative; the question can be seen in Figure 3, and the calculations of Bard and ChatGPT in Figures 4 and 5 respectively. Bard gave an answer of -10 while ChatGPT gave an answer of 2; both were incorrect, as the correct answer was -3. The second question, seen in Figure 6, had a similar result. Bard's calculations can be seen in Figure 7 and ChatGPT's in Figure 8. In this case, Bard answered ⅓ and ChatGPT answered 2 while the correct answer was ⅔, so both answers were wrong again. Bard's responses were slightly easier to follow than ChatGPT's due to the formatting. With Bard I also had the option to upload a picture, which made it easier to enter more complex equations that were difficult to copy as text. When it came to math problems, however, both models were incompetent and could not complete the requests correctly.
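Since neither model got either derivative right, it is worth noting that answers like these can be checked independently with a symbolic math library rather than trusting a chatbot's arithmetic. Below is a minimal sketch in Python using SymPy, with a hypothetical function standing in for the actual Khan Academy problems (which appear only in the figures):

```python
# Verifying a derivative symbolically instead of trusting a chatbot's answer.
import sympy as sp

x = sp.symbols("x")

# Hypothetical example function; the real problems from the figures
# are not reproduced here, so this merely illustrates the check.
f = x**3 - 5 * x**2 + 2 * x

df = sp.diff(f, x)        # symbolic derivative: 3*x**2 - 10*x + 2
print(df)
print(df.subs(x, 1))      # derivative evaluated at x = 1 -> -5
```

A check like this takes seconds and gives an exact symbolic result, which makes it easy to spot when a language model has produced a plausible-looking but wrong derivation.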

The last aspect tested was the models' ability to extract information from a text. In this case, the first section of the Wikipedia page on Tesla (Figure 9) was taken as the text for the models to analyze. I gave both Bard and ChatGPT the question “What is Tesla known for based on this text?”, the text being that of Figure 9. The responses of Bard and ChatGPT can be seen in Figures 10 and 11 respectively. Both models gave a complete and correct answer to the question, covering the topics in appropriate depth. One notable difference was that ChatGPT produced a much longer list with more aspects than Bard did. ChatGPT also included numbers and statistics in its answer while Bard did not use these at all. It was also helpful that ChatGPT ended its answer with a one-sentence summary to make the final idea clear to the reader. Bard did not do this, but it did include some additional information about the company.

Overall, both models were sufficient at extracting information from the text, but ChatGPT did it better in terms of the amount, depth, and variety of information. Analyzing and comparing Bard and ChatGPT, it was clear that even though they serve the same function, their end results are noticeably different. Bard was slightly better at creating content such as an email with basic criteria, while ChatGPT was slightly better at extracting information from a text. In completing math problems, both models were relatively incompetent. I have mostly used ChatGPT in the past, but after comparing the two, I would also like to use Bard in the future. Both models have their own strengths and weaknesses, so it is important to know which one works better for you in a given use case. I enjoyed the process of exploring these AI-powered language models and hope to continue using and exploring them in the future.


A Beginner's Experience using Generative AI text-to-image modeling

26 September 2023


Personally, I have little experience using generative AI to create images and visuals, so I decided to do some exploring. There are already a couple of big generative AI programs, such as Midjourney and DALL·E, so I decided to explore those first. However, to my disappointment, these programs did not offer free versions for users to try out at this point in time. According to Midjourney, this is because there were too many users, so you must subscribe in order to use their services. The fact that these technologies are no longer available for free shows how much they, and their user bases, have grown in the last few years. The free-to-try generative AI image programs I could find often offered a couple of free image generations before you had to pay for more.

After some more research, I found a website called Feng My Shui, where I was able to experiment with the Stable Diffusion XL model. Stable Diffusion is an open-source AI model that allows users to create images from prompts and descriptions. I experimented with some basic prompts to test how the software worked and was highly impressed with the results. One example of the prompts I entered was “Rotterdam Skyline in Cartoon Style”. From that, I got a selection of images that were accurate and generally of good quality (Figures 1 and 2).

I decided to try another program to see how it would compare to the results I received with Stable Diffusion XL. I found another free trial program called Freepik that also allowed me to generate images from prompts and descriptions. I again used the prompt “Rotterdam Skyline in a Cartoon Style” and received very different results than before. The main difference I noticed was that Freepik had a very different style from that of Stable Diffusion XL, even when given the same prompt (Figures 3 and 4). The images were again high-quality renders, but less accurate than what Stable Diffusion XL provided. It is clear that they show a skyline of some sort in cartoon style, but it is hard to tell it is Rotterdam compared to Figures 1 and 2. I also tried some more complex prompts with Freepik to test the accuracy of the generations. I used the prompts “A student studying economics using a computer in a library” and “A forest with vibrant plants and a small stream of water running through it during a storm” (Figures 5 and 6). Again, I was impressed by the accuracy of the images. Both generations were accurate and of high quality, but they did not get every detail right. For example, it is not clear that the student is studying economics, and certain parts of the image were not well generated, like the student's fingers.

After trying these generative AI image programs, I was very impressed and look forward to using them in the future. I did notice that different programs produce quite different results for the same prompts, so it is important to explore and find a generative AI image program that fits your personal needs and preferences. However, as discussed before, these programs keep getting harder to use for free, and I noticed that the biggest platforms are not currently available to try for free. I hope to be able to try those in the future to compare them to the programs I have used so far. I plan to keep using programs like this to experiment and learn, and I expect they will continue to develop rapidly.
