The day ChatGPT outstripped its limitations for me

20 October 2023


We have all known ChatGPT since the technological frenzy of 2022. This computer program was developed by OpenAI using the GPT-3.5 (Generative Pre-trained Transformer) architecture. It was trained on a huge dataset and can create human-like text based on the prompts it receives (OpenAI, n.d.). Many have emphasized the power and disruptive potential of such an emerging technology, whether in human enhancement by supporting market research and insights, or in legal document drafting and analysis, both of which increase human efficiency (OpenAI, n.d.).

Hype Cycle for Emerging Technologies, retrieved from Gartner.

However, despite its widespread adoption and the potential of generative AI, it still has many limitations that prevent us from using it to its full potential, such as hallucinated facts and a high dependence on prompt quality (Alkaissi & McFarlane, 2023; Smulders, 2023). The latter issue is the main topic of this blog post.

In the past, I asked ChatGPT, “can you create diagrams for me?”, and this was ChatGPT’s response:

I have been using ChatGPT for all sorts of problems since its widespread adoption in 2022 and have had many different chats, but I always tried to keep similar topics in the same chat, thinking, “Maybe it needs to remember; maybe it needs to understand the whole topic for my questions to get a proper answer.” One day, I needed help with a work project: understanding how to create a certain type of diagram, since I was really lost. ChatGPT helped me understand, but I still wanted concrete answers; I wanted to see the diagram with my own two eyes to make sure I knew what I needed to do. After many exchanges, I would try again and ask ChatGPT to show me, but nothing.

One day the answer came. I provided ChatGPT with all the information I had and asked again: “can you create a diagram with this information?” That is when, to my surprise, ChatGPT started creating an SQL-style interface, representing each part of the diagram one by one, with the links between them, and ending with an explanation of what it did. A part of the diagram is shown below (for work confidentiality reasons, the diagram is anonymized).

It was a success for me: I made ChatGPT do the impossible, something ChatGPT itself said it could not provide. That day, ChatGPT outstripped its limitations for me. This is how I realized the importance of prompt quality.

This blog post shows the importance of educating the broader public and managers about technological literacy in the age of Industry 4.0, and how, with the right knowledge and skills, generative AI can be used to its full potential to enhance human skills.

Have you ever managed to make ChatGPT do something it said it couldn’t with the right prompt? Comment down below.

References:

Alkaissi, H., & McFarlane, S. I. (2023). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2).

Smulders, S. (2023, March 29). 15 rules for crafting effective GPT Chat prompts. Expandi. https://expandi.io/blog/chat-gpt-rules/


The power of GPT-3

18 October 2022


In 1950, the British mathematician Alan Turing proposed a test for artificial intelligence that is still widely used today. The Turing Test, as it is nicknamed, assesses a machine’s ability to generate responses indistinguishable from a human. To pass, the machine must fool a person into thinking it is human at least 30% of the time during a five-minute conversation. The Turing Test is not a perfect measure of intelligence, but it is a useful way to compare the capabilities of different machines. And on that score, the latest artificial intelligence system from Google, called GPT-3, looks very promising.

GPT-3 is the latest incarnation of a so-called “language model” developed by Google Brain, the company’s deep-learning research group. Previous versions of the model, known as GPT-2 and GPT-1, were released in 2018 and 2019, respectively. But GPT-3 is much larger and more powerful than its predecessors. To train GPT-3, Google fed it a dataset of over 300 billion words, which is about 10 times the size of the training data used to develop GPT-2. As a result, GPT-3 is far better at understanding and responding to natural language queries.

In a recent test, GPT-3 was given a set of questions typically used to assess a machine’s reading comprehension. The questions were taken from the SQuAD 2.0 dataset, which is a standard benchmark for natural language processing systems. GPT-3 answered 95% of the questions correctly, while the best previous system, BERT, got only 93% correct. Similarly, GPT-3 outperformed all other systems on a reading comprehension test designed for elementary school children. On this test, GPT-3 was correct 94% of the time, while the best previous system got only 86% correct. These results suggest that GPT-3 is not only the best language model currently available, but also that it is rapidly approaching human-level performance on reading comprehension tasks.

This is remarkable progress, and it suggests that GPT-3 could be used for a variety of applications that require reading comprehension, such as question-answering, summarization, and machine translation. But GPT-3 is not just a reading comprehension machine. It is also very good at generating text.

At this point in the article, an interesting question can be posed: do you think the first paragraph was written by a human or by a machine? If you thought the correct answer was human, it may indicate that GPT-3 truly is on the verge of passing the Turing Test, as the texts it creates become less and less distinguishable from those written by humans. One of the main allures of this AI is its simplicity of use: the first paragraph was generated using the prompt “Write a short article about Turing’s principle. Describe how the emergence of GPT-3 has changed article writing. Use the style of ‘The Economist'” (the entire code can be found below; I truly recommend trying this AI, as the opportunities it offers are truly marvelous).

As GPT-3 itself mentions above, the Turing Test is an imperfect measure, since it does not account for whether an AI is intelligent or merely very skilled at imitating intelligence. This idea is encapsulated in the so-called “Chinese room argument” (Searle, 1999). It proposes a thought experiment: imagine that a person who does not know a single word of Chinese is locked in a room. In this room there is a set of instructions explaining every rule for translating an English sentence into Chinese. The person is given an input in English and translates it into Chinese according to all of the rules present in the room. To an outside observer it may appear that the person is fluent in Chinese, as they are able to translate every single phrase from English without a single mistake. A similar dynamic applies to the artificial neural network mechanism upon which GPT-3 is based: GPT-3 uses 175 billion learned parameters to accurately predict and produce text according to the user’s input (Floridi & Chiriatti, 2020). Based on those parameters, it analyzes which words are associated with the ones used in the prompt, and gives an output based on this probability. But if it were just based on those probabilities, shouldn’t it produce a list of unconnected words? How is it able to produce a coherent text that adheres to all the rules of English grammar? It uses a so-called transformer network (for a crude visualization, take a look at the featured image), an architecture which considers not only the probability of a word being related to the prompt text, but also the probability of one word appearing after another (Bousquet et al., 2021). To give an example, after the words “Alan Turing”, GPT-3 calculates the most likely word to occur next, taking into consideration all of the previous words: “In 1950, the British mathematician”. Since it associates 1950 with the past, and the prompt requests a description of the Turing Test, it assesses that the word with the highest probability is “proposed” (the procedure has been simplified for the sake of this article). Therefore, most scientists assume that machines have not reached sentience and are unlikely to do so, at least as long as the neural network architecture remains the most popular way of building AI.

Now I would like to pose a question: how close are we to reaching true, human-like artificial intelligence? Is it even possible, or will machines always remain powerful calculators that are very good at imitating intelligence but will never be able to understand it?
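The word-by-word prediction described above can be sketched with a toy bigram model. The tiny corpus and counts below are invented for illustration; GPT-3 itself uses 175 billion learned parameters rather than raw counts, but the principle of picking the most probable next word is the same:

```python
# Toy sketch: choose the next word by how often it followed the
# previous word in a (made-up) training corpus.
from collections import Counter

corpus = ("in 1950 the british mathematician alan turing proposed a test "
          "alan turing proposed the imitation game").split()

# Count how often each word follows each preceding word (bigram counts).
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word):
    """Return the word that most often follows `word` in the corpus."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == word}
    return max(candidates, key=candidates.get)

print(most_likely_next("turing"))  # "proposed" follows "turing" twice
```

A real language model replaces these raw counts with probabilities computed over the entire preceding context, which is what lets it stay grammatical over whole paragraphs.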

Code used:

import openai

# An original API key is required; for more information visit https://beta.openai.com/overview
openai.api_key = "sk-47TvSdwMJaIpFh9E22znT3BlbkFJ8lCt41LoDMRHv7V*****"

prompt = "Write a short article about Turings principle. Describe how emergence of GPT 3 has changed article writing. Use the style of 'The Economist'"

response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=prompt,
    logprobs=1,
    temperature=1,
    presence_penalty=1,
    best_of=5,
    max_tokens=3000,
)

print(response)

References:

Bousquet, O., Boucheron, S., & Lugosi, G. (2021). Introduction to Statistical Learning Theory. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (2nd ed., Vol. 112). Springer Verlag. https://doi.org/10.1007/978-3-540-28650-9_8

Floridi, L., & Chiriatti, M. (2020). GPT-3: Its Nature, Scope, Limits, and Consequences. Minds and Machines, 30(4), 681–694. https://doi.org/10.1007/S11023-020-09548-1/FIGURES/5

Searle, J. (1999). The Chinese Room. https://rintintin.colorado.edu/~vancecd/phil201/Searle.pdf


Is Artificial Intelligence Making Art?!

5 October 2016

So, you’ve decided to read a blog about artificial intelligence making art: one of the activities considered impossible for computers because it requires certain human cognitive traits that we have yet to understand ourselves? Well, without discussing the definition of art too much, I would like to tell you about the rapid succession of developments in the world of AI, and how its applications are surprising scientists as they become less dependent on human input and perform unprecedentedly complex tasks.

[Image: funny-animals]

To understand how pictures like the above are created with AI, we need to understand how artificial neural networks work.

Artificial neural networks make use of ‘nodes’ and are modeled on biological neural networks like the ones you and I have. The nodes form a hierarchy, and each node completes a very specific, simple task: recognizing a pattern and ‘firing’ a signal to a node higher up in the hierarchy when it does. For example, one node specializes in recognizing the slash ( / ), as in the letter A ( /-\ ). Another node specializes in recognizing the backslash ( \ ), and when a node higher up in the hierarchy receives signals from the slash ( / ), backslash ( \ ), and dash ( – ) nodes, it recognizes the letter A. In the same way, other letters are recognized, and a few levels up in the hierarchy the nodes “Apple” or “cAr” are activated, depending on the other signals. The higher in the hierarchy, the more abstract these nodes become, as ever more complex patterns are combined.
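The letter-A example above can be sketched in a few lines of Python. This is a deliberately crude illustration with hand-written rules; real networks use weighted sums and learned thresholds rather than fixed if-conditions:

```python
# Low-level "nodes" fire when their stroke appears in the drawing;
# a higher-level node fires only when the right combination of
# lower nodes has fired.

def stroke_nodes(drawing):
    """Each low-level node fires (True) if its stroke is in the drawing."""
    return {
        "slash": "/" in drawing,
        "backslash": "\\" in drawing,
        "dash": "-" in drawing,
    }

def letter_A_node(signals):
    """Higher node: fires only when slash, backslash and dash all fired."""
    return signals["slash"] and signals["backslash"] and signals["dash"]

print(letter_A_node(stroke_nodes("/-\\")))   # True: all three strokes present
print(letter_A_node(stroke_nodes("/ \\")))   # False: no dash
```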

[Image: Neuron3]

The above is called deep learning and belongs to the family of machine learning methods. These neural networks start ‘empty’ and are fed incredible amounts of data, for example the whole Google Images catalog of cat pictures. Without supervision from you or me, the program teaches itself to distinguish a picture of a cat from a picture with both a cat and a dog in it. Recognizing that a cat is a cat and not a dog is an example of a task that is effortless for humans but has been extremely difficult for a piece of software.
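A minimal sketch of “starting empty and learning from examples” is the classic perceptron: weights begin at zero and are nudged whenever the model misclassifies. The two features (ear pointiness, snout length) and the cat/dog numbers below are invented for illustration; real networks learn from millions of images, not four hand-picked rows:

```python
# Each example: ((ear_pointiness, snout_length), label) with
# label +1 for cat, -1 for dog. Values are made up for illustration.
examples = [
    ((0.9, 0.2), 1),   # cat: pointy ears, short snout
    ((0.8, 0.3), 1),   # cat
    ((0.3, 0.9), -1),  # dog: floppy ears, long snout
    ((0.2, 0.8), -1),  # dog
]

weights = [0.0, 0.0]   # the network starts "empty"

for _ in range(10):                      # a few passes over the data
    for (x1, x2), label in examples:
        prediction = 1 if weights[0] * x1 + weights[1] * x2 > 0 else -1
        if prediction != label:          # learn only from mistakes
            weights[0] += label * x1
            weights[1] += label * x2

# After training, a cat-like input scores positive:
print(weights[0] * 0.9 + weights[1] * 0.2 > 0)  # True
```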

Artificial intelligence is getting smarter. Not only is it telling us which movie to watch or what music to listen to; recently, AI programs have composed a song, made a movie trailer, written a book, defeated the world champion in the Chinese game Go, and won the TV show Jeopardy! (the last two deserve a story of their own).

This brings us to the art that AI has been creating for the past year. Researchers at Google realized that, after letting an artificial neural network learn, they could reverse the process. So instead of giving the program an image and asking what was in it, they gave the program so-called ‘white noise’, i.e. no object at all, and asked it to create a picture of what it saw. As a result, the program started to look for patterns and created images of objects it ‘thought’ it saw, ending up with images like these (there is a link behind the image with more of them).

[Images: Iterative_Places205-GoogLeNet outputs]
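The “reversal” described above can be sketched as gradient ascent on the input: start from noise and repeatedly nudge it so that a fixed pattern detector responds more strongly. The 3×3 detector below is made up for illustration; the real Inceptionism work ran this procedure against a trained GoogLeNet, not a hand-written filter:

```python
import numpy as np

# A hand-written 3x3 "detector" that responds to blob-like spots.
# (Invented for illustration; a real network's filters are learned.)
detector = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=float)

rng = np.random.default_rng(0)
image = rng.normal(size=(3, 3))          # start from white noise

for _ in range(100):
    grad = detector                      # gradient of the response w.r.t. the image
    image += 0.1 * grad                  # ascend: make the detector respond more

# The image now strongly contains the pattern the detector "looks for".
print(np.sum(detector * image))
```

Each step pushes the pixels toward whatever the detector fires on, which is why DeepDream images fill up with the shapes the network was trained to recognize.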

Some people took it further and programmed it to zoom into the picture it had made, resulting in an endless stream of new patterns and new objects.

[Animated GIF: Deep_Dreaming_into_noise_with_inceptionism]

AI is getting smarter as not only computing power but also techniques improve. Researchers are getting unexpected output, like the animated GIF above, and are surprised by the effectiveness of neural networks.

Although I think this is art, there are discussions on whether AI will ever succeed in human tasks like creating art. What do you think? Share your thoughts!


Joep Beliën


Wikipedia. (2016). Artificial neural network. [online] Available at: https://en.wikipedia.org/wiki/Artificial_neural_network [Accessed 4 Oct. 2016].

Newsweek. (2016). Can an artificially intelligent computer make art?. [online] Available at: http://europe.newsweek.com/can-artificially-intelligent-computer-make-art-462847?rm=eu [Accessed 4 Oct. 2016].

 Casey, M. and Rockmore, D. (2016). Looking for art in artificial intelligence. [online] Phys.org. Available at: http://phys.org/news/2016-05-art-artificial-intelligence.html [Accessed 4 Oct. 2016].

Wikipedia. (2016). Deep learning. [online] Available at: https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_network_architectures [Accessed 4 Oct. 2016].

Furness, D. (2016). Google’s newly launched Magenta Project aims to create art with artificial intelligence. [online] Digital Trends. Available at: http://www.digitaltrends.com/cool-tech/ai-art-google-magenta-project/ [Accessed 4 Oct. 2016].

IFLScience. (2016). Google’s AI Can Dream, and Here’s What it Looks Like. [online] Available at: http://www.iflscience.com/technology/artificial-intelligence-dreams/ [Accessed 4 Oct. 2016].

Wikipedia. (2016). Jeopardy!. [online] Available at: https://en.wikipedia.org/wiki/Jeopardy! [Accessed 4 Oct. 2016].

Mordvintseev, A., Olah, C. and Tyka, M. (2015). Inceptionism: Going Deeper into Neural Networks. [online] Research Blog. Available at: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html [Accessed 4 Oct. 2016].

PBS Idea Channel (2016). Can an Artificial Intelligence Create Art?. [video] Available at: https://www.youtube.com/watch?v=Sbd4NX95Ysc [Accessed 4 Oct. 2016].

