Will Robots replace Humans?


written by Robin Fieseler, 17th of October 2022, 5 min read

Google has developed its newest Natural Language Processing (NLP) model, the Pathways Language Model (PaLM). It possesses a groundbreaking 540 billion parameters and aims at generalizing artificial intelligence while being highly efficient (Narang and Chowdhery 2022).

PaLM in action:

In the picture, you can see how PaLM solves a math exercise and creates the necessary text. Solving math and putting sentences together is impressive, but it doesn’t seem like it will replace us. YET.
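To make the idea concrete, here is a minimal sketch of the kind of few-shot, “explain your reasoning” prompt behind such math demonstrations. The word problems below are illustrative examples only, not the exact content of the picture, and no real PaLM API is called.

```python
# Illustrative only: a few-shot prompt in which the example answer spells out
# its reasoning step by step, so the model is nudged to do the same for the
# new question. The problems are standard grade-school examples chosen for
# this sketch.

FEW_SHOT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. They used 20 to make lunch and bought 6 more.
How many apples do they have now?
A:"""

# A language model completing this prompt is expected to continue with the
# step-by-step reasoning ("23 - 20 = 3, 3 + 6 = 9, the answer is 9.").
print(FEW_SHOT_PROMPT)
```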

Therefore, let’s look at the most recent robot developments.

Do As I Can, Not As I Say

Google’s robotics researchers stated this year:
“We evaluate our method on a number of real-world robotic tasks, where we show the need for real-world grounding and that this approach is capable of completing long-horizon, abstract, natural language instructions on a mobile manipulator.” (Ahn et al. 2022).

But what did the researchers do? They combined an algorithm called “SayCan” with the PaLM NLP model and applied the software to a robot (a mobile manipulator from Everyday Robots with a 7-degree-of-freedom arm and a two-finger gripper). In addition, reinforcement learning is used to let the robot learn the individual skills it needs, for example grabbing a dropped cup, putting it in the bin and cleaning up. This robot selects and performs the correct sequence of skills 84% of the time, a 50% improvement over previous robots (Ahn et al. 2022).
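To make the combination concrete, here is a minimal sketch of a SayCan-style selection loop. The two scoring functions are hypothetical stand-ins: in the real system the “usefulness” score comes from PaLM and the “can I do this here?” score from learned value functions. This is an illustration of the idea, not the authors’ implementation.

```python
# Sketch of the SayCan idea: at each step, pick the skill that is both useful
# for the instruction (language model score) and feasible in the current
# state (affordance score), by multiplying the two and taking the maximum.

from typing import Callable, Dict, List

def plan_with_saycan(
    instruction: str,
    skills: List[str],                                        # e.g. "find a sponge", "go to the bin", "done"
    language_score: Callable[[str, List[str], str], float],   # P(skill is useful | instruction, plan so far)
    affordance_score: Callable[[str], float],                 # P(skill succeeds from the current state)
    max_steps: int = 10,
) -> List[str]:
    plan: List[str] = []
    for _ in range(max_steps):
        # Combine "is this a sensible next step?" with "can the robot actually do it right now?"
        scores: Dict[str, float] = {
            skill: language_score(instruction, plan, skill) * affordance_score(skill)
            for skill in skills
        }
        best = max(scores, key=scores.get)
        plan.append(best)
        if best == "done":   # a terminating pseudo-skill ends the plan
            break
    return plan
```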

This combination leads to a future where robots perform tasks as requested. Did I just say future? It is clearly the present, even though it is only applied within the research sector and doesn’t work 100% of the time. But do you always do as you’re told?

Artificial Agents Mimic Human Brains

Lastly, Tim Behrens, James Whittington and others have found evidence that cognitive mapping applied to artificial agents (robots) could imitate how a brain stores and accesses knowledge (Behrens et al. 2018).

To quote: “We highlight how artificial agents endowed with such principles exhibit flexible behavior and learn map-like representations observed in the brain. Finally, we speculate on how these principles may offer insight into the extreme generalizations, abstractions, and inferences that characterize human cognition.” (Behrens et al. 2018)

Key Takeaway

In conclusion, this blog post has shown that robots can successfully perform tasks based on words, and that the way the brain stores and applies knowledge can be imitated.

People who believe that robots can and will replace humans have been around for a long time. These “believers” have formed groups that incorporate technology into the human body to stay ahead in the race between robots and humans. They are called transhumanists. Now the transhumanists finally have scientific evidence that the brains of robots and humans work similarly, and proof that robots can perform tasks based on spoken words in an unfamiliar environment. So for them, the question is not whether robots will replace us, but when.

Therefore, I ask you: Will Robots replace Humans?

Let me know in the comments!

References

Ahn, M., Brohan, A., Brown, N. et al. (2022), ‘Do As I Can, Not As I Say: Grounding Language in Robotic Affordances’ <https://say-can.github.io/assets/palm_saycan.pdf>, updated 19 Aug 2022, accessed 17 Oct 2022.

Behrens, T. E. J., Muller, T. H., Whittington, J. C. R. et al. (2018), ‘What Is a Cognitive Map? Organizing Knowledge for Flexible Behavior’, Neuron, 100/2: 490–509 <https://www.sciencedirect.com/science/article/pii/S0896627318308560>.

Narang, S., and Chowdhery, A. (2022), ‘Google AI Blog: Pathways Language Model (PaLM): Scaling to 540 Billion Parameters for Breakthrough Performance’ <https://ai.googleblog.com/2022/04/pathways-language-model-palm-scaling-to.html>, updated 17 Oct 2022, accessed 17 Oct 2022.


Sentiment analysis: How a computer knows what you’re feeling

5 October 2021

One of the most distinguishing features of humanity is being able to read someone else’s mood. People estimate other people’s moods and emotional states based on their verbal and non-verbal communication. Humans learn this skill from a very early age and are able to tell the signs apart even for different people. For example, we all know when our best friend is feeling down, even when they are trying to hide it from others. Maybe they are louder than usual, or maybe they are quieter, but you know that they are troubled. Being able to distinguish this for different people comes from experience and from how close you are to the other person. Years of friendship make it easier to read their mood and know when something is wrong.

Sentiment analysis

But what if I told you a computer can also do this? And it doesn’t have to ‘be your friend’ for years and years. Using techniques such as Natural Language Processing (NLP), the computer is able to recognise words and sentences as emotions. Words like ‘stress’ and ‘feeling alone’ are registered as negative, whilst ‘glad’ and ‘excited’ are marked as positive. This is called sentiment analysis. Some sentiment analysis systems even link certain combinations of words to feelings, such as depressed, sad, cheerful or hopeful.
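As a minimal sketch of this idea, the toy scorer below counts positive and negative words from a tiny, made-up lexicon. Real sentiment analysis uses much larger lexicons or trained models, so treat this purely as an illustration.

```python
# A minimal lexicon-based sentiment scorer: individual words carry a positive
# or negative value and the text's sentiment is their sum. The word lists are
# invented for this sketch.

POSITIVE = {"glad", "excited", "cheerful", "hopeful"}
NEGATIVE = {"stress", "stressed", "alone", "sad", "depressed"}

def sentiment_score(text: str) -> int:
    """Return a crude sentiment score: > 0 positive, < 0 negative, 0 neutral."""
    score = 0
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

print(sentiment_score("I am so glad and excited about tomorrow"))   # 2
print(sentiment_score("Feeling alone and stressed lately"))         # -2
```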

Possibilities

But what are the possible use cases of sentiment analysis? And what are the challenges? Sentiment analysis is, for example, already being used in healthcare. Based on the unstructured notes a psychologist takes during sessions with their client, sentiment analysis can track the client’s mood over time. In this case, it includes possible diagnoses and ‘trigger words’ such as suicidal. This gives the psychologist a single overview of the client’s emotional and mental state over time.
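A hypothetical sketch of that use case could look like the following: each dated session note gets a crude sentiment score, and the trend across sessions gives the overview. The notes, the word lists and the one-line scorer are all invented for illustration; a real system would run a proper sentiment model over the psychologist’s notes.

```python
# Sketch: score each dated session note and inspect the trend over time.

from datetime import date

POSITIVE = {"glad", "hopeful", "excited"}
NEGATIVE = {"alone", "stressed", "suicidal"}

def note_score(note: str) -> int:
    words = [w.strip(".,").lower() for w in note.split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

session_notes = {
    date(2021, 9, 1):  "Client reports feeling alone and stressed at work.",
    date(2021, 9, 15): "Still stressed, but glad about progress with family.",
    date(2021, 10, 1): "Client sounds hopeful and excited about the new job.",
}

for day, note in sorted(session_notes.items()):
    print(day, note_score(note))   # an upward trend suggests an improving mood
```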

But what if this is taken one step further? What if this is done based on social media? Think about it: your stories, chat messages, posts, web usage, emoticon usage; everything is documented and examined for clues about your emotional state.

Challenges

One of the most important questions that arises from this concerns privacy. Who is allowed to measure your mental state, and when, based on your social media usage and your Google searches? This is very sensitive (health) data, not stated by a doctor but estimated by a computer. Another challenge is interpreting the words correctly: ‘good’ is a positive word, but ‘really not good’ is a negative combination of words. The complexity of language can make it hard for a computer to interpret it correctly. However, Artificial Intelligence (AI) is often used here so that the estimations become more accurate with each analysis.
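The negation problem is easy to demonstrate with a toy scorer: a purely word-by-word count marks ‘really not good’ as positive because it only sees ‘good’, while even a crude negation rule gets it right. The word lists and the two-word negation window below are assumptions made for this sketch; real systems handle this with far more sophisticated NLP models.

```python
# Naive word counting vs. a simple negation-aware variant.

POSITIVE = {"good", "great"}
NEGATIVE = {"bad", "awful"}

def naive_score(text: str) -> int:
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in text.lower().split())

def negation_aware_score(text: str) -> int:
    words = text.lower().split()
    score = 0
    for i, w in enumerate(words):
        value = (w in POSITIVE) - (w in NEGATIVE)
        if value and "not" in words[max(0, i - 2):i]:  # "not" within the two preceding words
            value = -value                             # flip the polarity
        score += value
    return score

print(naive_score("really not good"))           # 1  (wrongly positive)
print(negation_aware_score("really not good"))  # -1 (correctly negative)
```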

Sources

Abualigah, L., Alfar, H., Shehab, M., & Hussein, A. (2019). Sentiment Analysis in Healthcare: A brief Review. In M. Abd Elaziz, M. Al-quaness, A. Ewees, & A. Dahou, Recent Advances in NLP: The Case of Arabic Language (pp. 129-141). Switzerland: Springer, Cham. doi:10.1007/978-3-030-34614-0_7

Denecke, K., & Deng, Y. (2015, March 25). Sentiment analysis in medical settings: New opportunities and challenges. Artificial Intelligence in Medicine, pp. 17-27. doi:10.1016/j.artmed.2015.03.006
