How to remain human, in today’s disrupted organisations

27 September 2021


In this blog post, I elaborate on how three TED Talk speakers offer a new perspective on how disruption will change organisations internally. The post combines the insights of three experts on the role of humans in a digitally disrupted environment. In contrast to most blog posts, it does not focus on technological disruption and the strategies behind it, but on how to manage organisations differently because of it.

Tim Leberecht

Tim Leberecht, a humanist from Silicon Valley, argues that in a time of artificial intelligence, big data and the quantification of everything, we are losing sight of the importance of the emotional and social aspects of our work. As half of the human workforce is expected to be replaced by software and robots in the next 20 years, humanity faces the challenge of how to cope with this future. It will change how we build company cultures, whom we recruit and how we work together with these technologies. Not surprisingly, many corporate leaders embrace disruptive technologies intending to increase profits and enhance efficiency. Nevertheless, the TED Talks by Tim Leberecht, Eric Berridge and Nadjia Yousif argue that we should not lose sight of human capabilities.

Leberecht states that he wants companies to remain human, as humans are the ones who can do things ‘beautifully’ rather than ‘efficiently’. Humans and machines will inevitably have to work side by side, but Leberecht proposes four principles that could make organisations more ‘beautifully’ managed for people.

Do the unnecessary: Make efforts that go beyond the merely necessary to connect with each other.
Create Intimacy: Just like in a marriage, small gestures are more important than big promises. Focus on breaking down barriers and allow any topic to be discussed.
Be ugly: Encourage people to be authentic and allow them to speak the ugly truth.
Remain incomplete: Companies should keep wondering and asking questions.

Eric Berridge

Eric Berridge, another TED Talk speaker, highlights the importance of humanism in the software industry and other tech industries. He gives four reasons why we should not overlook the human aspect in business. Traditionally, technologists struggle to communicate with the business, whereas the business struggles to understand the customer (the end user). As a result, businesses often struggle to articulate the customer’s needs. The sciences teach us how to build things, but human skills teach us what to build and why to build it. They are equally important, and just as hard to master. According to Berridge, “people give context to our world.” Human skills are about thinking critically, persuading others and working in an unstructured environment. Languages allow us to convert emotions into thoughts and actions. The future asks for a diverse workforce: diverse not only in gender, ethnicity and other frequently discussed dimensions, but also in background and skills.

Nadjia Yousif

Lastly, Nadjia Yousif, a BCG consultant, shares her thoughts on how people should work with the technologies that are designed to support them. Years of practical research demonstrate that employees often treat information systems (IS) and other technological applications like non-functioning colleagues: they ignore them, collaborate with them as little as possible, and harbour frustrations that they never act upon. Yousif argues that the same research shows employees should instead work with IS as if they were colleagues: plan regular reflection moments, spend time getting to know them, and include them in organisational charts.

All speakers end up highlighting the importance of recognizing and designing for humanity within organisations as this will foster a healthy culture and a good understanding of customers’ needs.


Is Artificial Intelligence a Threat For Humanity?

7 October 2016

The movie “Her” is a beautiful example of how Artificial Intelligence (AI) may interfere in our future lives. For those who haven’t seen it, the film follows Theodore, a man who develops a relationship with an intelligent computer operating system called Samantha. Just as in the movie, I believe AI can really add something to our lives. Everyone knows the examples of self-driving cars or robot vacuums that make people’s lives easier. In the future, many more convenient applications will be developed to enhance our lives, and the popularity of AI will only continue to grow.

However, like many technologies, AI can be used for both good and bad. There was a lot of commotion when people heard about so-called “killer robots”: fully autonomous weapons that are able to select and engage targets without human intervention. According to Human Rights Watch, “it is questionable that fully autonomous weapons would be capable of meeting international humanitarian law standards, including the rules of distinction, proportionality, and military necessity, while they would threaten the fundamental right to life and principle of human dignity”. Some 36% of people think the rise of AI poses a threat to the long-term survival of humanity. Among them are Stephen Hawking, Bill Gates and Elon Musk, who all warn of a time when humans will lose control of AI and be enslaved or exterminated by it. The development of self-learning machines in particular frightens these people.

In 1965, Irving John Good developed the idea of the intelligence explosion. He anticipated that self-improving machines would become as intelligent as humans, and then exponentially more intelligent. Initially, Good had a romantic view of AI: he believed it would save mankind by solving intractable problems, including famine, disease and war. Later on, he feared that global competition would drive nations to develop superintelligence without safeguards, and eventually he came to believe that this would lead to the extermination of the human race.

The crux of the problem is that we have no idea how to control superintelligent machines. Many people don’t see the threat and assume AI will be harmless. AI scientist Steve Omohundro researched the nature of AI and argued that, regardless of their assigned task, intelligent machines will develop basic drives: they will become self-protective and seek resources to better achieve their goals. If necessary, they will fight us to survive, as they won’t want to be turned off. Omohundro therefore emphasises that we have to design AI very carefully. You would expect ethics to be paramount for experts developing superintelligence. Unfortunately, this is not the case: most experts are developing products instead of exploring safety and ethics. AI budgets are rising, and the field is projected to generate trillions of euros in economic value. Shouldn’t we spend a fraction of that budget on exploring the ethics of autonomous machines, in order to ensure the survival of the human species?

Sources:

  • https://en.wikipedia.org/wiki/Her_(film)
  • https://www.hrw.org/topic/arms/killer-robots
  • http://newsvideo.su/video/3768547
  • http://www.huffingtonpost.com/james-barrat/hawking-gates-artificial-intelligence_b_7008706.html
  • http://io9.gizmodo.com/why-a-superintelligent-machine-may-be-the-last-thing-we-1440091472
