Why and how should Metaverse ethics be established promptly?

9 October 2022


Even though the Metaverse does not exist yet, the ethical rules that will accompany its implementation should be created and set as a standard as soon as possible. Shaping the ethics of the Metaverse is especially vital because we will not encounter a single platform but many of them, each with a different vision for designing our virtual reality. Based on early experiments with digital environments, we can expect a significant number of bullying and harassment incidents if no regulation of Metaverse ethics is put in place.

What should be prevented is self-regulation in the form of internal ethics boards, as was applied in the case of AI technology. In my opinion, we cannot expect companies that create “independent” ethics boards within the company to truly have the public interest in mind. Rather, they are unlikely to address issues (such as the reinforcement of racial biases through AI) in ways that would harm their financial position. Facebook is an example of a company that could substantially limit abuses on its platform with AI but will not do so, since that would mean decreased user engagement. Thus, we cannot trust that companies themselves will address an ethical implementation of the Metaverse when it is likely to collide with their financial profit.

The issue of ethics in the Metaverse should be addressed by an independent, worldwide board that would introduce effective oversight, taking into account the security and privacy of Metaverse users. In contrast to AI, which is mostly governed by soft law (ethical guidelines that do not legally bind organizations), the Metaverse should, in my opinion, be governed by hard law, as it is even more threatening to users’ privacy and safety (Jobin et al., 2019). The questions, however, remain: would countries agree to adopt such hard law? And would it not limit the development of the Metaverse?

Sources:

Jobin, A., Ienca, M. and Vayena, E., 2019. The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9).

Entsminger, J., 2022. Who will establish Metaverse ethics? Project Syndicate. Available at: https://bit.ly/3Mhnnlk (Accessed: 9 October 2022).

TechDesk, 2022. 8 things you can’t do in the metaverse: A look into this new virtual world. The Indian Express. Available at: https://indianexpress.com/article/technology/crypto/8-things-you-cant-do-in-the-metaverse-a-look-into-this-new-virtual-world-8156570/ (Accessed: 8 October 2022).


Roboethics: Are robots like Tesla Optimus a threat to humanity?

6 October 2022


Elon Musk, one of the most brilliant people on this earth, announced this week that a Tesla robot will be on the market in three to five years. This AI-driven robot will be called Tesla Optimus and should cost around $20,000. The purpose of the robot is to help with everyday tasks, such as delivering parcels or watering plants (McCallum, 2022).
That Tesla is coming out with an AI-driven robot seems strange, as Elon Musk has often spoken out about the dangers of Artificial Intelligence, saying, for example, that robots will one day be smarter than humans. He has even called AI humanity’s “biggest existential threat” (BBC News, 2017). Yet he says the Tesla Optimus will not be a danger to humanity because Tesla adds safeguards, such as a stop button (McCallum, 2022). It is therefore worth thinking about where the boundaries lie when it comes to designing humanoid robots.

Although robots have only started to become truly realistic in recent years, Isaac Asimov (1941) wrote about ‘The Three Laws of Robotics’ over 80 years ago:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, the EPSRC (Bryson, 2017) added the following five principles:

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

These laws and principles indicate that robots are there to help people, not to hurt them. In addition, humans should always retain power over robots, and not the other way around. This seems logical, but with the rapid rise of AI, robots may one day become smarter than humans. Therefore, I think now is the time for strict and clear laws around designing robots. Robots should always be limited so that they can never be smarter than humans.
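
To make the ordering of these rules concrete, here is a minimal, purely illustrative Python sketch (not how Tesla or any real robot implements safety) of a hierarchy in which harm to humans always overrides obedience, and obedience overrides self-preservation, with a hard stop button on top:

```python
# Toy illustration (not a real robot control system): a strict priority order
# between "do not harm humans", "obey orders" and "self-preservation",
# loosely inspired by Asimov's Three Laws. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool        # would the action injure a human?
    ordered_by_human: bool   # was the action requested by a human?
    protects_robot: bool     # does the action preserve the robot itself?

def action_allowed(action: Action, emergency_stop_pressed: bool) -> bool:
    """Return True if the action passes the (simplified) rule hierarchy."""
    if emergency_stop_pressed:        # hard safeguard, like the stop button Tesla mentions
        return False
    if action.harms_human:            # First Law always wins
        return False
    if action.ordered_by_human:       # Second Law: obey, unless it breaks the First
        return True
    return action.protects_robot      # Third Law: self-preservation comes last

print(action_allowed(Action("water the plants", False, True, False), False))   # True
print(action_allowed(Action("push past a person", True, True, False), False))  # False
```
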
If proper regulations are put in place, I think robots can be of great value to humanity. Think, for example, of humanoid robots in healthcare: these robots can ensure that more people receive good-quality care at the same time. I am curious to see how AI-driven robots will evolve in the coming years. At the very least we can say that robots are no longer the future; they are the present!

Bryson, J. J. (2017, April 3). The meaning of the EPSRC principles of robotics. Connection Science, 29(2), 130–136. https://doi.org/10.1080/09540091.2017.1313817

Asimov, I. (1941). Three Laws of Robotics. In: Runaround.

McCallum, B. S. (2022, October 1). Tesla boss Elon Musk presents humanoid robot Optimus. BBC News. Retrieved October 6, 2022, from https://www.bbc.com/news/technology-63100636

BBC News. (2017, August 21). Musk warns of “killer robot” arms race. Retrieved October 6, 2022, from https://www.bbc.com/news/business-40996009


Who is responsible for decisions made by algorithms?

9 September 2022


The number of processes being taken over by Artificial Intelligence (AI) is rapidly increasing. Moreover, the results of these processes no longer merely assist humans in their decision-making: more often than not, the output of the algorithm is the decision. In these cases, if a human is involved in the process at all, he or she usually simply has to adhere to the results of the algorithm (Bader & Kaiser, 2019).

This raises the accountability question: if the algorithm misperforms, who is accountable for the consequences? The most logical options are either the designers and creators of the algorithm or its users. However, neither option has an obvious advantage over the other, since both raise considerable difficulties.

Firstly, placing accountability on the designers or creators of the algorithm raises concerns. One of the first scientists to be concerned about accountability in the use of computerized systems was Helen Nissenbaum. In 1996, well ahead of her time, she wrote a paper describing four barriers that obscure accountability in a computerized society. These four barriers are rather self-explanatory: many hands, bugs, the computer as scapegoat, and ownership without liability (Nissenbaum, 1996). To this day, these four barriers illustrate very well how difficult it is to assign accountability when a process is aided by (or even fulfilled by) an algorithm (Cooper et al., 2022).

Secondly, placing responsibility on the user is difficult, as in a significant proportion of cases the user has little to no influence on the content of the algorithm. Also, as stated before, users are sometimes obliged to adhere to the outcome presented to them by the algorithm (Bader & Kaiser, 2019).

Currently, most case studies show that the creators of algorithms sign their accountability off to the users during the acquisition of the product containing the algorithm. For example, when you buy a Tesla with ‘Full Self-Driving Capability’, Tesla simply states that these capabilities are solely included to assist the driver and that the driver therefore remains responsible at all times (Tesla, 2022; Ferrara, 2016).

In my opinion, it would be wise to explore what can be done to sign accountability off to the users of an algorithm not only legally (as Tesla does) but also morally, perhaps already during the design phase of the algorithm. A proposed research question could be stated as follows:

“What can be done about the design of an artificially intelligent algorithmic system to maintain accountability on the user side?”
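
As a sketch of what “accountability by design” could look like in practice, the hypothetical snippet below logs every algorithmic decision together with the model version, the inputs, and whether a human reviewed or overrode it. All names and fields are illustrative assumptions, not an existing standard:

```python
# Hypothetical sketch of "accountability by design": every algorithmic
# decision is stored with the information needed to trace who (or what)
# was responsible for it. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    model_name: str                # which system produced the output
    model_version: str             # exact version, so the run can be reproduced
    inputs: dict                   # the data the decision was based on
    output: str                    # what the algorithm decided or recommended
    human_reviewer: Optional[str]  # who (if anyone) looked at it
    human_overrode: bool           # did the user deviate from the recommendation?
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[DecisionRecord] = []

def record_decision(record: DecisionRecord) -> None:
    """Append the decision to an append-only audit log."""
    audit_log.append(record)

record_decision(DecisionRecord(
    model_name="loan_scoring",
    model_version="2.3.1",
    inputs={"income": 42000, "existing_debt": 3000},
    output="reject",
    human_reviewer="j.smith",
    human_overrode=False,
))
```

Such an append-only log would at least make it traceable, after the fact, whether the “many hands” involved included a human who could reasonably be held accountable.
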

References

  1. Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672. https://doi.org/10.1177/1350508419855714
  2. Cooper, A. F., Laufer, B., Moss, E., & Nissenbaum, H. (2022). Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. arXiv:2202.05338 [cs]. http://arxiv.org/abs/2202.05338
  3. Ferrara, D. (2016). Self-Driving Cars: Whose Fault Is It? Georgetown Law Technology Review, 1, 182.
  4. Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/BF02639315
  5. Tesla. (2022). Autopilot and Full Self-Driving Capability. https://www.tesla.com/support/autopilot


Dark Patterns: Hotel California or Roach Motel?

10 October 2021


Fans of Hotel California by the Eagles will most likely recognise the famous last lines: ‘You can check out any time you like. But you can never leave!’. While there are multiple interpretations of what this may refer to, the phrase can also be applied to how digital interfaces can be designed to make you do things you did not want to do. Interface designers use all sorts of tricks to make sure that you, the user, are nudged into specific behavior which is beneficial for their purposes. Not convinced that this is actually true? Have you ever tried to cancel a subscription but couldn’t find an easy way to do it? Or have you ever clicked a piece of normal-looking content, only to find out that it was a disguised advertisement? These are all examples of dark patterns: design that is intentionally crafted to be misleading or to make certain tasks complex to perform.

Case: Amazon’s Roach Motel

If you have ever tried to delete your Amazon account but gave up, I don’t blame you. The interface of Amazon’s website has been intentionally crafted to discourage users from performing an action that hurts the company. Not only is the option buried deep in the website, it is also not located in an intuitive place. Take a look at the fragment below (0:19 – 1:41) to see the number of hoops you have to jump through.

Which dark patterns exist?

Harry Brignull is an expert in the field of user experience who coined the term ‘dark patterns’ back in 2010. On his website, darkpatterns.org, he shares the types of dark patterns he has identified, along with examples that you have probably already encountered at some point. Below is a small overview of dark patterns you are likely to come across:

  • Roach Motel: Just like the Hotel California, it is easy to get in – but near impossible to get out. The Amazon case is a good example of this dark pattern: signing up is very easy, but deleting your account is nearly impossible if you don’t know where to look.
  • Bait and switch: When you expect a specific thing to occur, but something else occurs instead. Think of online stores luring you in with low prices, only for additional charges to be applied at checkout. Or Microsoft’s attempt to ‘misguide’ users into upgrading to Windows 10.
  • Confirmshaming: Trying to guilt the user into a specific action, where the decline option is worded in such a way to shame the user. Think of wordings such as: ‘No thanks, I don’t want to benefit from this exclusive discount’.
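
As a toy illustration of how recognisable confirmshaming wording is, the snippet below flags decline-button copy that contains guilt-laden phrases. The marker list is an assumption made for illustration, not a real detection tool:

```python
# Toy heuristic (illustrative only) for spotting "confirmshaming" copy in a
# decline button: guilt-laden phrasing instead of a neutral "No thanks".

GUILT_MARKERS = [
    "i don't want",
    "i prefer to miss out",
    "i hate saving money",
    "no thanks, i",
]

def looks_like_confirmshaming(decline_label: str) -> bool:
    """Return True if the decline label contains a guilt-laden phrase."""
    label = decline_label.lower()
    return any(marker in label for marker in GUILT_MARKERS)

print(looks_like_confirmshaming("No thanks, I don't want to benefit from this exclusive discount"))  # True
print(looks_like_confirmshaming("No thanks"))                                                        # False
```
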

What can we do about dark patterns?

As long as interface designers are able to nudge users into the behavior of their liking, dark patterns will most likely never cease to exist. There is hope, though. According to Harry Brignull, the best weapon against dark patterns is to be aware of their presence and to shame the companies that use them. LinkedIn, for example, settled a lawsuit for $13 million for using dark patterns to trick users into inviting their network to the platform. While in practice this amounted to a mere 10 dollars for every user affected, it does show that there is awareness of such malpractices.

References

https://www.youtube.com/watch?v=kxkrdLI6e6M
https://blog.ionixxtech.com/how-to-avoid-dark-patterns-in-ux/
https://www.darkpatterns.org/types-of-dark-pattern


Chatbot Explosion

18 September 2021


Chatbots are popping up at more and more companies. They were primarily used in customer service, but they are now also being applied to improve customer experience (CX) and business efficiency, to name just a few areas. Chatbots are also referred to as virtual agents, digital assistants, virtual customer assistants, and conversational AI bots.

Unforgettable COVID-19

COVID-19 is acting as a catalyst for 76 percent of enterprises to invest in long-term IT reforms [1]. Because of the pandemic, businesses are digitizing to safeguard staff and to serve consumers who are experiencing mobility issues. Spending on artificial-intelligence-powered (AI-powered) chatbots will reach $78 billion in 2022, a massive increase from the $24 billion forecast in 2018 [1]. Software is the fastest-growing technology category, with AI/cognitive systems accounting for 40% of the market and a five-year compound annual growth rate (CAGR) of 43% [1]. Investments are concentrated in two verticals: deep learning and machine learning applications (with wide applications across all sectors) and conversational AI (chatbots, personal assistants, virtual agents, etc.). The United States has the largest conversational AI market in terms of size and growth, while Southeast Asia has the fastest CAGR [1]. With increased private equity investment in AI/machine learning, the United States will dominate the adoption of conversational AI. Furthermore, rising government spending on AI-powered technology will hasten industry expansion. The demand for increased functionality and value is driving an explosion of investment and interest as more consumers and businesses employ chatbots.
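
As a side note on the numbers above, a compound annual growth rate simply relates a start and an end value over a number of years. The short helper below (my own illustration, not taken from the cited source) shows how such a figure is derived; applied to the $24 billion (2018) and $78 billion (2022) spending figures it implies roughly 34% per year:

```python
# Generic compound annual growth rate (CAGR) helper; the spending figures
# are the ones quoted above, used here purely as a worked example.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(start_value=24e9, end_value=78e9, years=4)  # 2018 -> 2022
print(f"Implied CAGR: {growth:.1%}")                      # roughly 34% per year
```
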

Explosion? Why?

When it comes to quick answers, 74% of customers end up choosing AI chatbots [1]. Companies that use AI chatbots in retail have witnessed a 47 percent increase in efficiency, a 40 percent increase in inventiveness, and a 36 percent increase in helpfulness [1]. The demand for lower AI chatbot development costs, better customer service, and omni-channel development is also driving growth.

Roadblock?

Although the chatbot market is still in its early stages, Europe lags behind other regions because of data privacy and ethical concerns, fear of failure, and market uncertainty [1]. The language barrier is currently the most significant obstacle for chatbots in developing countries. It would be straightforward if all interactions were conducted in English, but other languages can be far more sophisticated in terms of syntax and structure. To feel natural to customers, and thereby improve the customer experience, chatbots must be schooled in the complexities of the language [1].

Each organization that commits to using a chatbot in its business operations decides for itself the rules governing the chatbot’s development. Among other things, a chatbot can handle sensitive data, so the extent to which the chatbot is transparent may vary from one organization to another.

I personally believe that it should be clearly stated in advance what will be done with the data and in whose hands it ends up. If I were to use a chatbot myself, I would only ask the kind of informative questions that can be answered within a split second via the chat. No more and no less.

Now, I am curious: to what extent would you trust a chatbot with your data (for example, personal data), and why?

[1] Hoang, T. (2021, May 17). The AI Chatbot Explosion in Various Regions around the World. Discover.Bot. https://discover.bot/bot-talk/ai-chatbot-in-various-regions/


Ethical considerations from future development and dependence on AI

8 October 2020

Continuous breakthroughs in AI technology allow us to tackle ever more complicated problems that were previously exclusively within the domain of human cognitive problem solving. As the technology has advanced from the first AI programs in the 1950s, which could play amateur-level checkers, the excitement about the possibilities of AI has grown in parallel with the complexity of the tasks it can solve. One key component of solving complex problems effectively, however, which is intrinsic to human nature, is understanding the context of the surrounding world in which you are trying to solve the problem. Although humans can make AI more intelligent, in the sense that it can complete ever more complicated tasks at scale, the outcomes are increasingly volatile, as AI tries to find the most effective answer without necessarily having regard for the natural world.

A recent example of this is the public outcry over the ‘A-level’ results, which were predicted by AI for the first time this year. Normally students sit ‘A-level’ exams, on the basis of which they receive offers from universities. Prior to these exams, teachers provide estimated grades which students can already use to get preliminary offers. However, due to the public health crisis caused by Covid-19, this system was disrupted and the UK’s assessment regulator Ofqual was tasked with finding another way for students to obtain their ‘A-level’ results. Its solution was a mathematical algorithm that used two key pieces of information: “the previous exam results of schools and colleges over the last 3 years, and the ranking order of pupils based on the teacher estimated grades” (Fai et al., 2020). The result? Almost 40% of all 700,000 estimated scores were downgraded, causing numerous students to be rejected from universities they had been conditionally accepted to (Adams, 2020). Furthermore, the majority of the downgraded students came from state schools.
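
To see why an approach like this downgrades individual students regardless of their own work, consider the deliberately simplified sketch below. It is not Ofqual’s actual model; it only illustrates the general idea of allocating grades from a school’s historical distribution according to each pupil’s rank:

```python
# Deliberately simplified illustration of rank-based grade allocation
# (NOT Ofqual's actual algorithm): each pupil receives a grade drawn from
# the school's historical grade distribution, based only on their rank.

def allocate_grades(ranked_pupils: list[str],
                    historical_distribution: dict[str, float]) -> dict[str, str]:
    """Map pupils (ordered best first) onto the school's past grade shares."""
    n = len(ranked_pupils)
    boundaries = []                      # (upper rank fraction, grade)
    cumulative = 0.0
    for grade, share in historical_distribution.items():
        cumulative += share
        boundaries.append((cumulative, grade))
    grades: dict[str, str] = {}
    for rank, pupil in enumerate(ranked_pupils):
        fraction = (rank + 1) / n        # pupil's position as a fraction of the cohort
        for upper, grade in boundaries:
            if fraction <= upper + 1e-9:
                grades[pupil] = grade
                break
    return grades

# A school that historically awarded few top grades caps its pupils this year,
# no matter how strong the teacher's estimates were. Names and shares are made up.
print(allocate_grades(
    ["Amira", "Ben", "Chloe", "Dev"],
    {"A": 0.25, "B": 0.50, "C": 0.25},
))
```

In a scheme like this, a strong pupil at a school that historically awarded few top grades simply cannot receive one, which is essentially the unfairness students protested against.
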

 

Although the UK government announced in August this year that it would revert the grading to match the teachers’ estimates more closely, it is clear that for some students the damage has already been done. Affected students will not go to their desired university, or will decide not to go to university at all and postpone their higher education by at least a year. Looking back critically, it is evident that the ethical impacts of the mathematical algorithm were either not considered before it was launched or simply ignored. Given the near limitless potential of AI in all facets of our lives, it is crucial that ethical considerations become a central component of the AI development process.

References

Adams, R., Barr, C. and Weale, S. (2020) ‘A-level results: almost 40% of teacher assessments in England downgraded’, The Guardian, 13 August. Available at: https://www.theguardian.com/education/2020/aug/13/almost-40-of-english-students-have-a-level-results-downgraded (Accessed: 8 October 2020).

Fai, M., Bradley, J. and Kirker, E. (2020) Lessons in ‘Ethics by Design’ from Britain’s A-level algorithm. Gilbert + Tobin. Available at: https://www.gtlaw.com.au/insights/lessons-ethics-design-britains-level-algorithm (Accessed: 8 October 2020).


Down the YouTube Rabbit Hole

7 October 2020


Over the past few weeks, a lot has been said (including on this blog) about how social media has been impacting the offline world in a negative way. After watching “The Social Dilemma”, which launched on Netflix last September, we started to think about how these platforms are selling our attention as a commodity and leading to an increasingly polarized society, harming democracies around the world. Some people decided to take it one step further and deleted accounts, turned off notifications and stopped clicking on recommended content – just as suggested in the documentary by the whistleblowers who helped create these platforms. I was one of those people – until I wasn’t anymore!

Interestingly enough, shortly after watching the documentary I started to receive tons of recommendations for content addressing the same issues, especially on YouTube and Facebook. Isn’t it funny how the algorithm can work against itself? In the beginning, I was determined not to click on any of the suggested videos, even though the content seemed quite interesting. Instead, I decided to do my own research on topics such as data privacy, surveillance capitalism and ethical concerns in designing technology. However, the more research I did, the more recommendations I got – unexpected, huh?

So, one lazy Sunday afternoon I gave in to temptation and clicked on a video that YouTube recommended to me – a really interesting TED Talk by techno-sociologist Zeynep Tufekci, which dug a little deeper into some of the questions raised in “The Social Dilemma”. Needless to say, one hour later I had already watched five more TED Talks – I admit it, I fell into the YouTube rabbit hole!

However, I cannot say that I regret my decision, as I gained really interesting insights from these recommendations. After all, that’s how this recommendation system is supposed to work, right? In particular, I was able to find some answers to a question that had been on my mind for a while: “But what can we do to stop the negative effects of social media while still valuing freedom of speech as a pillar of the internet?”

Even though a lot has been said about the threats arising from the widespread use of social media, I hadn’t come across tangible solutions to this issue. Sure, we can turn notifications off, but that won’t tackle the problem at its core! But in two very enlightening TED Talks by Claire Wardle (a misinformation expert) and Yasmin Green (research director at a unit of Alphabet focused on solving global security challenges through technology) I was able to find some clarity. According to them, there are three areas we can act upon to create a better digital and physical world:

  • Tech Companies – first of all, if any advances are going to be made, we need technology platforms to be on board. As an eternal optimist, I do believe that tech leaders are aware of the challenges they face and are certainly trying to find solutions. As Yasmin Green explains, Google already successfully developed what it calls the “Redirect Method”, which targeted people who made searches related to joining terrorist groups. For example, when a Google search about extremist content was made, the first result would be an ad inviting the user to watch more moderate content. Furthermore, the targeting was based not on the user’s profile but on the specific question that was asked (a simplified sketch of this idea follows after this list). What if we could use the “Redirect Method” to stop the spread of conspiracy theories or misinformation about climate change? It would be great for society, although probably not so profitable for the tech giants.
  • Governments – Although tech companies have their fair share of responsibility, at the moment they are “grading their own homework” and regulating themselves, making it impossible for us to know if interventions are working. That’s where governments come into play. But a challenge this big doesn’t simply call for local or even national regulators. What we really need is a global response to regulate the information ecosystem. Or, as Brad Smith (Microsoft’s President) puts it, we need a “Digital Geneva Convention” that holds tech platforms accountable and prevents coordinated social attacks on democracy.
  • We the People – While we would love to place our hopes on governments to solve this situation for us, it is undeniable that most lawmakers are struggling to keep up with a rapidly changing digital world. From time to time, a US Senate Committee hearing on tech companies will spawn a few memes as we see that lawmakers have a hard time understanding what they are talking about – I will leave you my favorite down below! That’s why we need to take the matter into our own hands, and one way to do it is, as Claire Wardle puts it, to “donate our social data to science”. Millions of data points on us are already collected by social media platforms anyway, but what if we could use them to develop a sort of centralized open repository of anonymized data, built on the basis of privacy and ethical concerns? This would create transparency and allow technologists, journalists, academics and society as a whole to better understand the implications of our digital lives.
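
To make the “Redirect Method” idea from the first point more tangible, here is a heavily simplified, hypothetical sketch: the intervention is triggered by the content of the query itself, not by a profile of the user. The trigger phrases and URLs are placeholders, not Google’s actual implementation:

```python
# Simplified, hypothetical sketch of a "Redirect Method"-style intervention:
# the trigger is the content of the query itself, not a profile of the user.
# Keyword lists and the alternative content are illustrative placeholders.

REDIRECT_TRIGGERS = {
    "flat earth proof": "https://example.org/why-the-earth-is-round",
    "climate change hoax": "https://example.org/climate-evidence-explained",
}

def first_result(query: str, organic_results: list[str]) -> str:
    """Return what the user sees first: a counter-narrative ad or the top organic hit."""
    normalized = query.lower().strip()
    for trigger, counter_content in REDIRECT_TRIGGERS.items():
        if trigger in normalized:
            return f"[sponsored] {counter_content}"   # redirect toward moderate content
    return organic_results[0]

print(first_result("is climate change hoax real?", ["https://example.org/result-1"]))
print(first_result("weather tomorrow", ["https://example.org/result-1"]))
```
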

Overall, I recognize that these solutions are not perfect or complete. But I do believe that they provide a starting point to “build technology as human as the problems we want to solve”.

 

 

Sources

Smith, B., 2017. The Need For A Digital Geneva Convention – Microsoft On The Issues. [online] Microsoft on the Issues. Available at: www.blogs.microsoft.com [Accessed 6 October 2020].

Shead, S., 2020. Netflix Documentary ‘The Social Dilemma’ Prompts Social Media Users to Rethink Facebook, Instagram And Others. [online] CNBC. Available at: www.cnbc.com [Accessed 6 October 2020].

Green, Y., 2018. Transcript Of “How technology can fight extremism and online harassment”. [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].

Wardle, C., 2019. Transcript Of “How you can help transform the internet into a place of trust” [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].

Tufekci, Z., 2017. Transcript Of “We’re building a dystopia just to make people click in ads” [online] Ted.com. Available at: www.ted.com [Accessed 6 October 2020].


Remote work: Surveillance vs. Privacy

5 October 2020

During these Covid-19 times, working from home has become the norm, following the measures applied worldwide to help maintain social distancing. Unable to check on their employees as they work from home, many companies have been afraid of decreasing productivity. Consequently, employers have increasingly felt the need to introduce more and more surveillance tools to help them monitor and track how their employees spend their time while working from home.

These range from applications that keep track of the time spent on each tool to software that tracks the steps you take every day. Companies are going one step beyond basic time-tracking tools such as Toggl and are beginning to opt for ever more invasive programs such as Hubstaff – software that monitors employees’ performance by taking snapshots of their monitors and calculating a productivity score based on keystrokes, mouse movements, time spent and websites visited.
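
To get a feel for how crude such a score can be, here is a hypothetical toy version (not Hubstaff’s actual formula) that blends raw activity counts into a single number, with arbitrarily chosen weights:

```python
# Hypothetical toy "productivity score" (NOT any vendor's real formula),
# mainly to show how crude such metrics are: they count activity,
# not the quality of the work being done.

def productivity_score(keystrokes: int,
                       mouse_movements: int,
                       active_minutes: int,
                       minutes_on_allowed_sites: int,
                       total_minutes: int = 60) -> float:
    """Blend raw activity counts into a 0-100 score for a one-hour window."""
    activity = min(1.0, (keystrokes + mouse_movements) / 2000)   # arbitrary cap
    presence = active_minutes / total_minutes
    focus = minutes_on_allowed_sites / total_minutes
    return round(100 * (0.4 * activity + 0.3 * presence + 0.3 * focus), 1)

# Someone thinking through a hard problem on paper scores far worse than
# someone idly moving the mouse -- which is exactly the ethical problem.
print(productivity_score(keystrokes=150, mouse_movements=200, active_minutes=55, minutes_on_allowed_sites=50))
print(productivity_score(keystrokes=900, mouse_movements=1500, active_minutes=60, minutes_on_allowed_sites=60))
```
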

Software that is programmed to take snapshots of an individual’s screen while they are having a confidential videoconference, or that tracks the user’s GPS coordinates as they go to the nearest cafeteria for a short coffee break, surely raises serious ethical concerns, as it violates both the employees’ privacy and the privacy of those with whom they interact.

Not only does it raise ethical concerns; it has also been shown that employee surveillance reduces trust between employees and their employers, driving down motivation and engagement. The ‘stress-inducing, demotivating and dehumanizing’ practice of monitoring – as described by a manager in the report Workplace technology: the employee experience – hinders employee autonomy and proactivity, both of which are necessary for a healthy and thriving company in the current digital era.

The use of surveillance software was initially intended to keep workers engaged and ensure their productivity. However, an unsurprising counter-effect occurs when these tools are implemented: employee engagement and productivity decline, together with the quality of the work they do. This is not a “vindictive reaction” to the application of such tools, but simply the result of feeling controlled and uncomfortable at (remote) work.

In a time when GDPR rules are being enforced to protect people’s privacy, how can such surveillance tools be accepted? Why are they still being implemented despite the evidence of their counter-effects?

Monitoring might be necessary and helpful for companies to make sure the work is being done and to collect data that can be used to improve the company’s strategy – especially when remote work becomes the norm. However, managers must think very carefully about the techniques they are going to use: how they will be implemented, to what degree they will invade their employees’ privacy, and what the possible consequences are. In a time when remote work is the model to follow, how do we find the balance between tracking for improvement and respect for people’s privacy?

 

Satariano, A. How my boss monitors me while I work from home. The New York Times, 2020. https://www.nytimes.com/2020/05/06/technology/employee-monitoring-work-from-home-virus.html

Jones, L. “I monitor my staff with software that takes screenshots”. BBC News, 2020. https://www.bbc.com/news/business-54289152

Chartered Institute of Personnel and Development (CIPD). Workplace technology: the employee experience. July 2020, UK. https://www.cipd.co.uk/Images/workplace-technology-1_tcm18-80853.pdf

Kensbock, J.M., Stöckmann, C. “Big brother is watching you”: surveillance via technology undermines employees’ learning and voice behavior during digital transformation. J Bus Econ, 2020. https://doi.org/10.1007/s11573-020-01012-x


Can Morality Be Programmed Into AI Systems?

18 October 2019

For many years, experts have been warning about the unanticipated effects of general artificial intelligence (AI). For example, Elon Musk is of the opinion that AI may constitute a fundamental risk to the existence of human civilization, and Ray Kurzweil predicts that by 2029 AIs will be able to outsmart us human beings. [1]

Such scenarios have led to calls to equip AI systems with a sense of ethics and morality. While general AI is still far away, morality in AI is already a widely discussed topic today (for example, the trolley problem in autonomous cars). [2] [3]

So, where would we need to start in order to give machines a sense of ethics? According to Polonski, there are three ways to start designing more ethical machines [1]:

  1. Explicitly defining ethical behavior: AI researchers and ethicists should start formulating ethical values as quantifiable parameters and come up with explicit answers and decision rules for ethical dilemmas (a minimal sketch of what such a decision rule could look like follows after this list).
  2. Crowdsourcing human morality: Engineers should collect data on ethical measures by using ethical experiments (for example see http://moralmachine.mit.edu/) [4]. This data should then be used to train AI systems appropriately. Getting such data, however, might be challenging because ethical norms cannot always be standardized.
  3. Making AI systems more transparent: While we know that full algorithmic transparency is not feasible, there should be more transparency on how engineers quantified ethical values before programming them, as well as the outcomes that the AI has produced as a result of these choices. Here, guidelines implemented by policymakers could help.
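
To illustrate how hard the first point already is, here is a minimal sketch of “ethical values as quantifiable parameters”: hand-chosen weights score each candidate action, and a decision rule picks the least bad one. The weights and features are made up, which is exactly where the disagreement would start:

```python
# Minimal, purely illustrative sketch of "ethical values as quantifiable
# parameters": each candidate action gets a score from hand-chosen weights,
# and the decision rule picks the least-bad option. The weights are exactly
# the part that is hard to agree on, which is the point made in the text.

ETHICAL_WEIGHTS = {
    "expected_harm_to_humans": -10.0,   # assumed penalty per unit of harm
    "fairness_violation": -4.0,
    "benefit_to_user": 1.0,
}

def score_action(action_features: dict[str, float]) -> float:
    """Weighted sum of the quantified ethical features of one action."""
    return sum(ETHICAL_WEIGHTS[k] * v for k, v in action_features.items())

def choose_action(candidates: dict[str, dict[str, float]]) -> str:
    """Pick the candidate action with the highest (least negative) score."""
    return max(candidates, key=lambda name: score_action(candidates[name]))

print(choose_action({
    "swerve":   {"expected_harm_to_humans": 0.2, "fairness_violation": 0.0, "benefit_to_user": 0.5},
    "continue": {"expected_harm_to_humans": 0.8, "fairness_violation": 0.1, "benefit_to_user": 0.9},
}))
```
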

However, in my opinion, it is very hard to implement ethical guidelines in AI systems. As we humans usually tend to rely on gut feelings, I am not sure whether we would even be capable of expressing morality and ethics in measurable metrics. Also, do we really know what morality is? Isn’t it subjective? Things that are morally right for us here in Western Europe might not be morally right in other countries. Therefore, I remain curious whether morality and ethics will in the future be explicitly programmed into AI systems. What do you think? Is it even necessary to program morality into AI systems?

 

References

[1]: Polonski, V. (2017). Can we teach morality to machines? Three perspectives on ethics for artificial intelligence. Retrieved from https://medium.com/@drpolonski/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence-64fe479e25d3

[2]: Hornigold, T. (2018). Building a Moral Machine: Who Decides the Ethics of Self-Driving Cars?. Retrieved from https://singularityhub.com/2018/10/31/can-we-program-ethics-into-self-driving-cars/

[3]: Nalini, B. (2019). The Hitchhiker’s Guide to AI Ethics. Retrieved from https://towardsdatascience.com/ethics-of-ai-a-comprehensive-primer-1bfd039124b0

[4]: Hao, K. (2018). Should a self-driving car kill the baby or the grandma? Depends on where you’re from. Retrieved from: https://www.technologyreview.com/s/612341/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/


A robotic workforce: fact or fiction?

16 October 2019

Our current workplace is becoming increasingly digital and automated. Employees fear that robots will eventually take over their jobs, as happened in manufacturing and is currently happening with administrative tasks (BCG, 2015). But is this really the case? Are we heading towards a future in which all jobs will be automated and performed by a robotic worker? In this blog I want to share my opinion on the debate about automation in the future workforce.

 

Fear of losing one’s job has always been present in the background, but a paper by Frey and Osborne (2013) about the future of innovation and employment caused a lot of fear among the workforce a couple of years ago. The authors claimed that half of current jobs will be automated in the near future. For many people this is, of course, frightening to hear. But is it really the case? According to the OECD (2016), which wrote an article in direct response to Frey and Osborne, only 9 percent of all jobs could be fully automated. The difference is explained by the fact that Frey and Osborne counted all jobs in their percentages, regardless of whether they would be fully automated in the future or whether only minor parts would be automated or performed by a robot.

This exact point is, in my opinion, of key importance in the job automation discussion. Naturally, it is unavoidable that certain jobs, or parts of them, will be automated in the future. A robot is, after all, cheaper and less prone to errors than a human worker (Romero et al., 2016). The inference should not be made, however, that human workers will no longer be of value in the future workplace. The majority of jobs still have to be performed by humans. Think of jobs in which cognitive skills are necessary, complex decisions have to be made, and where the human touch is a key factor. Jobs in healthcare or strategy-making are clear examples of areas where human workers will still be needed. Automation will mostly play a central role in tasks such as processing huge amounts of data, moving information from one place to another, or in tasks that are very repetitive.

As a result, it is true that workers will need to learn new skills to be able to interact and collaborate with these robots (BCG, 2015). Nowadays, it is easy for employees to teach themselves the skills necessary for automating simple tasks. Programs like UiPath and Blue Prism let you build programs that do the repetitive tasks for you, without any programming knowledge of your own. This way employees not only learn skills that are future-proof but, most importantly, can also be part of the evolution of their job in a proactive way. This will, in addition, take away the fear and misconception among employees with which this blog started. Robots and automation will not take over complete jobs; they will only support you in handling certain tasks.
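
For readers who prefer code over low-code tools, the small sketch below shows the kind of repetitive task that typically gets automated: merging many CSV exports into one file. The folder and file names are made up for illustration:

```python
# Small example of a typical repetitive task worth automating: collecting
# rows from many CSV exports into one combined file. Folder and file names
# are hypothetical; the input folder is assumed to exist.

import csv
from pathlib import Path

def merge_reports(input_folder: str, output_file: str) -> int:
    """Append every row from every *.csv in input_folder into output_file."""
    rows_written = 0
    input_paths = sorted(Path(input_folder).glob("*.csv"))
    with open(output_file, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        for path in input_paths:
            with open(path, newline="", encoding="utf-8") as src:
                for row in csv.reader(src):
                    writer.writerow(row)
                    rows_written += 1
    return rows_written

if __name__ == "__main__":
    count = merge_reports("weekly_exports", "combined_report.csv")
    print(f"Merged {count} rows.")
```
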

Taking all of the above into account, my opinion is that the future workforce will remain mostly human. It will, however, be optimized and supported by robots, and it would be wise for employees to understand the basics of automation in order to adapt to the changing workplace. How do you see this? Do you think computers and robots will become smart enough to outcompete all human workers?

 

p.s. In case you are interested in automation and would like to experiment with it yourself, have a look at UiPath, which offers easy-to-understand automation lessons.

 

References:

BCG. (2015). Man and machine in industry 4.0. How Will Technology Transform the Industrial Workforce Through 2025? Retrieved from https://www.bcg.com/publications/2015/technology-business-transformation-engineered-products-infrastructure-man-machine-industry-4.aspx on 15-10-2019.

Frey, C. B., & Osborne, M. (2015). Technology at work: The future of employment and innovation.

OECD (2016). The Risk of Automation for Jobs in OECD Countries: A Comparative Analysis. OECD Social, Employment and Migration Working Paper. Volume 189.

Romero, D., Bernus, P., Noran, O., Stahre, J., & Fast-Berglund, Å. (2016). The operator 4.0: human cyber-physical systems & adaptive automation towards human-automation symbiosis work systems. In IFIP international conference on advances in production management systems (pp. 677-686). Springer, Cham.
