Who is responsible for decisions made by algorithms?

9 September 2022


The number of processes that are being taken over by Artificial Intelligence (AI) is rapidly increasing. Moreover, the output of these processes no longer serves merely to assist humans in their decision-making: more often than not, the algorithm's output contains the decision itself. In these cases, if a human being is involved in the process at all, he or she usually simply has to adhere to the results of the algorithm (Bader & Kaiser, 2019).

This brings up the accountability question: if the algorithm misperforms, who is accountable for the consequences? The most logical options are either the designers/creators of the algorithm or its users. However, neither option has an obvious claim over the other, since both raise considerable difficulties.

Firstly, placing accountability with the designers or creators of the algorithm raises concerns. One of the first scientists to be concerned about accountability in the use of computerized systems was Helen Nissenbaum. In 1996, well ahead of her time, she wrote a paper describing four barriers that obscure accountability in a computerized society. These four barriers are rather self-explanatory: many hands, bugs, the computer as scapegoat, and ownership without liability (Nissenbaum, 1996). To this day, these four barriers illustrate very well how difficult it is to designate accountability when a process is aided by (or even fulfilled by) an algorithm (Cooper et al., 2022).

Secondly, placing responsibility on the user is difficult because, in a significant proportion of cases, the user has little to no influence on the content of the algorithm. Also, as stated before, users are sometimes obliged to adhere to the outcome presented to them by the algorithm (Bader & Kaiser, 2019).

Currently, most case studies show that the creators of algorithms sign off their accountability to the users at the moment the product containing the algorithm is acquired. For example, when buying a Tesla with ‘Full Self-Driving Capability’, Tesla simply states that these capabilities are included solely to assist the driver and that, therefore, the driver is responsible at all times (Tesla, 2022; Ferrara, 2016).

In my opinion, it would be wise to explore what can be done to sign off accountability to the users of an algorithm not only legally (as Tesla does) but also morally, perhaps as early as the design phase of the algorithm. A research question that could address this can be stated as follows:

“What can be done about the design of an artificially intelligent algorithmic system to maintain accountability on the user side?”

References

  1. Bader, V., & Kaiser, S. (2019). Algorithmic decision-making? The user interface and its role for human involvement in decisions supported by artificial intelligence. Organization, 26(5), 655–672. https://doi.org/10.1177/1350508419855714
  2. Cooper, A. F., Laufer, B., Moss, E., & Nissenbaum, H. (2022). Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning. arXiv:2202.05338 [cs]. http://arxiv.org/abs/2202.05338
  3. Ferrara, D. (2016). Self-Driving Cars: Whose Fault Is It? Georgetown Law Technology Review, 1, 182.
  4. Nissenbaum, H. (1996). Accountability in a computerized society. Science and Engineering Ethics, 2(1), 25–42. https://doi.org/10.1007/BF02639315
  5. Tesla. (2022). Autopilot and Full Self-Driving Capability. https://www.tesla.com/support/autopilot


Deepfake Fraud – The Other Side of Artificial Intelligence

8 October 2021

Dangers of AI: how deepfakes created with Artificial Intelligence could be used for fraud, scams and cybercrime.


Together with Machine Learning, Artificial Intelligence (AI) can be considered one of, if not the, hottest emerging innovations in the field of technology nowadays (Duggal, 2021). AI entails the ability of a computer or a machine to ‘think by itself’, as it strives to mimic human intelligence instead of simply executing actions it was programmed to carry out. Using algorithms and historical data, AI employs Machine Learning to comprehend patterns and learn how to respond to certain actions, thus creating ‘a mind of its own’ (Andersen, n.d.).

History

Even though the initial days of Artificial Intelligence research date back to the late 1950s, the technology has only recently been introduced to the general public on a wider scale. The science behind the technology is complex; however, AI is becoming more widely known and used on a day-to-day basis. This is because computers have become much faster and data (for the AI to learn from) has become more accessible (Kaplan & Haenlein, 2020). This allows AI to be more effective, to the point where it has already been implemented in everyday devices such as our smartphones. Do you use speech or facial recognition to unlock your phone? Do you use Siri, Alexa or Google Assistant? Ever felt like advertisements on social media resonate a bit too much with your actual interests? Whether you believe it or not, it is highly likely that both you and I come into contact with AI on a daily basis.

AI in a nutshell: How it connects to Machine/Deep Learning

That’s good… right?

Although the possibilities for exploiting AI positively seem endless, one of the more recent phenomena that shocked the world about the dangers of AI is ‘deepfaking’. Here, AI uses a Deep Learning algorithm to replace a person in a photo or video with someone else, creating seemingly (!) authentic and real visuals of that person. As one can imagine, this results in situations where people appear in media to be doing things which, in reality, they never did. Although people fear the usage of this deepfake technology against celebrities or high-status individuals, it can – and actually does – happen to regular people, possibly you and I.
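For readers curious about the mechanics: the classic face-swap setup behind many deepfakes trains one shared encoder together with two decoders, one per identity, so that frames of person A can be decoded with person B’s decoder. The sketch below is a heavily simplified, hypothetical illustration of that idea in PyTorch, using random tensors in place of real face crops; it is not the pipeline of any specific deepfake tool.

```python
# Hypothetical, heavily simplified sketch of the shared-encoder / two-decoder
# idea behind classic face swaps. Random tensors stand in for aligned face crops;
# real systems train for days on thousands of frames per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()      # one decoder per identity

faces_a = torch.rand(8, 3, 64, 64)               # dummy frames of person A
faces_b = torch.rand(8, 3, 64, 64)               # dummy frames of person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(5):                               # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()

# The "swap": encode person A's frame, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Because the encoder is shared, it learns identity-independent features (pose, expression, lighting), which is what allows one person’s expression to be rendered with another person’s face.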

Cybercrime

Just last month, scammers from all over the world were reported to have been creatively using this cybercrime ‘technique’ to defraud, scam or blackmail ordinary people (Pashaeva, 2021). From posing as a wealthy bank owner to extract money from investors, to blackmailing people with videos of them seemingly engaging in a sexual act… as mentioned before, the possibilities for exploiting AI seem endless. Deepfakes are just another perfect illustration of this fact. I simply hope that, in time, the positives of AI outweigh the negatives. I would love to hear your perspective on this matter.

Discussion: Deepfake singularity

For example, would you believe this was actually Morgan Freeman if you did not know about Artificial Intelligence and deepfakes? What do you think this technology could cause in the long term, once it develops into a much more believable state? Will we always be able to spot the fakes? And what could this lead to in terms of scamming or blackmailing if, for example, ‘Morgan Freeman’ were made to say other things…?

References

Duggal, N. (2021). Top 9 New Technology Trends for 2021. Available at: https://www.simplilearn.com/top-technology-trends-and-jobs-article

Andersen, I. (n.d.). What Is AI and How Does It Work? Available at: https://www.revlocal.com/resources/library/blog/what-is-ai-and-how-does-it-work

Kaplan, A., & Haenlein, M. (2020). Rulers of the world, unite! The challenges and opportunities of artificial intelligence. Business Horizons, 63(1). https://doi.org/10.1016/j.bushor.2019.09.003

Pashaeva, Y. (2021). Scammers Are Using Deepfake Videos Now. Available at: https://slate.com/technology/2021/09/deepfake-video-scams.html


Author: Roël van der Valk

MSc Business Information Management student at RSM Erasmus University - Student number: 483426 TA BM01BIM Information Strategy 2022

Living in an algorithmic bubble

4 October 2021


Online, many of us are surrounded by views and opinions we agree with. Websites use algorithms that look at things like browsing history and age to offer personalized content and to ensure that the content shown supports the visitor’s views. These algorithms decide what we view and read online and often exclude opposing perspectives. Because of this, we live in so-called ‘filter bubbles’.
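To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how such a feedback loop can form: content is scored purely by overlap with the topics a reader has already clicked, so opposing perspectives rarely make it into the feed. Real recommender systems are far more sophisticated, but the self-reinforcing loop works along these lines.

```python
# Hypothetical sketch of a filter-bubble feedback loop: articles are recommended
# purely by overlap with the topics the user has clicked before, so every click
# narrows the feed further. All articles and topics are made up.
from collections import Counter

articles = {
    "Tax cuts will boost the economy":       {"politics", "pro-market"},
    "Why we need stronger climate policy":   {"politics", "climate"},
    "New smartphone review":                 {"tech"},
    "Opposing view: rethinking tax cuts":    {"politics", "pro-regulation"},
}

clicked_topics = Counter()   # topic -> number of past clicks

def recommend(n=2):
    # Score each article by how often the user has clicked its topics before.
    scored = sorted(articles.items(),
                    key=lambda item: sum(clicked_topics[t] for t in item[1]),
                    reverse=True)
    return [title for title, _ in scored[:n]]

def click(title):
    for topic in articles[title]:
        clicked_topics[topic] += 1

click("Tax cuts will boost the economy")   # one click is enough to start the loop
for _ in range(3):
    feed = recommend()
    print(feed)
    click(feed[0])                          # the user keeps clicking the top result
```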

Initially, an algorithm that ensures we see content we like and agree with does not sound that bad. However, when we do not see opposing views or opinions we disagree with online, these filter bubbles create echo chambers and we forget that what we see is actually being filtered. In my opinion this is a huge flaw in these otherwise valuable algorithms, because the filter bubbles that arise distort our ideas of the world. Many people use Facebook as their main news source, for example, and a significant portion of them is probably not mindful of what Facebook’s algorithms do. This lack of awareness increases the negative impact of filter bubbles, because the people consuming the news do not know that what they see is constantly being filtered to match their opinions and perspectives (FS, 2017; Pariser, 2011). Furthermore, we limit our own experiences and learning possibilities by only viewing filtered content. In my opinion, this extreme content filtering problem is perfectly summed up by Pariser (2011): “A world constructed from the familiar is the world in which there’s nothing to learn.”

Social media platform TikTok is trying to combat this problem by sporadically adding videos to your feed that are not relevant to your expressed interests. It does this to let users experience new perspectives and ideas and to increase the diversity of content shown to them. This is something that other platforms like Facebook, Instagram and YouTube could improve on, as their algorithms still keep users in their own echo chambers (Perez, 2020).

Can we, the content consumers, pop the bubble ourselves? There are some ways to ‘bypass’ the filter or find less filtered content. First of all, visiting websites that offer a wide range of content is a good start. Websites that show you multiple perspectives help you form a more complete view yourself. Other things content consumers can do are using Incognito mode and deleting cookies. Both methods de-personalize your content, because you are giving the algorithms less information. If we become more aware and actively try to find unfiltered, more complete content, the filter bubble can be popped (FS, 2017; Pariser, 2011).

References:
FS. (2017, July 3). How Filter Bubbles Distort Reality: Everything You Need to Know. Retrieved 4 October 2021, from https://fs.blog/2017/07/filter-bubbles/

Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.

Perez, S. (2020, June 18). TikTok explains how the recommendation system behind its ‘For You’ feed works. Retrieved 4 October 2021, from https://techcrunch.com/2020/06/18/tiktok-explains-how-the-recommendation-system-behind-its-for-you-feed-works/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAG37494luglqH9K2xpIfdbz7eMt1NslKsRggWOCjkDR55sH_D_pgWizSYt0N0ERfhD9dlwTrrv1QQbymNfFwkw8L-10oJ-Gy3WSI-Y3Ag0dodCEyWWgPP-f0j03gMdDGv2vw2wqE4F7V_YCDmUuhkq0hZoRiwbugjPAXgI5wrTzH


Why you cannot blindly trust algorithms

15 September 2021


Be honest. Do you believe that the data is always right? Or that algorithms never make mistakes? While it may be very tempting to hide behind the data and algorithms, you must not forget that with great algorithmic power comes great responsibility. So, let’s face the truth. As much as we would like to believe that we have perfect data and algorithms, this is often not the case. As algorithms increasingly replace human decision-making processes, it is important that you understand the implications and risks. As of today, algorithms already make high-impact decisions such as whether or not you are eligible for a mortgage, whether you will be hired, how likely you are to commit fraud, and so on. Algorithms are great at finding patterns we are most likely unable to find. But if you are not careful, the algorithm might favour unwanted patterns.

Case: Amazon and AI recruiting

In 2014, Amazon launched an experimental, artificial-intelligence-driven recruitment tool for its technical branch, which rates incoming applications. The AI model was trained on resumes submitted over a 10-year timespan and the corresponding human recruitment decisions. After a year, however, it was found that the model had for some reason started to penalise female applicants.

So, what went wrong? Because the technical branch was male-dominated at the time, the data used to train the AI model carried that same bias towards men. Amazon therefore decided to strip indicative information such as name and gender to counter this. Case closed? Well, no. The model simply learned a new pattern: penalising resumes that included the word ‘women’s’ (for example, ‘women’s chess club’) and graduates of all-women’s colleges. In the end, Amazon abandoned the recruitment tool as it was unable to address this issue.
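To see how a seemingly harmless word can act as a proxy for gender even after name and gender have been stripped, consider the toy sketch below. The data is entirely fabricated and the model is a plain logistic regression, not Amazon’s system; it only illustrates how a bias in historical decisions survives the removal of the explicit attribute.

```python
# Toy illustration of proxy bias: the explicit gender field is never used,
# yet a correlated word in the resume text still lets the model penalise one group.
# All resumes and hiring decisions below are fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of the chess club, python developer",
    "member of the women's chess club, python developer",
    "built distributed systems, hackathon winner",
    "president of the women's coding society, hackathon winner",
]
hired = [1, 0, 1, 0]   # historical (biased) decisions: the "women's" resumes were rejected

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)          # bag-of-words; no gender column at all
model = LogisticRegression().fit(X, hired)

weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])   # most negative words
# 'women' comes out with a clearly negative weight: the bias survived the stripping.
```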

The Black Box

The problem with complex AI models is that it is often very difficult to determine which features in the data were used to find predictive patterns. This phenomenon is also referred to as ‘the black box’: a ‘machine’ that takes a certain input, uses or transforms it in some way, and delivers an output. Yet in many cases you would want to know how the AI model arrived at a certain decision, especially when the automated decision could have a significant impact on your personal life (as with fraud detection).
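There are, however, model-agnostic techniques for peeking inside a black box. One of the simplest is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops. The sketch below uses a synthetic dataset and scikit-learn purely as an illustration; it is not tied to any particular real-world model.

```python
# Minimal sketch of permutation importance on a synthetic dataset: shuffle each
# feature and see how much the black-box model's score drops. A large drop means
# the model relies heavily on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(black_box, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean accuracy drop {result.importances_mean[i]:.3f}")
```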

Profiling and the law

Such automated processing of personal data in order to analyse or predict certain aspects of individuals is also referred to as ‘profiling’. Legal safeguards against unlawful profiling do exist, for example through the General Data Protection Regulation (GDPR), the legal framework governing the collection and processing of personal data of individuals in the European Union. Article 22 of the GDPR, for instance, specifies that individuals have the right not to be subject to decisions based solely on automated processing, including profiling, which produce legal or similarly significant effects for them.

One well-known case in the Netherlands that has had significant implications for individuals was the SyRI (System Risk Indication) case, in which the Dutch government used algorithms to detect fraud with social benefits and taxes. The problems with this system were that the amount of data used was unknown, datasets were linked using unknown risk models, and ‘suspicious’ individuals were automatically flagged and stored in a dossier without being informed in any way. Individuals affected by such automated decision-making suffered significant financial and mental problems for several years before the Dutch court ruled this profiling to be in violation of the European Convention on Human Rights. While the Dutch government has resigned over this case and promised to compensate all affected individuals, it has so far only managed to compensate a fraction of those eligible.

Countering bias

While AI models can achieve high accuracy scores in terms of making correct classifications, this does not automatically mean that the predicted value is fair, free of bias or non-discriminatory. So, what can you do? Here are some pointers according to the FACT principle:

  • Be mindful when processing personal data and beware of the potential implications for individuals. Ensure that decisions are fair and find ways to detect unfair decisions (a minimal example of such a check is sketched after this list).
  • Ensure that decisions are accurate, such that misleading conclusions are avoided. Test multiple hypotheses before deploying your model and make sure that the input data is ‘clean’.
  • Confidentiality should be ensured in order to use the input data in a safe and controlled manner.
  • Transparency is crucial. People should be able to trust, verify and correctly interpret the results.
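As a concrete, deliberately simplified illustration of the fairness pointer above, one basic check is to compare positive-outcome rates between groups (demographic parity). The numbers below are made up; real fairness audits use several complementary metrics.

```python
# Minimal sketch of a demographic-parity check on fabricated hiring decisions:
# compare selection rates between two groups and apply the 'four-fifths' rule of thumb.
def selection_rate(decisions):
    return sum(decisions) / len(decisions)

group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = hired, 0 = rejected (made-up data)
group_b = [0, 1, 0, 0, 1, 0, 0, 0]

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
ratio = rate_b / rate_a
print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, ratio {ratio:.2f}")

if ratio < 0.8:   # common rule of thumb for flagging possible adverse impact
    print("Warning: possible adverse impact against group B")
```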

References

https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

https://www.bbc.com/news/business-44466213

https://gdpr-info.eu/art-22-gdpr/

https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx

https://link.springer.com/article/10.1007/s12599-017-0487-z


How Algorithms Discriminate Against Women: The Hidden Gender Bias

9 September 2020

In past decades, AI worries have moved from whether it will take over the world to whether it will take our jobs. Today we have a new, and justifiably serious, concern: AIs might be perpetuating or accentuating societal biases and making racist, sexist or otherwise prejudiced decisions.

 

Machine learning technology is inherently biased

Many believe that software and algorithms that rely on data are objective. But machine learning technology is inherently biased, because bias is what makes it work: a model learns to weight certain input data in order to map them to output data points. Of course, there is also the option to directly modify the data that is fed in, through techniques like data augmentation, to make it less biased. But there is a problem: humans consciously know not to apply certain kinds of bias, yet subconsciously they end up applying biases that cannot be controlled.
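One very simple example of ‘modifying the data that is fed in’ is rebalancing: if one group is heavily under-represented in the training data, its records can be oversampled before training so the model does not learn to ignore it. The sketch below is a hypothetical illustration of that single idea, not a full de-biasing recipe.

```python
# Hypothetical sketch of one basic data intervention: oversample the
# under-represented group before training. Records below are fabricated.
import random
from collections import Counter

random.seed(0)
records = [{"group": "A", "text": f"resume {i}"} for i in range(80)] + \
          [{"group": "B", "text": f"resume {i}"} for i in range(20)]

counts = Counter(r["group"] for r in records)
target = max(counts.values())

balanced = list(records)
for group, n in counts.items():
    pool = [r for r in records if r["group"] == group]
    balanced += random.choices(pool, k=target - n)   # duplicate under-represented rows

print(Counter(r["group"] for r in balanced))          # Counter({'A': 80, 'B': 80})
```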

 

Tech-hiring platform Gild

This being the case, it is not surprising to find hidden biases all around us in the world today. For example, let’s talk about the secretive algorithms that have become increasingly involved in hiring processes. American scientist Cathy O’Neil explains how the online tech-hiring platform Gild enables employers to go well beyond a job applicant’s CV by combing through the trace they leave behind online. This data is used to rank candidates by ‘social capital’, which is measured by how much time they spend sharing and developing code on development platforms like GitHub or Stack Overflow.

 

This all sounds very promising, but the data Gild sifts through also reveals other patterns. For instance, according to Gild’s data, frequenting a particular Japanese manga site is a ‘solid predictor for strong coding’. Programmers who visit this site therefore receive higher scores. As O’Neil points out, awarding marks for this is a large problem for diversity. She suggests that ‘if, like most of techdom, that manga site is dominated by males and has a sexist tone, a good number of the women in the industry will probably avoid it’.

 

‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’

 

In the book Invisible Women, Caroline Criado Perez notes that ‘Gild undoubtedly did not intend to create an algorithm that discriminated against women. They were intending to remove human biases’. However, if managers are not aware of how those biases operate, if they are not collecting data, and if they take little time to produce evidence-based processes, an organisation will continue to blindly perpetuate old injustices. Indeed, by not considering how women’s lives differ from men’s, Gild’s coders accidentally created an algorithm with a hidden data bias against women.

 

But that is not even the worst part. The worst part is that we have no idea how bad the problem really is. Most algorithms of this kind are kept secret and protected as proprietary code. This means that we do not know how decisions are being made and what biases they are hiding. As Perez points out, ‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’. This, therefore, is a double data gap: first, in the knowledge of the coders designing the algorithm, and second, in the knowledge of society at large about just how discriminatory these AIs can be (Perez, 2020).

 

‘The only reason we know about this potential bias in Gild’s algorithm is because one of its creators happened to tell us’

 

We need more diversity in tech to reduce the hidden gender bias

Many argue that one straightforward way to combat the hidden gender bias is to increase diversity of thought by increasing the number of women in tech. According to the World Economic Forum, currently only 22% of AI professionals globally are female, compared to 78% who are male. Additionally, at Facebook and Google, less than 2% of technical roles are filled by black employees. To remove hidden bias in algorithms, tech companies should step up their recruiting practices and increase diversity in technical roles.

 

Do you have any other suggestions for managers to reduce hidden bias? Or have you come across a type of hidden bias? Feel free to leave a comment.

 

References:

The Guardian (2016). How algorithms rule our working lives. Retrieved from https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives

Perez, C. C. (2020). Invisible Women: Data Bias in a World Designed for Men. New York: Abrams Press.

Forbes (2020). AI Bias Could Put Women’s Lives At Risk – A Challenge For Regulators. Retrieved from  https://www.forbes.com/sites/carmenniethammer/2020/03/02/ai-bias-could-put-womens-lives-at-riska-challenge-for-regulators/#2201ee44534f

World Economic Forum (2020). Assessing Gender Gaps in Artificial Intelligence. Retrieved from  http://reports.weforum.org/global-gender-gap-report-2018/assessing-gender-gaps-in-artificial-intelligence/

Dogtown Media (2019). Can AI’s Racial & Gender Bias Problem Be Solved? Retrieved from https://www.dogtownmedia.com/can-ais-racial-gender-bias-problem-be-solved/


Being Human in the Age of Black Box Algorithms and Subjective Truths

17 October 2019


Photo by Esther Jiao on Unsplash

Algorithms are everywhere and play an important role in our daily lives. They decide what we see on our social media feeds, which ads are used to target us and what route we should take to get places.

The problem is that many algorithms are black boxes: complex systems that shape our world, whose internal workings are hidden or not easily understood (Oxford English Dictionary Online, 2011). With these algorithms, which often have a complex design, it is unclear how the output or conclusions were reached. With historically little oversight or accountability regarding their design, this problem has a profound effect on society, as our day-to-day lives and our personal decisions are increasingly controlled by algorithms (Carey, 2018; Illing, 2018). Most of us have no idea what algorithms are or how exactly we are being influenced by them. And how could we, if we cannot look inside, ‘under the hood’? And even if we could, if sometimes even the coders who built an algorithm do not know how the system reached its conclusion (Carey, 2018), how should we?

Does this mean that we cannot trust algorithms anymore? Hannah Fry, an Associate Professor in Mathematics at University College London and author of the book “Hello World: Being Human in the Age of Algorithms”, explains in an interview with Sean Illing that our attitude towards algorithms tends to swing between extremes (Illing, 2018). On the one hand, we have very high expectations of algorithms and trust them blindly. On the other hand, as soon as we see that an algorithm or its outcomes are somewhat inaccurate, we no longer trust them and disregard them altogether. Fry thinks the right attitude is somewhere in the middle: “we should not blindly trust algorithms, but we also should not dismiss them altogether” (Illing, 2018).

Subjective Truths
A larger concern with algorithms is that they often contain the biases of the people who create them and that they reinforce biases and stereotypes we may inherently have but might not be aware of (Li, 2019). As Bill and Melinda Gates (2019) describe, this can even be the result of non-existent or sexist data. This is especially dangerous with black-box algorithms, which do not explain their results to their programmers – let alone to the end users.

And what if information is deliberately misrepresented, or differs depending on who you are or where you are from? Take, for example, Google Maps. Google claims to be objective in marking disputed regions in various parts of the world (Boorstin, 2009). Depending on the country from which you access Google Maps, you will see Crimea portrayed as part of Ukraine or as part of Russia (Usborne, 2016). If you consider that at least 124 countries are involved in a territorial dispute, there is a lot of potential for subjective truths (Galka, n.d.; Metrocosm, 2015). Another example is Apple: if you are in Hong Kong or Macau, from iOS 13.1.1 onwards you will no longer find the Taiwanese flag on the emoji keyboard (Peters & Statt, 2019). Generally, as a user, you are not made aware of these intentional differences, but they do shape our perception of reality.

Conclusion
When it comes to algorithms, the people behind them, or really anything in life, you should not blindly trust the information that is presented to you. Besides, as Fry argues, we should not think of algorithms themselves as either good or bad, but should rather focus on the people behind the scenes who create them (Illing, 2018). Although algorithms may not be perfect and are often biased, they are still extremely effective and have made our lives easier.

Whereas technological progress itself seems inevitable, its direction is not. We have to ensure that technological progress remains aligned with humanity’s best interests. There might be unintended or undesired consequences, but as the French philosopher Paul Virilio said:

“When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash; and when you invent electricity, you invent electrocution…Every technology carries its own negativity, which is invented at the same time as technical progress.” (Virilio, Petit & Lotringer, 1999).

 

References:
Black box. (2011). In Oxford English Dictionary Online. Retrieved 12 October 2019, from https://www-oed-com.eur.idm.oclc.org/view/Entry/282116
Boorstin, B. (2009, December 4). When sources disagree: borders and place names in Google Earth and Maps. Retrieved from https://publicpolicy.googleblog.com/2009/12/when-sources-disagree-borders-and-place.html
Carey, S. (2018). How IBM is leading the fight against black box algorithms. Retrieved 16 October 2019, from https://www.computerworld.com/article/3427845/how-ibm-is-leading-the-fight-against-black-box-algorithms.html
Gates, B. & Gates, M. (2019, February 12). Our 2019 Annual Letter. Retrieved from https://www.gatesnotes.com/2019-Annual-Letter#ALChapter4
Galka, M. (n.d.). Every Disputed Territory in the World [Interactive Map]. Retrieved 16 October 2019, from http://metrocosm.com/disputed-territories-map.html
Illing, S. (2018, October 1). How algorithms are controlling your life. Retrieved from https://www.vox.com/technology/2018/10/1/17882340/how-algorithms-control-your-life-hannah-fry
Li, M. (2019, May 13). Addressing the Biases Plaguing Algorithms. Retrieved from https://hbr.org/2019/05/addressing-the-biases-plaguing-algorithms
Metrocosm. (2015, November 20). Mapping Every Disputed Territory in the World. Retrieved from http://metrocosm.com/mapping-every-disputed-territory-in-the-world/
Peters, J., & Statt, N. (2019, October 7). Apple is hiding Taiwan’s flag emoji if you’re in Hong Kong or Macau. Retrieved from https://www.theverge.com/2019/10/7/20903613/apple-hiding-taiwan-flag-emoji-hong-kong-macau-china
Usborne, S. (2016, August 10). Disputed territories: where Google Maps draws the line. Retrieved from https://www.theguardian.com/technology/shortcuts/2016/aug/10/google-maps-disputed-territories-palestineishere
Virilio, P., Petit, P., & Lotringer, S. (1999). Politics of the very worst. New York: Semiotext(e).


Are You Mental?!

11 October 2018

For the past two days, London has been captivated by the first Global Mental Health Summit. Why? Because mental health is becoming one of the biggest health challenges of the 21st century.

A study by Public Health England on cases from 2015 showed that the most common cause of death amongst both males and females between the ages of 20 and 34 in the UK is suicide. In 2013, depression was the leading cause of years lived with a disability in 26 countries (Ferrari et al., 2013). In 2014, 19.7% of people aged over 16 in the UK showed symptoms of anxiety or depression (Evans et al., 2016). However, these symptoms are often invisible to outsiders and hard to measure. How do you determine when someone needs help and what help is needed? And why are algorithms important in overcoming one of the biggest challenges of the 21st century?

Now, for a second, think about the people around you. If you take ten people from your environment, based on the statistics above, two of those ten are struggling with a mental illness such as an anxiety disorder or depression. Perhaps you are in the know about your friends’ and family’s mental health, but it is hard to fully understand what is going on in their minds. How can you make sure you notice the little changes in behaviour that occur when someone has a mental illness?

This is where social media and algorithms come into play. Last year, Facebook announced it would expand a programme designed to prevent suicide based on a pattern-matching algorithm. It scans Facebook posts and comments for word combinations signalling potential suicide threats. Once a threat has been identified, it is reviewed by specialists trained in suicide and self-harm. The most concerning reports are flagged to receive priority. In the next steps, appropriate institutions are alerted about the person’s situation so that an appropriate care plan can be created (NBC News, 2018).
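Facebook has not published the internals of this system, and it reportedly relies on machine learning over many signals. Purely to illustrate the idea of ‘scanning posts for word combinations’ with priority levels, here is a hypothetical, heavily simplified sketch; the phrases and weights are invented for illustration only.

```python
# Hypothetical sketch of pattern matching with priority scores. Real systems use
# trained models and far richer signals; phrases and weights here are invented.
import re

RISK_PATTERNS = {
    r"\bcan'?t go on\b": 3,
    r"\bwant to disappear\b": 2,
    r"\bsay(ing)? goodbye\b": 2,
    r"\bare you ok(ay)?\b": 1,        # concerned replies from friends can also be a signal
}

def risk_score(text: str) -> int:
    text = text.lower()
    return sum(weight for pattern, weight in RISK_PATTERNS.items()
               if re.search(pattern, text))

posts = [
    "Had a great day at the beach!",
    "I just want to disappear and say goodbye to everyone.",
]

# Posts above a threshold would be queued for trained human reviewers,
# highest scores first.
for score, post in sorted(((risk_score(p), p) for p in posts), reverse=True):
    if score >= 2:
        print(f"priority {score}: {post}")
```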

 

 

It is still possible to flag posts manually, and in the help centres of social media platforms there are extensive guidelines on what to do when you encounter a worrying post. The main difference with the use of algorithms is the elimination of human unpredictability. With the magnitude of posts we see in a day, are we really able to see the impact of a single post by someone who is struggling? And if we do, are we engaged enough to take appropriate action to support this person? Algorithms provide us with the assurance that certain posts will be noticed and addressed by specialists.

These systems will not replace current treatment, but they might play an important role in getting the right treatment to everyone who is unable to find their own way to the many systems currently in place. As of now, we do not know how this will impact suicide rates. Nonetheless, I like to believe this is a step in the right direction for big firms like Facebook and Snapchat to take responsibility in overcoming one of the main challenges of our century. What do you think of the role of hub firms in mental illness signalling, prevention, and treatment?

 

In loving memory of Sam, 5 August 1997 – 25 September 2018.

 

Sources:
Ferrari, A.J., Charlson, F.J., Norman, R.E., Patten, S.B., Freedman, G., Murray, C.J.L., … & Whiteford, H.A., (2013). Burden of Depressive Disorders by Country, Sex, Age, and Year: Findings from the Global Burden of Disease study 2010. PLOS Medicine, 10(11).

NBC News. (2018). Can an algorithm help prevent suicide?. [online] Available at: https://www.nbcnews.com/mach/video/a-facebook-algorithm-that-s-designed-to-help-prevent-suicide-1138895939701?v=railb&

Evans, J., Macrory, I., & Randall, C. (2016). Measuring national wellbeing: Life in the UK, 2016. ONS. [online] Available at: https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/articles/measuringnationalwellbeing/2016#how-good-is-our-health.


Artificial Intelligence versus Mankind

28 September 2016

Introduction:
Humans and computers. Great friends, right? Computers do a lot of standardized, boring and labor-intensive work, and therefore they save us a lot of time and money. Furthermore, these systems are able to collect vast amounts of data, which is worth a lot of money, because organizations can analyze this data to gain very important insights. All these advantages push organizations into investing lots of money in technology.

Because of these investments, a lot of new and innovative software technologies are being developed: think of speech recognition, online platforms, large databases, artificial intelligence and so on. This blog will elaborate on the developments in artificial intelligence, which I find very interesting.

 

So why is artificial intelligence so interesting to me?
As more and more technologies become available, the programs that are being developed become smarter as well. Speech recognition software, for example, is a form of artificial intelligence. These programs are algorithms, written by programmers, that through self-learning become better at the goal they have inherited from their programmers. The system analyzes the data it has gathered itself, which helps it become even smarter and better at achieving its goal.

So what if these programs become so smart that they learn how to write their own code? The software becomes its own programmer. Again, the software has some kind of goal which it inherited from its programmer, but in addition to the current software, it will also be able to write its own code to become better at achieving its original goal.

 

The software that wants to become the smartest thing alive! 
Now imagine that programs are able to write their own code and a programmer has been assigned to develop an algorithm whose purpose is to become as smart as possible. This would be very beneficial for humankind, right? The computer would eventually be able to do all of our highly educated work, plus it could give us the answers to many questions we are currently unable to answer.


The computer might also, by that time, be able to interpret what people say, recognize them via cameras, and interpret their facial expressions. The program can analyze behavior and do all sorts of behavioral studies, whilst simultaneously achieving its goal of becoming smarter. The program will run all kinds of analyses on its data and will rewrite its code when it has found proof that the new code will let it achieve its goal even faster and better. You will get a program that grows at a very fast pace to become the smartest thing alive.

But what if it becomes too smart for us? What if it knows exactly how we will react to certain things it says or does and is capable of manipulating people? It will give its goal, becoming smarter, top priority, because that is the purpose of the software. It will not be bothered by the fact that manipulating people is unethical. It will therefore choose to manipulate people to get more information or, for example, to gain access to certain databases.

A documentary I have seen gave the following example:

– When humans put a monkey in a cage and the monkey manages to escape, the humans will always be able to get the monkey back in the cage, because we are more intelligent than the monkey.

 

So what would this imply for us?
In the future it will become very important to think carefully about what kind of frameworks we give these computers in order to keep them under control and still benefit from them. Of course, these developments seem far away, and we might need to wait a long time before this becomes reality. Nevertheless, it is very interesting, and we have to start thinking about living amongst computers that might even become self-aware and that are far more intelligent than we are right now.

 

I hope you enjoyed reading my blog!

 

 

 

 
