Dark Patterns: Hotel California or Roach Motel?

10 October 2021


Fans of Hotel California by the Eagles will most likely recognise the famous last lines: ‘You can check out any time you like. But you can never leave!’. While there are multiple interpretations of what this may refer to, the phrase can also be applied to how digital interfaces are designed to make you do things you did not want to do. Interface designers use all sorts of tricks to make sure that you, the user, are nudged into specific behaviour that benefits their purposes. Not convinced that this is actually true? Have you ever tried to cancel a subscription but couldn’t find an easy way to do it? Or have you ever clicked a piece of normal-looking content, only to find out that it was a disguised advertisement? These are all examples of dark patterns: interface design that is intentionally crafted to mislead you or to make certain tasks needlessly difficult.

Case: Amazon’s Roach Motel

If you have ever tried to delete your Amazon account but gave up, I don’t blame you. The interface design of Amazon’s website has been intentionally crafted to discourage users from performing an action that hurts the company. Not only is the option buried deep in the website, it is also not located in an intuitive place. Take a look at the video fragment (0:19 – 1:41) linked in the references to see the number of hoops you have to jump through.

Which dark patterns exist?

Harry Brignull is an expert in the field of user experience who coined the term ‘dark patterns’ back in 2010. On his website, darkpatterns.org, he catalogues the types of dark patterns, along with examples that you have probably already encountered at some point. Below is a small overview of dark patterns you are likely to come across:

  • Roach Motel: Just like the Hotel California, it is easy to get in – but near impossible to get out. The Amazon case is a good example of this dark pattern: signing up is very easy, but deleting your account is nearly impossible if you don’t know where to look.
  • Bait and switch: You expect one thing to happen, but something else happens instead. Think of online stores luring you in with low prices, only to find that additional charges are added at checkout. Or think of Microsoft’s attempt to mislead users into upgrading to Windows 10.
  • Confirmshaming: Guilting the user into a specific action by wording the decline option in such a way that it shames the user. Think of wording such as: ‘No thanks, I don’t want to benefit from this exclusive discount’.

What can we do about dark patterns?

As long as interface designers are able to nudge users into the behaviour of their liking, dark patterns will most likely never cease to exist. There is hope, though. According to Harry Brignull, the best weapon against dark patterns is to be aware of their presence and to shame the companies that use them. LinkedIn, for example, settled a lawsuit for $13 million over its use of dark patterns to trick users into inviting their network to the platform. While in practice this amounted to a mere 10 dollars for every affected user, it does show that there is awareness of such malpractices.

References

https://www.youtube.com/watch?v=kxkrdLI6e6M
https://blog.ionixxtech.com/how-to-avoid-dark-patterns-in-ux/
https://www.darkpatterns.org/types-of-dark-pattern


Why you can not blindly trust algorithms

15 September 2021


Be honest. Do you believe that the data is always right? Or that algorithms never make mistakes? While it may be very tempting to hide behind the data and the algorithms, you must not forget that with great algorithmic power comes great responsibility. So, let’s face the truth: as much as we would like to believe that we have perfect data and algorithms, more often than not this is not the case. As algorithms increasingly replace human decision-making, it is important to understand the implications and risks. Today, algorithms already make high-impact decisions: whether or not you are eligible for a mortgage, whether you will be hired, how likely you are to commit fraud, and so on. Algorithms are great at finding patterns we would most likely be unable to find ourselves. But if you are not careful, the algorithm might latch onto unwanted patterns.

Case: Amazon and AI recruiting

In 2014, Amazon launched an experimental recruitment tool for its technical branch, driven by artificial intelligence, that rated incoming applications. The AI model was trained on resumes submitted over a ten-year timespan and on prior human recruitment decisions. After a year, however, it was found that the model had started to penalise female applicants.

So, what went wrong? Because the technical branch was male-dominated at the time, the historical data used to train the AI model carried a bias towards men. Amazon decided to strip indicative information such as name and gender to counter this. Case closed? Well, no. The model learned a new pattern: penalising resumes that included the word ‘women’ (for example, ‘women’s chess club’) and resumes from all-women’s colleges. In the end, Amazon abandoned the recruitment tool as it was unable to address this issue.
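To make that mechanism concrete, here is a minimal, purely illustrative sketch (not Amazon’s actual system) of how a text model trained on biased historical decisions can re-learn a gender proxy even after the explicit gender field has been removed. The resumes, labels and choice of scikit-learn are all assumptions made for the example:

```python
# Illustrative only: a text model re-learning a gender proxy from biased labels.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic "historical hiring" data: past decisions were biased, so
# resumes mentioning women's organisations were rejected more often.
resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "java engineer, hackathon winner",
    "java engineer, women's coding society, hackathon winner",
]
hired = [1, 0, 1, 0]  # biased historical decisions used as training labels

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "women" comes out negative: the proxy
# survives even though gender itself was never an input feature.
idx = vec.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```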

The Black Box

The problem with complex AI models is that it is often very difficult to determine which features in the data drive the predictions. This phenomenon is also referred to as ‘the black box’: a ‘machine’ which takes a certain input, uses or transforms it in some way, and delivers an output. In many cases, though, you would want to know how the AI model arrived at a certain decision, especially when the automated decision could have a significant impact on your personal life (as with fraud detection).
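One common way to peek inside such a black box is to measure how much the model’s performance drops when a single feature is shuffled. The sketch below is illustrative only: the model, the synthetic data and the feature names are made up, and permutation importance is just one of several inspection techniques:

```python
# Illustrative sketch: probing a black-box model with permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical features: [income, transaction_count, postcode_code]
X = rng.normal(size=(n, 3))
# The synthetic label depends only on the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much the accuracy drops.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "transaction_count", "postcode_code"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```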

Profiling and the law

Such automated processing of personal data in order to analyse or predict certain aspects of individuals is also referred to as ‘profiling’. Legal safeguards against unlawful profiling do exist, for example through the General Data Protection Regulation (GDPR), a legal framework concerning the collection and processing of personal data of individuals in the European Union. Article 22 of the GDPR, for instance, specifies that individuals have the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects.

One well-known case in the Netherlands with significant implications for individuals was SyRI (System Risk Indication), in which the Dutch government used algorithms to detect fraud involving social benefits and taxes. The problems with this system were that the amount of data used was unknown, datasets were linked using unknown risk models, and ‘suspicious’ individuals were automatically flagged and stored in a dossier without being informed in any way. Individuals affected by such automated decision-making suffered significant financial and mental problems for several years before the Dutch court ruled this form of profiling to be in violation of the European Convention on Human Rights. While the Dutch government has resigned over this case and promised to compensate all affected individuals, it has so far only managed to compensate a fraction of those eligible.

Countering bias

While AI models can achieve high accuracy scores in terms of making correct classifications, this does not automatically mean that the predictions are fair, free of bias or non-discriminatory. So, what can you do? Here are some pointers based on the FACT principle (fairness, accuracy, confidentiality, transparency):

  • Be mindful when processing personal data and beware of the potential implications for individuals. Ensure that decisions are fair and find ways to detect unfair decisions.
  • Ensure that decisions are accurate, such that misleading conclusions are avoided. Test multiple hypotheses before deploying your model and make sure that the input data is ‘clean’.
  • Confidentiality should be ensured in order to use the input data in a safe and controlled manner.
  • Transparency is crucial. People should be able to trust, verify and correctly interpret the results.
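As a small, hedged illustration of the fairness point above: overall accuracy can look acceptable while outcomes differ sharply between groups. The predictions and group labels in the sketch below are made up purely to show the kind of per-group check you could run on a model’s output:

```python
# Illustrative fairness check: compare outcomes per group, not just overall accuracy.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # synthetic ground truth
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])   # synthetic model output
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print("overall accuracy:", (y_true == y_pred).mean())

# Per-group selection rate and accuracy: large gaps between groups are a
# warning sign even when the overall accuracy looks fine.
for g in np.unique(group):
    mask = group == g
    print(f"group {g}: selection rate = {y_pred[mask].mean():.2f}, "
          f"accuracy = {(y_true[mask] == y_pred[mask]).mean():.2f}")
```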

References

https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
https://www.bbc.com/news/business-44466213
https://gdpr-info.eu/art-22-gdpr/
https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Rechtbanken/Rechtbank-Den-Haag/Nieuws/Paginas/SyRI-legislation-in-breach-of-European-Convention-on-Human-Rights.aspx
https://link.springer.com/article/10.1007/s12599-017-0487-z
