Headline: “Autonomous Car Hits Person!” Reaction: “Autonomous cars should be abolished!”

1 October 2019


With Uber, Tesla, Google, and other mobility companies investing in autonomous cars that can drive from A to B without anyone behind the wheel, it is inevitable that accidents will happen with these systems (Interesting Engineering, 2019). However, what should we keep in mind when we read the news about (semi-)autonomous vehicles being involved in accidents?

 

Having someone, or something, literally take the wheel usually leaves us humans feeling helpless. Helplessness is definitely not part of feeling safe, and even less a part of getting humans to accept technology, especially technology that can easily kill them or other humans.

 

So, whenever news breaks that, for instance, Tesla’s Autopilot was involved in an accident, news channels, online forums, and many other media outlets light up with questions about autonomous vehicles and whether they should be banned (Forbes, 2019). These articles and forums show that people are scared, maybe even rightly so; people are dying because of autonomous cars… right?

 

However, we must consider several things before drawing conclusions. First, the bias that can easily arise with news of this nature. Second, whether autonomous cars are by definition ‘at fault’ or ‘unsuited’ when they are involved in accidents.

 

Bias: one should always be cautious when drawing conclusions from data presented by others. This is also the case for autonomous cars. Because the safety of autonomous cars is only a topic of conversation when something goes wrong (e.g. a car crash), we are biased towards thinking the technology is dangerous; this is known as the availability heuristic (Wikipedia, 2019). It is simply how the news works: it is far less noticeable, and far less newsworthy, when an autonomous car is NOT involved in an accident or narrowly avoids one.

 

Second, even if autonomous cars are sometimes involved in accidents, this does not immediately mean that they are unsuited to replace human drivers. The reason is that they don’t have to be perfect. They don’t need a 0% crash rate; they only need a lower crash rate than humans in order to be ‘worthy’ to take the wheel.
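To make this concrete, the “lower crash rate than humans” test is just a rate comparison per distance driven. Here is a minimal sketch; every number in it is hypothetical, purely for illustration, and not a real crash statistic:

```python
# All figures below are invented for illustration -- NOT real crash data.
human_crashes = 6_000_000        # hypothetical crashes per year, human drivers
human_miles = 3_200_000_000_000  # hypothetical vehicle-miles driven per year

av_crashes = 50                  # hypothetical crashes logged by an AV fleet
av_miles = 40_000_000            # hypothetical miles driven by that fleet

# Normalise both groups to crashes per million miles, so they are comparable.
human_rate = human_crashes / (human_miles / 1_000_000)
av_rate = av_crashes / (av_miles / 1_000_000)

print(f"Humans: {human_rate:.2f} crashes per million miles")
print(f"AVs:    {av_rate:.2f} crashes per million miles")

# The bar for being 'worthy to take the wheel', in this framing:
safer_than_humans = av_rate < human_rate
```

The point of the sketch is only that the raw count of AV accidents in the news tells you nothing until you divide by miles driven and compare against the human baseline.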

 

In conclusion, I am definitely not saying that autonomous driving is the answer and safe enough to implement. However, I do recommend that we not take in numbers thoughtlessly and judge them quickly, but analyze what the numbers mean and what they represent.


Interesting Engineering (2019): https://interestingengineering.com/are-we-programming-killer-cars-the-ethics-of-autonomous-vehicles

Forbes (2019): https://www.forbes.com/sites/lanceeliot/2019/05/07/an-inconvenient-truth-human-drivers-and-autonomous-cars-mix-like-oil-and-water/#749daf0c3b84

Wikipedia (2019), Availability heuristic: https://en.wikipedia.org/wiki/Availability_heuristic

 


Is China’s social credit system as bad as we think?

20 September 2019


In 2014 China revealed its plans for the implementation of its Social Credit System. China aims to fully implement it before the end of 2020, but has had ‘pilots’ running in several areas and cities from early on (the 13th Five Year Plan of China, 2014). Ever since the announcement, references to Black Mirror and a totalitarian government haven’t stopped appearing in the news. But how do we (the West) fare in contrast, and how does the Social Credit System even work?

One of the pilots of the Social Credit System runs in the city of Rongcheng and works as follows. People in the city start out with a score of 1000. Points are awarded for socially benevolent behaviour, such as volunteering and returning lost wallets. However, points are deducted when socially bad behaviour is noticed. Although many of us think this happens through AI-powered facial recognition, this is mostly not the case: most areas simply have a person who ‘snoops around’ and writes things down. Good scores can result in social benefits, like a discount on public transport or a lower interest rate when borrowing money from the government. On the other hand, lower scores can have negative effects, such as denial of access to train tickets, plane tickets, or jobs with high visibility (WIRED, 2018).
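The mechanics described above are essentially a running ledger per resident. As a rough sketch of how such a pilot might tally scores (the point values, event names, and threshold below are my own guesses for illustration, not Rongcheng’s actual schedule):

```python
# Minimal sketch of a Rongcheng-style credit ledger.
# All point values, event names, and thresholds are invented for illustration.

START_SCORE = 1000  # every resident starts here, per the pilot described above

# Hypothetical point adjustments for behaviours an observer writes down
ADJUSTMENTS = {
    "volunteering": +5,
    "returned_lost_wallet": +10,
    "traffic_violation": -5,
    "late_debt_payment": -50,
}

def apply_events(events, start=START_SCORE):
    """Apply a list of observed behaviours to a starting score."""
    score = start
    for event in events:
        score += ADJUSTMENTS.get(event, 0)  # unrecognised events change nothing
    return score

score = apply_events(["volunteering", "returned_lost_wallet", "late_debt_payment"])

# A low enough score could then gate access to services, e.g. train tickets
# (the 950 cutoff here is an arbitrary placeholder).
can_buy_train_ticket = score >= 950
```

Note how asymmetric the hypothetical weights are: one missed debt payment wipes out ten volunteering sessions, which matches the reported effect that people pay debts back on time above all else.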

At first glance, Rongcheng seems like a big success. The city, which was once a mess, is now spotless; car drivers slow down for pedestrian crossings (which rarely happens in China (own experience, 2018)); and many of its inhabitants volunteer at elderly homes. Also, many more people pay back their debts on time, for fear of losing credit points. One could say, “this seems to work, and all you need to do is not break the law, which isn’t so hard, right?”. Many Chinese see it this way as well. They believe that it pushes citizens to be better people and do what’s right (France 24, 2019).

Also, when we look at our own countries, we find similar ‘credit scores’. When someone defaults on a bank loan, you can bet that person will not receive a loan again. Contact with the police is registered too, on our ‘rap sheet’. However, there are some significant differences between our system and the SCS that is going to be implemented in China.

The scope of the SCS is the main difference. Because all of our misdoings are punished within the legal system, the punished can appeal and fight the punishment. This is not the case for the SCS: it lacks a presumption of innocence until proven otherwise, one of the core foundations of the Western judicial system.

But we shouldn’t pat ourselves on the back too hard yet, for we can see similar, albeit less invasive, trends appearing. Privileges related to communication, accommodation, and transportation are being affected by our behaviour as well. Uber drivers have lost their jobs for having ratings lower than 4.6 (Forbes, 2014). Insurance companies pull more and more data from your Facebook page and online behaviour. At this moment, none of this is the end of the world, because there are multiple competitors in these sectors. However, if this trend continues and huge companies like Uber, Amazon, and Facebook gain market share in other markets, we could see socially unwanted behaviour punished by Silicon Valley and no longer by Washington D.C.

This brings me to my conclusion about why any social rating system should have limits. Once a social rating system becomes powerful enough to starve its users of certain services, products, or needs because they aren’t behaving well enough or according to certain guidelines, the owner (or owners) of that social rating system gets to dictate what good or bad behaviour is. This could potentially mean that the rules we live by are dictated not by law or the Constitution, but by the terms & conditions which (let’s be honest) we all thoughtlessly accept.

Let me know if you agree or if you have a different vision on the case! 🙂

The 13th Five Year Plan of the People’s Republic of China: http://en.ndrc.gov.cn/newsrelease/201612/P020161207645765233498.pdf (accessed 17 September 2019)

The complicated truth about China’s social credit system: https://www.wired.co.uk/article/china-social-credit-system-explained (accessed 17 September 2019)

How Uber’s Driver Policy Could Backfire on the Company:
https://www.forbes.com/sites/ellenhuet/2014/10/30/uber-driver-firing-policy/#3759bd571527 (accessed 17 September 2019)

China ranks ‘good’ and ‘bad’ citizens with social credit system:
https://www.youtube.com/watch?v=NXyzpMDtpSE&t=183s (accessed 18 September 2019)
