Smart Home Devices – Convenient Or Invasive?

3 October 2020


Amazon's Ring has introduced a new product called the Always Home Cam, a camera-equipped autonomous drone that can detect intruders while homeowners are away and fly around the house to check, for instance, whether windows have been left open. The announcement of this product adds to the ongoing debate about privacy concerns surrounding smart home devices.

More and more people are buying smart home devices. According to Statista, almost 30% of households owned smart home devices in 2020, and it expects this figure to rise above 50% within the next four years. These devices include products such as the Ring Doorbell, which lets you answer the door from your phone even when you are not at home, and Google Assistant, which can answer your questions.

As mentioned above, there is an ongoing debate about the potential issues with these smart home devices. One example is that smart speakers running assistants like Google Assistant can listen in and collect data at unwanted moments. Recent research confirms this concern: 59% of smart home device users report privacy worries about their devices. Other privacy issues relate to smart home devices being hacked. This happened with the Philips Hue smart light bulbs, when it was discovered that hackers could use them to break into the owner's Wi-Fi network without knowing the password.

The Always Home Cam has reignited this debate, with many people voicing privacy concerns about the product. One such concern is the growing acceptance of everyday surveillance: video surveillance around the house is becoming normal, which shrinks the space that is not watched by cameras. Moreover, whereas existing security cameras point in a single direction, the Always Home Cam can see almost anywhere in the house, reducing privacy even further.

Overall, I think people should be aware of the privacy concerns related to these smart devices before buying them. Companies can already collect data through our smartphones and other devices, and I wonder where the limit will lie.

References:
https://www.ft.com/content/8eaf8ee5-b074-4d48-b4fa-15d35a185a5d
https://www.theverge.com/2020/9/25/21455197/amazon-ring-drone-home-security-surveillance-sidewalk-halo-privacy
https://www.theguardian.com/technology/2020/mar/08/how-to-stop-your-smart-home-spying-on-you-lightbulbs-doorbell-ring-google-assistant-alexa-privacy


The consequences of algorithm bias

22 September 2020


Algorithms drive decision making in machine learning. They are built from data provided by humans. Humans are known to be biased and error-prone, and the same holds for algorithms: both make decisions based on the data and experience available to them. If the data fed to an algorithm, or the way the algorithm is developed, is biased, the result is algorithm bias.
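To make this concrete, here is a minimal sketch with purely synthetic, made-up data (the group sizes, feature values, and threshold rule are illustrative assumptions, not taken from the cited studies). It shows how a simple classifier tuned to minimise overall error on a skewed training set ends up far less accurate for the under-represented group.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two groups described by one synthetic feature; group A dominates the training set.
train_a = rng.normal(loc=0.0, scale=1.0, size=900)   # over-represented group A
train_b = rng.normal(loc=2.0, scale=1.0, size=100)   # under-represented group B

features = np.concatenate([train_a, train_b])
labels = np.concatenate([np.zeros(900), np.ones(100)])  # 0 = group A, 1 = group B

# Pick the decision threshold that minimises *overall* training error.
candidates = np.linspace(-2.0, 4.0, 601)
errors = [np.mean((features > t) != labels) for t in candidates]
threshold = candidates[int(np.argmin(errors))]

# Evaluate on balanced held-out samples: the skewed training data pushes the
# threshold toward group B, so group B is misclassified far more often.
test_a = rng.normal(loc=0.0, scale=1.0, size=10_000)
test_b = rng.normal(loc=2.0, scale=1.0, size=10_000)
print(f"learned threshold:   {threshold:.2f}")
print(f"accuracy on group A: {np.mean(test_a <= threshold):.1%}")
print(f"accuracy on group B: {np.mean(test_b > threshold):.1%}")
```

With a balanced training set, the same procedure would place the threshold roughly halfway between the two groups and classify both about equally well; in this toy example the disparity comes entirely from the skewed data.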

Interaction bias is one type of bias that is frequently found in datasets. An example of this bias is facial recognition. A study by Buolamwini and Gebru (2018) found that facial recognition datasets consist mostly of lighter-skinned subjects, and that algorithms are better at identifying lighter-skinned men than darker-skinned women. As a result, this can lead to misidentification or no identification at all. This weekend, Zoom ran into problems with its algorithm because of interaction bias: a Black student's head was removed from the video whenever he used a virtual background. Another incident caused by this bias occurred in 2019, when a student from Brown University was mistakenly identified as a suspect in the Sri Lanka bombings, which led to ongoing death threats against the student.

As mentioned above, algorithmic bias can have serious consequences. Some companies are already responding by stepping back from their facial recognition products. For example, Amazon placed a one-year moratorium on police use of its facial recognition product, Rekognition, a few months ago, and IBM went further by ending its facial recognition products and research. It is clear that the consequences can be severe, but what do you think is the best way to manage algorithm bias?

References
Buolamwini, J., & Gebru, T. (2018) Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Retrieved from: http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
Heilweil, R. (2020) Why algorithms can be racist and sexist. Retrieved from: https://www.forbes.com/sites/cognitiveworld/2020/02/07/biased-algorithms/#320881da76fc
Brown, A. (2020) Biased Algorithms Learn From Biased Data: 3 Kinds Of Biases Found In AI Datasets. Retrieved from: https://www.forbes.com/sites/cognitiveworld/2020/02/07/biased-algorithms/#6ab8acb376fc
Ivanova, I. (2020) Why face-recognition technology has a bias problem. Retrieved from: https://www.cbsnews.com/news/facial-recognition-systems-racism-protests-police-bias/
Dickey, M. (2020) Twitter and Zoom’s algorithmic bias issues. Retrieved from: https://techcrunch.com/2020/09/21/twitter-and-zoom-algorithmic-bias-issues/
