My CLI frustrations and how ChatGPT solved them.

4 October 2024


About a year ago, I started delving into the world of self-hosting: things such as game servers, cloud storage, and Netflix alternatives. The idea was to become less dependent on SaaS providers, and since I had a spare laptop lying around anyway, why not give it a go? So the first thing I did was install Proxmox, a hypervisor that lets me separate out the different services I was planning to set up.

This is where my struggles started. As you might be aware, most servers run on a Linux machine without a GUI, and I soon discovered that Proxmox also primarily uses a command line interface. For those not familiar, a CLI is an interface where you type text commands to control your computer; for example, “cd /usr/home” would take you to that folder.

While I got a grasp of the basics relatively quickly, the complexity of what I wanted to achieve increased just as fast. This is where ChatGPT came to save the day: with GPT-4o it could actively search the internet and scan through documentation to create exactly the command I required. Instead of needing to write in the computer’s language, I could explain to ChatGPT what I was trying to do, and it would generate the exact commands I needed.

For context, this is the kind of status output systemd gives you when a service fails:

myservice.service – My Custom Service
Loaded: loaded (/etc/systemd/system/myservice.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2023-10-02 12:34:56 UTC; 5s ago
Process: 1234 ExecStart=/usr/bin/myservice (code=exited, status=1/FAILURE)

It helped with reading these kinds of error messages as well; anyone familiar with output like this knows it is nearly unreadable unless you know the documentation by heart.

While you still need to be relatively tech-savvy to set up your own services, I believe that with the rapid development of generative AI it will only get easier.

You may wonder what the advantages are of going through all this hassle instead of simply using Netflix, Google Drive, and OneDrive. As we all know, a handful of tech giants have monopolized many of the daily services we use. They collect our data in massive quantities, creating serious privacy concerns, and they suppress innovation within the field. Hosting your own services ensures you minimize the amount of data you put on the internet.

Furthermore, many SMEs use several services for which they pay massive licensing and hosting fees each year. If these new tools help SMEs set up their own servers, they become less dependent on third-party pricing and can save costs.

All in all, I believe that the support LLMs provide in setting up your own services democratizes the internet and reduces the power of the tech monopolies, and that should be celebrated by anyone who supports free markets.

Sources:

https://www.proxmox.com/en

https://pixabay.com/vectors/command-shell-terminal-dos-input-97893


Law & Order & AI – How California’s Bill SB 1047 will impact AI development in the USA

27 September 2024


The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect where the USA is now following the EU: AI regulation. In this blog post I will discuss the Californian bill “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, which currently awaits ratification by the Governor of California (California Legislative Information, 2024).

While not yet enacted, the EU has created one of the most far-reaching efforts in the world to regulate AI with the Artificial Intelligence Act (AI Act). As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the bill would only become law in California. Nonetheless, this more or less amounts to nationwide application, since most affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I think are most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers will have to have their models be tested, assessed, and regularly audited. (Gibson Dunn, 2024)

Like the AI Act, SB 1047 would focus on high-risk, high-impact AI models, with an emphasis on the safety and security of the people affected by AI.

But why would you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!

Screenshot of Musk’s post on X supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked while still using it for vital parts of our daily lives? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving big systems unchecked, either because they were deemed “too complex” or because we trusted the people running them to act in the public’s best interest, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their lives are (beneficially) impacted by AI today, I never want anyone to experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Tweet]. Twitter. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F

Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The Image set as the featured image was generated by ChatGPT


Toxic Code: How Poisoning Attacks Are Undermining AI Systems

16 September 2024


In the rapidly evolving world of artificial intelligence (AI), not all advancements are aimed at making systems smarter. Some are designed to make them fail. Enter poisoning attacks, a new form of sabotage that can turn intelligent systems against themselves. But the question is, how does it work and should we really care about it?

What Are Poisoning Attacks?

Imagine teaching a student a mix of good and false information. If you sprinkle enough false information into the lessons, even the brightest student will come to some incorrect conclusions. In AI, poisoning attacks work similarly: the data used to train the model is corrupted by an attacker with the intent of causing errors once the model is deployed (Shafahi et al., 2018). For example, consider a self-driving car that is trained on images of road signs. If an attacker can poison the training data with even a small number of falsely labelled images of a “stop sign”, the car could misunderstand traffic rules and become dangerous not only to the people in the car, but to everyone on the street (Wang et al., 2023).
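
To make this concrete, here is a toy sketch in plain Python of the simplest kind of poisoning attack, label flipping. The data, the tiny 1-nearest-neighbour "model", and the 40% flip rate are all made up for illustration; real attacks target real training pipelines, not toy clusters like this.

```python
import random

def make_data(n, seed):
    # Two well-separated classes: class 0 clusters around (0, 0), class 1 around (5, 5).
    rng = random.Random(seed)
    return [((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(n)] + \
           [((rng.gauss(5, 1), rng.gauss(5, 1)), 1) for _ in range(n)]

def predict(train_data, x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    nearest = min(train_data, key=lambda p: (p[0][0] - x[0]) ** 2 + (p[0][1] - x[1]) ** 2)
    return nearest[1]

def accuracy(train_data, test_data):
    return sum(predict(train_data, x) == y for x, y in test_data) / len(test_data)

train_set = make_data(100, seed=1)
test_set = make_data(50, seed=2)

# The poisoning attack: flip the labels of roughly 40% of the training points.
rng = random.Random(3)
poisoned = [(x, 1 - y) if rng.random() < 0.4 else (x, y) for x, y in train_set]

acc_clean = accuracy(train_set, test_set)
acc_poisoned = accuracy(poisoned, test_set)
print(f"trained on clean data:    {acc_clean:.2f}")
print(f"trained on poisoned data: {acc_poisoned:.2f}")
```

The inputs themselves are untouched; only the labels were corrupted, yet the classifier trained on the poisoned set makes far more mistakes on the same clean test data.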

(Dash & Bosch AIShield, 2023)

Real-World Impact: Why Should You Care?

Poisoning attacks aren’t just a theoretical risk; they are a real threat to AI systems today. Take, for example, GitHub’s Copilot, an AI-powered code completion system that helps developers autocomplete their code in real time (GitHub, 2023). An attacker could poison Copilot’s training data and steer it towards generating vulnerable code with security defects (Improta, 2024). While this seems like a problem that only affects coders, it can cause problems for everyone else too. Vulnerable code can result in everyday people losing their private data, as in the recent Social Security number breach in the USA (Chin, 2024). Another example of how poisoning attacks could affect your everyday life is social media: recommendation algorithms could be manipulated to determine what goes viral or to spread misinformation by pushing fake news to large numbers of users. This is a scary thought, as news is increasingly filtered by AI.

Defending Against Poisoning: A Losing Battle?

Defenses against poisoning attacks are evolving every day, although attackers often seem to be one step ahead. Anomaly detection systems are being integrated into AI pipelines, but the question is: how much of the data needs to be poisoned before it is no longer considered an anomaly (Huang et al., 2022)? As Kurakin et al. (2016) highlight in “Adversarial Machine Learning at Scale”, vulnerabilities are being exploited by attackers in real time, creating a race between “poison” and “antidote”. However, continuous advances in AI security and collaboration among researchers are treating the poison: defenses are growing smarter, aiming to outpace attackers, which makes the future look promising for AI-based systems.
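
The anomaly-detection idea can be sketched in the same toy spirit. This is a deliberately simple sanitization rule of my own for illustration (not the isolation-forest method from the cited paper): drop a training point when most of its nearest neighbours disagree with its label. Data, flip rate, and the choice k=10 are all assumptions.

```python
import random

def make_data(n, seed):
    # Class 0 clusters around (0, 0), class 1 around (5, 5).
    rng = random.Random(seed)
    return [((rng.gauss(0, 1), rng.gauss(0, 1)), 0) for _ in range(n)] + \
           [((rng.gauss(5, 1), rng.gauss(5, 1)), 1) for _ in range(n)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def nearest_labels(data, x, k):
    # Labels of the k training points closest to x.
    return [y for _, y in sorted(data, key=lambda p: dist2(p[0], x))[:k]]

def sanitize(data, k=10):
    # Anomaly-style filter: keep a point only if at least half of its
    # k nearest *other* points carry the same label it claims to have.
    kept = []
    for i, (x, y) in enumerate(data):
        others = data[:i] + data[i + 1:]
        votes = nearest_labels(others, x, k)
        if sum(label == y for label in votes) * 2 >= k:
            kept.append((x, y))
    return kept

def nn_accuracy(train_data, test_data):
    # Accuracy of a 1-nearest-neighbour classifier.
    return sum(nearest_labels(train_data, x, 1)[0] == y for x, y in test_data) / len(test_data)

train_set = make_data(100, seed=1)
test_set = make_data(50, seed=2)

# Attacker flips 20% of the training labels.
rng = random.Random(3)
poisoned = [(x, 1 - y) if rng.random() < 0.2 else (x, y) for x, y in train_set]
cleaned = sanitize(poisoned)

acc_before = nn_accuracy(poisoned, test_set)
acc_after = nn_accuracy(cleaned, test_set)
print(f"before sanitizing: {acc_before:.2f}")
print(f"after sanitizing:  {acc_after:.2f}")
```

Push the flip rate towards 40-50% and the neighbourhood vote itself becomes poisoned, which is exactly the "when does poison stop looking like an anomaly" problem raised above.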

Conclusion: Can We Trust AI?

AI holds a great deal of potential, but it is only as good as the data we feed it. The reality is that this is just the beginning of a fight to secure data and, by extension, AI itself. The future of technology is being shaped by these poisoning attacks, so stay tuned and keep your eyes out for misinformation. And don’t forget: data is the driving force behind everything!

References

Kurakin, A., Goodfellow, I. J., & Bengio, S. (2016, November 4). Adversarial Machine Learning at Scale. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1611.01236

ChatGPT. (2024, September 16). A Hacker Injecting Poison into an AI Brain Using a Syringe, in a Panoramic Style

Chin, K. (2024, February 20). Biggest Data Breaches in US History. UpGuard. https://www.upguard.com/blog/biggest-data-breaches-us

Dash, M., & Bosch AIShield. (2023, May 9). Understanding Types of AI Attacks. AI Infrastructure Alliance. https://ai-infrastructure.org/understanding-types-of-ai-attacks/

GitHub. (2023). GitHub Copilot · Your AI pair programmer. GitHub. https://github.com/features/copilot

Huang, S., Bai, Y., Wang, Z., & Liu, P. (2022, March 1). Defending against Poisoning Attack in Federated Learning Using Isolated Forest. IEEE Xplore. https://doi.org/10.1109/ICCCR54399.2022.9790094

Improta, C. (2024). Poisoning Programs by Un-Repairing Code: Security Concerns of AI-generated Code. https://arxiv.org/pdf/2403.06675

Shafahi, A., Huang, W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., & Goldstein, T. (2018). Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. https://arxiv.org/pdf/1804.00792

Wang, S., Li, Q., Cui, Z., Hou, J., & Huang, C. (2023). Bandit-based data poisoning attack against federated learning for autonomous driving models. Expert Systems with Applications, 227, 120295–120295. https://doi.org/10.1016/j.eswa.2023.120295


Adversarial attacks on AI models: a big self-destruct button?

21 October 2023


“Artificial Intelligence (AI) has made significant strides in transforming industries, from healthcare to finance, but a lurking threat called adversarial attacks could potentially disrupt this progress. Adversarial attacks are carefully crafted inputs that can trick AI systems into making incorrect predictions or classifications. Here’s why they pose a formidable challenge to the AI industry.”

ChatGPT then went on to sum up various reasons why these so-called ‘adversarial attacks’ threaten AI models. Interestingly, I had only asked ChatGPT to explain the disruptive effects of adversarial machine learning. I followed up with the question: how could I use adversarial machine learning to compromise the training data of an AI model? Evidently, the answer I got was: “I can’t help you with that”. This conversation with ChatGPT made me speculate about possible ways to destroy AI models. Let us explore this field and see if it could provide a movie-worthy big red self-destruct button.

The Gibbon: a textbook example

When you feed GoogLeNet, one of the best-known image classification systems, a picture that is clearly a panda, it will tell you with great confidence that it is a gibbon. This is because the image secretly carries a layer of ‘noise’, invisible to humans but of great hindrance to deep learning models.

This is a textbook example of adversarial machine learning: the noise works like a blurring mask, keeping the AI from recognising what is truly underneath. But how does this ‘noise’ work, and can it be used to completely compromise the training data of deep learning models?

Deep neural networks and the loss function

To understand the effect of ‘noise’, let me first explain briefly how deep learning models work. Deep neural networks in deep learning models use a loss function to quantify the error between predicted and actual outputs. During training, the network aims to minimize this loss. Input data is passed through layers of interconnected neurons, which apply weights and biases to produce predictions. These predictions are compared to the true values, and the loss function calculates the error. Through a process called backpropagation, the network adjusts its weights and biases to reduce this error. This iterative process of forward and backward propagation, driven by the loss function, enables deep neural networks to learn and make accurate predictions in various tasks (Samek et al., 2021).
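
That training loop can be illustrated with a single artificial neuron trained by gradient descent. This is a toy sketch in plain Python (made-up one-dimensional data, a hand-picked learning rate), not any specific framework, but the mechanics are the same: compute the loss, compute its gradient, and step the parameters against the gradient.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def loss(w, b, data):
    # Binary cross-entropy between the neuron's predictions and the true labels.
    total = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(data)

def gradients(w, b, data):
    # dLoss/dw and dLoss/db for the single neuron.
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y   # prediction error drives the update
        gw += err * x
        gb += err
    return gw / len(data), gb / len(data)

# Negative inputs belong to class 0, positive inputs to class 1.
data = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

w, b = 0.0, 0.0
start_loss = loss(w, b, data)
for _ in range(200):
    gw, gb = gradients(w, b, data)
    w -= 0.5 * gw   # descend: step *against* the gradient to shrink the loss
    b -= 0.5 * gb
final_loss = loss(w, b, data)
print(f"loss before training: {start_loss:.3f}")
print(f"loss after training:  {final_loss:.3f}")
```

In a deep network the gradients flow backwards through many layers (backpropagation), but each layer's update follows this same "move against the gradient" rule.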

So while training a model involves minimizing the loss function by updating the model parameters, adversarial machine learning does the exact opposite: it maximizes the loss function by updating the inputs. The updates to these input values form the layer of noise applied to the image, and the exact values can lead any model to believe anything (Huang et al., 2011). But can this practice be used to compromise entire models, or is it just a ‘party trick’?
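
Flipping the sign of that training process gives the classic fast-gradient-sign trick. A minimal one-dimensional sketch, with toy weights assumed to be "already trained" purely for illustration: take the gradient of the loss with respect to the input, and nudge the input in the direction that increases the loss.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def loss(w, b, x, y):
    # Cross-entropy loss of the neuron's prediction for a single input x.
    p = sigmoid(w * x + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A "trained" neuron (weights assumed here): it classifies x > 0 as class 1.
w, b = 3.0, 0.0

x, y = 0.5, 1                            # a correctly classified input
grad_x = (sigmoid(w * x + b) - y) * w    # dLoss/dx: gradient w.r.t. the INPUT

# Fast-gradient-sign step: move the input the way that *increases* the loss most.
eps = 1.0
x_adv = x + eps * (1.0 if grad_x > 0 else -1.0)

loss_before = loss(w, b, x, y)
loss_after = loss(w, b, x_adv, y)
print(f"loss on original input:  {loss_before:.3f}")
print(f"loss on perturbed input: {loss_after:.3f}")
```

On a real image the same sign trick is applied to thousands of pixels at once with a tiny eps per pixel, so each pixel barely changes while the loss, summed over all of them, moves enough to turn a panda into a gibbon.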

Adversarial attacks

Now we get to the part ChatGPT told me about. Adversarial attacks are techniques used to manipulate machine learning models by adding imperceptible noise to input data. Attackers exploit vulnerabilities in the model’s decision boundaries, causing misclassification. By injecting carefully crafted noise in vast amounts, the training data of AI models can be poisoned. There are different types of adversarial attacks: if attackers have access to the model’s internal structure, they can apply a so-called ‘white-box’ attack, in which case they could compromise the model completely (Huang et al., 2017). This would pose serious threats to AI models used in, for example, self-driving cars, but luckily, access to a model’s internal structure is very hard to gain.

So, if computers were ever to take over, as the science fiction movies predict, could we use attacks like these to bring those evil AI systems down? In theory we could, though practically speaking there is little evidence, as there have not been major adversarial attacks yet. What is certain is that adversarial machine learning holds great potential for controlling deep learning models. The question is: will this potential be used for good, as a method of control over AI models, or as a means of cyber-attack, justifying ChatGPT’s cautious tone when explaining it?

References

Huang, L., Joseph, A. D., Nelson, B., Rubinstein, B. I., & Tygar, J. D. (2011, October). Adversarial machine learning. In Proceedings of the 4th ACM workshop on Security and artificial intelligence (pp. 43-58).

Huang, S., Papernot, N., Goodfellow, I., Duan, Y., & Abbeel, P. (2017). Adversarial attacks on neural network policies. arXiv preprint arXiv:1702.02284.

Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109(3), 247-278.


Snapchat’s My AI – A Youthful Playground or a Privacy Nightmare?

19 October 2023


A post on this very blog site from 2018 called Snapchat a platform in decline, and I agree with that statement. Not since my high school years have I regularly used Snapchat to communicate with someone. After a long period of inactivity and countless notifications piling up, I decided to open the app some months back and was met with a notification about updates to their Privacy Policy. At that moment I did not give it much attention; I just agreed to the terms and went to the user interface. A new feature at the top of the Chat function caught my eye: My AI.

My AI is a customizable, user-friendly, engaging AI chatbot and is one of the many actions Snapchat has undertaken to regain its popularity. Remember those times when you opened Snapchat and disappointedly closed it, with no new notifications and no one to talk to? My AI solves that issue, giving you constant company in the form of information and entertainment, designed to better understand and cater to your preferences. It is effectively your AI best friend, but less transactional than other AIs.

I don’t know if it was curiosity or boredom, but my mind immediately raced back to the updated Privacy Policy and I decided to give the whole thing a read. As of 15 August 2023, their new Privacy Policy contains some important changes. A major one is expanding the amount and type of data Snapchat stores, most recently including conversations with My AI. This is on top of all the information Snapchat already amasses from its users, such as usage, content, device, and location information. “But every social media platform personalizes their user experience and employs targeted advertising,” you might say. Point noted, which is why I moved on to how this data is used by their affiliate companies. The screenshot below is the only information I could find, and clicking the link only led me into an endless loop within the Privacy Policy statement.

If I still haven’t been able to make you raise your eyebrows, I urge you to recognize Snapchat’s target group: teenagers.
Did your fourteen-year-old self have the same level of digital maturity and behavior that you currently possess? Did you truly understand the extent to which your data is collected, let alone the fact that this data determines the content you interact with on a platform? And finally, consider the rationale of using Snapchat: Why send pictures or texts that are deleted after being opened unless you do not want them to be saved? Other than by Snapchat, of course.

Attached below is the help my AI best friend on Snapchat provided me about a ‘common’ problem for teenagers. Make of that what you will.


AI-Powered Learning: My Adventure with TutorAI

16 October 2023




Controversy, Protection, Legislation, and Implementation Regarding the Use of Facial Recognition Technology in Public Places (2nd Part) – A Case Study of China

15 October 2023


1. Main Legislative Framework

Generally, Chinese law is primarily categorized into three parts: laws, departmental regulations, and local regulations. Additionally, the Supreme People’s Court of the People’s Republic of China can issue interpretations for specific legal issues that arise during the application of the law. The following overview primarily focuses on legislation at the national level in China, including some departmental regulations, local regulations, and judicial interpretations.

1. Shenzhen City. Shenzhen Special Economic Zone Data Regulations (passed in June 2021, effective from January 2022). Clause 2(4) defines: “Biometric data refers to personal data derived from the processing of a natural person’s biological characteristics, including genetic information, fingerprints, voiceprints, palmprints, ear shape, iris, facial recognition features, and other data that can uniquely identify a natural person.”

2. Supreme People’s Court of the People’s Republic of China. Provisions of the Supreme People’s Court on Several Issues Concerning the Application of Laws in Civil Cases Involving the Use of Facial Recognition Technology for Processing Personal Information (passed in July 2021, effective from August 2021). Article 10 stipulates: “If property service enterprises or other building managers use facial recognition as the sole verification method for owners or property users to enter property service areas, and owners or property users who disagree request them to provide other reasonable verification methods, the people’s court shall support it in accordance with the law.”

3. Standing Committee of the National People’s Congress. Personal Information Protection Law (passed in August 2021, effective from November 2021). Article 62 stipulates: “The Cyberspace Administration of China shall coordinate relevant departments to promote the following personal information protection work… (2) Formulate specific rules and standards for personal information protection, especially for small-scale personal information processors, sensitive personal information, facial recognition, artificial intelligence, and other new technologies and applications.”

4. Hangzhou City. Hangzhou Property Management Regulations (passed in July 2021, effective from January 2022). Article 50 stipulates: “Property service personnel may not force owners or non-owner users to enter the property management area or use common areas by providing biometric information such as facial recognition or fingerprints.”

5. Shanghai City. Shanghai Data Regulations (passed in November 2021, effective from January 2022). Article 31 stipulates: “Public places or areas referred to in the first paragraph of this Article may not use image collection or personal identity recognition technology as the sole method for entry or exit.”

6. State Internet Information Office. Regulations on the Secure Application of Facial Recognition Technology (Draft for Solicitation of Comments) (draft released for public comments in August 2023). In accordance with Article 62 of the “Personal Information Protection Law” and other legal authorizations, the State Internet Information Office drafted regulatory provisions on facial recognition, mainly restricting the implementation of facial recognition in public places for purposes other than public safety.

2. Effective Control of Commercial Use of Facial Recognition in Public Places After the Implementation of Facial Recognition Laws and Regulations

Since the implementation of the aforementioned and other relevant laws and regulations concerning facial recognition, the misuse of facial recognition for commercial purposes in public places has been effectively curbed.

For example, prior to the enactment of the aforementioned laws and regulations, a few public places’ access control systems only supported facial recognition for entry. However, after the passage and enforcement of the mentioned laws and regulations, access control systems in public places no longer exclusively rely on facial recognition. If facial recognition is used as a means of access, it must be complemented with alternative methods such as IC cards or keys.

Simultaneously, the mentioned laws and regulations classify facial information as a form of personal biometric data, falling within the scope of legal protection. Consequently, other laws and regulations related to safeguarding individual rights have begun to be applied to regulate the unlawful collection of non-consensual facial information by businesses for commercial purposes. For instance, in November 2021, the Market Supervision Administration of Xuhui District in Shanghai imposed a fine of 100,000 RMB on Shanghai Xiaopeng Automobile Sales and Service Co., Ltd. for unlawfully collecting facial information.

3. Conclusion

From the recent controversies, legislative developments, and judicial advancements in the past four years, it is evident that facial information is now widely acknowledged as a part of an individual’s biometric data and privacy. It has also gained legal protection. The misuse of facial recognition technology for commercial purposes in Chinese society has been notably curtailed.

References

1. Wu Huikang (2021), “Exploring Compliance Risks of Facial Recognition.” https://www.dehenglaw.com/CN/tansuocontent/0008/023125/7.aspx

2. Supreme People’s Court of the People’s Republic of China (2021), “Provisions of the Supreme People’s Court on Several Issues Concerning the Application of Laws in Civil Cases Involving the Use of Facial Recognition Technology for Processing Personal Information.”

3. Standing Committee of the Shenzhen Municipal People’s Congress (2021), “Shenzhen Special Economic Zone Data Regulations.”

4. Standing Committee of the National People’s Congress (2021), “Personal Information Protection Law.”

5. Standing Committee of the Hangzhou Municipal People’s Congress (2021), “Hangzhou Property Management Regulations.”

6. Standing Committee of the Shanghai Municipal People’s Congress (2021), “Shanghai Data Regulations.”

7. State Internet Information Office (2023), “Regulations on the Secure Application of Facial Recognition Technology (Draft for Solicitation of Comments).”

8. Market Supervision Administration of Xuhui District, Shanghai (2021), “Administrative Penalty Decision of the Market Supervision Administration of Xuhui District, Shanghai” (Shanghai Market Supervision Xuhui Penalty [2021] No. 042021000759).


Weapons of mass destruction – why Uncle Sam wants you.

14 October 2023


The Second World War was the cradle of national and geopolitical informational wars, with both sides firing rapid rounds of propaganda at each other. Because there was no connectivity (no internet), simple pamphlets had the power to plant theories in entire civilizations. In today’s digital age, where everything and everyone is connected, the influence of artificial intelligence on political propaganda cannot be underestimated. This raises concern because, unlike in the Second World War, the informational wars fought today extend to national politics in almost every first-world country.

Let us take a look at the world’s most popular political battlefield: the US elections. In 2016, a handful of tweets containing false claims led to a shooting in a pizza shop (NOS, 2016). These tweets had no research backing the information they transmitted, but fired at the right audience they had significant power. Individuals have immediate access to (mis)information, and this is a major opportunity for political powers wanting to gain support by polarising their battlefield.

Probably nothing I have said up to this point is new to you, so shouldn’t you just stop reading this blog and switch to social media to give your dopamine levels a boost? If you did, misinformation would come your way six times faster than truthful information, and you would contribute to this lovely statistic (Langin, 2018). This is exactly the essence of the matter, as it is estimated that by 2026, 90% of social media content will be AI-generated (Facing reality?, 2022). Combine the presence of AI in social media with the power of fake news, bundle these into propaganda, add a grim conflict like the ones taking place in Eastern Europe or the Middle East right now, and you have got yourself the modern-day weapon of mass destruction. Congratulations! But of course, you have no business in all this, so why bother to interfere? Well, there is a big chance that you will share misinformation yourself when transmitting information online (Fake news shared on social media U.S. | Statista, 2023). Whether you want it or not, Uncle Sam already has you, and you will be part of the problem.

Artificial intelligence is about to play a significant role in geopolitics, and in times of war its power is even greater. Luckily, the full potential of these powers has not been reached yet, but it is inevitable that this will happen soon. Therefore, it is essential that we open the discussion, not about preventing the use of artificial intelligence in creating conflict and polarising civilisations, but about using artificial intelligence to repair the damage it does: to counter the false information it can generate, to solve the conflicts it helps create, and to unite the groups of people it initially divides. What is the best way for us to be not part of the problem but part of the solution?

References

Europol. (2022). Facing reality? Law Enforcement and the Challenge of Deepfakes: An Observatory Report from the Europol Innovation Lab.

Fake news shared on social media U.S. | Statista. (2023, March 21). Statista. https://www.statista.com/statistics/657111/fake-news-sharing-online/

Langin, K. (2018). Fake news spreads faster than true news on Twitter—thanks to people, not bots. Science. https://doi.org/10.1126/science.aat5350

NOS. (2016, December 5). Nepnieuws leidt tot schietpartij in restaurant VS [Fake news leads to shooting in US restaurant]. NOS. https://nos.nl/artikel/2146586-nepnieuws-leidt-tot-schietpartij-in-restaurant-vs


Controversy, Protection, Legislation, and Implementation Regarding the Use of Facial Recognition Technology in Public Places (Part One) – A Case Study of China

10 September 2023

1. Background

Facial recognition technology has been developed and used for many years. However, with the increasing maturity and widespread use of generative AI technologies in recent years, facial recognition technology is increasingly incorporating generative AI techniques (AI and the LinkedIn community, 2023). Therefore, facial recognition is one of the application areas of generative AI.

But it is precisely this maturation and widespread use of facial recognition technology, especially as generative AI further enhances its accuracy, that make people increasingly concerned about the infringements of personal privacy and the challenges posed by the technology.

This blog and a subsequent blog review the recent controversies, protection measures, legislation, and legal implementations concerning facial recognition technology in China. It explores the challenges to personal privacy posed by generative AI and facial recognition technology, as well as how governments and society should regulate facial recognition technology.

2. Content
In China, with the development and maturation of facial recognition technology, an increasing number of institutions and individuals are using it in public settings for various purposes. However, as the technology's adoption has spread, public concern about its invasion of personal privacy has grown alongside it. In particular, the final judgment in the case in which Guo Bing sued Hangzhou Wild Animal World Co., Ltd. over a facial recognition dispute brought these controversies to a head.

In April 2021, the final judgment of the Hangzhou Intermediate People’s Court in Zhejiang Province ((2020) Zhe 01 Min Zhong 10940) held that: “Regarding the facial recognition information collected by Hangzhou Wild Animal World from Guo Bing and his wife, Hangzhou Wild Animal World argued that it was for the preparation of future entry into the park using facial recognition. The first-instance court considered that the service contract signed by the contracting parties when applying for the annual card was for entry into the park using fingerprint recognition. Hangzhou Wild Animal World’s collection of Guo Bing and his wife’s facial recognition information exceeded the requirements of necessity and lacked legitimacy. Although Hangzhou Wild Animal World specified in the ‘annual card application process’ related to fingerprint recognition that the process included ‘taking photos at the annual card center,’ it did not inform Guo Bing and his wife that taking photos would constitute the collection of their facial recognition information and its purpose. Guo Bing and his wife’s agreement to take photos should not be regarded as consent for Hangzhou Wild Animal World to collect their facial recognition information through photography. Therefore, Guo Bing’s request for Hangzhou Wild Animal World to delete his personal facial recognition information is reasonable and should be supported.”

At that time, although China had not yet enacted explicit laws and regulations regarding whether facial feature information was considered personal privacy, the Hangzhou Intermediate People’s Court in Zhejiang Province determined from the perspective of contract law and necessity that Hangzhou Wild Animal World Co., Ltd. lacked contractual support for collecting Guo Bing and his wife’s facial recognition information and that such collection was not necessary. This ruling marked the first time that facial features were recognized as part of an individual’s privacy and protected under Chinese law.

In July 2021, at the suggestion of Guo Bing himself and under the influence of the Guo Bing case, Hangzhou City in Zhejiang Province, China, passed a revised “Hangzhou Property Management Regulations.” Article 50 of the regulation stipulates: “Property service personnel shall not compel owners or non-owners to enter the property management area or use common areas through the provision of biometric information such as facial recognition or fingerprints, shall not disclose the personal information of owners or non-owners obtained during property services, shall not compel owners or non-owners to purchase goods or services provided or designated by them, and shall not infringe upon the personal and property rights of owners or non-owners.” This regulation marks the first time that, from a legal perspective, facial feature information is explicitly recognized as a part of an individual’s biometric information.

In my second blog post, I will provide a detailed explanation of the subsequent legislative developments in China regarding the use of facial recognition technology in public places, as well as the implementation of measures to protect facial feature information.

Sources:
1. Hangzhou Intermediate People’s Court in Zhejiang Province, “Judgment of the Second Instance in the Civil Dispute between Guo Bing and Hangzhou Wild Animal World Co., Ltd.” (2020) Zhe 01 Min Zhong 10940((2020) 浙01民终10940号). Available at: China Judgments Online.

2. Hangzhou Municipal Rental Housing Security and Real Estate Management Bureau, “Hangzhou Property Management Regulations”. Available at: http://fgj.hangzhou.gov.cn/art/2021/9/13/art_1229265384_1797924.html

3. People’s Daily Online, “People’s Daily Commentary: Extraordinary Significance of the Final Ruling on the ‘First Facial Recognition Case'”. Available at: http://opinion.people.com.cn/n1/2021/0410/c223228-32074599.html.

4. AI and the LinkedIn community (2023). How can you use generative AI to improve facial recognition accuracy? LinkedIn. https://www.linkedin.com/advice/3/how-can-you-use-generative-ai-improve-facial


Digital License Plates: New Standard or Privacy Threat?

15

October

2022

A new technology is hitting the streets of California that lets motorists use high-tech digital license plates on their vehicles instead of the traditional metal variety. After passing in Arizona and Michigan, the Motor Vehicle Digital Number Plates Bill has now also passed in California. The bill allows vehicle owners to use digital license plates, but only plates made by the Sacramento-based company Reviver.
The idea of digital license plates is not new: the 1974 article “An Electronic License Plate for Motor Vehicles” by Fred Sterzler introduced it as a cheap way to increase safety on United States highways.
These digital license plates resemble a tablet connected directly to the car’s computer systems. Not only can the device display the license plate number on its screen, it can also emit a radio signal that can be used to track the car and for other digital monitoring purposes.
One of the main advantages for the vehicle owner is not having to go through traditional channels such as the Department of Motor Vehicles to apply for or renew vehicle registration. The system can also display additional messages alongside the license plate number, for example whether the owner has reported the vehicle stolen or whether the vehicle has no active insurance.
As with most digital innovations, privacy is being exchanged for functionality. Because the device is connected to the internet, it is susceptible to malicious attacks. If the system is compromised, potential risks include identity fraud, data theft, and an unwanted party gaining control over the ignition disruptor included in the system. Additional concerns have been raised about police access to the system’s information, including the speedometer, which could result in unwanted constant surveillance.
Above all, it amounts to paying $20 a month plus a one-time $99 installation fee to replace a virtually indestructible metal license plate with a glorified tablet. If for whatever reason the tablet stops working, your car instantly becomes illegal to drive.
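A quick back-of-the-envelope calculation makes the price difference concrete. The fee figures come from the text above; the multi-year comparison horizon is an illustrative assumption, not an official pricing schedule:

```python
# Rough cost of running a Reviver digital plate, using the fees quoted above.
# The comparison horizons are illustrative assumptions.

MONTHLY_FEE = 20   # USD per month, as quoted in the post
INSTALL_FEE = 99   # USD one-time installation fee

def total_cost(years: int) -> int:
    """Total cost in USD of running a digital plate for the given number of years."""
    return INSTALL_FEE + MONTHLY_FEE * 12 * years

print(total_cost(1))  # first year: 99 + 20 * 12 = 339
print(total_cost(5))  # five years: 99 + 20 * 60 = 1299
```

A traditional metal plate, by contrast, is a small one-time registration cost, so the gap only widens the longer the subscription runs.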
It will be interesting to see if this new technology will gain popularity among vehicle owners. Without proper legislation it seems like owners of digital license plates are gaining very little in functionality whilst giving up a lot of their privacy and opening themselves up to digital attacks.

Bibliography
Sterzler, F. (1974). An Electronic License Plate for Motor Vehicles. RCA Review. http://www.rsp-italy.it/Electronics/Magazines/RCA%20Review/_contents/RCA%20Review%201974-06.pdf#page=5
