Law & Order & AI – How California's Bill SB 1047 will impact AI development in the USA

27 September 2024


The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one area where the USA is now following the EU: AI regulation. In this blog post I will discuss the Californian bill “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, which currently awaits ratification by the Governor of California (California Legislative Information, 2024).

With the Artificial Intelligence Act (AI Act), the EU has created one of the most far-reaching efforts in the world to regulate AI, even though it is not yet fully in force. As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the bill would become law only in California. Nonetheless, this amounts to near-nationwide application, since most of the affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I consider most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers will have to have their models tested, assessed, and regularly audited. (Gibson Dunn, 2024)

Like the AI Act, SB 1047 would target high-risk, high-impact AI models, centering on the safety and security of the people affected by AI.

But why would you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!

Screenshot of Musk’s tweet supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked while still using it for vital parts of our daily lives? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving big systems unchecked because they were deemed “too complex”, or because we trusted the people running them to act in the best interest of the public, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their lives are (beneficially) impacted by AI today, I don’t want anyone to ever experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn. (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Tweet]. Twitter. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F

Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The featured image was generated by ChatGPT.


Is it me who is writing, or is it the AI?

18 October 2023


Everyone is eager to improve, and this is why many of us are turning to AI recommendations for modifying sentence structure, grammar, and vocabulary. Could it be that instead of improving our personal phrasing and fluency in writing and thinking, using AI text-editing software neutralizes our characteristic way of wording ideas? If everyone uses this software, could it be that, eventually, all digital text will sound and look the same?

I started using software such as Grammarly, yet only recently did I start wondering about the implications of using AI software to improve text documents. How much of the text still belongs to me after using Grammarly to rephrase my paragraph a number of times? Can I even still say I wrote it? Is it my text, or the AI’s?

A person’s writing is, in theory, unique, which is why certain traits of a person can be distilled from their wording and style (1). This field is called stylometry, and with it, it is possible to link different documents to the same person, or to a demographic such as an age group or sex (2). Deep learning has allowed stylometry models to become even more sophisticated, so much so that digital forensics can now analyse harassment emails to determine whether they were written by the same author (3)(4)(5).
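To make the idea concrete, here is a minimal, hypothetical sketch of stylometric comparison: each text is represented by its character-trigram frequencies and texts are compared with cosine similarity. Real forensic systems use far richer features and trained models; the sample sentences below are invented purely for illustration.

```python
# Toy stylometry sketch: character trigrams + cosine similarity.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams, a classic stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two snippets written in the same (invented) style, and one in another:
author_1a = "I reckon, quite frankly, that the matter deserves our attention."
author_1b = "Quite frankly, I reckon this matter deserves rather more attention."
author_2 = "lol no way thats so cool omg cant wait to see it!!"

same = cosine_similarity(trigram_profile(author_1a), trigram_profile(author_1b))
diff = cosine_similarity(trigram_profile(author_1a), trigram_profile(author_2))
# The matching style scores a noticeably higher similarity than the mismatch.
```

Of course, this is exactly the kind of surface signal an AI rewrite would wash away: once Grammarly or ChatGPT rephrases both texts, their trigram profiles drift towards the model’s style rather than the author’s.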

However, what if that author uses a generative AI model such as ChatGPT to write the message? What conclusion would digital forensics come to: could they still bypass the AI shield and connect the text to the human author? What does it mean when we hide behind the AI?

I am certain that, despite the increased use of AI in digital writing, we will be able to advance other technologies to bypass this ‘AI shield’. However, the increased use of AI in digital writing might also mean the loss of many writing styles, including distinctly human ones.

The issue we now face is a lack of motivation to write long texts and articles. Even now, many people use AI to automate writing their emails, assignments, and even private messages.

What happens to academia when AI (think of the AI-powered Bing search) has full access to the internet and is able to source and cite properly? Imagine prompting your research question and receiving a master’s thesis within minutes, written by an AI. The way I see it, this is the direction we are headed, from simply using Grammarly to advancing with ChatGPT and beyond.

Sure, AI is a great tool for automating tasks, especially administrative ones, but when it comes to expressing our ideas we should exercise our human potential, despite the ease of using AI. Otherwise, we run the risk of losing our edge.

1: https://ieeexplore.ieee.org/abstract/document/6234430

2: https://www.researchgate.net/publication/221298006_Stylometric_Analysis_of_Bloggers%27_Age_and_Gender

3: Tweedie, F. J., Singh, S., & Holmes, D. I. (1996). Neural Network Applications in Stylometry: The “Federalist Papers.” Computers and the Humanities, 30(1), 1–10. http://www.jstor.org/stable/30204514

4: https://www.researchgate.net/publication/345780982_Machine_Learning_Methods_for_Stylometry_Authorship_Attribution_and_Author_Profiling

5: https://www.researchgate.net/publication/344408746_Deep_Combination_of_Stylometry_Features_in_Forensic_Authorship_Analysis


Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?

16 October 2023


As an occasional K-pop listener, I came across a new sensation in the K-pop music industry: a virtual K-pop girl group named Mave, backed by tech giant Kakao (Reuters, 2023). The group exists solely in the metaverse, blurring the lines between the virtual and the real. I was astonished to find out that in less than two months, their debut single, “Pandora”, had nearly reached 20 million views (Hoesan & Nuraeni, 2023).

Mave consists of four virtual members: Siu, Zena, Tyra, and Marty. Like typical K-pop groups, they produce music videos, interviews, and stage performances, created by web designers and artificial intelligence (AI) (Reuters, 2023; Hoesan & Nuraeni, 2023). Each member brings a distinctive style and expression to their performances, has a designated role within the group, and has a profile with details such as a birthday, zodiac sign, and even nationality. Another aspect that intrigued me is the ambiguity of their appearance: do they look more like humans or like virtual characters? What’s more, they can break language barriers by using an AI voice generator to speak Korean, English, French, and Bahasa Indonesia (Jeong, 2023).

However, let’s break down the consumerism aspect behind the K-pop industry to analyze this new phenomenon. K-pop is renowned for its parasocial relationships, where fans interact and communicate with their idols through various means, such as live streams, social media, and fan communities (Jeong, 2023; Hoesan & Nuraeni, 2023). In addition, fans’ close connection to artists motivates them to support their idols through music streaming, merchandise purchases, and attending concerts (Jeong, 2023; Hoesan & Nuraeni, 2023). A strong emotional connection and fan-artist interaction have been crucial in creating dedicated fan bases and driving the consumption of K-pop products and services (Introducing Korean Popular Culture, n.d.).

In my personal experience as a fan of some K-pop groups, I resonate with that strong emotional connection to the artists, mainly because of the human qualities a virtual K-pop group is missing: hard work, self-made music, talent, and personal and career development. Mave therefore faces challenges in authentic fan-artist interaction, such as engaging with fans directly. For some audiences, this could lead to disapproval and little intention to become fans, or even listeners, of the group, despite the music aligning with their preferences.

Despite the challenges these virtual K-pop groups face, Mave remains an innovative concept, bridging the gap between the virtual and the real and offering a new form of entertainment and engagement for fans in the K-pop domain. Yet my answer to the question “Could AI-driven K-pop groups potentially become a dominant force in the world of K-pop?” would, for now, be negative.

References:

Introducing Korean popular culture. (n.d.). Google Books. https://books.google.nl/books?hl=en&lr=&id=sRO8EAAAQBAJ&oi=fnd&pg=PA1957&dq=K-pop+label+companies+capitalize+on+this+fan+engagement,+turning+it+into+a+significant+revenue+source+through+official+merchandise,+subscriptions+on+communication+platforms+that+allow+direct+interaction+with+artists,+and+paid+fan+memberships+with+exclusive+benefits.+&ots=jBpjhoNHF4&sig=Bjn74lpn8r4sI1TEvBFQwvUIZhI&redir_esc=y#v=onepage&q&f=false

Hoesan, V., & Nuraeni, S. (2023). Factors Influencing Identification as a Fan and Consumerism towards The Virtual K-Pop Group MAVE: Journal of Consumer Studies and Applied Marketing, 1(2), 109–116. https://doi.org/10.58229/jcsam.v1i2.72

Jeong, M. (2023). What makes “aespa”, the first metaverse girl group in the K-pop universe, succeed in the global entertainment industry? https://www.econstor.eu/handle/10419/277980

Reuters. (2023, March 17). Meet Mave:, the AI-powered K-pop girl group that look almost human and speak four languages. South China Morning Post. https://www.scmp.com/lifestyle/entertainment/article/3213720/meet-mave-ai-powered-k-pop-girl-group-look-almost-human-and-speak-four-languages


ChatGPT taught me how to make Molotov cocktails! – A lesson in it’s not what you say, it’s HOW you say it.

30 September 2023


Disclaimer: I’ll start off by saying that I don’t plan to make a Molotov cocktail. My interest in how to frame prompts, however, is real. My curiosity was first sparked by the post below.

Interaction 1

Interaction 2

Here is a malicious example of prompting. But how can we use prompts to our advantage? What can be done to enhance ChatGPT’s performance so that we get the best output?

There are a few reusable solutions to typical LLM problems, known as prompt patterns (White et al., 2023).

  • Meta Language Creation: users make up new words or notation to express concepts or ideas, such as a mathematical symbol or a shorthand abbreviation. This approach works best for complex or abstract situations, such as math problems.
  • Flipped Interaction: this pattern flips the typical interaction flow; the LLM queries the user to gather the information it needs to address the task. For example, I can ask the LLM to interview me in order to compile a list of success criteria for a piece of software.
  • Persona: users give the LLM a particular role, which affects the nuance of the outcome and the results it produces. The Molotov cocktail example is an illustration of the persona pattern.
  • Question Refinement: the user asks the LLM to provide improved or more specific versions of a question, which helps users arrive at the appropriate final prompt.

More patterns can be found in the article by White et al. (2023).
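Purely as an illustration, these patterns can be sketched as small prompt-building helpers. The template wording below is my own invention, not taken from White et al. (2023); each helper just produces a string that could be sent to any LLM.

```python
# Illustrative prompt-pattern templates (wording is my own, for demonstration).

def persona(role: str, task: str) -> str:
    """Persona pattern: assign the LLM a role that shapes its answers."""
    return f"From now on, act as {role}. {task}"

def flipped_interaction(goal: str) -> str:
    """Flipped Interaction pattern: the LLM asks the questions."""
    return (f"I would like you to ask me questions, one at a time, "
            f"until you have enough information to {goal}.")

def question_refinement(question: str) -> str:
    """Question Refinement pattern: ask for a sharper version of the question."""
    return (f"Whenever I ask a question, first suggest a more precise version "
            f"of it. My question: {question}")

# The Flipped Interaction example from the list above:
prompt = flipped_interaction("compile a list of success criteria for our software")
```

The resulting `prompt` string would then be pasted into ChatGPT (or sent through an API call); the pattern lives entirely in how the request is phrased, not in any special tooling.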

When interacting with an LLM, prompt patterns are useful methods to enhance response quality, helping to produce more accurate and relevant responses. Prompting is an iterative process that necessitates constant refinement (Liu et al., 2022).

Prompts can also manipulate an LLM into producing malicious output despite the enforced policies. OpenAI has made efforts to prevent such policy violations, including continuous model improvement to make it less likely to generate inappropriate or harmful content, the implementation of moderation mechanisms to find and stop prompt misuse, and collaboration with experts in ethics, AI safety, and policy to gain perspectives on preventing misuse (Our Approach to AI Safety, n.d.). I firmly believe that in the near future, getting tutorials on making Molotov cocktails from ChatGPT will be history.

References

Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., & Neubig, G. (2022). Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ACM Computing Surveys, 55(9). https://doi.org/10.1145/3560815

Our approach to AI safety. (n.d.). Openai.com. https://openai.com/blog/our-approach-to-ai-safety#OpenAI

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., & Schmidt, D. C. (2023). A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT. ArXiv Preprint ArXiv:2302.11382. https://doi.org/10.48550/arxiv.2302.11382
