The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect where the USA is now following the EU: AI regulation. In this blog post I will discuss the California bill "SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," which currently awaits ratification by the Governor of California (California Legislative Information, 2024).
With the Artificial Intelligence Act (AI Act), the EU has created one of the most far-reaching efforts in the world to regulate AI, though it is not yet fully in force. As we discussed in class, the AI Act focuses on several aspects: a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).
How does SB 1047 compare? First off, it is important to note that the bill would only become law in California. Nonetheless, this would amount to more or less nationwide application, since most affected companies are based in Silicon Valley, California.
SB 1047 focuses on a few different aspects; I have highlighted the ones I consider most far-reaching:
- Developers must implement controls to prevent the model from causing “critical harm”
- Developers must provide a written and separate safety and security protocol
- Developers must include a “kill switch” through which a full shutdown can be enacted
- Developers must have their models tested, assessed, and regularly audited (Gibson Dunn, 2024)
Like the AI Act, SB 1047 targets high-risk, high-impact AI models and emphasizes the safety and security of the people affected by AI.
But why should you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking a loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted about his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!
Why should we let AI and its development go completely unchecked while still using it for vital parts of our daily lives? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving large systems unchecked, whether because they were deemed “too complex” or because we trusted those running them to act in the public interest, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or perhaps lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their lives are (beneficially) impacted by AI today, I don’t want anyone to ever experience how AI might detrimentally impact their life.
Resources used:
California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Gibson Dunn. (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/
Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Post]. X. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F
Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/
The image set as the featured image was generated by ChatGPT.
Thank you for your post!
I agree with your stance that we need strong AI regulation to prevent harm and protect the public from potential risks. I am happy that both the EU and the US have taken steps in this direction, and I hope this is just the beginning, with more countries taking action worldwide.
Unfortunately, passing a law in only one state is not enough. California-based companies could simply create a subsidiary or an independent entity in another state and easily bypass the Californian state law. I believe there should be a larger movement to raise awareness and move this law to the federal level, so it governs the entire US. We shall see how this develops in the future.
Great remarks Emma, I couldn’t agree more with your conclusions.
Too many times have we seen a great invention turned to the dark side. A great example is dynamite, thanks to which we now have the Nobel prize.
In my opinion, it’s surprising that the EU, and now the US, have started to tackle a problem before it is too late. That’s not something we see in politics often enough. I have to agree with Codrin’s comment: it would be great if the US passed a similar bill at the federal level. An interesting case in point comes from Elon Musk, who not long ago moved Tesla’s state of registration because of a state law he didn’t like.
Finally, I am no expert on law or AI, so it’s difficult for me to judge this one. I just hope we can find a balance between safety and progress.