Law & Order & AI – How California's Bill SB 1047 will impact AI development in the USA

27 September 2024


The USA is often praised for its openness to innovation, while the EU is seen as lagging behind. But there is one aspect in which the USA is now following the EU: AI regulation. In this blog post I will discuss the California Bill “SB 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”, which currently awaits the signature of the Governor of California (California Legislative Information, 2024).

With the Artificial Intelligence Act (AI Act), the EU has created one of the most far-reaching efforts in the world to regulate AI, even though its provisions are not yet fully in force. As we discussed in class, the AI Act focuses on aspects such as a risk-based framework, accountability and transparency, governance, and human rights (European Parliament, 2023).

How does SB 1047 compare? First off, it is important to note that the Bill would only become law in California. Nonetheless, in practice this would amount to near-nationwide application, since most of the affected companies are based in Silicon Valley, California.

SB 1047 focuses on a few different aspects; I have highlighted the ones I consider most far-reaching:

  1. Developers must implement controls to prevent the model from causing “critical harm”
  2. Developers must provide a written and separate safety and security protocol
  3. Developers must include a “kill switch” through which a full shutdown can be enacted
  4. Developers will have to have their models tested, assessed, and regularly audited (Gibson Dunn, 2024).

Like the AI Act, SB 1047 would target high-risk, high-impact AI models, with an emphasis on the safety and security of the people affected by AI.

But why should you care? Will this even affect everyday people? Isn’t this just stifling innovation and risking a loss of competitive advantage?
Before you jump to the comments, let me first highlight one of the bill’s supporters: Elon Musk. On his platform X, Musk has posted his support for the bill, stating that AI should be regulated like “any product/technology that is a potential risk to the public” (Tan, 2024). I don’t often align with Musk’s views, but I really agree with this stance on regulation!

Screenshot of Musk's post on X supporting the SB 1047 bill.

Why should we let AI and its development go completely unchecked while still using it for vital parts of our daily lives? Why should we not want to know how AI works under the hood? Time and time again, history has taught us that leaving big systems unchecked, because they were deemed “too complex” or because we trusted the people running them to act in the best interest of the public, does not always lead to the desired outcomes.
From job applications to health, safety, and privacy, we already use AI in most aspects of life. I, for one, do not want these parts of my life to be guided by the ethics (or perhaps the lack thereof) of individuals. I want clear legislation and a framework in place to guide the future development of AI. Because even though most people might not clearly see how their life is (beneficially) impacted by AI today, I don’t want anyone to ever experience how AI might detrimentally impact their life.


Resources used:

California Legislative Information. (2024, September 3). Senate Bill No. 1047: Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. California Legislature. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

European Parliament. (2023, June 1). EU AI Act: First regulation on artificial intelligence. European Parliament News. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

Gibson Dunn. (2024, September 24). Regulating the Future: Eight Key Takeaways from California’s SB 1047, Pending with Governor Newsom. Gibson Dunn. https://www.gibsondunn.com/regulating-the-future-eight-key-takeaways-from-california-sb-1047-pending-with-governor-newsom/

Musk, E. [@elonmusk]. (2024, September 15). AI should be regulated like any product/technology that is a potential risk to the public [Post]. X. https://x.com/elonmusk/status/1828205685386936567?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E1828205685386936567%7Ctwgr%5Eb0d709a708c02735de6f79bae39d6c06261b27d9%7Ctwcon%5Es1_&ref_url=https%3A%2F%2Fwww.businessinsider.nl%2Felon-musk-says-hes-backing-californias-controversial-ai-bill%2F

Tan, K. W. K. (2024, August 27). Elon Musk says he’s backing California’s controversial AI bill. Business Insider Nederland. https://www.businessinsider.nl/elon-musk-says-hes-backing-californias-controversial-ai-bill/

The featured image was generated with ChatGPT.


Societal polarization due to Social Media in the USA – Who should take responsibility?

9 October 2020

Social media companies like Google and Facebook bear ever more responsibility in our society as they grow in size and influence. Even though their platforms and services were not designed specifically to manipulate or steer public opinion, they are increasingly confronted with the reality that they do. From seemingly minor issues, such as political campaign emails being marked as “spam” in a prospective voter’s Gmail account (Newton, 2020), to concerns that algorithms on Facebook or YouTube, with the help of content moderators, are unfairly removing conservative content (Romm, 2020), these big tech companies are already under scrutiny in the United States from both major political parties.

The irony of this criticism lies in the fact that these companies were left largely unregulated by the same government criticizing them today. It is due to this lack of regulation that big tech companies focused the development of their algorithms on the narrow goal of maximizing users’ attention, since this allows them to earn more money from advertising by showing users more ads. Combining this business incentive with the goal of amplifying network externalities explains how companies like Google and Facebook got into this situation. This unfettered pursuit of user attention has fuelled the proliferation of social media in society and has enabled a level of polarization that is unprecedented in the history of the USA (DellaPosta, 2020).

In an effort to reduce the pressure governments put on them, Google and Facebook have developed more comprehensive content moderation policies, working with policy makers and independent organizations. Facebook alone has committed to hiring 15,000 content moderators to enforce them (Thomas, 2020). Effectively, this has transformed both media giants into an independent online police force, with policies as its laws and content moderators as its officers. Even though these policies were developed with key stakeholders in government, this raises questions about how society does, and should, function as governmental responsibilities become increasingly intertwined with big tech firms’ operations. Although governments are responsible for enforcing rules around freedom of speech, in practice this is done more and more by tech companies. From a radical point of view these practices are undemocratic, as big tech companies operate without oversight by elected officials; nevertheless, it can be argued that such measures are necessary in the short term to allow policy makers to catch up and regulate the industry.

As social media platforms increasingly become the medium through which democratic societies express their opinions, they effectively become tools that can steer opinion. Because of this reality, I believe that governments should play a larger role in regulating these companies by creating rules with penalties, as well as incentives, to reduce the polarization social media fuels. One possible way to do this is to create clear rules around content and advertising, similar to those that already apply to newspapers and network providers. However, these rules would also need to be enforced with financial penalties, such as social media companies having to pay back the money they received for inappropriate content or advertising. The question ultimately arises: how long can the US government, and other governments around the world, allow social media companies to continue to self-regulate? The clock is ticking, and it will likely not run much longer after the 2020 US election.

References:

DellaPosta, D. (2020) ‘Pluralistic Collapse: The “Oil Spill” Model of Mass Opinion Polarization’, American Sociological Review, 85(3), pp. 507–536. doi: 10.1177/0003122420922989.

Newton, C. (2020). ‘The tech antitrust hearing was good, actually’, The Verge, 30 July. Available at: https://www.theverge.com/interface/2020/7/30/21346575/tech-antitrust-hearing-recap-bezos-zuckerberg-cook-pichai (Accessed: 9 October 2020).

Romm, T. (2020). ‘Amazon, Apple, Facebook and Google grilled on Capitol Hill over their market power’, The Washington Post, 30 July. Available at: https://www.washingtonpost.com/gdpr-consent/?next_url=https%3a%2f%2fwww.washingtonpost.com%2ftechnology%2f2020%2f07%2f29%2fapple-google-facebook-amazon-congress-hearing%2f (Accessed: 9 October 2020).

Thomas, Z. (2020). ‘Facebook content moderators paid to work from home’, BBC, 18 March. Available at: https://www.theverge.com/interface/2020/7/30/21346575/tech-antitrust-hearing-recap-bezos-zuckerberg-cook-pichai (Accessed: 9 October 2020).


The Globalisation of Data

16 October 2017

After reading the news article ‘Supreme Court will hear U.S.-Microsoft battle over emails’ on the website of USA Today, I started wondering whether data should be location-bound or not (USA Today, 2017).

Using this ‘battle’ as an example, I would like to discuss the topic of the globalization of data and share my thoughts on it in this blog. But first, let me give a brief summary of the U.S. vs. Microsoft case: the FBI requested access to emails stored by Microsoft for an investigation. However, Microsoft claims that the FBI does not have the authority to request them, since the emails are stored in a database in Ireland and are therefore not on American soil. Different judges have ruled in favour of both Microsoft and the U.S. government, so now the Supreme Court is going to take on the case and make a final ruling.

There are three different viewpoints regarding this case:
(1) People who believe the Supreme Court should rule in favour of the U.S. government.
(2) People who believe the Supreme Court should rule in favour of Microsoft.
(3) People who believe the Supreme Court should not have taken on this case at all.

So, why do these groups believe they are right and the others are not?

The first group says that it is the government’s responsibility to investigate and prosecute crimes and to fend off terrorism and other threats to national security. This group believes that, without being able to get hold of those emails, the government won’t be able to do this, and that this is therefore a threat to national security and public safety. They claim that it would make it easy for terrorists to evade the US government, because they would simply need to make sure their messages are stored in databases that are not located in the US (Reuters, 2017).

The second group claims that if the US government can obtain information about non-US citizens that is stored outside the US, other countries can do the same to US citizens. They also believe that this would interfere with the privacy of citizens and that countries should respect each other’s sovereignty (Bloomberg, 2017).

The third group believes that courts cannot come up with a suitable ruling for this case based on current legislation. They claim that Congress should first pass new legislation that is better suited to the time we live in, since the current legislation on this topic still dates from the era of the floppy disk (USA Today, 2017).

Personally, I think that data should be bound to the country where it is stored, for privacy reasons, and that governments should work together to ward off terrorism. This means that such information only gets shared when both countries agree that it is relevant to the investigation, instead of the US government being able to retrieve all information stored by American companies. This matters especially because these American companies, like Google and Microsoft, are active in so many countries and hold so much information about the citizens of those countries. What about you? What are your thoughts on this topic?

Hurley, L. (2017, October 16). U.S. Supreme Court to decide major Microsoft email privacy fight. Retrieved October 16, 2017, from reuters.com: https://www.reuters.com/article/us-usa-court-microsoft/u-s-supreme-court-to-decide-major-microsoft-email-privacy-fight-idUSKBN1CL20U

Stohr, G. (2017, October 16). Microsoft Email-Access Fight with U.S. Gets Top Court Review. Retrieved October 16, 2017, from Bloomberg.com: https://www.bloomberg.com/news/articles/2017-10-16/microsoft-email-access-fight-with-u-s-gets-supreme-court-review

Wolf, R. (2017, October 16). Supreme Court will hear U.S.-Microsoft battle over emails. Retrieved October 16, 2017, from usatoday.com: https://www.usatoday.com/story/news/politics/2017/10/16/supreme-court-hear-u-s-microsoft-battle-over-emails/761346001/

