CISOs to grapple with a thicket of emerging regulations after Newsom vetoes California’s AI bill

News Analysis
30 Sep 2024, 9 mins
CSO and CISO, Government, IT Governance

CISOs must now cope with a welter of emerging EU and disparate US state laws after Governor Gavin Newsom rejected California's stringent AI safety and security bill, which many thought would set a global regulatory high-water mark.

Following a tense period of uncertainty, California Governor Gavin Newsom has vetoed a landmark bill, SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. Passed by the state’s legislature on August 28, 2024, it was considered the world’s most stringent set of regulations yet proposed for governing AI.

Despite recently signing 17 other bills covering the deployment and regulation of genAI technology, including AB 2655 and AB 2839, two controversial pieces of legislation that limit the use of election-related AI and deepfakes, Newsom thought that SB 1047 was a bridge too far because it fell short of “providing a flexible, comprehensive solution to curbing the potential catastrophic risks.”

“This bill would require developers of large artificial intelligence (AI) models, and those providing the computing power to train such models, to put certain safeguards and policies in place to prevent catastrophic harm,” Newsom said.

“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” he argued.

Newsom says the bill would potentially stifle innovation

“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”

He added that California, which is home to 32 of the world’s 50 leading AI companies, will not abandon its responsibility as a steward of this new technology but stressed that any regulation adopted by the state must be “informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.”

Newsom’s dramatic, last-minute rejection of the bill scuttles what its proponents envisioned as a comprehensive, globally leading regulatory framework. The extensive safety and security guardrails it placed around foundation models, they believed, would have served as a high-bar template for organizations grappling with the complex creation and use of new genAI technologies.

CISOs need to roll up their sleeves to tackle compliance

SB 1047 was a broadly popular bill. It received support from the majority of Californians and garnered endorsements from leading academics, legal experts, and dozens of Hollywood heavyweights. Although Google, Meta, and OpenAI opposed the bill, some leading AI players, including Anthropic, tentatively endorsed it.

With the bill’s veto, AI governance now falls to the EU AI Act, a less expansive AI regime enacted this summer, plus a patchwork of current and proposed US state-level AI regulatory frameworks of varying scope and intensity, an executive order issued by the Biden administration, and the newly created AI Safety Institute at the National Institute of Standards and Technology (NIST). Most experts say that, given a relentlessly divided US Congress, the prospect of a national AI safety and security law is highly doubtful.

The bottom line for CISOs, then, is to “roll up your sleeves” because complying with many of the forthcoming, disparate, and often contradictory requirements will fall on them, Bobby Malhotra, a member of Winston & Strawn’s artificial intelligence (AI) strategy group, tells CSO. “Keep your finger on the pulse. Things are changing dynamically, both in terms of technology and the underlying regulations.”

Was SB 1047 overkill?

As Governor Newsom indicated, SB 1047 was designed to impose its most expansive requirements on the biggest AI players. The bill broadly applied to “covered models,” meaning models that cost over $100 million to develop and are trained using computing power “greater than 10^26 integer or floating-point operations” (FLOPs), as well as models that are fine-tuned from covered models at a cost of over $10 million using computing power of three times 10^25 integer or floating-point operations.
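For illustration only, here is a minimal sketch of how a team might have encoded those cost and compute thresholds as an internal pre-check; the function, its inputs, and the strict-versus-inclusive comparisons are assumptions for this example, not language from the bill.

```python
# Illustrative pre-check against SB 1047's "covered model" thresholds as
# described above. Names, inputs, and comparison operators are assumptions
# for this sketch, not definitions taken from the bill text.

TRAIN_FLOP_THRESHOLD = 1e26          # training compute threshold (FLOPs)
TRAIN_COST_THRESHOLD = 100_000_000   # training cost threshold (USD)
TUNE_FLOP_THRESHOLD = 3e25           # fine-tuning compute threshold (FLOPs)
TUNE_COST_THRESHOLD = 10_000_000     # fine-tuning cost threshold (USD)

def is_covered_model(train_flops: float,
                     train_cost_usd: float,
                     finetuned_from_covered: bool = False,
                     tune_flops: float = 0.0,
                     tune_cost_usd: float = 0.0) -> bool:
    """Rough check of whether a model would have crossed SB 1047's thresholds."""
    original_covered = (train_flops > TRAIN_FLOP_THRESHOLD
                        and train_cost_usd > TRAIN_COST_THRESHOLD)
    derivative_covered = (finetuned_from_covered
                          and tune_flops >= TUNE_FLOP_THRESHOLD
                          and tune_cost_usd > TUNE_COST_THRESHOLD)
    return original_covered or derivative_covered

# Example: a 2e26-FLOP training run costing $150 million crosses both thresholds.
print(is_covered_model(train_flops=2e26, train_cost_usd=150_000_000))  # True
```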

The bill would also have required developers to implement technical and organizational controls designed to prevent covered models from causing “critical harms,” defined as:

  • creating or using certain weapons of mass destruction to cause mass casualties;
  • causing mass casualties or at least $500 million in damages by conducting cyberattacks on critical infrastructure, or by acting with only limited human oversight in a way that causes death, bodily injury, or property damage that would be a crime if committed by a human;
  • other comparable harms.

It also required developers to build in a kill switch, or “shutdown capabilities,” for use in the event of disruptions to critical infrastructure. The bill further stipulated that developers of covered models implement extensive cybersecurity and safety protocols subject to rigorous testing, assessment, reporting, and audit obligations.

Some AI experts say these and other bill provisions were overkill. David Brauchler, head of AI and machine learning for North America at NCC Group, tells CSO the bill was “addressing a risk that’s been brought up by a culture of alarmism, where people are afraid that these models are going to go haywire and begin acting out in ways that they weren’t designed to behave. In the space where we’re hands-on with these systems, we haven’t observed that that’s anywhere near an immediate or a near-term risk for systems.”

Critical harms burdens were possibly too heavy for even big players

Moreover, the critical harms burdens of the bill might have been too heavy for even the most prominent players to bear. “The critical harm definition is so broad that developers will be required to make assurances and make guarantees that span a huge number of potential risk areas and make guarantees that are very difficult to do if you’re releasing that model publicly and openly,” Benjamin Brooks, Fellow at the Berkman Klein Center for Internet & Society at Harvard University, and the former head of public policy for Stability AI, tells CSO.

California State Senator Scott Wiener, the bill’s sponsor, lamented the lost opportunity to impose meaningful restraints on AI. “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing,” he said after Newsom’s veto.

“While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public,” Wiener said. “This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way.”

Although Newsom contends that smaller and potentially equally risky AI models would have been free of SB 1047’s obligations, some AI experts say that any substantial genAI player would likely have crossed the law’s thresholds quite soon. “It is fair to say that SB 1047 covers the largest AI models that cost over $100 million to train and develop,” Brooks says.

“However, those thresholds aren’t particularly durable. I think we’ll have many models that will cross that threshold in the near future,” Brooks says. “Again, a hundred million dollars might sound like a lot to you and me, but in the context of big AI investments, that is not a high bar. There are a number of early-stage companies that are making investments within that order of magnitude.”

Where does AI regulation go next?

Newsom plans to work on a new AI bill during the California legislature’s next session. He hopes to work with “the leading experts on genAI to help California develop workable guardrails for deploying genAI, focusing on developing an empirical, science-based trajectory analysis of frontier models and their capabilities and attendant risks.”

Newsom also plans to work with “academia to convene labor stakeholders and the private sector to explore approaches to use genAI technology in the workplace.” Moreover, following his veto, he signed a bill that requires California’s Office of Emergency Services “to expand their work assessing the potential threats posed by the use of genAI to California’s critical infrastructure, including those that could lead to mass casualty events.”

Davis Hake, senior director of cybersecurity services at Venable, tells CSO that “[AI safety and security efforts] weren’t going to go away or be settled with SB 1047, even if it set some type of high watermark. If you sell to Europeans or interact with their systems, you need to start thinking about their potential obligations under the EU AI Act because the Europeans have moved first.”

Unlike most experts, Hake is hopeful that federal lawmakers or policymakers will find a more comprehensive solution that takes precedence over all the emerging AI regulations, at least in the US. “Right now, we have this realm where California is in the discussion as are the Europeans, but shouldn’t it be the US negotiating with Europe, not just California negotiating with Europe?” he asks.

“In terms of policymaking issues like trust and safety, requirements for risk assessment are probably better left to a federal level because the Department of Commerce and Department of State are used to doing these types of negotiations.”