While Newsom’s veto of California Senate Bill 1047 may have put the contentious measure to rest, at least for now, it has left the tech industry divided over the best path forward.
Some tech advocacy groups quickly voiced their disappointment with the veto of S.B. 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — arguing California missed the chance to lead with first-of-its-kind regulations on some of the country’s largest AI developers.
“[S.B. 1047] was the first of its kind legislation that went and put real safeguards in place for some of the biggest and scariest unknown potential uses of AI — which, particularly given the rapid advancement of the technology, is really important for us to have those guardrails in place moving forward,” Kaili Lambe, the policy and advocacy director for Accountable Tech, told The Hill.
S.B. 1047 would have required powerful AI models to undergo safety testing before being released to the public. Testing would have examined whether these systems could be manipulated by malicious actors for harm, such as hacking into the state’s electric grid.
It also sought to hold AI developers liable for severe harm caused by their models, but it would have applied only to AI systems that cost more than $100 million to train, a threshold no current model has reached.
Landon Klein, the director of U.S. policy for the Future of Life Institute (FLI), told The Hill there is an urgent need for regulation to keep pace with the technology’s rapid development. The FLI is a nonprofit focused on existential risks to society.
“One year is a lifetime in terms of the generations of these systems, and there’s considerable risk over the course of that year,” he said. “And we also run the risk of sort of this broader integration of the technology across society that makes it more difficult to regulate in the future.”
Meanwhile, some AI and software experts cautioned against the push for regulation and applauded Newsom’s decision to veto the bill.
Some told The Hill that more evidence is needed before lawmakers start placing guardrails on the technology. That includes further research into the specific risks of AI development and the most effective responses once those risks are identified, experts said.