Will California flip the AI industry on its head?

Artificial intelligence is moving quickly. It’s now able to mimic humans convincingly enough to fuel massive phone scams or spin up nonconsensual deepfake imagery of celebrities for use in harassment campaigns. The need to regulate this technology has never been more urgent, and that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.

SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far. Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics. Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation. Either way, the fight in California could upend AI as we know it, and both sides are coming out in force.

The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it set out to tightly regulate advanced AI models trained above a computing-power threshold of 10^26 floating-point operations (FLOP), roughly the scale of today’s largest AI systems. The bill required developers of these frontier models to conduct thorough safety testing, including third-party evaluations, and certify that their models posed no significant risk to humanity. Developers also had to implement a “kill switch” to shut down rogue models and report safety incidents to a newly established regulatory agency. They could face lawsuits from the attorney general for catastrophic safety failures. If they lied about safety, developers could even face perjury charges, which carry the threat of prison (though prosecutions for perjury are extremely rare in practice).
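For a sense of what that threshold means in practice, here is a back-of-the-envelope sketch (mine, not the bill’s) using the widely cited scaling-law approximation that training compute is roughly 6 × parameters × training tokens. The model sizes below are illustrative assumptions, not figures from SB 1047 or from any company’s disclosures.

```python
# Back-of-the-envelope check against SB 1047's 10^26 FLOP threshold,
# using the common approximation from the scaling-law literature:
#   training compute ~= 6 * parameters * training tokens.
# Model figures below are illustrative guesses, not numbers from the
# bill or from any company's disclosures.

THRESHOLD_FLOP = 1e26  # SB 1047's compute threshold for covered models


def estimated_training_flop(params: float, tokens: float) -> float:
    """Estimate total training compute with the 6ND rule of thumb."""
    return 6 * params * tokens


# Hypothetical (name, parameter count, training tokens) examples.
models = [
    ("mid-size open model", 70e9, 2e12),      # ~70B params, ~2T tokens
    ("frontier-scale model", 1.8e12, 15e12),  # ~1.8T params, ~15T tokens
]

for name, params, tokens in models:
    flop = estimated_training_flop(params, tokens)
    status = "covered" if flop >= THRESHOLD_FLOP else "below the threshold"
    print(f"{name}: ~{flop:.1e} FLOP ({status})")
```

Run as-is, the arithmetic shows why only the very largest training runs would qualify: the mid-size model lands around 10^24 FLOP, two orders of magnitude under the line, while only a frontier-scale run crosses it.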

California’s legislators are in a uniquely powerful position to regulate AI. The country’s most populous state is home to many leading AI companies, including OpenAI, which publicly opposed the bill, and Anthropic, which was hesitant to support it before amendments. SB 1047 would also apply to any model made available in California’s market, giving the law reach far beyond the state’s borders.

Unsurprisingly, significant parts of the tech industry revolted. At a Y Combinator event regarding AI regulation that I attended in late July, I spoke with Andrew Ng, cofounder of Coursera and founder of Google Brain, who talked about his plans to protest SB 1047 in the streets of San Francisco. Ng made a surprise appearance onstage later, criticizing the bill for its potential harm to academics and open source developers as Wiener looked on with his team.

“When someone trains a large language model…that’s a technology. When someone puts them into a medical device or into a social media feed or into a chatbot or uses that to generate political deepfakes or non-consensual deepfake porn, those are applications,” Ng said onstage. “And the risk of AI is not a function. It doesn’t depend on the technology — it depends on the application.”

Critics like Ng worry SB 1047 could slow progress, often invoking fears that it could erode the lead the US has over adversarial nations like China and Russia. Representatives Zoe Lofgren and Nancy Pelosi and California’s Chamber of Commerce worry that the bill is far too focused on fictional versions of catastrophic AI, and AI pioneer Fei-Fei Li warned in a Fortune column that SB 1047 would “harm our budding AI ecosystem.” That’s also a pressure point for Federal Trade Commission chair Lina Khan, who’s concerned about federal regulation stifling innovation in open-source AI communities.

Onstage at the YC event, Khan emphasized that open source is a proven driver of innovation, attracting hundreds of billions in venture capital to fuel startups. “We’re thinking about what open source should mean in the context of AI, both for you all as innovators but also for us as law enforcers,” Khan said. “The definition of open source in the context of software does not neatly translate into the context of AI.” Both innovators and regulators, she said, are still working out how to define, and protect, open-source AI.

The result of the criticism was a significantly softer second draft of SB 1047, which passed out of committee on August 15th. In the new SB 1047, the proposed regulatory agency has been removed, and the attorney general can no longer sue developers for major safety incidents. Instead of submitting safety certifications under the threat of perjury, developers now only need to provide public “statements” about their safety practices, with no criminal liability. Additionally, entities spending less than $10 million on fine-tuning a model are not considered developers under the bill, offering protection to small startups and open source developers.