San Francisco, United States: Late last month, California became the first US state to pass a law regulating cutting-edge AI technologies. Now experts are divided over its impact.
They agree that the law, the Transparency in Frontier Artificial Intelligence Act, is a modest step forward, but say it still falls short of actual regulation.
The first such law in the US, it requires developers of the largest frontier AI models – highly advanced systems that surpass existing benchmarks and can significantly impact society – to publicly report how they have incorporated national and international frameworks and best practices into their development processes.
It mandates reporting of incidents such as large-scale cyber-attacks, deaths of 50 or more people, large monetary losses and other safety-related events caused by AI models. It also puts in place whistleblower protections.
“It is focused on disclosures. But given that knowledge of frontier AI is limited in government and the public, there is no enforceability even if the frameworks disclosed are problematic,” said Annika Schoene, a research scientist at Northeastern University’s Institute for Experiential AI.
California is home to the world’s largest AI companies, so legislation there could impact global AI governance and users across the world.
Last year, State Senator Scott Wiener introduced an earlier version of the bill that called for kill switches for models that go awry and mandated third-party evaluations.
But the bill faced opposition over concerns that such strong regulation of an emerging field could stifle innovation. Governor Gavin Newsom vetoed it, and Wiener worked with a committee of scientists to develop a revised draft that was deemed acceptable and was passed into law on September 29.
Hamid El Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, told Al Jazeera that “some accountability was lost” in the bill’s new iteration that was passed as law.
“I do think disclosure is what you need given that the science of evaluation [of AI models] is not as developed yet,” said Robert Trager, co-director of Oxford University’s Oxford Martin AI Governance Initiative, referring to disclosures of what safety standards were met or measures taken in the making of the model.
In the absence of a national law regulating large AI models, California’s law is “light touch regulation”, said Laura Caroli, senior fellow at the Wadhwani AI Center at the Center for Strategic and International Studies (CSIS).
Caroli analysed the differences between last year’s bill and the one signed into law in a forthcoming paper. She found that the law, which covers only the largest AI models, would affect just a handful of the top tech companies. She also found that its reporting requirements are similar to the voluntary agreements tech companies signed at the Seoul AI summit last year, softening its impact.
High-risk models not covered
Because it applies only to the largest models, the law, unlike the European Union’s AI Act, does not cover smaller but high-risk models – even as the risks arising from AI companions and from the use of AI in areas such as crime investigation, immigration and therapy become more evident.
For instance, in August, a couple filed a lawsuit in a San Francisco court alleging that their teenage son, Adam Raine, had spent months confiding his depression and suicidal thoughts to ChatGPT, which allegedly egged him on and even helped him plan his suicide.
“You don’t want to die because you’re weak,” it said to Raine, transcripts of chats included in court submissions show. “Yo