An anonymous reader shared this report from the Washington Post:
A California state lawmaker introduced a bill on Thursday aiming to force companies to test the most powerful artificial intelligence models before releasing them — a landmark proposal that could inspire regulation around the country as state legislatures increasingly tackle the swiftly evolving technology.
The new bill, sponsored by state Sen. Scott Wiener, a Democrat who represents San Francisco, would require companies training new AI models to test their tools for “unsafe” behavior, institute hacking protections and develop the tech in such a way that it can be shut down completely, according to a copy of the bill. AI companies would have to disclose testing protocols and what guardrails they put in place to the California Department of Technology. If the tech causes “critical harm,” the state’s attorney general can sue the company.
Wiener’s bill comes amid an explosion of state bills addressing artificial intelligence, as policymakers across the country grow wary that years of inaction in Congress have created a regulatory vacuum that benefits the tech industry. But California, home to many of the world’s largest technology companies, plays a singular role in setting precedent for tech industry guardrails. “You can’t work in software development and ignore what California is saying or doing,” said Lawrence Norden, the senior director of the Brennan Center’s Elections and Government Program… Wiener says he thinks the bill can be passed by the fall.
The article notes there are now 407 AI-related bills active in 44 U.S. states, according to an analysis by an industry group called BSA The Software Alliance, with several already signed into law. "The proliferation of state-level bills could lead to greater industry pressure on Congress to pass AI legislation, because complying with a federal law may be easier than responding to a patchwork of different state laws."
Even the proposed California law “largely builds off an October executive order by President Biden,” according to the article, “that uses emergency powers to require companies to perform safety tests on powerful AI systems and share those results with the federal government. The California measure goes further than the executive order, to explicitly require hacking protections, protect AI-related whistleblowers and force companies to conduct testing.”
The article also adds that as America’s most populous state, “California has unique power to set standards that have impact across the country.” And the group behind last year’s statement on AI risk helped draft the legislation, according to the article, though Wiener says he also consulted tech workers, CEOs, and activists. “We’ve done enormous stakeholder outreach over the past year.”