Since California legislators passed a fiercely contested AI safety bill at the end of August, the question has been whether Governor Gavin Newsom would sign it into law. We now know the answer: Newsom vetoed the bill, having said earlier this month that it could “have a chilling effect on the industry.” That industry accounts for almost 20% of the state’s gross regional product.
Just last week, Newsom vetoed a bill that would have made data sharing opt-outs mandatory for web and mobile browsers. “It’s troubling the power that companies such as Google appear to have over the governor’s office,” said Justin Kloczko, tech and privacy advocate for nonprofit Consumer Watchdog.
What the bill was designed to do. SB 1047 would have required companies to publicly disclose safety protocols for their AI models, outline a so-called “kill switch” and provide protections for whistleblowers, all at a time when “the risks these models present to the public are real and rapidly increasing,” according to bill author Scott Wiener, a Democratic state senator.
The tech industry argued that the bill would drive companies from the state, although Elon Musk, himself in the AI business through xAI, said he supported it.
Why we care. In a sane world, the threats posed by AI would be assessed and addressed at a national level by the federal government. In this world, however, AI is likely to be regulated, if at all, at the state level. That’s how data privacy has been handled, of course, though even that was driven in large part by legislation from Europe.
Right now, it feels like AI innovation is less likely to be curbed by regulators than by shortages in the electricity needed to power it.
Dig deeper: AI in marketing: Examples to help your team today