Gov. Gavin Newsom in October vetoed a well-intended but ill-conceived bill, Assembly Bill 1064, that would have regulated how minors interact with generative artificial intelligence platforms. But this welcome relief from a regulatory burden that would have been a massive hindrance to AI innovation has been accompanied by a flurry of other California AI regulations that he recently signed into law. And given that Congress seems unable or unwilling to act, for the time being the pace of AI innovation in the United States will be dictated by the whim of whoever wields the Golden State’s governor’s pen.
Assembly Bill 1064, officially dubbed the Leading Ethical AI Development (LEAD) for Kids Act, was a response to several troubling cases in which AI chatbots appear to have led minors into disturbing conversations, sometimes with tragic outcomes. While there is no question that large language model (LLM) chatbots have created new safety problems that need to be addressed, AB 1064 represented the sort of overly broad, knee-jerk regulatory response that could wipe out beneficial uses of a new technology in order to address a specific harm.
As Newsom correctly observed in his veto message, the bill would have imposed “such broad restrictions on conversational AI tools that it may unintentionally lead to a total ban on the use of these products by minors.” He continued, “We cannot prepare our youth for a future where AI is ubiquitous by preventing their use of these tools altogether.”
This understanding of the collateral consequences of overly burdensome regulation, along with its expression of welcome regulatory humility, contrasts with California’s broader approach to regulating digital technologies. For all of Newsom’s forbearance in vetoing AB 1064, and last year’s even broader regulation of frontier AI models in Senate Bill 1047, he and California’s Legislature have enacted a tremendous number of smaller regulations that could collectively amount to the death of AI innovation by a thousand cuts.
For example, Newsom signed Senate Bill 243, a reduced but still potent regulation of minors’ access to chatbots that includes mandatory age verification. This step will endanger both free speech and data security online for all users, not just children. Meanwhile, he signed a scaled-back but still problematic frontier AI model regulation in Senate Bill 53, which grants California regulatory agencies broad authority to unilaterally decide whether new AI products are safe. Further bills enacted by California this year regulate algorithmic pricing and AI-generated content, each with problematic aspects that will nevertheless govern AI products nationwide.
Ultimately, these bills create a scenario in which all U.S. tech companies are de facto subject to California’s laws unless federal law preempts them. New technologies attract the most investment and development when they are allowed to evolve freely, with regulation stepping in to address specific, concrete risks that are discovered along the way. Instead, states like California are saddling AI innovation with broad precautionary regulations that will restrict uses of these promising technologies right out of the box.
At this point, the only way to guarantee that the “California Effect” doesn’t hamstring U.S. technological competitiveness in AI is for Congress to finally do its job and pass a light-touch federal AI regulatory framework that preempts most state regulation of advanced computational technologies. Unfortunately, a serious attempt to pass a temporary moratorium on state AI regulations narrowly failed earlier this year, and the current state of gridlock on Capitol Hill makes the prospect of federal intervention seem increasingly dim.
Thus, for the foreseeable future, the pace of U.S. AI development and deployment—and the very real race with China for cutting-edge AI supremacy—will be decided by the California Legislature’s voracious desire to regulate, checked only sometimes by the governor’s veto pen.
Josh Withrow is a resident fellow with the R Street Institute’s technology and innovation team.