Throttling artificial intelligence will come back to haunt California

In the nearly two years since generative artificial intelligence (AI) tools like ChatGPT and Google Gemini became widely available, California lawmakers have debated how this powerful new technology ought to be regulated or restricted. Unfortunately, some are legislating from a position of fear — to the detriment of American consumers and the nation’s vibrant AI industry.

Sacramento advanced nearly two dozen legislative proposals just this year, attempting to dictate the future development of AI technology nationwide. The most draconian of these bills, SB 1047, would have created a new state agency tasked with heavily regulating advanced, so-called “frontier” AI models in the name of trust and safety. 

A vocal minority of AI safety maximalists perceives AI and related technologies as a unique threat that must be tightly controlled by government agencies, or else risk the end of civilization. To this end, SB 1047 would have also required all large AI model developers to preemptively certify and attest that their AI tools could never be intentionally misused by criminals to cause future “critical harms” to the public. 

While the goal of preventing catastrophic harms may be well-intentioned, this excessively high standard would have made it nearly impossible for AI developers large and small to comply with the letter of the requirements, and would have functionally banned open-source AI models such as Meta’s Llama 3.

Cooler heads ultimately prevailed when Governor Gavin Newsom vetoed the flawed SB 1047 in late September, but the Golden State did enact nearly two dozen other new laws that could have significant consequences for AI development going forward.

One of these new laws, regulating the use of AI in political or election-related material, even parody and satire, is already facing a legal challenge on First Amendment grounds. A federal judge recently prevented AB 2839 from taking effect with a preliminary injunction, ruling that California’s law “acts as a hammer instead of a scalpel, hindering humorous expression and unconstitutionally stifling the free and unfettered exchange of ideas.”

By contrast, several states are already finding success attracting AI entrepreneurs to their regions through a free-market, limited government approach to regulation.

Utah’s new Artificial Intelligence Learning Laboratory fosters interstate collaboration with industry experts and stakeholder groups to provide thoughtful policy recommendations to the legislature on fast-moving developments in AI. Utah’s Learning Lab is currently researching how AI might be responsibly implemented in fields such as health care and pharmaceutical development.

Texas’s AI Advisory Council is exploring how state agencies, such as the Department of Transportation, can leverage AI to save taxpayer dollars, improve the accuracy and response time of first responders, and use machine learning and video analytics to deploy road crews and identify traffic disruptions.

With Congress largely deadlocked and unable to pass a federal AI standard, states are stepping in to fill the void. One industry group estimates that nearly 700 individual bills were filed across 45 states in 2024, more than double the number of state bills submitted the previous year.

Fortunately, there are proven alternatives to AI regulation for states that wish to foster American innovation in emerging technologies, while protecting the public from actual harms and illegal conduct. 

Lawmakers seeking to maximize the benefits of AI should look to the Model State Artificial Intelligence Act, recently approved by members of the American Legislative Exchange Council (ALEC). This framework promotes responsible experimentation with AI, cuts burdensome regulations that hinder development, and directs states to conduct an inventory of existing laws that already address concerns raised by AI. ALEC members also adopted two model policies that can strengthen existing laws against illegal AI deepfakes used to facilitate child sexual abuse material (CSAM) or the distribution of non-consensual intimate images.

When California lawmakers return to Sacramento in the coming months, they should leave SB 1047 on the cutting room floor and instead pursue targeted policies that are rooted in reality — not a science fiction horror film — and support our world-class AI industry in Silicon Valley.

Jake Morabito is director of the Communications and Technology Task Force at the American Legislative Exchange Council (ALEC).
