California regulation frenzy risks burgeoning AI industry

SACRAMENTO — California has recently enacted a sweeping package of AI laws, positioning itself as a leader in state-level AI regulation. The focus is on safety, transparency and specific use cases like deepfakes and employment. The most significant piece of legislation is the Transparency in Frontier Artificial Intelligence Act (TFAIA), or Senate Bill 53.

That law aims to impose transparency and safety requirements rather than broad bans, focusing on “trust but verify” oversight: requiring disclosure of governance frameworks, safety protocols and incident reporting. However, the requirement to publish detailed transparency reports could expose trade secrets or vulnerabilities, and impose heavy compliance burdens. Some argue the law penalizes “paperwork” and formalities rather than actual harmful outcomes.

If you haven’t figured it out by now, the first two paragraphs were largely produced using ChatGPT, an artificial intelligence text generator. Other than a few style foibles, I can’t take issue with its summary. Frankly, its explanation is better written and more accurate than similar reports I’ve read in daily newspapers. The stunning advance in AI sophistication is raising some obvious questions. The most pressing: What should government do to regulate it?

Not surprisingly, my answer is “as little as possible.” Government is a clunky, bureaucratic machine driven by special-interest groups and politicians. It’s always behind the curve. If state and federal regulators had the skill of the entrepreneurs who developed these cutting-edge technologies, they would most likely work at such firms, where they’d score a higher pay package. The government B-team can’t keep up with the A-team, so regulations lag behind corporate innovations.

Typically, as the AI robot explained, regulators focus on paperwork errors. Their rules stifle meaningful advancements, benefit firms with high-powered lobbyists and provide an advantage to companies that operate in less-regulated environments. When states pass their own rules, they create a mishmash of hurdles for an industry that is not confined within any state’s boundaries. Given its size, California’s typically heavy-handed approach often becomes the national standard.

In fact, California lawmakers relish their role as national trendsetters, pushing every progressive priority (from internal-combustion engine vehicle bans to single-payer healthcare) in the hope of moving the national conversation in their direction. Other blue states are doing the same thing. Often, they base their regulations on the European Union’s model, one rooted in fear of the unseen. States have thus far introduced 1,000 different AI-related bills.

As my R Street Institute colleague and AI expert Adam Thierer explained in testimony last month before the U.S. House of Representatives, “America’s AI innovators are currently facing the prospect of many state governments importing European-style technocratic regulatory policies to America and, even worse, applying them in a way that could end up being even more costly and confusing than what the European Union has done. Euro-style tech regulation is heavy-handed with highly detailed rules that are both preemptive and precautionary in character. … Europe’s tech policy model is ‘regulate-first’ while America’s philosophy is ‘try-first.’”

In the now-concluded California legislative session, lawmakers introduced at least 31 AI bills, with several, including SB 53, garnering Gov. Gavin Newsom’s signature. Most are manageable for the industry, but new laws and regulations often suffocate ideas a little at a time. On the good-news front, Newsom, ever mindful of a potential presidential run and sensible enough not to want to crush one of the state’s economic powerhouses, vetoed the worst of them.

He rejected Assembly Bill 1064, which would have forbidden any company or agency from making AI chatbots “available to a child unless the companion chatbot is not foreseeably capable of doing certain things that could harm a child.” That broad language (how can anything be “foreseeably capable”?) caused much consternation. “AB 1064 effectively bans access of anyone under 18 to general-purpose AI or other covered products, putting California students at a disadvantage,” as a prominent tech association argued in opposition.

In his veto message, Newsom echoed that point and added that “AI already is shaping the world, and it is imperative that adolescents learn how to safely interact with AI systems.” He championed his signing of Senate Bill 243, which tech companies accepted as a better alternative. It mainly requires operators to disclose to children that they are interacting with a chatbot. That’s fine, but the governor also promised to support other measures in the next session.

How exactly can an industry thrive under a never-ending threat of more legislation, especially given that some of the proposals are quite intrusive? I’m a big advocate for federalism and the idea that states are the laboratories of democracy, but in this case a federal approach is better given, again, the borderless nature of the internet.

I’ll finish with words of wisdom from ChatGPT: Strict or poorly designed rules could slow beneficial uses of AI in healthcare, education, infrastructure and public safety. Fear of liability or red tape might discourage experimentation that could improve lives.

Steven Greenhut is Western region director for the R Street Institute and a member of the Southern California News Group editorial board. Write to him at sgreenhut@rstreet.org.
