Rep. Ted Lieu, co-chair of House Democrats’ new AI commission, discusses government’s role in burgeoning industry

Back in early 2023, Rep. Ted Lieu wanted to make a point about the powerful capabilities of artificial intelligence and how Congress should regulate the rapidly developing technology.

So the Los Angeles County Democrat introduced legislation expressing support for Congress to tackle the issue: a resolution written entirely by ChatGPT. It was the first time a piece of federal legislation had been written by AI.

Later that year, TIME named Lieu one of its 100 most influential people in AI for helping drive the conversation about the future of artificial intelligence.

The following year, in 2024, Lieu was tapped, along with Rep. Jay Obernolte, a Republican from San Bernardino County, to co-chair a bipartisan Task Force on Artificial Intelligence in the U.S. House of Representatives. The group issued a report late last year with recommendations and policy proposals “to ensure America continues to lead the world in responsible AI innovation.”

With that work complete, House Democrats launched their own group this month, called the House Democratic Commission on AI and the Innovation Economy, to further their discussions about AI. The commission will meet throughout 2026 and “develop policy expertise in partnership with the innovation community, relevant stakeholders and committees of jurisdiction,” an announcement by House Democratic leaders said.

Lieu, D-Torrance, was again named co-chair of the new group, along with Reps. Josh Gottheimer of New Jersey and Valerie Foushee of North Carolina.

We caught up with Lieu — one of just a handful of members of Congress with a computer science degree, according to his office — to discuss his views on artificial intelligence, what he hopes to accomplish with the new commission and what the federal government’s role should be when it comes to the burgeoning AI industry.

The interview has been edited for brevity and clarity.

Q: How did you come to work on AI policies? What do you believe you bring to this conversation?

A: I’ve loved computers from a young age. I remember my first computer was an Apple IIe. I think this was in high school. I was super excited because, for my birthday, I was able to upgrade the memory. I had 512K memory. I doubled it. I’ve always loved technology. In college, I took courses on artificial intelligence as well.

(Then) three years ago, OpenAI shocked the world with the public release of ChatGPT. Since then, I’ve been working on AI issues in Congress.

Q: What’s your goal with the House Democrats’ new AI commission?

A: AI is moving very quickly. One of the goals of the commission is simply to educate the American people, as well as members of Congress, on AI.

We want to continue to have American companies innovate with AI, and at the same time, make sure that we put in commonsense guardrails to prevent any significant harm from coming to the American people.

Q: Let’s talk about that. It’s a delicate balance between supporting innovation and implementing guardrails to protect people from abuse. How do you strike that balance?

A: In most AI use cases, government is not going to care. The way I think about it is: You’ve got a large ocean of AI and then a small pond of AI. And the large ocean of AI is all the AI we don’t care about as a government. And then a small pond of AI, why might we care about it?

To me, there are three buckets: The first is AI that can destroy the world. The Department of Defense, for example, has weapons that launch automatically. Last term, I (pushed for a law not to) let AI launch a nuclear weapon by itself. There’s got to be meaningful human control.

The second bucket is AI that’s not going to destroy the world but could kill you individually, (such as) AI that moves objects: planes, trains, automobiles. I think we need to have a lot more regulators trained on the unique aspects of AI.

The last part, which is the hardest to grapple with, is AI that can cause significant harm that isn’t going to kill you individually or the world. For example, you wouldn’t want AI chatbots that help people commit suicide. How do you make sure these AI companies aren’t developing large language models that help people commit suicide?

Q: President Donald Trump recently signed an executive order to ban states from regulating AI on their own. He said a patchwork of regulations from different states could hamper innovation and cause the U.S. to lose the AI race against China. Do you understand or agree with his rationale?

A: Donald Trump is being massively hypocritical. He just greenlighted the sale of advanced AI chips to China, specifically the H200 chip. China does not have anywhere near the capability of making these chips. So Donald Trump just gave a massive boost to China’s AI industry.

I agree it is not efficient to have 27 states regulating AI. A national framework would be far better. At the same time, the Trump administration has done nothing to work with Congress to establish any sort of federal framework.

(Note: White House spokesperson Kush Desai, in response to Lieu’s comments, said in an emailed statement: “The Trump administration is committed to ensuring the dominance of the American tech stack — without compromising on national security.”)

Q: What should be the federal government’s role when it comes to AI? How can Congress lead on AI policies?

A: (My staff and I are) putting together a bill right now of bipartisan ideas on AI. We’ll be introducing it in the coming year. It will be composed of bipartisan bills that have already been introduced, as well as ideas from the bipartisan AI task force.

Q: What about AI excites you? What about it concerns you?

A: I’m excited about AI’s role in the health care space. AI has now folded millions and millions of human proteins. That’s going to accelerate cures, accelerate treatments in the health care space.

I am concerned about AI being an accomplice to destroying the world. As these language models get more and more sophisticated, eventually they can tell someone how to build a virus that could become a pandemic and give them the 23 steps to do so, or they can tell someone at a college laboratory how to make a deadly chemical weapon. Those are issues I’m concerned about.

Q: The 2026 elections are just around the corner. How concerned are you about deepfakes and how that might influence elections?

A: I’m deeply concerned. Addressing deepfakes will be part of the legislation we’ll introduce next year.

I also know one of the best ways to mitigate the issue is to inoculate the American people. That means educating people that AI is going to increase the number of deepfakes, that they should not trust everything they see on the internet, and that they should use some common sense and do some double-checking before believing every video or image they see.
