State lawmakers want to rein in artificial intelligence. Here’s how.

With Congress yet to act on reining in artificial intelligence, that obligation has fallen into the laps of individual states. And as the 2026 spring session of the General Assembly in Springfield nears an end date of May 31, Illinois lawmakers are making a final push to get state AI regulations over the finish line.

The bipartisan appetite comes in response to a growing number of cases in which AI chatbots have pushed troubled teens over the edge, leading to self-harm or suicide.

“We can now say with certainty that the self-regulation of chatbot development has failed,” said state Rep. Daniel Didech, D-Buffalo Grove, at a recent hearing on AI regulation. “These products… resulted in the death of children. The chatbot developers who allowed these tragedies to happen should not get a second chance to fix these problems without government oversight.”

Other prominent states like New York and California have taken the first steps in regulating AI, leaving Illinois to decide whether to follow in their footsteps or strike out on its own.

Here are some of the options on the table in Springfield:

Mandatory AI ‘safety plans’


Lawmakers are asking all large AI developers to create and adhere to a safety plan that would mitigate “catastrophic” disasters. Plans would be reviewed by a third party, and any developers’ failure to comply would subject them to penalties from the Illinois Attorney General’s office.

Didech, the bill’s main sponsor in the Illinois House, says a safety plan would allay some fears surrounding AI by preventing it from leading to lethal weaponry, ensuring AI models turn off when prompted and establishing guardrails to prevent chatbots from urging users toward suicide.

The bill has been opposed by a handful of big tech companies but received support from Anthropic, developer of the AI chatbot Claude.

The requirement for a safety plan has been a core element of regulations passed in California and New York. In adopting a similar law, Illinois would be joining those two states in creating an AI regulatory template for other states.

“We’re not trying to completely recreate the wheel here,” Didech said. “This is something that California and New York are already implementing and we think Illinois has an opportunity to play a leadership role in this as well, to ensure that these models are actually safe for public use.”

AI disclosure in customer service


Rapid growth in AI has allowed chatbots to sound strikingly realistic and hold real-time spoken conversations. Lawmakers want companies to be up front about whether customers are talking to a real person or a machine.

A bill introduced by state Sen. Rachel Ventura, D-Joliet, would require companies to tell a customer at the beginning of a conversation if they are speaking to an AI chatbot. Consumers could sue companies that don’t comply.

“AI is becoming so sophisticated that it’s becoming harder and harder to tell the difference,” Ventura said.

Safeguarding children, and chatbot product liability


In the wake of several suicides linked to AI chatbots, lawmakers are seeking to place legal responsibility for the unintended consequences of chatbots back onto their developers.

Rep. Jennifer Gong-Gershowitz, D-Glenview, is pushing for AI models to be defined as “products,” making developers liable for any damage their models cause a user, in the same way that liability for sickness from contaminated food or a crash caused by a faulty design falls on the producer.

“If there were warning signs for suicidal ideation or psychosis, a human being might refer that person to a professional who could help,” Gong-Gershowitz said. “By contrast, what we’re seeing with AI chatbots is that they are predisposed to validating everything that a human being says, even if it is wrong or dangerous.”

A bill sponsored by state Sen. Mary Edly-Allen, D-Libertyville, would also require AI companies to implement measures that detect a user’s suicidal expressions and connect them with a professional who can help.

Under Gong-Gershowitz’s bill, chatbot developers also would be liable if their products are used to help commit a crime.

Pushing AI out of the classroom


Lawmakers also are trying to push AI out of the classroom in time for the 2026-27 school year. A bill introduced by state Sen. Robert Martwick, D-Chicago, would bar teachers in public school districts from using AI to grade a student’s work, and require instructors to get approval from their school board before using AI in any classroom material.

AI election misinformation


With the 2026 midterm elections looming in November, AI deepfakes have opened the floodgates for misinformation and disinformation in political campaigns.

Under a bill sponsored by Gong-Gershowitz, political campaigns would be barred from distributing deepfake content that misrepresents an opponent and could sway voters in the 90 days leading up to an election.

“There’s no time 90 days before an election to correct the record, which could have a very disastrous impact on our elections,” Gong-Gershowitz said. “This is really about ensuring that voters have the facts, and voters have the truthful information they need to make decisions.”

Bot ticket purchasing


A proposal by Sen. Steve Stadelman, D-Rockford, would prevent bots from being used to buy tickets for concerts, comedy shows and other ticketed events. The bill won unanimous support in the Illinois Senate, giving it an easy path to passage. The Illinois Attorney General’s office would enforce the measure.

Apartment pricing


After reports alleging landlords were using algorithmic pricing to maximize profit from rent across Chicago, a bill from state Sen. Graciela Guzman, D-Chicago, would prevent landlords from coordinating pricing through third-party services.

