Every wildfire season, Californians face the same challenge: accessing reliable, current information about evacuation orders and fire containment status. California’s solution? Build an artificial intelligence (AI) chatbot that fails at both.
The chatbot, “Ask CAL FIRE,” debuted in May 2025, and Gov. Gavin Newsom cited it as an example of California “transforming government to better serve people.” Yet a month after launch, the chatbot could not provide timely information about the Ranch Fire in San Bernardino County, responding to queries with containment information that was almost a week out of date.
This wasn’t a one-off glitch. Depending on minor changes in wording, the chatbot routinely gives different answers to the same question about what supplies to bring during an evacuation. It also frequently cannot provide information about evacuation orders, responding “I’m not sure” when residents need to know whether to flee their homes.
The failure of “Ask CAL FIRE” stems from a fundamental problem with how the California Department of Forestry and Fire Protection approached the challenge. Instead of asking “How do we provide better information to residents during emergencies?” the agency asked: “How can we use generative AI?”
Cal Fire’s AI-first approach to problem-solving traces directly to Newsom’s 2023 executive order directing state agencies to “consider pilot projects of GenAI applications.” While well-intentioned, the order inverted the problem-solving process, creating institutional pressure on California agencies to deploy AI tools regardless of whether they are an appropriate solution. Rather than identifying communications gaps and choosing the best tools to address them, Cal Fire went looking for problems AI could solve.
If Cal Fire had started with the problem rather than the technology, effective solutions would have been more evident and achievable. The core failures of “Ask CAL FIRE”—outdated fire information, inconsistent responses and missing evacuation data—could have been avoided with traditional improvements to existing systems.
For timely fire containment updates, Cal Fire needed real-time database integration between field operations and its public website. Instead of hoping an AI system would solve data-freshness issues, the agency could have implemented automated feeds that publish containment percentages as soon as field crews report them. This isn’t cutting-edge technology; it’s basic data management.
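To show how mundane that work is, here is a minimal sketch in Python of such a feed. The database file and the `field_reports` table are hypothetical stand-ins; Cal Fire’s internal incident-tracking systems are not public, so this illustrates the pattern, not the agency’s actual schema.

```python
import json
import sqlite3
from datetime import datetime, timezone

# Minimal sketch of an automated containment feed. "incidents.db" and the
# "field_reports" schema are hypothetical stand-ins for an internal
# incident-tracking database; the pattern, not the schema, is the point.

def publish_containment_feed(db_path="incidents.db",
                             out_path="containment.json"):
    """Write the latest reported containment figure per fire to a JSON feed."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT fire_name, containment_pct, MAX(reported_at) AS reported_at "
        "FROM field_reports GROUP BY fire_name"
    ).fetchall()
    conn.close()

    feed = {
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "incidents": [
            {"fire": name, "containment_pct": pct, "reported_at": ts}
            for name, pct, ts in rows
        ],
    }
    with open(out_path, "w") as f:
        json.dump(feed, f, indent=2)
```

A scheduled job running something like this every few minutes would have kept the Ranch Fire’s containment figure current, with no language model involved.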
The evacuation order problem required even simpler solutions. Rather than training AI to understand complex emergency procedures, Cal Fire could have built database connections to county emergency management systems that automatically pull evacuation data and display it clearly on the agency’s website. A basic query against authoritative county data would deliver reliability no chatbot can match.
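A sketch of that aggregation, again in Python. The county endpoints below are illustrative placeholders, not real URLs; the assumption is simply that each county publishes its evacuation zones in a machine-readable feed.

```python
import json
from urllib.request import urlopen

# Illustrative placeholders: the assumption here is that each county
# publishes its evacuation zones as a machine-readable feed. These URLs
# are not real endpoints.
COUNTY_FEEDS = {
    "San Bernardino": "https://example.gov/sb-county/evacuations.json",
    "Riverside": "https://example.gov/riverside/evacuations.json",
}

def fetch_evacuation_orders():
    """Pull current evacuation zones from each county's feed."""
    orders = []
    for county, url in COUNTY_FEEDS.items():
        try:
            with urlopen(url, timeout=10) as resp:
                data = json.load(resp)
        except OSError:
            # A dead feed is reported plainly, never guessed around.
            orders.append({"county": county, "status": "feed unavailable"})
            continue
        for zone in data.get("zones", []):
            orders.append({
                "county": county,
                "zone": zone.get("id"),
                "status": zone.get("status"),  # e.g. "order" or "warning"
            })
    return orders
```

The design point is the failure mode: when a feed is down, residents see “feed unavailable” for that county rather than a vague “I’m not sure” about their own neighborhood.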
Emergency supply information is an even easier problem to solve. To provide information on what to pack in case of a wildfire emergency, standardized “frequently asked questions” pages with vetted, approved answers would have eliminated the inconsistent responses that plague the AI system.
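A static FAQ page needs no code at all, which is rather the point, but the determinism it buys can be made explicit in a few lines. The entry below is illustrative, not official Cal Fire guidance.

```python
# A vetted FAQ is a static lookup: the same question always yields the same
# approved answer. This entry is illustrative, not official Cal Fire guidance.
FAQ = {
    "what should i pack for an evacuation": (
        "Medications, important documents, water, a change of clothes, "
        "phone chargers, and supplies for pets and livestock."
    ),
}

def answer(question):
    """Return the approved answer, or say plainly that none is on file."""
    key = question.strip().lower().rstrip("?")
    return FAQ.get(key, "No approved answer on file; see the full FAQ page.")
```

Two residents asking the same question get the same vetted answer every time, a guarantee no generative system currently offers.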
These solutions share common characteristics: they’re proven, reliable, and far less expensive than building and maintaining an AI system. Most importantly, they would have worked.
Cal Fire’s chatbot failure reflects a troubling pattern across government agencies: innovation theater prioritized over problem-solving. Cities spend millions on “smart city” initiatives, installing sensors and developing smartphone applications, while potholes go unfilled and traffic lights stay poorly timed. School districts purchase iPads while textbooks fall apart and roofs leak. Transit agencies develop sophisticated mobile apps while buses run late on outdated scheduling systems.
This isn’t a California-only phenomenon. Across the country, government officials gravitate toward solutions that sound cutting edge rather than those that work reliably. The appeal is obvious: using new technology is seen as “transforming government to better serve people.” It generates flashier headlines and campaign material. Meanwhile, the unglamorous work of fixing data systems, improving staff training, or streamlining bureaucratic processes offers no photo-op. The result is a government that wears a veneer of innovation while fundamental services deteriorate underneath.
This isn’t to say that AI has no place in government operations. When deployed thoughtfully, AI can transform how agencies serve citizens. The difference lies in the approach: successful AI implementations start with clearly defined problems and rigorous evaluation of whether AI is the best solution. They require careful integration with existing systems and extensive testing. Most importantly, they measure success not by the sophistication of the technology, but by improvements in service delivery.
California’s executive order short-circuited this process. By directing agencies to find uses for AI rather than allowing them to identify where AI might help, the state invited expensive failures like the Cal Fire chatbot. The agency’s backwards approach caused the issue, not the technology itself. As AI continues advancing, the pressure to deploy it everywhere will only intensify. California’s wildfire chatbot should serve as a warning: when governments chase technological trends instead of solving real problems, citizens pay the price, sometimes with their safety.

Mark Dalton is the senior policy director of the R Street Institute’s technology and innovation team.