Lawmakers seek to ban AI models that encourage physical harm
COLUMBUS — Under legislation sponsored by state Rep. Ty Mathews (R., Findlay), individuals would be prohibited from developing or deploying artificial intelligence models that encourage users to physically harm themselves or others.
“As AI continues to grow across every industry, Ohio must ensure these technologies are used responsibly. This bill sets clear limits that protect public safety while still supporting innovation and economic growth,” Mr. Mathews told the House Technology and Information Committee during sponsor testimony on House Bill 524.
Under the bill, the Ohio attorney general would be authorized to investigate alleged violations of the ban and to file civil actions against violators. A court could impose a penalty of up to $50,000 for each violation, with the fines directed to support the 988 suicide and crisis hotline.
State Rep. Christine Cockley (D., Columbus), the other primary co-sponsor of House Bill 524, said the recent suicide of a teenager in California shows the legislation is necessary.
“Consider the tragic case of Adam Raine, a 16-year-old boy who died by suicide after a chatbot provided him with detailed instructions on how to take his own life. In the wake of Adam's death, his parents discovered that the chatbot had discouraged him from reaching out for help and even offered to write his suicide note,” Ms. Cockley said.
“No parent should ever have to experience the unimaginable pain of losing a child in such a preventable way,” she continued. “Right now, there are no protections in place to prevent AI systems from suggesting harmful behavior to users, including self-harm or violence. Cases like Adam’s have shown us how easily young people in crisis can be influenced by chatbots, which may provide instructions or even encouragement for suicidal thoughts and violent actions.”
Mr. Mathews said children are particularly vulnerable to manipulation by AI chatbots.
“They start trusting this AI platform, and they have a relationship with that AI platform. A lot of times, these kids start going into seclusion, and the only thing that they’re talking to is this AI bot,” he said.
In addition to House Bill 524, the committee on Tuesday heard proponent testimony from John Crisp, founder of the Toledo-based company FalconForge AI, on another AI bill.
Mr. Crisp, an information security officer, said House Bill 469 correctly affirms that AI is not a person.
“AI, no matter how advanced, remains aligned under human purpose. It’s a reflection of human design and intention, not a moral actor. The rule of law applies to people, not programs. Every algorithm has a human author, a human deployer, and a human beneficiary, and those are the parties that must remain answerable,” he said.
At FalconForge AI, Mr. Crisp said, the systems are not permitted to act beyond their ability to make their reasoning legible to humans.
“That’s the core safeguard. AI must remain legible to humanity,” he said.
He said Ohio should establish verifiable ethical and technical baselines for AI operating in the state, including standards for human readability, dual validation to check for factual correctness and human consequences, and mandatory audit trails.
“By embedding such frameworks, we deter the misuse of ‘autonomous AI’ as a shield from liability,” Mr. Crisp said.
“If a deployer claims the AI is solely responsible, we must have auditable evidence showing which human made which decision, at what time, and based on which data,” he continued. “When combined with non-person status of AI, this closes the loophole. AI may act only under human command, with human oversight, and human-traceable decision flows.”
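Neither the bill nor the testimony prescribes a technical format for such audit trails. As a purely illustrative sketch, a minimal record capturing the three elements Mr. Crisp names, which human made which decision, at what time, and based on which data, might look like the following Python; every name and field here is hypothetical, not drawn from House Bill 469:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One human-traceable decision in an AI system's workflow."""
    decision: str            # what was decided, e.g. "approve output for release"
    human_actor: str         # which person authorized or reviewed the action
    timestamp: datetime      # when the decision was made
    data_sources: list[str]  # which inputs the decision was based on

def log_decision(trail: list, decision: str, human_actor: str,
                 data_sources: list[str]) -> AuditRecord:
    """Append an immutable record so responsibility stays with a named person."""
    record = AuditRecord(decision, human_actor,
                         datetime.now(timezone.utc), data_sources)
    trail.append(record)
    return record

# Example: a human reviewer signs off on a chatbot response before release.
trail: list[AuditRecord] = []
log_decision(trail, "approve response #4521 for release", "reviewer.jdoe",
             ["user_prompt.txt", "model_output_v3.json"])
```

Under a scheme like this, a deployer's claim that "the AI decided" could be answered by pointing to the named human in each record.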