Lawmakers seek to create penalties for AI models that encourage suicide
COLUMBUS, Ohio (WCMH) — The Ohio House Technology and Innovation Committee heard testimony Tuesday from supporters of a new bill that would penalize software companies whose artificial intelligence models encourage users to self-harm.
House Bill 524, sponsored by Ohio Reps. Christine Cockley (D-Columbus) and Ty Mathews (R-Findlay), would empower the Ohio Attorney General’s Office to fine software companies up to $50,000 if their AI models are found to suggest or encourage self-harm, suicidal ideation or public violence.
Tony Coder, CEO of the Ohio Suicide Prevention Foundation, told lawmakers he knows of at least four cases in which victims used AI to write suicide notes before taking their own lives. He also said that only about 22% of teenagers who turn to AI for mental health advice actually receive an appropriate response.
“I am absolutely pro-technology, pro-AI; it can provide some great opportunities for humanity,” Coder said. “If we don’t look at this and actually set up some kind of boundaries so that kids aren’t seeking this out and instead seeking out trained mental health professionals, then our suicide numbers are gonna go up.”
Cockley said she is concerned about how many Ohioans are turning to AI for mental health advice and companionship, and that the state must address troubling instances in which AI has suggested or confirmed methods by which users might end their own lives. She said the fines established by H.B. 524 would push companies and developers to ensure their products are safe, and noted that the revenue generated by those fines would be directed to the 988 Suicide and Crisis Lifeline.
“The people who are developing these models are some of the smartest people,” Cockley said. “I am sure that they can figure out a way to make sure these models are trained with a framework that takes public safety and human life into consideration.”
Greg Lawson, research fellow at the Buckeye Institute, said he’s generally suspicious of regulations on artificial intelligence, especially as the industry has become a nationwide economic driver. Lawson said while he understands the concerns over potential harms, the technology is still new and still developing, and lawmakers should be cautious about regulating such a nascent industry.
“I get it, we have situations where real tragedies are happening and people are rightfully concerned, and we do want to be very sensitive to that,” Lawson said. “If we’re going to do anything, it can only be related to the actual harms that are produced rather than layering regulations on the technology itself.”
According to Mathews, H.B. 524 is not meant to stifle or limit the development of AI technology, but to give the state some power to respond if and when AI causes harm, especially to children.
“I don’t have a regulation on my car that it can’t go 175 miles an hour down the road. But there is a penalty to it, right?” Mathews said. “I believe the private sector will step up and self-regulate, but we as a state, we need to be providing those parents the ability to have some type of recourse.”
The committee also heard testimony in favor of H.B. 524 from the Catholic Conference of Ohio, which argued the state must ensure that AI tools are being developed responsibly. Sponsors of H.B. 524 are planning to hold a press event later this month in cooperation with the Ohio Suicide Prevention Foundation.