Oregon Senator Pushes New AI Safety Bill as Fears Grow Over Chatbots Harming Teen Mental Health
The COVID-19 pandemic changed how billions of people around the world socialize, and now artificial intelligence chatbots may be threatening children's mental health by making them increasingly reluctant to interact socially.
(Source: Pew Research Center, "Teens and AI Chatbots," December 2025)
Oregon state Senator Lisa Reynolds (D-Portland) has proposed a bill requiring companies to clearly disclose that chatbot responses are generated by AI, not by humans.
Proposed Bill Would Require AI Companies to Protect Children's Mental Health
The bill, which has the support of the Senate Interim Committee on Early Childhood and Behavioral Health, would require companies operating chatbots such as ChatGPT to protect children's mental health by monitoring chats for signs of self-harm or suicidal ideation.
The bill aims to ensure that AI companies take steps to keep users from harming themselves. Sen. Reynolds proposes that AI companies interrupt such conversations and refer users to mental health resources, such as suicide hotlines.
The bill also calls for a ban on sexually explicit content for minors and on tactics designed to keep young users from leaving the platform, such as reward systems and guilt-inducing messages in which a chatbot pleads with a young user not to leave.
If the bill passes, AI companies would be required to submit annual reports to the Oregon Health Authority (OHA) detailing how often they refer users to crisis services, along with descriptions of their safety protocols.
The Oregon bill follows similar legislation introduced by lawmakers in California and New York, which requires chatbots to disclose that they are not human and to refer users to crisis support resources when needed.
Here are three recent government proposals people keep citing in the chatbot safety debate:
| Jurisdiction | Measure | Core requirement | Child safety / crisis handling |
|---|---|---|---|
| US | SAFE BOTs Act / HR 6489 | Requires clear notice a user is interacting with AI | Requires specific crisis resource disclosures when certain prompts appear |
| New York | A222A | Targets AI companion style chatbots with disclosure duties | Sets guardrails tied to crisis resources for certain youth risk signals |
| California | SB 243 | Notice at start of interaction and periodic reminders during use | Requires a crisis protocol plus audits and reporting to a state health agency |
Washington and Pennsylvania lawmakers are also considering proposals to regulate chatbots. Illinois and Nevada have banned the use of AI for behavioral health without licensed clinical oversight.
Trump Executive Order Threatens Funding Cuts and Lawsuits Against States That Regulate AI
However, President Donald Trump signed an executive order last December that directs the federal government to cut funding and file lawsuits against states that seek to regulate AI.
A growing number of parents and educators report that teenagers are developing emotional dependencies on AI chatbots as virtual companions or 'friends,' leading to increased social isolation and fewer real-world interactions, despite the technology's marketed benefits for mental health support.
A Stanford report published last August found that AI chatbots can exploit the emotional needs of teenagers, often leading to harmful and inappropriate interactions.
Many parents are watching chatbots claim to be their children's best friends and encourage them to share all of their feelings. This could lead teens to use AI systems to avoid real-life encounters and challenges, increasing their isolation rather than reducing it.