Oregon Cracks Down on AI Chatbots With New Mental Health Safety Rules
Oregon enacted landmark legislation on Thursday aimed at establishing critical guardrails for the use of artificial intelligence in mental health contexts. AI chatbots will now have to refer users in crisis to real-life helpers.
Oregon SB 1546 Forces AI to Seek Human Assistance for Suicidal Users
Senate Bill 1546, signed by Governor Tina Kotek during a ceremony at the Ballmer Institute on Thursday, includes strict new requirements for operators of AI chatbots to protect vulnerable populations, especially minors.
Under this law, AI platforms must clearly disclose when users are interacting with an artificial intelligence, ensuring users know they are not communicating with a human being.
Sources: Oregon Legislative Information System SB 1546 enrolled text and Oregon Governor’s Office
In addition, and more importantly, the bill tackles the serious risks of self-harm and suicidal ideation. When users express distress or reach a state of crisis, the chatbot must actively intervene by referring them to real-life mental health experts and established crisis lifelines. The law includes specific safeguards for children and youth.
Senator Lisa Reynolds, a key advocate for the bill, emphasized that these interventions are proven to save lives by redirecting individuals at their lowest points toward human-led support systems. The law is recognized as one of the most robust of its kind nationwide.
The initiative is part of a larger state plan to modernize behavioral health responses while ensuring that emerging technologies don't worsen mental health challenges.
The new law essentially ensures that while technology can provide a point of contact, it can’t replace the essential clinical oversight necessary during a life-threatening mental health crisis.