
Lead Researcher Behind ChatGPT’s Mental-Health Crisis Responses Announces End-of-Year Departure from OpenAI

November 24, 2025

OpenAI’s safety lead is leaving as the company faces reports of users in crisis and a growing number of lawsuits over how ChatGPT handles mental-health emergencies.

OpenAI said in an October report that hundreds of thousands of ChatGPT users may show signs of a manic or psychotic crisis every week, and that more than a million people “have conversations that include explicit indicators of potential suicidal planning or intent.” Within the company, the model policy team leads core parts of AI safety research, including how ChatGPT responds to users in crisis.

Andrea Vallone, who heads that model policy group, told colleagues internally last month that she will depart the company at the end of the year.

An OpenAI spokesperson, Kayla Wood, confirmed the move and said the company is actively seeking a replacement. In the meantime, Vallone’s team will report directly to Johannes Heidecke, the head of safety systems.

Vallone leaves at a tense moment for OpenAI, which faces increasing scrutiny over how its flagship chatbot responds to people in distress. Several recent lawsuits accuse the company of allowing users to form unhealthy attachments to ChatGPT and claim the service played a role in mental health breakdowns or encouraged suicidal ideation.

OpenAI has been trying to define how the system should react when users show signs of emotional distress and to tighten the bot’s responses. Model policy was one of the teams leading that effort, and it authored an October report laying out progress and consultations with more than 170 mental health experts.

The same report said improvements to GPT-5 helped cut undesirable replies in those high-risk interactions by 65 to 80 percent.

“Over the past year, I led OpenAI’s research on a question with almost no established precedents: how should models respond when confronted with signs of emotional over-reliance or early indications of mental health distress?” Vallone wrote in a LinkedIn post.

Vallone did not respond to a request for comment.

Finding the right tone for ChatGPT is a constant balancing act. The company wants the chatbot to feel friendly and helpful without being overly flattering.

OpenAI is pushing to grow ChatGPT’s audience to stay competitive with chatbots from Google, Anthropic and Meta, and the product now draws more than 800 million people a week.

After OpenAI released GPT-5 in August, some users complained the model felt surprisingly cold. In a recent update the company said it had significantly reduced sycophancy and maintained the chatbot’s “warmth.”

Vallone’s departure follows an August reshuffle of model behavior, another team focused on how the chatbot acts around distressed users. Joanne Jang, who led that group, left to start a new team exploring novel human–AI interaction methods, and the remaining staff moved under post-training lead Max Schwarzer.

Company officials have pointed to the 65 to 80 percent figure as evidence that model updates can meaningfully reduce harmful or misleading replies during high-risk exchanges.

Taken together, Vallone’s exit and the earlier model behavior reshuffle show the company is still reorganizing its research into human–AI interaction, even as it weighs safety work against the product pressure to keep ChatGPT feeling warm.
