Doctors examining ChatGPT
OpenAI reveals which experts are steering ChatGPT mental health upgrades.
Ever since a lawsuit accused ChatGPT of becoming a teen’s “suicide coach,” OpenAI has been scrambling to make its chatbot safer. Today, the AI firm unveiled the experts it hired to help make ChatGPT a healthier option for all users.
In a press release, OpenAI explained that its Expert Council on Wellness and AI started taking shape after the company began informally consulting with experts on parental controls earlier this year. Now it’s been formalized, bringing together eight “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health” to help steer ChatGPT updates.
One priority was finding “several council members with backgrounds in understanding how to build technology that supports healthy youth development,” OpenAI said, “because teens use ChatGPT differently than adults.”
That effort includes David Bickham, a research director at Boston Children’s Hospital, who has closely monitored how social media impacts kids’ mental health, and Mathilde Cerioli, the chief science officer at a nonprofit called Everyone.AI. Cerioli studies the opportunities and risks of children using AI, particularly focused on “how AI intersects with child cognitive and emotional development.”
These experts can seemingly help OpenAI better understand how safeguards can fail kids during extended conversations and ensure that kids aren’t left particularly vulnerable to so-called “AI psychosis,” a phenomenon in which longer chats appear to trigger mental health issues.
In January, Bickham noted in an American Psychological Association article on AI in education that “little kids learn from characters” already—as they do things like watch Sesame Street—and form “parasocial relationships” with those characters. AI chatbots could be the next frontier, Bickham suggested, possibly filling teaching roles once more is known about the way kids bond with them.
“How are kids forming a relationship with these AIs, what does that look like, and how might that impact the ability of AIs to teach?” Bickham posited.
Cerioli closely monitors AI’s influence in kids’ worlds. She suggested last month that kids who grow up using AI may risk having their brains rewired to “become unable to handle contradiction,” Le Monde reported, especially “if their earliest social interactions, at an age when their neural circuits are highly malleable, are conducted with endlessly accommodating entities.”
“Children are not mini-adults,” Cerioli said. “Their brains are very different, and the impact of AI is very different.”
Neither expert is focused on suicide prevention in kids. That may disappoint dozens of suicide prevention experts who last month pushed OpenAI to consult with experts deeply familiar with what “decades of research and lived experience” show about “what works in suicide prevention.”
OpenAI experts on suicide risks of chatbots
On a podcast last year, when asked about the earliest reported chatbot-linked teen suicide, Cerioli said that child brain development is the area she’s most “passionate” about. She said the news didn’t surprise her and noted that her research focuses less on figuring out “why that happened” and more on why it can happen, given that kids are “primed” to seek out “human connection.”
She noted that a troubled teen confessing suicidal ideation to a friend in the real world would more likely lead to an adult getting involved, whereas a chatbot would need specific safeguards built in to ensure parents are notified.
This seems in line with the steps OpenAI took when adding parental controls; the company consulted with experts to design “the notification language for parents when a teen may be in distress,” its press release said. However, on a resources page for parents, OpenAI has confirmed that parents won’t always be notified if a teen is linked to real-world resources after expressing “intent to self-harm,” which may alarm some critics who think the parental controls don’t go far enough.
Although OpenAI does not specify this in the press release, it appears that Munmun De Choudhury, a professor of interactive computing at Georgia Tech, could help evolve ChatGPT to recognize when kids are in danger and notify parents.
De Choudhury studies computational approaches to improve “the role of online technologies in shaping and improving mental health,” OpenAI noted.
In 2023, she conducted a study on the benefits and harms of large language models in digital mental health. The study was funded in part through a grant from the American Foundation for Suicide Prevention and noted that chatbots providing therapy services at that point could only detect “suicide behaviors” about half the time. The task appeared “unpredictable” and “random” to scholars, she reported.
It seems possible that OpenAI hopes the child experts can provide feedback on how ChatGPT is impacting kids’ brains while De Choudhury helps improve efforts to notify parents of troubling chat sessions.
More recently, De Choudhury seemed optimistic about potential AI mental health benefits, telling The New York Times in April that AI therapists can still have value even if companion bots do not provide the same benefits as real relationships.
“Human connection is valuable,” De Choudhury said. “But when people don’t have that, if they’re able to form parasocial connections with a machine, it can be better than not having any connection at all.”
First council meeting focused on AI benefits
Most of the other experts on OpenAI’s council have backgrounds similar to De Choudhury’s, exploring the intersection of mental health and technology. They include Tracy Dennis-Tiwary (a psychology professor and cofounder of Arcade Therapeutics), Sara Johansen (founder of Stanford University’s Digital Mental Health Clinic), David Mohr (director of Northwestern University’s Center for Behavioral Intervention Technologies), and Andrew K. Przybylski (a professor of human behavior and technology).
There’s also Robert K. Ross, a public health expert whom OpenAI previously tapped to serve as a nonprofit commission advisor.
OpenAI confirmed that there has been one meeting so far, which served to introduce the advisors to teams working to upgrade ChatGPT and Sora. Moving forward, the council will hold recurring meetings to explore sensitive topics that may require adding guardrails. Initially, though, OpenAI appears more interested in discussing the potential benefits to mental health that could be achieved if tools were tweaked to be more helpful.
“The council will also help us think about how ChatGPT can have a positive impact on people’s lives and contribute to their well-being,” OpenAI said. “Some of our initial discussions have focused on what constitutes well-being and the ways ChatGPT might empower people as they navigate all aspects of their life.”
Notably, Przybylski co-authored a 2023 study providing data that disputes the notion that Internet access has broadly harmed mental health. He told Mashable that his research provided the “best evidence” so far “on the question of whether Internet access itself is associated with worse emotional and psychological experiences—and may provide a reality check in the ongoing debate on the matter.” He could possibly help OpenAI explore whether the data supports perceptions that AI poses mental health risks, which are currently stoking a chatbot mental health panic in Congress.
Also appearing optimistic about companion bots in particular is Johansen. In a LinkedIn post earlier this year, she recommended that companies like OpenAI apply “insights from the impact of social media on youth mental health to emerging technologies like AI companions,” concluding that “AI has great potential to enhance mental health support, and it raises new challenges around privacy, trust, and quality.”
Other experts on the council have been critical of companion bots. OpenAI noted that Mohr specifically “studies how technology can help prevent and treat depression.”
Historically, Mohr has advocated for more digital tools to support mental health, suggesting in 2017 that apps could help support people who can’t get to the therapist’s office.
More recently, though, Mohr told The Wall Street Journal in 2024 that he had concerns about AI chatbots posing as therapists.
“I don’t think we’re near the point yet where there’s just going to be an AI who acts like a therapist,” Mohr said. “There’s still too many ways it can go off the rails.”
Similarly, although Dennis-Tiwary told Wired last month that she finds the term “AI psychosis” to be “very unhelpful” in most cases that aren’t “clinical,” she has warned that “above all, AI must support the bedrock of human well-being, social connection.”
“While acknowledging that there are potentially fruitful applications of social AI for neurodivergent individuals, the use of this highly unreliable and inaccurate technology among children and other vulnerable populations is of immense ethical concern,” Dennis-Tiwary wrote last year.
The wellness council could help OpenAI turn a corner as ChatGPT and Sora continue to face heavy scrutiny. The company also confirmed that it would continue consulting “the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being.”