Why do you think you're right?

I placed a relatively high probability primarily because of the FDA's explicit signaling in that direction during the Digital Health Advisory Committee meeting held just weeks ago, on November 6, 2025.

In the briefing materials for this meeting, the FDA acknowledged that while no generative AI chatbots are currently cleared, it "can anticipate some soon." This specific language is a strong indicator that applications are already at advanced stages of the review pipeline. The FDA rarely convenes high-profile advisory committees to discuss hypothetical scenarios without a pressing, concrete need to finalize its stance on pending technologies. The meeting focused heavily on "Generative AI-Enabled Digital Mental Health Medical Devices," suggesting that the first authorized device will likely be a mental health chatbot (such as those developed by companies like Limbic or Woebot, which are active in these regulatory dialogues) rather than a radiology tool.

https://www.fda.gov/media/189391/download

https://www.fda.gov/advisory-committees/advisory-committee-calendar/november-6-2025-digital-health-advisory-committee-meeting-announcement-11062025

https://www.youtube.com/watch?v=F_FonISpeMc

https://www.sidley.com/en/insights/newsupdates/2025/11/us-fda-and-cms-actions-on-generative-ai-enabled-mental-health-devices-yield-insights-across-ai

Besides that, the regulatory infrastructure required to authorize such a device is now in place. The FDA finalized its guidance on Predetermined Change Control Plans (PCCPs) in December 2024, resolving a critical bottleneck. PCCPs allow manufacturers to pre-specify how their models will evolve and update without triggering a new submission for every change—a prerequisite for viable LLM-based software. With this established and the agency actively soliciting industry feedback on labeling and safety guardrails for "AI therapists," the bureaucratic difficulties have been significantly reduced.

The 4-month window until March 2026 is tight, but given that these discussions have been ongoing throughout 2025, a Q1 2026 authorization aligns with the typical timeline following a major advisory committee review.

Why might you be wrong?

The November 2025 advisory meeting highlighted severe concerns about "hallucinations" and crisis management (e.g., suicide risk), which could lead the FDA to demand longer, more rigorous real-world efficacy trials than manufacturers anticipate.

https://www.youtube.com/watch?v=F_FonISpeMc

LogicCurve made a comment:

It depends on what the AI LLM chatbot's intended purpose in mental health will be. I.e., will it simply ask the person how they are doing each day, offering friendly, supportive interaction - or is the intention that this AI LLM will be used to diagnose and treat possible and probable mental health cases?

If the latter - there are some problems:

Exploring the Dangers of AI in Mental Health Care

https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

"A new Stanford study reveals that AI therapy chatbots may not only lack effectiveness compared to human therapists but could also contribute to harmful stigma and dangerous responses.

Therapy is a well-tested approach to helping people with mental health challenges, yet research shows that nearly 50 percent of individuals who could benefit from therapeutic services are unable to reach them.

Low-cost and accessible AI therapy chatbots powered by large language models have been touted as one way to meet the need. But new research from Stanford University shows that these tools can introduce biases and failures that could result in dangerous consequences. The paper will be presented at the ACM Conference on Fairness, Accountability, and Transparency this month.

“LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits,” said Nick Haber, an assistant professor at the Stanford Graduate School of Education, affiliate of the Stanford Institute for Human-Centered AI, and senior author on the new study. “But we find significant risks, and I think it’s important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences.  .......  "

It looks like even this promising chatbot approval may take a few years: 

https://arxiv.org/html/2508.20996v1
