Hallucinatory AI Doctors? Patients Demand Opt-Out Rights from LLMs Scraping Mental Health Records

Post date: April 4, 2026 · Discovered: April 18, 2026 · 3 posts, 105 comments

AI note-taking and summarization tools in mental healthcare facilities are drawing immediate fire. Patients are demanding complete control over consent, particularly when proprietary tools or opaque data-usage models are involved.

The discussion splits sharply. Some fear AI's inherent unreliability, arguing that 'regurgitation machines prone to hallucinations' have no place in medical care. Others worry about the breach of trust involved in sending Protected Health Information (PHI) to third-party services, regardless of stated compliance. Meanwhile, some users argue that opting out amounts to declining necessary care, while others push for demanding a specific privacy policy from the practice rather than accepting its general terms.

The overwhelming sentiment is that patient autonomy trumps technological convenience. The line drawn is clear: refuse consent when the AI processes data outside of clear, local control, or when how the data will be used, especially for training LLMs, remains a mystery.

Key Points

OPPOSE

Proprietary AI in mental health is unacceptable.

Patients reject AI systems they cannot scrutinize, especially those that process data off-site or are 'prone to hallucinations' (slazer2au).

OPPOSE

Hallucinations risk misdiagnosis and misrepresentation.

Concerns exist that AI summaries can omit key facts or generate dangerously incorrect details (leadore).

OPPOSE

Sending PHI to third parties erodes patient trust.

The act of sending records to outside AI services is seen as an unnecessary breach of trust (BlindFrog, VampirePenguin).

OPPOSE

Patients must question the inevitability of AI adoption.

Individuals argue against accepting the premise that AI integration is a non-negotiable standard of care (sem).

MIXED

Opting out may jeopardize necessary medical care.

A counterargument suggests that refusing AI tools might lead to a decline in available services or care quality (CultLeader4Hire, Crankeley).

Source Discussions (3)

This report was synthesized from the following Lemmy discussions, ranked by community score.

204 points · Would you keep seeing a doctor that required to you agree to the use of AI in your treatment to continue being a patient? · [email protected] · 159 comments · 4/4/2026 · by Washedupcynic

20 points · Chatbots are now prescribing psychiatric drugs · [email protected] · 7 comments · 4/3/2026 · by lemmydev2 · theverge.com

6 points · [article] The Chatbot Delusions · [email protected] · 0 comments · 11/24/2025 · by kirk781 · bloomberg.com