Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out the use of an app called MYIO. My knee-jerk reaction was to not be happy about it, but I managed my emotions, took a breath and vowed to give it a chance. After being sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)
Luckily I was able to complete the steps on a PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form requesting that I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.
I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to let me be a patient because I don’t consent to AI, what should I do? What would you do? Agree, even though this is a major line in the sand for me, in order to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement:

As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.
Thank you everyone!
-
A cynical part of me thinks they’ll just have it “locally installed” in the same way that Firefox is locally installed (which doesn’t mean the meaningful part runs locally), and that “no third party has access” because the servers just don’t show stuff from other tenants, even though the server operator could theoretically see it all. It’s not like the medical people necessarily know better if their vendor answered the concerns in this manner.
One way for lay people to find out might be to turn off Wi-Fi, or disconnect the network cable, and see if it still works, in case you’re in a position where the doc might be willing to do such a 30-second experiment (if they haven’t already tried this in the past themselves). That doesn’t mean it doesn’t get uploaded when the internet is reconnected (e.g. for backups), but that is much harder to check, and if the vendor has already made sure the processing is all local then it’s probably okay and not being sold off as training or insurance data.
Kudos for reading the terms of service and raising your concerns with them! So long as some of us keep doing that, the privacy of people who don’t know about this sort of thing is also better-protected. Thank you :)
-
If your options are having a doctor that uses AI or having no doctor at all, some doctor is better than none.
-
I would ask for more information about what AI they are using, where the data is processed (locally or online), where and how the AI-collected data is stored (locally or in the cloud), who can access your data, and whether it could be used for AI training.
-
You’re asking a forum that’s in a strongly anti-AI bubble, so the answer you’re going to get is both obvious and useless. You might as well be asking a bunch of vegans whether you should have steak for dinner. You yourself admit to having a strong knee-jerk reaction.
However, it looks to me like their use of AI is a perfectly reasonable one. It’s just making transcripts and summaries of sessions. I do that all the time with personal logs and meetings, using a couple of local models that run entirely on my computer. So I don’t see the big deal here.
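To give a concrete idea of what I mean by “running entirely on my computer”, here is a minimal sketch of a local transcription step, assuming the faster-whisper Python package (the model weights are downloaded once and cached, after which everything runs offline; the file name is just a placeholder, not anything from the actual MYIO setup):

```python
# Minimal sketch of fully local speech-to-text, assuming the faster-whisper
# package (pip install faster-whisper). The model weights are fetched once
# and cached; after that, transcription needs no network access.
from faster_whisper import WhisperModel

# A small model quantized to int8 runs fine on CPU or a modest GPU,
# and the audio never has to leave the machine.
model = WhisperModel("small", device="cpu", compute_type="int8")

# "session_recording.wav" is a hypothetical local file.
segments, info = model.transcribe("session_recording.wav")
print(f"Detected language: {info.language}")
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```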
If you feel you really need this medical help then maybe don’t rely on the advice of a bunch of people you know are going to instantly react negatively to AI regardless of any other details.
Sounds reasonable, until the transcripts end up totally incorrect because the AI misinterpreted something or hallucinated.
As I said, this is something I do all the time myself. Even with just my piddling little graphics card it works fine; the technology is quite good these days. I’m sure a professional setup being used by a doctor would be of a much higher standard than that.
I get the impression that no level of quality and no amount of human involvement with the results is likely to satisfy you, though, which means your negative view of AI is not particularly useful here.
It is useful because it’s a protection against AI messing with our futures.
To illustrate the point re: transcriptions:
https://pubmed.ncbi.nlm.nih.gov/40326654/
The Unexpected Harms of Artificial Intelligence in Healthcare: Reflections on Four Real-World Cases
Kerstin Denecke et al. Stud Health Technol Inform. 2025.
Results: The incidents discussed include: Whisper’s harmful hallucinations; UNOS’s algorithm delaying transplants for black patients; the WHO’s S.A.R.A.H. chatbot providing inaccurate health information; and Character AI’s chatbot promoting disordered eating among teens.
That article is from May 2025, almost a full year ago. In AI terms that’s the stone age.
One of the major problems with getting information about AI inside an anti-AI bubble is that nobody here is actually using it, so they don’t know what its actual quality and capabilities are like now. As I said, I actually set up a system like this for myself on my own personal computer, and I keep it updated as new models come out, so I’ve seen what the state of the art (or near-state-of-the-art, at any rate) is actually like.
Nothing is perfect, of course. But perfection is not the standard this system is to be compared against. The alternative is the doctor’s handwritten notes and personal memories. Those are almost certainly not as good.
The doctor’s notes will always be better because the doctor is a human who has the sense to ask you to repeat yourself if they didn’t catch what you said.
Agree to disagree. I trust the research more than anecdotes.




