Lemmy, I really would like to hear your opinions on this. I am bipolar. After almost a decade of being misdiagnosed and on medication that made my manic symptoms worse, I found stable employment with good insurance and have been able to find a good psychiatrist. I’ve been consistently medicated for the past 3 years, and this is the most stable I have been in my entire life.
The office has rolled out an app called MYIO. My knee-jerk reaction was to be unhappy about it, but I managed my emotions, took a breath, and vowed to give it a chance. After I was sent the link to validate my account, the app would force-restart my phone at the last step of activation. (I have my phone locked down pretty tight, with lots of Google shit and data sharing disabled, so I’m thinking that might be the cause. My phone is also like 4-5 years old, so that could also be the cause.)
Luckily I was able to complete the steps on PC and activate that way. Once I was in the account there were standard forms to sign, like the HIPAA release. There was also a form there requesting I consent to the use of AI. Hell to the NO. That’s a no for me dawg.jpg.
I’m really emotional and not thinking rationally. I am hoping for the opinions of cooler heads.
If my doctor refuses to keep me as a patient unless I consent to AI, what should I do? What would you do? Refuse, since this is a major line in the sand for me, or consent to keep a provider I have a rapport with, who knows me well enough to know when my meds need adjusting?
EDIT: This is the text of the AI agreement. As part of their ongoing commitment to provide the best possible service, your provider has opted to use an artificial intelligence note-taking tool that assists in generating clinical documentation based on your sessions. This allows for more time and focus to be spent on our interactions instead of taking time to jot down notes or trying to remember all the important details. A temporary recording and transcript or summary of the conversation may be created and used to generate the clinical note for that session. Your provider then reviews the content of that note to ensure its accuracy and completeness. After the note has been created, the recording and transcript are automatically deleted.
This artificial intelligence tool prioritizes the privacy and confidentiality of your personal health information. Your session information is strictly used for the purpose of your ongoing medical care. Your information is subject to strict data privacy regulations and is always secured and encrypted. Stringent business associate agreements ensure data privacy and HIPAA compliance.
Edit 2: I just wanted to say that I appreciate everyone here who commented. For the most part everyone brought up valid points and helped me see things I had not considered. I emailed my doctor and let them know I did not want to agree to the use of AI. I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.
Thank you everyone!
I let them know that I was cool with transcription software being used as long as it was installed locally on their machines, but I did not want a third-party online app having access to recorded sessions for the purposes of transcription. They didn’t take issue with it.
A cynical part of me thinks they’ll just have it “locally installed” in the same way that Firefox is locally installed (which doesn’t mean the meaningful part runs locally), and that “no third party has access” just because the servers don’t show data from other tenants, even though the server operator could theoretically see it all. It’s not like the medical people necessarily know better if their vendor answered the concerns in this manner.
One way for laypeople to find out might be to turn off Wi-Fi, or disconnect the network cable, and see if it still works — in case you’re in a position where the doc might be willing to do such a 30-second experiment (if they haven’t already tried it themselves). That doesn’t mean nothing gets uploaded once the internet is reconnected (e.g. for backups), which is much harder to check. But if the vendor has really made sure the processing is all local, then it’s probably okay and not being sold off as training or insurance data.
Kudos for reading the terms of service and raising your concerns with them! So long as some of us keep doing that, the privacy of people who don’t know about this sort of thing is also better-protected. Thank you :)
-
If your options are having a doctor that uses AI or having no doctor at all, some doctor is better than none.
-
I would ask for more information about what AI they are using, where the data is processed (locally or online), where and how the AI-collected data is stored (locally or in the cloud), who can access your data, and whether it could be used for AI training.
-
Can you ask how AI is used in the app?
I can, but in truth I don’t care. I don’t want my data being used to train AI, and I don’t want my treatment to be guided by AI.
The “fine print” you added doesn’t say the automated transcript will be used for training a model. I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM. If they did, the clinic would go bankrupt from lawsuits.
Also, if they only use AI for automated transcription, would you feel the same if, instead of “AI”, it were called a dedicated automated transcription tool?
If you abhor all things AI, your feelings of not continuing with this clinic are valid. However, I don’t think they are using AI in ways you think they are.
If they did, the clinic would go bankrupt from lawsuits.
for that, patients would need to be able to prove that their data was used. how would you be able to prove it?
I’d highly, highly doubt HIPAA-protected clinic notes would be used for training an LLM

AI and the people pushing it are not trustworthy. They do not have your data security nor your wellbeing at heart, even if your doctor does. LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance. Likely, the AI use will be on the part of the insurance company to find ways of denying your claims.
LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
This is simply false. AI sucks but it doesn’t help to lie about it.
EDIT:
Go run a local model on your own computer and delete the context when you are done. Boom, you just used an LLM in a way that maintains your data security.
I would nope the fuck out and change doctors. A regurgitation machine prone to hallucinations has no place in medical care.
If this was for a GP, I would agree with this stance. But a good, fitting and competent mental health professional can be harder to find.
I don’t believe that. They just don’t want to pay them what they’re worth. Machines don’t ask for days off or health insurance, that’s their rationale. I hope they go out of business.
By god they’re going to make OP change doctors just because they hate “le stochastic parrot”. And op is probably in the US which makes the whole thing even crueller.
Literally a horde of teenagers playing with a bipolar’s head because they have big feelings about stuff.
And all this for a fucking note taking app Jesus Christ. Yeah sure OP is probably risking their mental health in the process but who gives a shit about that when you have an occasion to proclaim that le AI bad.
you seem to have no clue about the problem at hand. It’s the lesser of the issues that the AI transcriber could hallucinate. The worse problem, which is irreversible, is that the treatment session and every private detail discussed in it gets funneled to at-best-questionable companies who will do whatever they want with your private information. Once that has happened, you can’t just make them delete what they stored in the process; it is completely unverifiable what they do besides offering the original service. Everything that was said in the session will not stay between the two of you.
accepting this unknowingly is very dangerous. accepting it knowingly will alter what you say, and the results with it, like going to a therapist whom you know personally, which is not allowed for very good reasons.

You think therapists and doctors in general don’t use Docs or Notes services that are hosted or backed up in the cloud? You think having your medical data leaked to tech companies is new? Just because the note-transcription app is AI doesn’t make it magically worse. In fact it makes the data harder to access, since you need to re-infer the whole enchilada if you want to mine it (as opposed to, say, Google Drive, which can just run a SQL query on your data and get it structured and ready to use).
It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics. It’s really cool for you that you’re in this position of privilege. It’s not cool to push someone with a clinical condition in a way that will probably leave them worse off, in a country with absolutely no mental health safety net. Just like antivax rhetoric, it’s coated in fake concern, but you’re playing a dangerous game with someone else’s life, and you’re fine with it because you’re insulated from the consequences.
You guys really are a pure product of those amoral hyper-individualistic times.
It’s nice that mental health is so inconsequential to you that you can balance it against privacy purity politics.
oh now I’m a privacy purist! oh god what have I become! I want totally unreasonable things!!
or, it seems you don’t care about privacy at all by default, because surely who needs it, and you’ve also already forgotten the case of women in the USA whose online period-tracker apps outed them for having illegal abortions.
Just like antivax rhetoric, it’s coated in fake concern,
fake concern, sure… my concerns are very real, and OP came here for advice, asking among other things what the consequences could be. well, this is one of the consequences there will be.
You guys really are a pure product of those amoral hyper-individualistic times.
yes, blame me, not the system that made this situation. don’t you want to call the cops on me?
you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?
the question of ‘does this thing have ai in it’ is already a fucking blur as businesses link to each other via private and public APIs… healthcare is no different.
these things are already in place in many places. if youre a part of any nation wide health services, youre already impacted.
its like the fact that a huge % of our GDP is tied to like 10 companies… you cannot live your life in the modern united states without suffering products or services from those 10 companies, full stop. your life with ai will look the same.
can you work hard to avoid this shit and cry about it? yep. yep you can… but that’s about it.
you do know at some point the whole ‘hallucinations’ line is going to be as fresh as calling things ‘woke’, right?
The truth doesn’t care whether it’s “fresh” or not.
As long as AI still hallucinates, it will be useful for entertainment purposes only and never for anything as serious as healthcare.
your life with ai will look the same.
lol, tell that to every other business fad that has come and gone.
The AI bubble will pop, the economy will crash, and in the long run, that will be a good thing.
Dude must be some MBA crypto bro AI slop jock. His grammar isn’t good enough to be one of those idiot CEOs who just learned what artificial intelligence is. Maybe he’s a shareholder for one of those soul-less companies. Probably not that either though. Perhaps he’s just a terrible artist or programmer who uses AI slop for all of his works of shart. The possibilities really are endless these days.
im an ex corp drone whose value was replacing humans with automation.
it sucks, it already exists, it will happen more. llms are already in these pipelines and theres nothing any of us can do to avoid it.
im not saying its good. im not saying it should be. im saying, it exists right now cuz ive been a part of it.
…your value is replacing humans with machines?
Explain to me the value of that.
maybe youre new here but big business likes it when they save money. value.
I exited purposefully
Oh okay, so your only value is the pursuit of material bullshit and not the well being of human beings. Good luck getting AI to pay for your shitty wares when nobody makes money to afford them. 🤭
I have no idea what it’s like to be you, and I’m glad I don’t. Enjoy your cold empty heart! 🙂
haha, k. its clear you dont, but thats ok.
Ummm, hallucinations are literally how LLMs work. Everything they generate is confabulation, though sometimes it’s useful confabulation.
It’s almost like the very businesses that creamed their pants about being able to replace workers and endless “blue ocean” profits exaggerated, lied, and forced AI into every. single. product. That’s not consumers’ fault…
i cant understand why people are oblivious to the multi-faceted war-front that is AI.
theres the shit you hear about and see every day (oh look copilot shit the bed! claude cant add! teehehee look at all the extra fingers!) and then theres the shit that is actually being implemented in process models all over the place in nearly every department. from inventory to healthcare analysis to customer service, this shit is in daily use now … and you cannot avoid it.
ai is just an api call away and software companies suck.
Given how captured our data is by the lack of regulation, even in the medical space in the US, I simply do not want my personal data to be used in anything but in-house signal-to-noise improvement for diagnosis.
Anything else, which is most of it, is unacceptable and I do not consent.
You’re probably not suffering mental health crises desperate for bare minimum psychiatric care. It is an absolute jungle here and it can literally take years to find the right person and they are almost never on insurance.
Privacy and your rights to it and your own autonomy/med care are important.
However, some may have to weigh the safety of themselves and those around them to determine whether they should be standing on principle and refusing care.
You’re asking a forum that’s in a strongly anti-AI bubble, so the answer you’re going to get is both obvious and useless. You might as well be asking a bunch of Vegans whether you should have steak for dinner. You admit yourself to having a strong knee-jerk reaction.
However, it looks to me like their use of AI is a perfectly reasonable one. It’s just making transcripts and summaries of sessions. I do that all the time with personal logs and meetings, I’ve got a couple of local models that run entirely on my computer. So I don’t see the big deal here.
If you feel you really need this medical help then maybe don’t rely on the advice of a bunch of people you know are going to instantly react negatively to AI regardless of any other details.
Sounds reasonable, until the transcripts end up totally incorrect because the AI misinterpreted something or hallucinated.
As I said, this is something I do all the time myself. Even with just my piddling little graphics card it works fine, the technology is quite good these days. I’m sure a professional setup being used by a doctor would be a much higher standard than that.
I get the impression no level of quality and no kind of human involvement with the results will likely satisfy you, though. Which means your negative view of AI is not particularly useful here.
It is useful because it’s a protection against AI messing with our futures.
To illustrate the point re: transcriptions:
https://pubmed.ncbi.nlm.nih.gov/40326654/
The Unexpected Harms of Artificial Intelligence in Healthcare: Reflections on Four Real-World Cases
Kerstin Denecke et al. Stud Health Technol Inform. 2025.
Results: The incidents discussed include: Whisper’s harmful hallucinations; UNOS’s algorithm delaying transplants for black patients; the WHO’s S.A.R.A.H. chatbot providing inaccurate health information; and Character AI’s chatbot promoting disordered eating among teens.
That article is from May 2025, almost a full year ago. In AI terms that’s the stone age.
One of the major problems of getting information about AI inside an anti-AI bubble is that nobody here is actually using it, so they don’t know what its actual quality and capabilities are like now. As I said, I actually set up a system like this for myself on my own personal computer and I keep it updated as new models come out, so I’ve seen what the state of the art (or near-state-of-the-art at any rate) is actually like.
Nothing is perfect, of course. But perfection is not the standard this system is to be compared against. The alternative is the doctor’s handwritten notes and personal memories. Those are almost certainly not as good.
The doctor’s notes will always be better because the doctor is a human who has the sense to ask you to repeat yourself if they didn’t catch what you said.
Agree to disagree. I trust the research more than anecdotes.
id request further information. is the LLM used for secondary analysis? or is it primary, with the doctor evaluating the results manually?
‘ai’ is just a tool in the right hands that can be beneficial. that said, it absolutely can be used by lazy assholes to pretend to do their job…
so, if you have the resources to demand ‘no ai’… go for it, but im too poor to demand much, and id be more focused on the use of the ai.