Qwen 3.5 is one of the best open-weight (self-hostable) models right now. It’s not as good as some of the largest proprietary models, like the bigger Claude models.
- 0 Posts
- 4 Comments
Joined 2 years ago
Cake day: June 6th, 2024
WolfLink@sh.itjust.works to Ask Lemmy@lemmy.world • Would you keep seeing a doctor that required you to agree to the use of AI in your treatment to continue being a patient?
24 days ago

> LLMs are inherently bad at data security and there is no way these companies can, in good faith, promise HIPAA compliance
This is simply false. AI sucks but it doesn’t help to lie about it.
EDIT:
Go run a local model on your own computer, and delete the context when you are done. Boom, you’ve just used an LLM in a way that maintains your data security.
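A minimal sketch of that idea in Python: the conversation context lives only in local process memory and is discarded at the end. `run_local_model` is a hypothetical stand-in for a call into a locally hosted LLM, not a real API.

```python
# Sketch of the "local model, ephemeral context" workflow.
# `run_local_model` is a hypothetical placeholder for a call into a
# locally hosted LLM (e.g. via a local inference server); it is NOT
# a real library function.

def run_local_model(context: list[dict]) -> str:
    # Placeholder: a real implementation would pass `context` to a
    # local inference server and return the generated reply.
    return f"(local reply to: {context[-1]['content']})"

def private_session(questions: list[str]) -> list[str]:
    context: list[dict] = []  # lives only in this process's memory
    replies = []
    for q in questions:
        context.append({"role": "user", "content": q})
        reply = run_local_model(context)
        context.append({"role": "assistant", "content": reply})
        replies.append(reply)
    context.clear()  # delete the context when you are done:
    return replies   # nothing was ever sent off-machine

replies = private_session(["Summarize my notes."])
print(len(replies))  # 1
```

Because inference and context storage both happen on your own machine, there is no third party to trust with the data in the first place.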
WolfLink@sh.itjust.works to Privacy@lemmy.ml • I'm tired of LLM bullshitting. So I fixed it.
3 months ago
I’m probably going to give this a try, but I think you should make it clearer, for those who aren’t going to dig through the code, that it’s still LLMs all the way down and can still have issues; it’s just that LLMs are double-checking other LLMs’ work to try to catch those issues. There are still no guarantees, since it’s all LLMs.
Qwen 3.5 can be run via Ollama.
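A quick CLI sketch of what that looks like. The exact model tag is an assumption; check Ollama's model library for the correct name of the Qwen build you want.

```shell
# Running a local model with the Ollama CLI (model tag assumed;
# verify the exact Qwen tag in the Ollama model library).

ollama pull qwen3   # download the model weights to your machine
ollama run qwen3    # start an interactive chat session locally

# Inside the session, /bye exits. The conversation context is held
# by the local server process, not sent to a third-party service.
```

Everything runs on your own hardware, which is what makes the data-security argument above possible in the first place.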