Localhost only works on the machine itself. I have Cosmos installed on my own computer, but I have no experience running it in a virtual machine. That may be a question better suited for the Cosmos Discord server; I can get you an invite if you want one.

Although, if I remember right, a separate backup agent is obsolete now; the main Immich container can do backups itself.

Maybe that's specific to my host software that manages Docker, Cosmos Cloud. There is a pre-built Immich Docker Compose which has an option to add a backup agent.

First of all, like the other commenter said, make sure you have all the right Immich containers running; there should be 4 or 5 depending on whether you have a backup container.
In the web interface (not the app), go to the server settings first; in the machine-learning section, check that you have facial recognition turned on and a model selected, and read through the rest in case you want to change anything else about it.
Then, secondly, go to the job queue and try refreshing face detection and grouping.

It's hard to say exactly what your requirements are in terms of VRAM/RAM from what you described here, but as a general recommendation, whether AMD or Intel, I'd stick with DDR4-generation hardware. DDR5 is extremely expensive, and any non-MoE model that spills into system memory will still be frustratingly slow either way.
For GPUs, the best bang for your buck if you want Nvidia is probably the 3060 12GB; it has 360 GB/s of memory bandwidth, and one or more of those is a very reasonable starting point for local AI.
If you're okay with AMD there are some really unique cards floating around; I recently picked up a V620 off eBay for $350, an ex-datacenter card with 32GB of GDDR6 at 512 GB/s bandwidth. It's a bit of a power hog, but in my early testing it was running Qwen3 Coder 30B at around 100 tokens/sec.
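To put rough numbers on why memory bandwidth is the thing to watch: for each generated token the hardware has to read the active weights once, so tokens/sec is roughly capped at bandwidth divided by the size of the weights it actually reads. A minimal back-of-envelope sketch; the quantization factor, the DDR4 figure, and the active-parameter counts are my assumptions, not measurements:

```python
# Back-of-envelope: generation speed is roughly memory bandwidth divided by
# the bytes of weights read per token. Real speeds come in lower because of
# overhead, so treat these as ceilings rather than predictions.

def est_tokens_per_sec(bandwidth_gb_s: float, active_params_billions: float,
                       bytes_per_param: float = 0.55) -> float:
    """bytes_per_param ~0.55 approximates a 4-bit quant plus overhead."""
    weights_gb = active_params_billions * bytes_per_param
    return bandwidth_gb_s / weights_gb

# Dense 30B spilling entirely into dual-channel DDR4 (~50 GB/s):
print(est_tokens_per_sec(50, 30))    # ~3 tok/s -> "frustratingly slow"

# Same dense 30B held in the V620's 512 GB/s VRAM:
print(est_tokens_per_sec(512, 30))   # ~31 tok/s

# MoE like Qwen3 Coder 30B with ~3B active params on the V620:
print(est_tokens_per_sec(512, 3))    # ~310 tok/s ceiling; ~100 observed

# 3060 12GB (360 GB/s) with a dense 8B model:
print(est_tokens_per_sec(360, 8))    # ~82 tok/s ceiling
```

That's also why the MoE point matters: only the active experts get read per token, so a 30B MoE behaves more like a ~3B dense model bandwidth-wise.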
I run it on an ASUS X570 PRO board, which is the cheapest AM4 board I could find with an optimal PCIe setup: three x16 slots running at 4.0 x8, 4.0 x8, and 3.0 x4. I have successfully tested it with the V620, a 9060 XT, and a 3060 for 60 GB of total VRAM, though the third x16 slot only has single-slot clearance, so I had to borrow a PCIe extender cable to try it. I've found 48 GB of VRAM is plenty for me, so I doubt I'll actually run a third card unless I find a good deal on a single-slot one.
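For what it's worth, the uneven slot speeds matter less than they look for layer-split inference, since only the current token's activations cross the bus between cards, not the weights. Another rough sketch; the hidden size and per-lane throughput figures are assumptions:

```python
# Rough check that PCIe isn't the bottleneck when a model is split
# layer-wise across GPUs: per token, only one hidden-state vector has
# to cross each GPU boundary.

pcie_gb_s = {"4.0 x8": 8 * 1.97, "3.0 x4": 4 * 0.985}  # usable GB/s, one direction

hidden_size = 8192                     # generous assumption for a ~30B-class model
bytes_per_value = 2                    # fp16 activations
activation_bytes = hidden_size * bytes_per_value   # ~16 KB per token per boundary

for slot, gb_s in pcie_gb_s.items():
    limit = gb_s * 1e9 / activation_bytes
    print(f"{slot}: ~{limit:,.0f} tokens/sec before the link saturates")
# Even the 3.0 x4 slot supports a couple hundred thousand tokens/sec of
# activation traffic, far above real generation speed.
```

Model loading at startup and tensor-parallel setups lean harder on the bus, but for the usual layer-split arrangement the slower slot isn't the limiting factor.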
Kinda turned into a ramble, but let me know if you've got questions.