Back in the day it was nice: `apt-get update && apt-get upgrade` and you were done.

But today every tool and service has its own way of being installed and updated:

  • docker:latest
  • docker:v1.2.3
  • custom script
  • git checkout v1.2.3
  • same but with custom migration commands afterwards
  • custom commands change from release to release
  • expects the update to be run as a specific user
  • update nginx config
  • updates its own default config, and the service depends on those config changes
  • expects newer versions of other tools
  • etc.

I self-host around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc., and all of them have dependencies which need to be updated too.

And nowadays you can’t really keep running an older version, especially when it’s internet-facing.

So anyway: what are your strategies for staying sane while keeping all your self-hosted services up to date?

  • totoro@slrpnk.net · 1 hour ago

    Wow, that sounds like a nightmare. Here’s my workflow:

    nix flake update
    nixos-rebuild switch
    

    That gives me an atomic, rollbackable update of every service running on the machine.
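    For anyone unfamiliar with Nix, the setup behind those two commands looks roughly like this (the nixpkgs branch and the Grafana service are just examples, not totoro’s actual config):

```nix
{
  # Example pin; "nix flake update" bumps this revision in flake.lock.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";

  outputs = { self, nixpkgs }: {
    nixosConfigurations.server = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # Every service is declared in the config, e.g.:
        { services.grafana.enable = true; }
      ];
    };
  };
}
```

    `nixos-rebuild switch` then builds and activates the new generation, and `nixos-rebuild switch --rollback` reverts to the previous one if something breaks.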

  • 1984@lemmy.today · 1 hour ago

    I just run Watchtower in Docker. It watches all your other Docker containers and updates them to the latest version automatically if you want.

    It works fine, but with time I stopped thinking I need to be on the latest version all the time. It really isn’t very important.

    Just a few of my services are open on the internet, mainly caddy and wireguard.
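    For reference, Watchtower itself is just one more container in compose. The image name and `--cleanup` flag are Watchtower’s real ones; the daily interval here is only an example:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # Watchtower talks to the Docker daemon through its socket.
      - /var/run/docker.sock:/var/run/docker.sock
    # --interval is in seconds; 86400 = check once a day.
    command: --cleanup --interval 86400
```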

  • conrad82@lemmy.world · 4 hours ago

    I do it manually: update the container version, then docker pull and run.

    I have reduced the number of containers to the ones I actually use, so it is manageable.

    I use v2 instead of v2.1.0 Docker container tags if the provider doesn’t make too many bleeding-edge changes between updates.
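    For example, with an image that publishes major-version tags (nginx does; not every project does):

```yaml
services:
  web:
    # "nginx:1" follows the latest 1.x release on each pull,
    # while a full tag like "nginx:1.27.0" pins one exact version.
    image: nginx:1
```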

  • irmadlad@lemmy.world · 5 hours ago

    I keep it simple, although reading down through the thread, there are some really nice and ingenious ways people accomplish about the same thing, which is totally awesome. I use a Watchtower fork and run it with --run-once --cleanup. I do this when I feel comfortable that all the early adopters have done the beta testing for me. Thanks, early adopters. So, about once a month, I update 70 Docker containers. As far as OS updates, I usually apply those when they deploy. I’m running Ubuntu Jammy, so there aren’t a lot of breaking changes in updates. I don’t have public-facing services, and I am the only user on my network, so I don’t really have to worry too much about that aspect.

  • mlfhA · 8 hours ago

    Everything I run, I deploy and manage with ansible.

    When I’m building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it’s time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.

    It’s more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
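    Sketched out, that pattern looks roughly like this (all names here are placeholders, not mlfhA’s actual roles):

```yaml
# One playbook; every service's upgrade tasks carry the "upgrade" tag.
- name: Upgrade everything
  hosts: all
  tasks:
    - name: Pull the newer container image
      community.docker.docker_image:
        name: ghcr.io/example/app
        source: pull
        force_source: true
      tags: [upgrade]

    - name: Run the service's one-off migration step
      ansible.builtin.command: /opt/app/migrate.sh
      tags: [upgrade]

    - name: Restart the service to load the new version
      ansible.builtin.systemd:
        name: app
        state: restarted
      tags: [upgrade]
```

    `ansible-playbook site.yml --tags upgrade` then runs only the tagged tasks across every host.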

  • SayCyberOnceMore@feddit.uk · 6 hours ago

    I don’t use docker, etc., so for me, if it’s in the normal Arch repos or the AUR, I don’t need to think about it until there’s a .pacnew file to look at.

    Then, it’s just the odd git pull on literally 2 devices.

    All organised by ansible…

    (well except the .pacnew, but I think it’s nice to keep in touch with the packages)

  • pHr34kY@lemmy.world · 9 hours ago

    I wonder if anyone has ever written an update aggregator that would find all the package managers, containers, git repos and whatnot, and just update all of them.

    Some are a right pain to update, such as Nextcloud. Installing a monthly update should not feel like an enterprise prod deployment.

    It’s kinda ironic that package managers have caused the exact problem they were supposed to solve.

    • Jeena@piefed.jeena.net (OP) · 9 hours ago

      I am developing a script which will do that specifically for my services.

      Right now, in its first stage, it only checks GitHub, Codeberg, etc. to see if there is a new version compared to what each service is currently running.

      https://git.jeena.net/jeena/service-update-alerts

      I am extending it now with an auto-update part, but it’s difficult because sometimes I can’t just call a static script; some other migration things need to run too. So I have a classifier which takes the release notes and lets a local LLM judge whether it’s OK to run the automation or whether I need to do it manually. But for that I am collecting old release notes as examples from each service. This takes forever, so I have only done it for PieFed, PeerTube, Immich and open-webui, and I haven’t pushed those changes to the public repo yet.
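      The version-check stage boils down to something like this sketch (the helper names are mine, not the actual script’s; the GitHub “releases/latest” endpoint is real, and Codeberg’s API is similar; real tags aren’t always clean semver, so the real thing needs more robust parsing):

```python
import json
import urllib.request


def parse_version(tag: str) -> tuple[int, ...]:
    """Turn a tag like 'v1.2.3' into a comparable tuple (1, 2, 3)."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))


def is_newer(current: str, latest: str) -> bool:
    """True if the upstream tag is strictly newer than the running one."""
    return parse_version(latest) > parse_version(current)


def latest_github_release(repo: str) -> str:
    """Fetch the latest release tag for an 'owner/repo' from the GitHub API."""
    url = f"https://api.github.com/repos/{repo}/releases/latest"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["tag_name"]


# Usage (network call, placeholder repo):
#   if is_newer("v1.2.3", latest_github_release("immich-app/immich")):
#       print("update available")
```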

    • BlackEco@lemmy.blackeco.com · 7 hours ago

      I guess auto-merge isn’t enabled, since there’s no way to check beforehand whether an update breaks your deployment, am I right?

        • BlackEco@lemmy.blackeco.com · 5 hours ago

          Yes, but usually when you use automerge you should have CI set up to make sure new versions don’t break your software or deployment. How are you supposed to do that in a self-hosting environment?

          • tofu@lemmy.nocturnal.garden · 3 hours ago

            Ideally, you have at least two systems: test updates on the dev system and only then allow them into prod. So no auto-merge in prod in this case, or somehow have it check whether dev worked.

            Seeing which services are usually fine to update without intervention, and tuning your Renovate config accordingly, should be sufficient for a homelab imho.

            Given that most people are running :latest and just YOLO the updates with Watchtower, or don’t automate at all, some granular control with Renovate is already a big improvement.
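            A minimal sketch of that kind of Renovate tuning (the package name is just an example; `packageRules`, `matchUpdateTypes` and `automerge` are real Renovate config fields):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "description": "Services that have historically updated cleanly",
      "matchPackageNames": ["grafana/grafana"],
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    },
    {
      "description": "Hold major bumps for manual review",
      "matchUpdateTypes": ["major"],
      "automerge": false
    }
  ]
}
```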

    • Jeena@piefed.jeena.net (OP) · 7 hours ago

      Because you point to :latest and everything is dockerized and on one machine? How does it know when it’s time to upgrade?

      • Overspark@piefed.social · 7 hours ago

        Yeah only for :latest containers, that’s true. It automatically runs a daily service to check whether there are newer images available. You can turn it off per container if you don’t want it.

        One of the nice things about it is that I have containers running under several different users (for security reasons) so that saves me a lot of effort switching to all these users all the time.

          • Overspark@piefed.social · 2 hours ago

            Depends. There are a few things I update by hand, but as long as you have proper backups it’s generally safer to run the latest versions of things automatically if you don’t mind the possibility of breakage (which is very rare in my experience). This is in the context of self-hosting of course, not a business environment.

          • prenatal_confusion@feddit.org · 2 hours ago (edited)

            Depends on what you want to do. For production with sensitive data, yes it is. For my ytdl and jellyfin? Perfectly fine.

  • iamthetot@piefed.ca · 30 minutes ago (edited)

    cd appname && dockup && cd ..

    Dockup being an alias for docker compose pull && docker compose up -d

    Repeat for the few services I have.
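    The same alias written as a shell function (aliases don’t expand in non-interactive shells, so a function is a bit more script-friendly):

```shell
# Pull the newer image for this compose project, then recreate
# only the containers whose image actually changed.
dockup() {
    docker compose pull && docker compose up -d
}
```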

    • ominous ocelot@leminal.space · 4 hours ago

      I don’t understand. docker compose up starts the container. When does the docker compose pull happen? Or is there an update directive in the compose file?

    • Jeena@piefed.jeena.net (OP) · 9 hours ago

      So everything is dockerized and points to :latest?

      What about the necessary changes to the docker compose files? What about changes necessary in nginx configs?

      I guess you also read each release notes manually?

      • iamthetot@piefed.ca · 31 minutes ago

        Not running anything where I’ve had to alter compose files. Also never had to change nginx configs. Maybe I’m just running particularly stable stuff.

        I usually read update notes yes, but I’d be lying if I said I was always thorough.

  • Sanctus@anarchist.nexus · 10 hours ago

    Damn, I’m lucky I just run small game servers, cause the old way still works for me. Aside from Pi-hole, which needs to be updated, but it squeals at me when it needs it so I don’t have to remember.

  • Alvaro@lemmy.blahaj.zone · 8 hours ago

    Personally, I just wrote a bash script that does all of my regular updates, and I run it manually whenever.

  • Decronym@lemmy.decronym.xyz (bot) · 23 minutes ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters More Letters
    Git Popular version control system, primarily for code
    HTTP Hypertext Transfer Protocol, the Web
    LXC Linux Containers
    nginx Popular HTTP server

    3 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.

    [Thread #233 for this comm, first seen 12th Apr 2026, 05:50] [FAQ] [Full list] [Contact] [Source code]