Watchtower can auto-update your containers or notify you when an update is available. I use it with a Matrix account for notifications.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues in the community? Report them using the report flag.
Questions? DM the mods!
+1 for watchtower. I've been using it for about a year now without any issues to keep anywhere from 5 to 10 Docker containers updated.
Sorry if it's obvious, but I don't see a way to use Matrix for notifications on their documentation and my searching is coming up blank. Do you by chance have a tutorial for this?
Here is how I did it:
docker run -d \
--name watchtower \
-v /var/run/docker.sock:/var/run/docker.sock \
-e WATCHTOWER_NOTIFICATION_URL=matrix://username:password@domain.org/?rooms=!ROOMID:domain.org \
-e WATCHTOWER_NOTIFICATION_TEMPLATE="{{range .}}[WatchTower] ({{.Level}}): {{.Message}}{{println}}{{end}}" \
containrrr/watchtower
Edit: I created a pull request to the WatchTower documentation, here: https://github.com/containrrr/watchtower/pull/1690
Thank you very much! I'll get this set up on mine.
I use DIUN (docker image update notifier). You can watch tags with it and it will notify you when updates are available. I have it email me Saturday morning. I like it a lot more than watchtower.
Huh, that’s actually way better than my current setup of spamming me on Telegram every time there’s an update
This looks great. I was looking at Watchtower again a few days ago, but I don't want to auto update my containers, just get notified for updates. I usually just keep the RSS feed of the project in my feed reader, but diun looks like a proper solution. Thanks!
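For anyone wanting to try the same notify-only setup, here is a minimal Diun compose sketch. The env-var names follow Diun's documented configuration as I understand it; the image paths, SMTP values, and schedule are placeholders to adapt, so double-check against Diun's docs:

```yaml
version: "3"
services:
  diun:
    image: crazymax/diun:latest
    volumes:
      # Diun keeps its state here so it only notifies once per new tag
      - ./data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Cron syntax: check Saturday mornings at 08:00
      - DIUN_WATCH_SCHEDULE=0 8 * * 6
      # Watch containers on the local Docker socket
      - DIUN_PROVIDERS_DOCKER=true
      # Mail settings are placeholders -- point these at your SMTP server
      - DIUN_NOTIF_MAIL_HOST=smtp.example.org
      - DIUN_NOTIF_MAIL_PORT=587
      - DIUN_NOTIF_MAIL_FROM=diun@example.org
      - DIUN_NOTIF_MAIL_TO=me@example.org
    restart: unless-stopped
```

By default Diun only watches containers it can see on the socket, so it pairs well with a "notify me, I'll update manually" workflow.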
Since my "homelab" is just that, a homelab, I'm comfortable with using the :latest tag on all my containers and just running docker-compose pull and docker-compose up -d once per week.
This is mostly my strategy too. Most of the time I don't have any issues, but occasionally I'll jump straight to a version with breaking changes. If I have time to fix it, I go find the patch notes and update my config; otherwise I pin the older version and come back later.
I've recently been moving my containers from docker compose into pure ansible though since I can write roles/playbooks to push config files and cycle containers which previously required multiple actions on docker compose. It's also helped me to turn what used to be notes into actual code instead.
Just put all the commands into a bash file. Start with 'docker tag' to retag the current image as something else in case I need to revert, then pull and compose up. All run weekly by crontab. If something breaks, the latest working image is still there.
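The script described above might look something like this. It's only a sketch of the idea; the image name and rollback tag are examples, not from the comment:

```shell
#!/bin/sh
# Retag the current image before pulling, so a known-good copy survives the update.
set -e

IMAGE="lscr.io/linuxserver/sonarr"   # example image

# Keep the current :latest around under a rollback tag
docker tag "${IMAGE}:latest" "${IMAGE}:previous"

# Pull newer images and recreate only the changed containers
docker compose pull
docker compose up -d

# To revert later:
#   docker tag "${IMAGE}:previous" "${IMAGE}:latest" && docker compose up -d
```

A crontab entry such as `0 4 * * 1 /path/to/update.sh` would run it weekly, as the comment describes.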
The best way I've found is to wait till something breaks. Message around on forums asking why I'm getting errors till someone recommends updating and restarting.
Blindly remove the container. Recreate it.
And hope none of the configs break. ✌️💛
You read breaking changes before you update things, that's how.
Seriously. All this talk of automatically updating versions has my head spinning!
I use watchtower and hope nothing will break. I never read breaking changes.
When an issue happens, I just search the internet or change the tag to a known working version until the issue is resolved.
I can afford to have my server down for a few days. It’s not critical to me.
It kind of depends on what your priorities are. In my experience it's usually much easier to upgrade to the latest version from the previous one than to jump a couple of versions ahead because you didn't have time to do upgrades recently.
When you think about it from the development point of view, the upgrade from the previous version to the latest is the most tested path. The developers of the service probably did exactly this upgrade themselves. Many users probably did the same and reported bugs. When you're upgrading from a version released many months ago to the current stable, you might be the only one with that combination of versions. The devs are also much more likely to consider all the changes that were introduced between the latest versions.
If you encounter an issue while upgrading, how many people will experience the same problem with your specific combination of versions? How likely are you to find the issue on GitHub, compared to the bunch of people who always upgrade to the latest?
Also, when moving between the latest versions there's only a limited set of changes to consider if you encounter issues. If you jumped 30 versions ahead, you might spend quite some time figuring out which version introduced the breaking change.
Also no matter how carefully you look at it, there's always a chance that the upgrade fails and you'll have to rollback. So if you don't mind a little downtime, you can just let the automation do the job and at worst you'll do the rollback from backup.
It's also pretty good litmus test. If service regularly breaks when upgrading to latest without any good reason, perhaps it isn't mature enough yet.
We're obviously talking about a home lab, where time is sometimes limited but some downtime is usually not a problem.
Are they documented separately from other changes?
They're usually clearly documented in support forums by people saying "MY STUFF WON'T BOOT PLESE HALP"
It depends on the project. If the project doesn't make an effort to highlight them I would consider using a different one.
But any decent OSS project will keep a good changelog for its releases that you can read.
I've just been updating my containers every week or so and if something breaks I'll try and fix it. It would definitely be preferable to "fix" in advance, but with enough containers getting updated, checking/reading every change becomes a fair amount of work. Most of the time nothing breaks.
Downvotes are cool but if this is a bad way of doing things just tell me.
What is driving you to need to update so often?
Nothing. Is this too frequent?
Well, there's always the "if it ain't broke don't fix it" mantra. There are a few reasons I tend to update: because there's a feature I want or need, to fix a bug that affects me, or because the software frequently ships breaking changes and keeping up with the change logs is the best way to deal with that. The last one is because if I keep up with it, I don't have to read and fix multiple months of breaking changes at once.
I read the changelogs for the apps, and manually update the containers. Too many apps have breaking changes between releases.
Ideally containers are provided with a major release version tag, so not just :latest but :0.18 for all 0.18.x releases that should in theory not break compatibility.
Then you can set your Podman systemd configuration file (I use Quadlet .container files) to automatically check for new versions and update them.
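As a sketch, a Quadlet .container file along those lines might look like the following. The file name, image, and tag are examples; the AutoUpdate=registry key is what opts the unit into `podman auto-update`:

```ini
# ~/.config/containers/systemd/myapp.container (path and name are examples)
[Container]
# Pin the major/minor release branch rather than :latest
Image=docker.io/library/nginx:1.24
# Let `podman auto-update` pull and restart when the pinned tag moves
AutoUpdate=registry

[Service]
Restart=always

[Install]
WantedBy=default.target
```

With this, auto-update only ever moves you within 1.24.x; jumping to 1.25 stays a deliberate edit to the unit file.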
In theory 🤡
Well, most projects publish their Dockerfiles, so you could take and rebuild them with the tags you want. And all the building can go into a CI/CD pipeline, so you just have to make a new push with the latest versions.
I should make something like that.
this is the way to do it.
and periodically keep tabs on main releases to swap from 0.18 to 0.19
Watchtower auto updates for me.
Sometimes stuff breaks, if it does and I can't fix it, I'll just roll back to a backup for that stack and figure it out from there.
I just use docker compose files. Bundle my arr stack in a single compose file and can docker compose pull to update them all in one swoop.
Just so I understand, you're using your compose file to handle updating images? How does that work? I'm using some hacked together recursive shell function I found to update all my images at once.
There’s plenty of tutorials out there for it. A quick DuckDuckGo search turned up this as one of the first results, but the theory is the same if you wanted to bundle ‘arr containers instead of nginx/whatever. https://www.digitalocean.com/community/tutorials/workflow-multiple-containers-docker-compose
Essentially you create a docker compose file for your services, within which you have as many containers as you want, set up like you would in any other compose file. You run 'docker compose pull' and 'docker compose up -d' to update/install just like you would for an individual container, but it does them all together. It sounds like others in the thread have automated this further with services dedicated to watching for updates and applying them automatically, but I just look for a flag in the app saying there's an update available and pull/up -d whenever it's convenient or I realize there's an update.
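A minimal sketch of such a compose file, assuming the linuxserver.io 'arr images as an example (service names and images are illustrative, not from the comment):

```yaml
# docker-compose.yml -- two services bundled so one pull updates both
version: "3"
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    restart: unless-stopped
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    restart: unless-stopped
```

Then `docker compose pull && docker compose up -d` updates everything in the file in one sweep; `up -d` only recreates containers whose image actually changed.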
Compose is the best. Way more granular control. And makes migration entirely pain free. Just ran into the case for it. Set it and forget it, use the same compose for updates.
I use podman auto-update command.
I'd also like to see what others use
I originally used this too, but in the end I had to write my own Python script that basically does the same thing and is also triggered by systemd. The problem I had was that for some reason podman sometimes thinks there is a new image, but when it pulls it just gets the old image again. This would then trigger restarts of the containers, because auto-update doesn't check whether it actually downloaded anything new. I didn't want those restarts, so I had to write my own script.
Edit: I lock the version manually though, e.g. Nextcloud 27, and check once a month whether I need to bump it. I do this manually in case the upgrade needs intervention.
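The digest-check idea could be sketched in shell along these lines (the original was a Python script; the image and unit names here are made-up examples):

```shell
#!/bin/sh
# Only cycle the container when a pull actually changed the local image,
# avoiding the needless restarts described above.
IMAGE="docker.io/library/nextcloud:27"   # example: manually pinned major version
UNIT="nextcloud.service"                 # example systemd unit managing the container

before=$(podman image inspect --format '{{.Id}}' "$IMAGE")
podman pull "$IMAGE" >/dev/null
after=$(podman image inspect --format '{{.Id}}' "$IMAGE")

# Restart only if the image ID really changed
if [ "$before" != "$after" ]; then
    systemctl --user restart "$UNIT"
fi
```

Run from a systemd timer, this gives the same schedule as `podman auto-update` without restarting containers whose image is unchanged.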
I pin versions and stick to stable releases as I want stability. Everything is behind a VPN so I'm not too worried. I check them and update once a week or so.
Auto update with the :latest tag, and re-pull a specific previous version if there are problems. I've got too many containers to keep up with individual versions.
If you pull 'latest' and then want to roll back, how do you know what version you were in before? Is there a way to see what version/tag actually got pulled when you pull latest?
Last time it happened was with one of the newer Nextcloud updates. It was a bit of trial and error, but I eventually went back to a version that worked and I could fix the underlying issue. There should be a list of version tags either on dockerhub or GitHub that list all versions that have been pushed to live and are available to pull
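One way to take the trial and error out of this is to record what :latest resolved to before updating. A couple of inspect commands, with the image name as an example:

```shell
# Pin down exactly which build the local :latest is -- the digest can later
# be pulled directly, e.g. docker pull nextcloud@sha256:...
docker image inspect --format '{{index .RepoDigests 0}}' nextcloud:latest

# Many (not all) images also record their version as an OCI label
docker image inspect \
  --format '{{index .Config.Labels "org.opencontainers.image.version"}}' \
  nextcloud:latest
```

Logging the digest before each pull gives you a known-good version to roll back to without guessing among the tags on Docker Hub.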
I use something called What's Up Docker to check for docker updates. It integrates nicely with Home Assistant, so I made a card on my server state dashboard that shows which containers have updates available. I'll check every so often and update my docker-compose files.
Kubernetes with ArgoCD declarative config and then Renovate. It automatically makes prs against my config repo for container/chart versions with the change log in the description
+1 for renovate.
A little bit different setup - helmfile in git repository + pipelines in woodpecker-ci.
I combine 3 options:
- Watchtower updates most containers. They almost never break; if one does, it moves to the second option.
- An update script that updates a whole stack via a Portainer webhook. This fixed the only stack that used to give me issues with Watchtower. The other stack I handle this way is Watchtower itself.
- Manual updates, only for Home Assistant. I want to be sure I know about breaking changes, so I update it when I can and read the patch notes.
It works for my roughly 100 containers.
I use a combination of Flux and a Python app that checks everything running on my cluster and keeps a list of what needs attention after upgrades, plus kube-clarity. It's more Kubernetes-related though.
By manually updating the whole thing.
"Gus are you cra--"
Eh, it's a good brain exercise.