In a couple of prior articles (here and here) we showcased the capabilities of our WireGuard Docker container with some real world examples. At the time, our WireGuard container only supported one active tunnel at a time so the second article resorted to using multiple WireGuard containers running on the same host and using the host's routing tables to do advanced routing between and through them.
In October 2023, our WireGuard container received a major update and started supporting multiple WireGuard tunnels at a time, which made it much more versatile than before. In this article we'll take advantage of this new capability and showcase a setup that involves a single container that acts as both a server and a client that tunnels peers through multiple redundant VPN connections while maintaining access to the LAN.
Many VPN providers have a limit on the number of devices (or tunnels). This setup will allow you to have an unlimited number of devices tunneled through a single VPN connection while also supporting a fail-over backup connection!
DISCLAIMER: This article is not meant to be a step by step guide, but instead a showcase for what can be achieved with our WireGuard image. We do not officially provide support for routing whole or partial traffic through our WireGuard container (aka split tunneling) as it can be very complex and require specific customization to fit your network setup and your VPN provider's. But you can always seek community support on our Discord server's #other-support channel.
Tested on Ubuntu 23.04, Docker 24.0.5, Docker Compose 2.20.2, with Mullvad.
Configure a standard WireGuard server according to the WireGuard documentation.
wireguard:
  image: lscr.io/linuxserver/wireguard:latest
  container_name: wireguard
  cap_add:
    - NET_ADMIN
    - SYS_MODULE
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Etc/UTC
    - SERVERURL=wireguard.domain.com
    - SERVERPORT=51820
    - PEERS=1
    - PEERDNS=auto
    - INTERNAL_SUBNET=10.13.13.0
  volumes:
    - /path/to/appdata/config:/config
    - /lib/modules:/lib/modules
  ports:
    - 51820:51820/udp
  sysctls:
    - net.ipv4.conf.all.src_valid_mark=1
  restart: unless-stopped
Start the container, check that docker logs wireguard contains no errors, and validate that the server is working properly by connecting a client to it.
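For example, a couple of quick checks under the compose setup above might look like this (wg show is the standard WireGuard status command available inside the container):

docker compose up -d
docker logs wireguard            # confirm the container started without errors
docker exec wireguard wg show    # once a client connects, look for a recent handshake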
Copy the two WireGuard configs you get from your VPN provider(s) into files under /config/wg_confs/wg1.conf and /config/wg_confs/wg2.conf.
Make the following changes:
- Add Table = 55111 to distinguish rules for this interface.
- Add PostUp = ip rule add pref 10001 from 10.13.13.0/24 lookup 55111 to forward traffic from the WireGuard server through the tunnel using table 55111 and priority 10001.
- Add PreDown = ip rule del from 10.13.13.0/24 lookup 55111 to remove the previous rule when the interface goes down.
- Add PersistentKeepalive = 25 to keep the tunnel alive.
- Recalculate the AllowedIPs = value using a WireGuard AllowedIPs Calculator: enter 0.0.0.0/0 in the Allowed IPs field and the subnets to exclude in the Disallowed IPs field, for example 192.168.0.0/24, 10.13.13.0/24, and make sure it doesn't include the VPN interface address (...
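As a rough illustration, a wg1.conf after these edits could look something like the sketch below; the keys, addresses, and endpoint are placeholders taken from your provider's config, and the AllowedIPs value is left as a placeholder for the calculator output:

[Interface]
PrivateKey = <private key from your provider's config>
Address = <tunnel address from your provider's config>
Table = 55111
PostUp = ip rule add pref 10001 from 10.13.13.0/24 lookup 55111
PreDown = ip rule del from 10.13.13.0/24 lookup 55111

[Peer]
PublicKey = <provider server public key>
Endpoint = <provider server address>:<port>
PersistentKeepalive = 25
AllowedIPs = <output of the AllowedIPs calculator>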
We maintain a lot of images, which are used by a lot of people, on a lot of platforms, using a lot of tools, and it's not always immediately clear which of those many combinations we support, and will provide support for. This post is an attempt to clarify that situation and provide links to our formal documentation on the matter.
Any exceptions to our support policy will be clearly called out in the readme for the relevant image.
The TL;DR is: if you run up-to-date versions of our currently maintained images using a supported version of Docker, rootfully, on Linux, using docker compose or the docker CLI to create and update your containers, we will support you with any issues you encounter.
Our support policy can be grouped into 4 categories:
With the exception of the last category, it's worth noting that unsupported does not mean it won't work, it just means we won't help you make it work. Additionally, if you do manage to get something in the last category working, it doesn't change anything; it's still unsupported and a bad idea. Requests for help with anything outside of the Formally Supported category should use the #other-support channel on our Discord server.
Our general support philosophy can be summarised as follows:
With that out of the way, our current support policy can always be found at https://linuxserver.io/supportpolicy and we will make announcements via our usual channels if anything substantial changes.
As early as 2019, we started centralizing all our documentation for container images, informational snippets, Frequently Asked Questions, as well as full-blown user guides; this has all lived on GitBook. The reason for going with GitBook at the time was simply its native ability to build off of a git repo, as well as its hosted nature (yes, we want to spend most of our time creating containers, not maintaining infrastructure). We were also considering Read the Docs and Bookstack for this use case. The git integration was a killer feature, as it allowed us to implement it as a step in our pipeline project to automatically push updated documentation with the same base as the readme.
As time went on, the LinuxServer team grew, and with it the organization's skillset. Part of that skillset included experience with various other documentation tools. Since we always want to improve, our documentation has also seen multiple iterations. While doing these updates, certain pain points arose:
The sync from our GitHub repo to GitBook has been disabled for a couple of months, as we have been preparing, improving and testing MkDocs. The freeze has been necessary because we adapted the templates our jenkins-builder generates for MkDocs, and we didn't want the current docs to get formatted weirdly, as the syntaxes differ just enough.
The switch to MkDocs allows us to customize the build-output to our liking, with the knowledge we have within the team. It also resolves all the pain-points listed above.
We would just like to give a shout out to GitBook and say thank you for providing us with an OSS license.
As an organization, we maintain hundreds of Docker images, and with each image having multiple tags and different naming conventions on different registries, things can become confusing. In this article we attempt to untangle that web and clarify how all the images and tags we push relate to each other.
The format is <registry>/<repo>/<image>:<tag> (except for Gitlab, which uses the format <registry>/organization/<repo>/<image>:<tag>). If <registry> is not provided, it defaults to docker.io, and if <tag> is not provided, latest is used, so attempting to pull linuxserver/swag will result in pulling docker.io/linuxserver/swag:latest. When running docker pull, the image manifest is first retrieved.

The lscr.io/linuxserver/swag:latest tag is a dynamic one and it points to a different image every time a new stable build is pushed. Static tags, on the other hand, are pushed to the registry once and never updated; repulling the same static tag at a later time will pull the same image as before. lscr.io/linuxserver/swag:arm64v8-2.6.0-ls224 is a static tag as it contains the specific build number (ls224) and will not get overwritten, as the build number will get incremented in the next build and push.
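To make the naming and tag behaviour concrete, here are a few example pulls using the swag image mentioned above (assuming the Docker CLI):

docker pull linuxserver/swag                              # registry and tag default to docker.io and latest
docker pull docker.io/linuxserver/swag:latest             # identical to the command above
docker pull lscr.io/linuxserver/swag:arm64v8-2.6.0-ls224  # a static tag always resolves to the same build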
We push our images to four public registries. There are subtle differences between these registries in how the repos and images are structured and named.
Docker.io is the default registry. If the user does not define a registry in a command, the docker client automatically adds docker.io/. For instance, pulling linuxserver/swag is the same as pulling docker.io/linuxserver/swag.
In the beginning of time, Linuxserver.io decided to set up multiple organizations on Docker Hub to host images. There were separate orgs for different arches such as armhf and aarch64, and there were separate orgs for baseimages and community images. Over time, the secondary arch images were brought under the same orgs as the amd64 ones through the use of multi-arch manifests and those additional orgs were deprecated. The community org that hosted community provided and maintained images was also deprecated as we realized that the community did not contribute further into support and maintenance of the images,...
It has been two years since Webtop and our accompanying base images were released with the goal of delivering a full Linux Desktop to your Web Browser. If you were not aware, the backend technology enabling this was a combination of xrdp, Guacamole, and an in-house client, Gclient. Guacamole and xrdp are amazing pieces of software, but are held back by forcing a square peg into a round hole. Most remote desktop software is built for a native desktop client and requires a significant amount of overhead to convert it into a format a modern Web browser understands; because of that, fidelity and performance are not great. The folks at noVNC have done a great job of creating an RFB-compliant native VNC client for the browser, but again they are bound by that protocol and can only do so much to optimize for the web.
This led me (TheLamer) down a rabbit hole of trying to find an open source project with the singular goal of delivering Linux to Web Browsers. I am happy to announce, after more than a year of work in the background, that not only have I found it but have joined the KasmVNC team to see this through. This is the fundamental technology driving the new containers that just went live. Some important notes before we get started:
Here is a quick comparison of our previous version vs now: (1080p capture)
On top of a drastic improvement in responsiveness and FPS, there is also better fidelity, with fine-grained control over compression, format, and frame rate to suit your needs.
The real question though is how high can you go?
Lossless: not fake lossless or semi-lossless, but actual true 24-bit RGB leveraging the Quite OK Image Format, decoded client-side with threaded WebAssembly (more info here). Even better, this mode is capable of going over a gigabit at high FPS, so if you have been eyeballing that 10Gb switch you just found your excuse.
When you pair this with the 32-bit float audio and a fullscreen browser window you get that local feel all from the comfort of your browser.
It is difficult to show a demo of what lossless is like so why not try it yourself?
sudo docker run --rm -it --security-opt seccomp=unconfined --shm-size="1gb" -p 3001:3001 lscr.io/linuxserver/webtop:latest bash
Hop into https://yourhost:3001 and swap Settings > Stream Quality > Preset Modes > Lossless. Check Render Native Resolution if you use UI scaling.
As we wrote almost a year ago now, 32-bit Arm has been on life support for a while, and you may have noticed that none of the new images we've released in recent months have offered 32-bit Arm (armhf) versions, and a number of older images have dropped support over the same period. This has been part of our "soft deprecation" of the architecture, as it has become more and more difficult to support with contemporary applications.
Last week, Raspberry Pi OS started defaulting to a 64-bit kernel on boot, if the hardware supports it, which was possibly not the most graceful way to handle things, but here we are. What this means is that, essentially, 32-bit Arm has transitioned from "on life support" to "doomed"; there is obviously still hardware out there that doesn't support 64-bit, but the single biggest pool of users who can move to 64-bit is now having it (sort of) done for them.
A year ago, around 2/3 of our Arm users were still on 32-bit platforms; today it's less than 1/5, and consequently we have taken the difficult decision to formally deprecate 32-bit Arm builds from 2023-07-01. Due to the number of images and how our build pipelines work there's going to be some wiggle room here, but essentially from the 1st of July 2023 we will no longer support any 32-bit platforms.
Old images will continue to work, but will not receive application or OS updates, and we will not provide support for them. Additionally, the latest and arm32v7-latest tags will no longer work for 32-bit Arm; you will need to provide a specific version tag if you wish to pull one of the old images.
If you're currently using our 32-bit Arm images, what are your options?
To check, run uname -m from a terminal session - a response of armv7l or armhf means you're running a 32-bit kernel. Alternatively, getconf LONG_BIT will give you a response of 32 if this is the case.
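In terminal form, those checks look like this (aarch64 and 64 are the outputs you would expect on a 64-bit kernel):

uname -m          # armv7l or armhf indicates a 32-bit kernel; aarch64 indicates 64-bit
getconf LONG_BIT  # prints 32 on a 32-bit system, 64 otherwise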
As before, we know this probably isn't what you want to hear, but unfortunately technology marches forward and 32-bit is doomed. Hopefully by providing as much notice as possible you'll have time to find a solution that works for you.
As you may already know, Docker Inc. is sunsetting Free Team organizations.
The Linuxserver organization is safe, as it is "Sponsored OSS". This is a program Docker introduced after enforcing pull limits; being part of this program means our images do not count against the quota for users' pull limits. The perks this gives a sponsored organization are comparable to the "Team" organization plan.
In our daily operations we use three organizations because we scope images three ways: linuxserver for our production images, lsiodev for non-live branch builds, and lspipepr for PR builds. This means that we need to figure out how to handle lsiodev and lspipepr, as they are currently both Free Team organizations.
Docker's comms on this change have been appalling, but they have at least committed to not freeing up the namespace for any account they delete, which means we don't have to worry about bad actors snapping up popular org names and hosting malicious images on them.
As an end-user, if you are using lscr.io for your images, you won't notice any changes regardless of what happens.
A Docker Hub organization is a unit which allows multiple users access to a shared "namespace"; the namespace refers to the linuxserver part of docker.io/linuxserver/swag:latest. Being able to share this namespace between users is beneficial for multiple reasons; the linuxserver account is an organization primarily to reduce credential sharing, and for the ability to have a "bot" account, which is responsible for the actual pushing of images.
Not having to deal with shared credentials lowers the barrier of entry for onboarding new maintainers, especially if there is no release pipeline set up for the project. There is also quite a lot of trust involved with sharing credentials; you are essentially handing over ownership. A good organization implementation also includes permission management, and Docker Hub has just enough roles to be able to say who can push images.
If you use our base images for your own projects, or fork our downstream images to modify them, you're probably aware that we ask you to change the branding that appears in the container init logs to make it clear that your image is not associated with us. This is for your benefit as much as ours: we aren't well-equipped to provide your users with support, and you don't want them crediting us for your work.
As part of some recent changes, we realised that the current approach doesn't work very well. Most people don't bother, or don't realise they need, to change the branding, and for those who do, it's a bit of a pain. So we've changed things around.
From today, if you build from one of our modernised base images and don't change anything, your init logs will look something like this:
If you want to add your own branding when using our base images or a forked downstream one, just place a file called branding containing the text you want to use into the /etc/s6-overlay/s6-rc.d/init-adduser folder of your image. The branding file will replace the highlighted section of the init:
On start-up, the base image will automatically load the branding into its init, allowing you to inflict whatever ASCII art you like on your users:
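As an illustration only, a downstream Dockerfile could add its branding like this; the base image and tag are placeholders, and branding is a plain text file in your build context containing whatever text or ASCII art you want shown:

FROM ghcr.io/linuxserver/baseimage-alpine:<version>
COPY branding /etc/s6-overlay/s6-rc.d/init-adduser/branding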
The affected bases are:
As well as any derivative base images that use them, such as our nginx and rdesktop bases. We'll be slowly phasing out our older base images over the next few months.
Hopefully this makes it simpler for everyone to manage the branding of your images when using our bases.
A final note: if you're already overriding the adduser init to do custom branding, it'll keep working, but we'd recommend switching to the new approach so that you don't miss out on any future changes to that init step.
WireGuard at this point needs no introduction, as it became quite ubiquitous, especially within the homelab community, due to its ease of use and high performance. We previously showcased several ways to route host and container traffic through our WireGuard Docker container in a prior blog article. In this article, we will showcase a more complex setup utilizing multiple WireGuard containers on a VPS to achieve split tunneling, so we can send outgoing connections through a commercial VPN while maintaining access to the homelab when remote.
The setup showcased in this article was born out of my specific need to a) tunnel my outgoing connections through a VPN provider like Mullvad for privacy, b) have access to my homelab, and c) maintain as fast of a connection as possible, while on the go (ie. on public wifi at a hotel or a coffeeshop). Needs a) and b) would be easily achieved with a WireGuard server and a client running on my home router (OPNsense), which I already have. However, my cable internet provider's anemic upload speeds translate into a low download speed when connected to my home router remotely. To achieve c), I had to rely on a VPS with a faster connection and split the tunneling between Mullvad and my home. Alternatively, I could split the tunnel on each client's WireGuard config, but that is a lot more work, and each client would use up a separate private key, which becomes an issue with commercial VPNs (Mullvad allows up to 5 keys).
In this article, we will set up 3 WireGuard containers, one in server mode and 2 in client modes, on a Contabo VPS server. One client will connect to Mullvad for most outgoing connections, and the other will connect to my OPNsense router at home for access to my homelab.
Before we start, I should add that while having other containers use the WireGuard container's network (ie. network_mode: service:wireguard) seems like the simplest approach, it has some major drawbacks, notably that the tunnel interface name (wg0) is hardcoded. That becomes a deal breaker for this exercise, therefore we will rely on routing tables to direct connections between and through WireGuard containers.
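For clarity, the rejected network_mode approach looks roughly like this in compose (a sketch only; someapp is a hypothetical service, and this is not the setup used in the rest of this article):

wireguard:
  image: lscr.io/linuxserver/wireguard:latest
  cap_add:
    - NET_ADMIN
  # ... remaining options as in a normal WireGuard client setup

someapp:
  image: example/someapp:latest
  # all of this container's traffic exits via the wireguard container's tunnel
  network_mode: service:wireguard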
DISCLAIMER: This article is not meant to be a step by step guide, but instead a showcase for what can be achieved with our WireGuard image. We do not officially provide support for routing whole or partial traffic through our WireGuard container (aka split tunneling) as it can be very complex and require specific customization to fit your network setup and your VPN provider's. But you can always seek community support on our Discord server's #other-support...
One of the questions we get asked pretty regularly is: "How do I customise/modify/otherwise make use of one of your images?". Now, some of this is already covered in our documentation, and reading that always helps, but I thought it might be instructive to run through the details of how our containers actually hang together and what the various options for extending and customising them are. This isn't going to be a hugely technical post, but it does assume a basic level of understanding of both Linux and containers generally, and you might struggle to follow it without that.
There are 3 main schools of thought when it comes to container design: One says that a container should run a single process, and you should have as many containers as you need to run the processes for your application. The second says that a container should run a single application, and you should have as many processes as you need to do so in a single container (excluding databases, KV caches, etc.). The third says that a container should run everything you need for an application; front end, back end, database, cache, kitchen sink.
We subscribe to the second approach, in part because our target audience wants straightforward setups and doesn't want to run 9 separate containers for a password manager, and in part because sometimes it just doesn't make sense to split things out into their own containers just for the sake of ideological purity. In addition, our containers don't really need to be highly scalable because they're mostly used in homelab environments where you're looking at tens of users, not tens of thousands. That doesn't mean you can't scale our containers, but there are some inherent limitations when you move beyond One Container, One Process that need careful planning to work around.
If you're going to be running more than one process, you need a process manager, just like you would on a native host. There are a number of options depending on your needs; everything from full-on systemd if you're completely mad, to options like SysVinit and supervisord, all the way down to our init of choice s6. Specifically, we make use of s6-overlay, which is a bundle of tarballs and init scripts designed to make it easy to run s6 as your process manager in a container. We recently went through a complete overhaul of our init process to take advantage of the new features available in version 3 of s6-overlay, and that's what I'm going to focus on in this post.
The very short version of how our container init works is as follows: