Izaya, 2022-02-20 21:04

systemd Socket Activation for Game Server Containers

I play a reasonably wide variety of (mostly sandbox) games; most recently, Starbound. The issue with sandbox games is that they tend to use a lot of memory, and I only have so much to go around. Today I went ahead and sorted out how to start and stop game servers on demand. In theory, this could also be used for any other service that communicates over TCP.

The situation

I have a Starbound server in an LXC container, debian11-starbound, running on my host, asakura. This container is started and stopped via systemd, as lxc@debian11-starbound. The actual Starbound server is likewise started by systemd, via a unit inside the container itself.
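
For reference, the container can be started and stopped by hand through that template unit; the proxy scripts further down issue exactly these commands:

# on asakura
systemctl start lxc@debian11-starbound
systemctl stop lxc@debian11-starbound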

Sockets, services and scripts

On debian11-starbound, I already have a template unit to start the server, as /etc/systemd/system/starbound-server@.service.

[Unit]
Description=StarboundServer
After=network.target
[Service]
WorkingDirectory=/home/sbserver/%i/linux
User=sbserver
Group=sbserver
Type=simple
ExecStart=/home/sbserver/%i/linux/starbound_server
RestartSec=15
Restart=always
KillSignal=SIGINT
[Install]
WantedBy=multi-user.target

In /home/sbserver I have an arbitrary set of server instances, and they can be enabled with, for example, systemctl enable starbound-server@sks1.
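
For instance, wiring up the sks1 instance from inside the container looks something like this (the path just restates what the unit's WorkingDirectory and ExecStart expect):

# inside debian11-starbound
ls /home/sbserver/sks1/linux/starbound_server   # the binary the unit will run
systemctl enable --now starbound-server@sks1    # enable and start this instance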

On asakura, I create /etc/systemd/system/lxc-tcp-proxy@.service, to handle the actual proxying.

[Unit]
Description=LXC TCP proxy for %I

[Service]
; PrivateTmp=false is necessary because I'm using a file in /tmp to count how many copies are running.
PrivateTmp=false
; The EnvironmentFile will be the configuration file, named by port number on the host.
EnvironmentFile=/etc/lxc-tcp-proxy/%i
ExecStartPre=/usr/local/bin/lxcproxy-pre
ExecStart=/bin/bash -c '/lib/systemd/systemd-socket-proxyd --exit-idle-time=60m "$HOST:$PORT"'
ExecStop=/usr/local/bin/lxcproxy-post

There are three main parts to what goes on here. ExecStartPre checks whether the container is running and starts it if it isn't, before ExecStart gets to run; it also increments a counter, stored in /tmp, to keep track of how many different ports are in use for this host. An important note is that during this wait, systemd holds the phone, so to speak, keeping the client's connection open until ExecStart can get to it.

The script, /usr/local/bin/lxcproxy-pre, is reasonably straightforward:

#!/bin/bash

# check how many other ports are already being proxied for this container
mkdir -p /tmp/lxc-tcp-proxy
connections=$(cat "/tmp/lxc-tcp-proxy/$CNAME" 2>/dev/null)
: ${connections:=0} # 0 if the file doesn't exist yet
echo $connections
if [ $connections -lt 1 ]; then
 # no other proxies are running, so bring the container up
 systemctl start "lxc@$CNAME"
 # wait for the service inside the container to start listening
 if waitport $HOST $PORT; then
  echo started
 else
  exit 1
 fi
fi
((connections++))
echo $connections
# record the modified number of connections if the connection goes through
echo $connections > "/tmp/lxc-tcp-proxy/$CNAME"

It does reference another script, waitport, which has been lifted from here and installed at /usr/local/bin/waitport.

#!/bin/bash
# wait for a TCP port to start accepting connections, trying for up to a minute
host=$1
port=$2
tries=600
for i in $(seq $tries); do
  if /bin/nc -z $host $port > /dev/null ; then
    # ready: the port accepted a connection
    exit 0
  fi
  /bin/sleep 0.1
done
# gave up waiting
exit 1

ExecStart uses systemd-socket-proxyd to forward the socket to the container itself, with --exit-idle-time=60m telling it to exit after an hour with no connections. Because we're using environment files for configuration, we run systemd-socket-proxyd via bash (or another shell) so the host and port from the environment can be passed to it as arguments.
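
As an aside, systemd itself expands ${VAR} references from an EnvironmentFile= inside ExecStart= arguments, so a variant without the bash wrapper should also work, though it's not what this setup uses:

ExecStart=/lib/systemd/systemd-socket-proxyd --exit-idle-time=60m ${HOST}:${PORT}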

Once the idle timer runs out, the ExecStop line runs /usr/local/bin/lxcproxy-post, which decrements the counter and, if nothing else is talking to the container, shuts it down. This script is also straightforward:

#!/bin/bash
# decrement the counter in /tmp
connections=$(cat /tmp/lxc-tcp-proxy/$CNAME)
((connections--))
echo $connections > /tmp/lxc-tcp-proxy/$CNAME
echo $connections

if [ $connections -lt 1 ]; then
 # if the counter is 0, stop the server
 echo "kill server"
 systemctl stop "lxc@$CNAME"
fi

Now that we have all of this hooked up, we need to write the environment file, which needs to contain the hostname, server port, and container name for the socket we want to proxy. This goes in /etc/lxc-tcp-proxy/21025.

HOST=starbound.sks.local
PORT=21025
CNAME=debian11-starbound

Last of all, it's tied together with /etc/systemd/system/lxc-tcp-proxy@.socket, which is almost pure boilerplate.

[Socket]
ListenStream=%i
[Install]
WantedBy=sockets.target

Then, as my Starbound server is running on port 21025, I can disable the LXC container service, and start/enable lxc-tcp-proxy@21025.socket.
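
Concretely, that's something along these lines on asakura:

# on asakura
systemctl disable lxc@debian11-starbound
systemctl enable --now lxc-tcp-proxy@21025.socket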

The last thing to do is to forward the ports from the internet to my main server, and let it do its thing.

Limitations and further refinements

  • Using this method, only game servers that communicate with clients over TCP can be auto-started, as systemd-socket-proxyd only handles TCP.
  • It would be nice to have some way to signal which servers to start and stop within a container, and to enable/disable them (another socket unit? A script to attach to a running container and start a given game server? See the sketch after this list.)
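
For that second idea, one possible (untested) sketch is a small host-side helper that uses lxc-attach to poke the unit inside the container; the helper itself is hypothetical and not part of the setup above:

#!/bin/bash
# start a given Starbound instance inside an already-running container
# usage: sb-instance-start <container> <instance>
container=$1
instance=$2
lxc-attach -n "$container" -- systemctl start "starbound-server@$instance"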

Shout out to N33R for suggesting some refinements that made it into this.
