This is an application for managing Docker-in-Docker test environments. Specifically, it manages the creation and deletion of per-branch Docker container hosts. It also acts as a proxy, routing requests to the correct container based on the subdomain of the request. If no request comes in for a certain amount of time, the container is automatically suspended to save resources; once a request arrives again, it is resumed.
It is designed to work with the container-manager-dind image, although custom images can be used as well.
Since I did not feel like reinventing the wheel when it comes to writing a frontend for viewing container host logs, I instead recommend the excellent Docker OSS-sponsored Dozzle for this purpose. It works very well and does just what I was going to implement anyway, in a much better way.
- Create and delete Docker container hosts (DinD) based on a specific branch
- Automatically suspend and resume containers based on request activity (either stop or pause)
- Clone a specific repository when a container host is created and run a build script that sets up a Compose file for the test environment
- Proxy requests to the correct container based on the subdomain of the request
- Docker, alternatively Podman
- A host allowing Docker-in-Docker containers (specifically: allowing `--privileged`); it is therefore best to use a dedicated host for this application
- Docker Compose (works with Podman, too)
```yaml
name: container-manager
services:
  container-manager:
    container_name: container-manager
    image: dr460nf1r3/container-manager:main
    ports:
      - '80:3000'
    volumes:
      - '/var/lib/container-manager:/var/lib/container-manager:rw'
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    environment:
      CONFIG_CONTAINER_PREFIX: container-host
      CONFIG_CUSTOM_BUILD_SCRIPT: ci/build.sh
      CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL: false
      CONFIG_DATA_DIR_HOST: /var/lib/container-manager/data
      CONFIG_DIR_CONTAINER: /config
      CONFIG_DIR_HOST: /var/lib/container-manager/config
      CONFIG_HOSTNAME: localhost.local
      CONFIG_IDLE_TIMEOUT: 60000
      CONFIG_LOGLEVEL: debug
      CONFIG_MASTER_IMAGE: dr460nf1r3/container-manager-dind
      CONFIG_MASTER_IMAGE_TAG: main
      CONFIG_REPO_URL: https://github.com/dr460nf1r3/dind-poc.git
      CONFIG_SUSPEND_MODE: stop
    networks:
      - container-manager
    restart: always
    logging:
      driver: 'local'
      options:
        max-size: '10m'
        max-file: '5'
networks:
  container-manager:
    external: true
    name: container-manager
```
Alternatively, the corresponding `docker run` command would be as follows:
```bash
$ docker run --net container-manager --name container-manager -p 80:3000 \
    -v /var/lib/container-manager:/var/lib/container-manager:rw \
    -v /var/run/docker.sock:/var/run/docker.sock:rw \
    -e CONFIG_CONTAINER_PREFIX=container-host \
    -e CONFIG_CUSTOM_BUILD_SCRIPT=ci/build.sh \
    -e CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL=false \
    -e CONFIG_DATA_DIR_HOST=/var/lib/container-manager/data \
    -e CONFIG_DIR_CONTAINER=/config \
    -e CONFIG_DIR_HOST=/var/lib/container-manager/config \
    -e CONFIG_HOSTNAME=localhost.local \
    -e CONFIG_IDLE_TIMEOUT=60000 \
    -e CONFIG_LOGLEVEL=debug \
    -e CONFIG_MASTER_IMAGE=dr460nf1r3/container-manager-dind \
    -e CONFIG_MASTER_IMAGE_TAG=main \
    -e CONFIG_REPO_URL=https://github.com/dr460nf1r3/dind-poc.git \
    -e CONFIG_SUSPEND_MODE=stop \
    --restart always \
    --log-driver local --log-opt max-size=10m --log-opt max-file=5 \
    dr460nf1r3/container-manager:main
```
Before starting the Compose stack, make sure to create the Docker network manually:

```bash
$ docker network create container-manager
```

This is important to allow the container manager to communicate with the container hosts.
We prefer manual creation because creating the network in the compose file can lead to issues, such as mismatched network IDs between stopped containers when running `docker compose down` and `docker compose up` again.
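If you script the setup, the creation can be made idempotent with plain Docker CLI commands, for example:

```bash
# Create the network only if it does not exist yet
$ docker network inspect container-manager >/dev/null 2>&1 \
    || docker network create container-manager
```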
After setting up the application via a compose file, you can create a container host by sending either a POST or GET request to the `/run` route.

- The request must contain the branch name as a query parameter, e.g. `http://localhost/run?branch=main`.
- For supplying secrets, a POST request can be used with a JSON body containing the branch name and secrets.
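For illustration, requests against this route could look as follows; the field names in the POST body mirror the `RunContainerDto` documented below, and the credential values are placeholders:

```bash
# Create (or resume) the container host for the "main" branch via GET
$ curl "http://localhost/run?branch=main"

# The same via POST, passing clone credentials as secrets in the JSON body
$ curl -X POST http://localhost/run \
    -H 'Content-Type: application/json' \
    -d '{"branch": "main", "authUser": "ci-bot", "authToken": "<token>"}'
```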
- `CONFIG_ADMIN_SECRET`: Secret used to authenticate management requests, optional
- `CONFIG_CONTAINER_PREFIX`: Prefix for container host names, prepended to the branch name
- `CONFIG_CUSTOM_BUILD_SCRIPT`: Path to a custom build script that is executed after the repository is cloned; relative to the host when `CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL` is set to `true`, otherwise relative to the cloned repository root (a sketch of such a script follows this list)
- `CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL`: If set to `true`, the custom build script is copied from the host into the container and executed there
- `CONFIG_DATA_DIR_HOST`: Directory on the host where the data is stored (must exist on the host)
- `CONFIG_DIR_CONTAINER`: Directory in the container hosts where the config files are stored
- `CONFIG_DIR_HOST`: Directory on the host where the per-branch directories are stored (must exist on the host)
- `CONFIG_HOSTNAME`: Hostname of the container host, defaults to `localhost.local`
- `CONFIG_IDLE_TIMEOUT`: Time in milliseconds after which a container is suspended if no requests are received, defaults to 10 minutes
- `CONFIG_LOGLEVEL`: Log level of the application (one of "verbose", "debug", "log", "warn", "error", "fatal"), defaults to "log"
- `CONFIG_LOGVIEWER`: Whether to deploy the Dozzle log viewer with sensible defaults, either `true` or `false`, defaults to `true`
- `CONFIG_LOGVIEWER_CONTAINER_NAME`: Container name of the Dozzle log viewer, defaults to `container-logviewer`
- `CONFIG_LOGVIEWER_IMAGE`: Image used to create the Dozzle log viewer, defaults to `amir20/dozzle`
- `CONFIG_LOGVIEWER_PORT`: Port on which the Dozzle log viewer is exposed, defaults to `8080`
- `CONFIG_LOGVIEWER_TAG`: Tag of the image used to create the Dozzle log viewer, defaults to `latest`
- `CONFIG_MASTER_IMAGE_TAG`: Tag of the image used to create the container hosts, defaults to `main`
- `CONFIG_MASTER_IMAGE`: Image used to create the container hosts, defaults to `dr460nf1r3/container-manager-dind`
- `CONFIG_REPO_URL`: URL of the repository that is cloned when a container host is created
- `CONFIG_SUSPEND_MODE`: Mode in which the container is suspended, either `stop` or `pause`, defaults to `stop`
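Since the exact contract of the build script depends on the container-manager-dind image, the following is only a minimal sketch of what `ci/build.sh` might look like; the generated file name and the `docker compose up` call are assumptions, not the project's documented behavior.

```bash
#!/usr/bin/env bash
# Hypothetical ci/build.sh, executed inside the per-branch container host
# after the repository has been cloned. File names and the compose
# invocation below are assumptions for illustration only.
set -euo pipefail

# Generate a Compose file describing the test environment for this branch.
cat > compose.yaml <<'EOF'
services:
  app:
    build: .
    ports:
      - '8080:8080'
EOF

# Bring the test environment up inside the container host (DinD).
docker compose up -d --build
```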
Even though this can easily be done via the Docker CLI, using Dozzle is a more convenient way to view the logs. It also doesn't require access to the host machine, making it a good fit for developers testing their deployments. By default, this application deploys a Dozzle container on startup that can be used to view the logs of the container hosts. Its scope is limited to the container hosts and the application itself, so no other containers are visible. Also, the Docker socket is mounted read-only, so no actions can be taken via the Dozzle interface.
After the application has started, the Dozzle interface can be accessed at http://localhost:8080.
A few things can be configured via environment variables (see above). If further customization is needed, please consult the Dozzle documentation, disable the automatic deployment of the container, and add a custom instance to the compose file directly.
The admin routes can only be called by adding the `x-admin-request` header to the request.
This is specifically required to prevent accidental calls to these routes while proxying requests.
To protect the admin routes, you can set the `CONFIG_ADMIN_SECRET` environment variable.
If set, the secret must be sent in the `x-admin-token` header of the request.
If the secret is not set, the routes are available without authentication.
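For example, a status query as an admin request could look like this (the expected value of the `x-admin-request` header is not documented here, so `true` is an assumption):

```bash
$ curl -H "x-admin-request: true" \
       -H "x-admin-token: $CONFIG_ADMIN_SECRET" \
       http://localhost/status
```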
The Swagger API documentation can be found at `/api` when the application is running (e.g. http://localhost/api).
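Assuming a standard NestJS Swagger setup (an assumption on my part), the raw OpenAPI document is typically also served next to the UI:

```bash
# Fetch the generated OpenAPI JSON, if the default Swagger JSON route is enabled
$ curl http://localhost/api-json
```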
Current routes (might not be up to date):

| Method | Path | Description |
|---|---|---|
| GET | /container | Create a deployment for a branch (parameters via query string) |
| POST | /container | Create a deployment for a branch (parameters via JSON body) |
| DELETE | /container | Delete the deployment of a branch |
| GET | /health | Health check of the application |
| GET | /status | Status of the application |
| GET | /* | Proxy the request to the matching container host |
| POST | /* | Proxy the request to the matching container host |
| PUT | /* | Proxy the request to the matching container host |
| DELETE | /* | Proxy the request to the matching container host |
| PATCH | /* | Proxy the request to the matching container host |
| OPTIONS | /* | Proxy the request to the matching container host |
| HEAD | /* | Proxy the request to the matching container host |
| SEARCH | /* | Proxy the request to the matching container host |
| Name | Path | Description |
|---|---|---|
| RunContainerDto | #/components/schemas/RunContainerDto | |
`GET /container`

Query parameters:

```ts
branch: string
checkout?: string
authToken?: string
authUser?: string
keepActive?: boolean
```

Headers:

```ts
X-Admin-Token?: string
X-Admin-Request: string
```

Responses:

- `201` The deployment has been created successfully.
- `400` The given parameters were not what we expect.
- `500` An error occurred while creating the deployment.
`POST /container`

Headers:

```ts
X-Admin-Token?: string
X-Admin-Request: string
```

Request body (`application/json`):

```ts
{
  // The branch this deployment corresponds to
  branch: string
  // The tag or commit hash to check out, if any
  checkout?: string
  // The API token to use while cloning the repository, if required
  authToken?: string
  // The username to authenticate with, if required
  authUser?: string
  // Whether the deployment should be kept active at all times, e.g. for cronjob tests - off by default
  keepActive?: boolean
}
```

Responses:

- `201` The deployment has been created successfully.
- `400` The given parameters were not what we expect.
- `500` An error occurred while creating the deployment.
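A hedged example request against this route; the header value and the branch/tag names are placeholders:

```bash
$ curl -X POST http://localhost/container \
    -H "x-admin-request: true" \
    -H "Content-Type: application/json" \
    -d '{"branch": "feature-xyz", "checkout": "v1.2.3", "keepActive": true}'
```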
`DELETE /container`

Parameters:

```ts
branch: string
```

Headers:

```ts
X-Admin-Token?: string
X-Admin-Request: string
```

Responses:

- `201` The deployment has been deleted successfully.
- `404` No deployment found with that name.
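A hedged example, assuming `branch` is passed as a query parameter here as well:

```bash
$ curl -X DELETE "http://localhost/container?branch=feature-xyz" \
    -H "x-admin-request: true" \
    -H "x-admin-token: $CONFIG_ADMIN_SECRET"
```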
`GET /health`

Headers:

```ts
X-Admin-Request: string
```

Responses:

- `200` The health of the application is okay.
- `503` The app is unhealthy.
`GET /status`

Headers:

```ts
X-Admin-Token?: string
X-Admin-Request: string
```

Responses:

- `200` The status of the application.
- `500` An error occurred while retrieving the status of the application.
`/*` (GET, POST, PUT, DELETE, PATCH, OPTIONS, HEAD, SEARCH)

All remaining routes share the same behavior and responses for every method:

- `404` No container found with that name.
- `405` The method is not allowed for this route, if an X-Admin-Request header is sent.
- `default` Proxy the request to the deployed container host.
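To reach a proxied test environment without DNS entries for the subdomain, a Host header can be set explicitly; the assumption that the branch name is used as the subdomain of `CONFIG_HOSTNAME` is mine, not taken from the docs:

```bash
$ curl -H "Host: main.localhost.local" http://localhost/
```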
RunContainerDto (`#/components/schemas/RunContainerDto`):

```ts
{
  // The branch this deployment corresponds to
  branch: string
  // The tag or commit hash to check out, if any
  checkout?: string
  // The API token to use while cloning the repository, if required
  authToken?: string
  // The username to authenticate with, if required
  authUser?: string
  // Whether the deployment should be kept active at all times, e.g. for cronjob tests - off by default
  keepActive?: boolean
}
```
- [x] Add support for managing container hosts via a web interface (implemented in Version 2.1.0 via the Dozzle container)
- [x] Add support for streaming logs from the container hosts via a web interface (implemented in Version 2.1.0 via the Dozzle container)
- [ ] You tell me!
Pull requests for new features and bugfixes are always welcome!
```bash
$ nix develop
```

This will:

- set up a Nix shell with pre-commit hooks and dev tools like `commitizen`
- install all dependencies with `pnpm install`

Alternatively, install the dependencies manually:

```bash
$ pnpm install
```
While it is possible to run the application locally as follows, it is recommended to use the provided compose setup.

```bash
# development
$ pnpm run start

# watch mode
$ pnpm run start:dev

# production mode
$ pnpm run start:prod
```
The Docker image can be used as follows:

```bash
$ docker compose up -d
```

This uses the provided compose.yaml file to start the application. Requests can then be sent to the application via http://localhost.
This will persist the application state and the `/var/lib/docker` directories of the container hosts in `/var/lib/container-manager`.
```bash
# unit tests
$ pnpm run test

# e2e tests
$ pnpm run test:e2e

# real-world tests
$ bash test/e2e.sh
```