
container-manager

This is an application for managing Docker in Docker test environments. Specifically, it is used to manage the creation and deletion of per-branch Docker container hosts. Furthermore, it acts as a proxy, routing requests to the correct container based on the subdomain of the request. If no request comes in for a certain amount of time, the container is automatically suspended to save resources. Once it receives a request again, it is resumed.

It works in connection with the container-manager-dind image, although custom images can be used as well.

Since I did not feel like reinventing the wheel when it comes to writing a frontend for viewing container host logs, I instead recommend using the excellent, Docker OSS-sponsored Dozzle for this purpose. It works very well and does what I was going to implement anyway, in a much better way.

Features

  • Create and delete Docker container hosts (DinD) based on a specific branch
  • Automatically suspend and resume containers based on request activity (either stop or pause)
  • Pull a specific repository upon container host creation and run a specific build script that sets up a Compose file, which is used to set up the test environment
  • Proxy requests to the correct container based on the subdomain of the request

Requirements

  • Docker, alternatively Podman
  • A host allowing Docker in Docker containers (specifically: allowing --privileged); it is therefore best to use a dedicated host for this application.

Not required, but recommended

Sample compose.yml for running the application

name: container-manager
services:
  container-manager:
    container_name: container-manager
    image: dr460nf1r3/container-manager:main
    ports:
      - '80:3000'
    volumes:
      - '/var/lib/container-manager:/var/lib/container-manager:rw'
      - '/var/run/docker.sock:/var/run/docker.sock:rw'
    environment:
      CONFIG_CONTAINER_PREFIX: container-host
      CONFIG_CUSTOM_BUILD_SCRIPT: ci/build.sh
      CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL: false
      CONFIG_DATA_DIR_HOST: /var/lib/container-manager/data
      CONFIG_DIR_CONTAINER: /config
      CONFIG_DIR_HOST: /var/lib/container-manager/config
      CONFIG_HOSTNAME: localhost.local
      CONFIG_IDLE_TIMEOUT: 60000
      CONFIG_LOGLEVEL: debug
      CONFIG_MASTER_IMAGE: dr460nf1r3/container-manager-dind
      CONFIG_MASTER_IMAGE_TAG: main
      CONFIG_REPO_URL: https://github.com/dr460nf1r3/dind-poc.git
      CONFIG_SUSPEND_MODE: stop
    networks:
      - container-manager
    restart: always
    logging:
      driver: 'local'
      options:
        max-size: '10m'
        max-file: '5'

networks:
  container-manager:
    external: true
    name: container-manager

Alternatively, the corresponding docker run command would be as follows:

$ docker run -d --net container-manager --name container-manager -p 80:3000 \
  -v /var/lib/container-manager:/var/lib/container-manager:rw \
  -v /var/run/docker.sock:/var/run/docker.sock:rw \
  -e CONFIG_CONTAINER_PREFIX=container-host \
  -e CONFIG_CUSTOM_BUILD_SCRIPT=ci/build.sh \
  -e CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL=false \
  -e CONFIG_DATA_DIR_HOST=/var/lib/container-manager/data \
  -e CONFIG_DIR_CONTAINER=/config \
  -e CONFIG_DIR_HOST=/var/lib/container-manager/config \
  -e CONFIG_HOSTNAME=localhost.local \
  -e CONFIG_IDLE_TIMEOUT=60000 \
  -e CONFIG_LOGLEVEL=debug \
  -e CONFIG_MASTER_IMAGE=dr460nf1r3/container-manager-dind \
  -e CONFIG_MASTER_IMAGE_TAG=main \
  -e CONFIG_REPO_URL=https://github.com/dr460nf1r3/dind-poc.git \
  -e CONFIG_SUSPEND_MODE=stop \
  --restart always \
  --log-driver local --log-opt max-size=10m --log-opt max-file=5 \
  dr460nf1r3/container-manager:main

Running the application

Before starting the application via the compose file, make sure to create the Docker network manually.

$ docker network create container-manager

This is important to allow the container manager to communicate with the container hosts. We prefer manual creation because creating the network in the compose file can lead to issues, such as mismatched network IDs for stopped containers after running docker compose down and docker compose up again.

After setting up the application via a compose file, you can create a container host by sending either a POST or GET request to the /run route.

  • The request must contain the branch name as a query parameter, e.g. http://localhost/run?branch=main.
  • For supplying secrets, a POST request can be used with a JSON body containing the branch name and secrets (see the example below).
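
As a quick sketch, these requests could look as follows with curl, assuming the application is reachable at http://localhost and CONFIG_HOSTNAME is set to localhost.local as in the sample compose file. The secret field names are borrowed from the RunContainerDto documented further below; depending on your setup, the admin headers described in the "Admin routes" section may be required in addition.

# Create a container host for branch "main" via a plain GET request
$ curl "http://localhost/run?branch=main"

# Create a container host and supply clone credentials via a POST request
$ curl -X POST "http://localhost/run" \
  -H "Content-Type: application/json" \
  -d '{"branch": "main", "authUser": "ci-bot", "authToken": "<token>"}'

# Reach the resulting test environment; this assumes the branch name is used
# as the subdomain below CONFIG_HOSTNAME
$ curl -H "Host: main.localhost.local" http://localhost/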

Environment variables

  • CONFIG_ADMIN_SECRET: Secret used to authenticate management requests, optional
  • CONFIG_CONTAINER_PREFIX: Prefix for container host names, prepended to the branch name
  • CONFIG_CUSTOM_BUILD_SCRIPT: Path to a custom build script that is executed after the repository is cloned; the path refers to the host when CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL is set to true, otherwise it must be relative to the cloned repository root (see the sketch after this list)
  • CONFIG_CUSTOM_BUILD_SCRIPT_LOCAL: If set to true, the custom build script is copied from the host to the container and executed there
  • CONFIG_DATA_DIR_HOST: Directory on the host where the data is stored (must exist on the host, too)
  • CONFIG_DIR_CONTAINER: Directory in the container hosts where the config files are stored
  • CONFIG_DIR_HOST: Directory on the host where the per-branch directories are stored (must exist on the host, too)
  • CONFIG_HOSTNAME: Hostname of the container host, defaults to localhost.local
  • CONFIG_IDLE_TIMEOUT: Time in milliseconds after which a container is paused if no requests are received, defaults to 10 minutes
  • CONFIG_LOGLEVEL: Log level of the application (one of "verbose", "debug", "log", "warn", "error", "fatal"), defaults to "log"
  • CONFIG_LOGVIEWER: Whether to enable the Dozzle log viewer to be deployed with correct defaults, either true or false, defaults to true
  • CONFIG_LOGVIEWER_CONTAINER_NAME: Container name of the Dozzle log viewer, defaults to container-logviewer
  • CONFIG_LOGVIEWER_IMAGE: Image used to create the Dozzle log viewer, defaults to amir20/dozzle
  • CONFIG_LOGVIEWER_PORT: Port on which the Dozzle log viewer is exposed, defaults to 8080
  • CONFIG_LOGVIEWER_TAG: Tag of the image used to create the Dozzle log viewer, defaults to latest
  • CONFIG_MASTER_IMAGE: Image used to create the container hosts, defaults to dr460nf1r3/container-manager-dind
  • CONFIG_MASTER_IMAGE_TAG: Tag of the image used to create the container hosts, defaults to main
  • CONFIG_REPO_URL: URL of the repository that is cloned when a container host is created
  • CONFIG_SUSPEND_MODE: Mode in which the container is paused, either stop or pause, defaults to stop
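
Since the custom build script is entirely project-specific, the following is only a hypothetical sketch of what a ci/build.sh could do; the template path and the docker compose invocation are assumptions, not something this application prescribes.

#!/usr/bin/env bash
# Hypothetical ci/build.sh, executed after the repository has been cloned.
set -euo pipefail

# Generate the Compose file for the test environment from a template in the repo
cp deploy/compose.template.yml compose.yml

# Bring the test environment up inside the container host
docker compose up -d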

Viewing container host logs

Even though this can easily be done via the Docker CLI, using Dozzle is a more convenient way to view the logs. It also doesn't require access to the host machine, making it a good fit for developers testing their deployments. This application deploys a Dozzle container by default on startup, which can be used to view the logs of the container hosts. Its scope is limited to the container hosts and the application itself, so no other containers are visible. Also, the Docker socket is mounted read-only, so no actions can be taken via the Dozzle interface.

After the start of the application, the Dozzle interface can be accessed via http://localhost:8080.

A few things can be configured via environment variables (see above). If further customization is needed, consult the Dozzle documentation, disable automatic deployment of the container, and add a custom instance to the compose file directly.
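
For reference, a manually managed instance, roughly equivalent to the automatic deployment with the CONFIG_LOGVIEWER_* defaults listed above, could be started like this after setting CONFIG_LOGVIEWER to false; a compose service with the same settings works just as well. Limiting the visible containers is left out here and is covered by the Dozzle documentation.

# Custom Dozzle instance with a read-only Docker socket
$ docker run -d --name container-logviewer \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  amir20/dozzle:latest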

Admin routes

Calling routes

The admin routes can only be called by adding the x-admin-request header to the request. This is specifically required to prevent accidental calls to these routes while proxying requests.

Protect routes

To protect the admin routes, you can set the CONFIG_ADMIN_SECRET environment variable. If set, the secret must be sent in the x-admin-token header of the request. If the secret is not set, the routes are available without authentication.
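
For example, a status request with both headers could look like this, assuming the application listens on http://localhost and CONFIG_ADMIN_SECRET is set to the placeholder value my-secret. The documentation above only requires the x-admin-request header to be present; the value true used here is an assumption.

$ curl http://localhost/status \
  -H "x-admin-request: true" \
  -H "x-admin-token: my-secret"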

API documentation

The Swagger API documentation can be found at /api when the application is running (e.g. http://localhost/api).

Current routes (might not be up to date)

Path Table

Method Path Description
GET /container
POST /container
DELETE /container
GET /health
GET /status
GET /*
POST /*
PUT /*
DELETE /*
PATCH /*
OPTIONS /*
HEAD /*
SEARCH /*

Reference Table

Name Path Description
RunContainerDto #/components/schemas/RunContainerDto

[GET]/container

Parameters(Query)

branch: string;
checkout?: string
authToken?: string
authUser?: string
keepActive?: boolean

Headers

X-Admin-Token?: string
X-Admin-Request: string

Responses

  • 201 The deployment has been created successfully.

  • 400 The given parameters were not what we expect.

  • 500 An error occurred while creating the deployment.
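
A sketch of such a request with curl, assuming the application listens on http://localhost and my-secret again stands in for CONFIG_ADMIN_SECRET:

$ curl "http://localhost/container?branch=main&keepActive=false" \
  -H "x-admin-request: true" \
  -H "x-admin-token: my-secret"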


[POST]/container

Headers

X-Admin-Token?: string
X-Admin-Request: string

RequestBody

  • application/json
{
  // The branch this deployment corresponds to
  branch: string
  // The tag or commit hash to check out, if any
  checkout?: string
  // The API token to use while cloning the repository, if required
  authToken?: string
  // The username to authenticate with, if required
  authUser?: string
  // Whether the deployment should be kept active at all times, e.g. for cronjob tests - off by default
  keepActive?: boolean
}

Responses

  • 201 The deployment has been created successfully.

  • 400 The given parameters were not what we expect.

  • 500 An error occurred while creating the deployment.
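
The same deployment could be created with a JSON body instead, which also allows passing the optional clone credentials; again a sketch under the same assumptions as above:

$ curl -X POST http://localhost/container \
  -H "x-admin-request: true" \
  -H "x-admin-token: my-secret" \
  -H "Content-Type: application/json" \
  -d '{"branch": "main", "authUser": "ci-bot", "authToken": "<token>"}'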


[DELETE]/container

Parameters(Query)

branch: string;

Headers

X-Admin-Token?: string
X-Admin-Request: string

Responses

  • 201 The deployment has been deleted successfully.

  • 404 No deployment found with that name.


[GET]/health

Headers

X-Admin-Request: string

Responses

  • 200 The health of the application is okay.

  • 503 The app is unhealthy.


[GET]/status

Headers

X-Admin-Token?: string
X-Admin-Request: string

Responses

  • 200 The status of the application.

  • 500 An error occurred while retrieving the status of the application.


[GET]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[POST]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[PUT]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[DELETE]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[PATCH]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[OPTIONS]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[HEAD]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.


[SEARCH]/*

Responses

  • 404 No container found with that name.

  • 405 The method is not allowed for this route, if an X-Admin-Request header is sent.

  • default Proxy the request to the deployed container host.

References

#/components/schemas/RunContainerDto

{
  // The branch this deployment corresponds to
  branch: string
  // The tag or commit hash to check out, if any
  checkout?: string
  // The API token to use while cloning the repository, if required
  authToken?: string
  // The username to authenticate with, if required
  authUser?: string
  // Whether the deployment should be kept active at all times, e.g. for cronjob tests - off by default
  keepActive?: boolean
}

Eventually planned features

  • [x] Add support for managing container hosts via a web interface (implemented in Version 2.1.0 via the Dozzle container)
  • [x] Add support for streaming logs from the container hosts via a web interface (implemented in Version 2.1.0 via the Dozzle container)
  • [ ] You tell me!

Pull requests for new features and bugfixes are always welcome!

Project setup

Install dependencies with Nix

$ nix develop

This will:

  • set up a Nix shell with pre-commit hooks and dev tools like commitizen
  • install all dependencies with pnpm install

Install dependencies without Nix

$ pnpm install

Compile and run the project

While running the application locally as follows is possible, it is recommended to use the provided compose setup.

# development
$ pnpm run start

# watch mode
$ pnpm run start:dev

# production mode
$ pnpm run start:prod

The Docker image can be used as follows:

$ docker compose up -d

This uses the provided compose.yaml file to start the application. Requests can then be sent to the application via http://localhost. This persists the application state and the container hosts' /var/lib/docker directories in /var/lib/container-manager.

Run tests

# unit tests
$ pnpm run test

# e2e tests
$ pnpm run test:e2e

# real world tests
$ bash test/e2e.sh