
minikube-image-benchmark

Purpose

The purpose of this project is to provide a simple-to-run application that benchmarks different methods of building and pushing an image to minikube. Each benchmark is run multiple times, and the average run time across the runs is calculated and output to a CSV file for review.

Warning!

This benchmarking tool makes changes to your Docker and minikube instances, so don't run it if you don't want them disturbed. For example, /etc/docker/daemon.json is modified and Docker is restarted, and the following commands are run as well:

minikube delete --all
docker system prune -a -f

Requirements

  • Docker needs to be installed
  • Currently only supported on Linux (only tested on Debian)

Methods

The three methods currently benchmarked are minikube docker-env, minikube image load, and the minikube registry addon, with more to be added in the future.
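For reference, the three methods correspond roughly to the following manual commands. This is a sketch only: the image name is hypothetical, and the `localhost:5000` address assumes the registry addon's registry has been exposed on that port (for example via a port-forward), which may not match this benchmark's exact setup.

```shell
# Method 1 — docker-env: point the local docker CLI at minikube's daemon,
# so the build lands directly inside minikube.
eval $(minikube docker-env)
docker build -t benchmark-image .

# Method 2 — image load: build against the host daemon, then transfer
# the finished image into minikube.
docker build -t benchmark-image .
minikube image load benchmark-image

# Method 3 — registry addon: enable the in-cluster registry and push to it.
minikube addons enable registry
docker tag benchmark-image localhost:5000/benchmark-image
docker push localhost:5000/benchmark-image
```

Each method trades off differently: docker-env avoids any image transfer, image load serializes and copies the image, and the registry addon goes through a network push.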

How to Run Benchmarks

make
./out/benchmark # defaults to 100 runs per method

or

./out/benchmark --runs 20 # will run 20 runs per method
cat ./out/results.csv # where the output is stored

Non-Iterative vs Iterative Flow

In the non-iterative flow, the images/cache is cleared after every image build, so each build starts from a brand-new Docker state.

In the iterative flow, the images/cache is cleared only at the end of a set of benchmarks. So with 20 runs per benchmark, no cache is cleared until all 20 runs have completed; only the last layer of the image is changed between runs.
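The iterative flow can be illustrated with a Dockerfile whose final layer copies a small file that changes between runs, so every earlier layer stays cached. This is a hypothetical sketch, not the benchmark's actual Dockerfile; the file names are illustrative.

```shell
# Hypothetical setup: all layers except the last are stable and stay cached.
cat > Dockerfile <<'EOF'
FROM busybox
RUN echo "expensive cached step"   # cached across iterative runs
COPY changing.txt /changing.txt    # only this final layer is rebuilt
EOF

# Each iterative run changes only the input to the final layer:
date +%s%N > changing.txt
docker build -t benchmark-image .
```

This mirrors a typical development loop, where a developer rebuilds repeatedly with warm caches, whereas the non-iterative flow measures worst-case cold-cache builds.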