socat-operator

Terrible idea - An operator to expose services listening on a worker node's localhost into a Kubernetes cluster using Unix domain sockets

Design

Initial scope

A static pod runs socat to relay a TCP/UDP socket that is only exposed on the node's localhost network into a Unix domain socket. The socket file is placed in a shared directory that can be mounted as a hostPath volume by a regular Kubernetes pod; that pod can then expose the Unix domain socket as a TCP/UDP socket inside the cluster.
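
As a rough sketch of the two halves (the image name, socket paths, and the ports 10249/9100 below are placeholders, not something this repository defines):

```yaml
# Node side: a static pod (e.g. dropped into /etc/kubernetes/manifests/)
# relays a TCP port bound to the node's 127.0.0.1 into a Unix domain socket.
apiVersion: v1
kind: Pod
metadata:
  name: socat-relay
  namespace: kube-system
spec:
  hostNetwork: true             # needed to reach 127.0.0.1 on the node
  containers:
  - name: socat
    image: alpine-socat:latest  # e.g. an image built from this repo's Dockerfile
    command:
    - socat
    - UNIX-LISTEN:/sockets/metrics.sock,fork,unlink-early
    - TCP:127.0.0.1:10249
    volumeMounts:
    - name: sockets
      mountPath: /sockets
  volumes:
  - name: sockets
    hostPath:
      path: /run/socat-operator
      type: DirectoryOrCreate
---
# Cluster side: a regular pod mounts the same hostPath directory and
# re-exposes the Unix domain socket as a TCP port inside the cluster,
# where a normal Service can pick it up.
apiVersion: v1
kind: Pod
metadata:
  name: socat-expose
spec:
  containers:
  - name: socat
    image: alpine-socat:latest
    command:
    - socat
    - TCP-LISTEN:9100,fork,reuseaddr
    - UNIX-CONNECT:/sockets/metrics.sock
    ports:
    - containerPort: 9100
    volumeMounts:
    - name: sockets
      mountPath: /sockets
  volumes:
  - name: sockets
    hostPath:
      path: /run/socat-operator
      type: Directory
```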

Extension

The static pod can be extended with a second shared volume that provides configuration data to spin up multiple socat processes for different sockets in parallel. The configuration in this shared volume is provided by an operator running inside Kubernetes: it generates the configuration from a CustomResourceDefinition and creates the necessary Kubernetes objects (DaemonSets, Services) with the correct labels so they end up on the right hosts.
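
A custom resource for this operator does not exist yet; the sketch below only illustrates the idea, and the apiVersion, kind, and field names are invented for this example:

```yaml
# Hypothetical custom resource; group, kind, and field names are placeholders.
apiVersion: socat-operator.example.com/v1alpha1
kind: SocatRelay
metadata:
  name: control-plane-metrics
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""
  relays:
  - name: kube-scheduler
    nodeAddress: 127.0.0.1:10259
    socketPath: /run/socat-operator/kube-scheduler.sock
    clusterPort: 10259
  - name: kube-controller-manager
    nodeAddress: 127.0.0.1:10257
    socketPath: /run/socat-operator/kube-controller-manager.sock
    clusterPort: 10257
```

The operator would render such a resource into the configuration volume consumed by the static pod and create a matching DaemonSet and Service per relay.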

Initial Design Diagram

Purpose

The main purpose of this tool is to scrape metrics from services that are only exposed on localhost. It can of course be used for other use cases as well, such as exposing services that are difficult to migrate into containers (system loggers, auditing systems, etc.).

If kubeadm is used to create a cluster, it creates several static pods (kube-scheduler, kube-controller-manager) that only listen on localhost by default. To collect metrics from those pods, one would need to expose them on the node itself, which adds risk because the service then becomes reachable from outside the node.

If Docker is used as the container runtime, it can expose runtime metrics. Unfortunately, Docker cannot secure those metrics with TLS itself, so you would need to set up a reverse proxy to do that. If your Kubernetes cluster has a service mesh or similar service-to-service security, this tool lets you expose and scrape those metrics securely within the cluster.