This project is no longer maintained

As of November 7th, 2018, I've decided to end my commitment to maintaining this repo and related projects.

It's been more than 3 years since I last used ELK, so I no longer have the motivation it takes to maintain and evolve this project. Also, other projects need all the attention I can give.

It was a great run, thank you all.

kubernetes-elk-cluster

ELK (Elasticsearch + Logstash + Kibana) cluster on top of Kubernetes, made easy.

Here you will find:

  • Kubernetes pod descriptor that joins Elasticsearch client-node container with Logstash container (for localhost communication)
  • Kubernetes pod descriptor that joins Elasticsearch client-node container with Kibana container (for localhost communication)
  • Kubernetes service descriptor that publishes Logstash
  • Kubernetes service descriptor that publishes Kibana

Pre-requisites

  • Kubernetes 1.1.x cluster (tested with a 4-node Vagrant + CoreOS cluster)
  • kubectl configured to access your cluster master API Server
  • Elasticsearch cluster deployed. You can skip provisioning client-nodes, since those will be paired with the Logstash and Kibana containers and will automatically join the cluster you've assembled with my Elasticsearch cluster instructions.
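Before deploying, it may help to confirm that `kubectl` can actually reach the cluster and that the Elasticsearch pods are already up. A minimal pre-flight sketch (the `preflight` helper is hypothetical, and assumes `kubectl` is on your PATH and configured as above):

```shell
# Hypothetical pre-flight helper: verify kubectl is available and the
# API server answers before creating the ELK resources.
preflight() {
  command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found" >&2; return 1; }
  kubectl get nodes || return 1   # API server reachable, nodes registered
  kubectl get pods                # Elasticsearch pods should already be Running
}
```

Run `preflight` once before moving on to the deploy step.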

Deploy

The current Logstash configuration expects logstash-forwarder (the Lumberjack secure protocol) as its log input, and the certificates provided are valid only for logstash.default.svc.cluster.local. I highly recommend rebuilding the Logstash images with your own configuration and, if needed, your own keys.
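If you do publish Logstash under a different DNS name, the certificate's CN must match it. One way to generate a self-signed pair with openssl (a sketch; the file names here are illustrative, not necessarily the ones your image expects):

```shell
# Sketch: self-signed certificate whose CN matches the default Logstash
# service DNS name; change the CN if you publish the service elsewhere.
# Output file names are illustrative.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=logstash.default.svc.cluster.local" \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt
```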

Attention:

  • If you're looking for details on how quay.io/pires/docker-elasticsearch-kubernetes images are built, take a look at my other repository.
  • If you're looking for details on how quay.io/pires/docker-logstash image is built, take a look at my Logstash repository.
  • If you're looking for details on how quay.io/pires/docker-logstash-forwarder image is built, take a look at my docker-logstash-forwarder repository.
  • If you're looking for details on how quay.io/pires/docker-kibana image is built, take a look at my Kibana repository.

Let's go, then!

kubectl create -f service-account.yaml
kubectl create -f logstash-service.yaml
kubectl create -f logstash-controller.yaml
kubectl create -f kibana-service.yaml
kubectl create -f kibana-controller.yaml

Wait for provisioning to happen and then check the status:

$ kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-s1qnq   1/1       Running   0          57m
es-data-khoit     1/1       Running   0          56m
es-master-cfa6g   1/1       Running   0          1h
kibana-w0h9e      1/1       Running   0          2m
kube-dns-pgqft    3/3       Running   0          1h
logstash-9v8ro    1/1       Running   0          4m

As you can see, the cluster is up and running. Easy, wasn't it?
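If you'd rather script the wait than poll `kubectl get pods` by hand, a small helper can block until a pod reaches Running (the `wait_for_pod` function is a sketch, not part of this repo):

```shell
# Sketch: poll `kubectl get pods` until a pod whose name starts with the
# given prefix reports Running, or give up after N attempts (default 30).
wait_for_pod() {
  prefix=$1
  attempts=${2:-30}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    status=$(kubectl get pods | awk -v p="$prefix" '$1 ~ "^"p {print $3; exit}')
    [ "$status" = "Running" ] && return 0
    i=$((i + 1))
    sleep 10
  done
  echo "pod ${prefix}* not Running after ${attempts} attempts" >&2
  return 1
}
```

For example, `wait_for_pod logstash && wait_for_pod kibana` returns once both pods are up.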

Access the service

Don't forget that, by default, services in Kubernetes are only accessible from containers within the cluster, unless you have provided a LoadBalancer-enabled service.

$ kubectl get service kibana
NAME      LABELS                      SELECTOR                    IP(S)           PORT(S)
kibana    component=elk,role=kibana   component=elk,role=kibana   10.100.187.62   80/TCP

You should know what to do from here.
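One quick way in from your workstation, without a LoadBalancer, is to port-forward to the Kibana pod. A sketch (the `forward_kibana` helper is hypothetical, pod names will differ on your cluster, and it assumes a kubectl version that accepts the pod name positionally):

```shell
# Sketch: forward local port 5601 to the first Kibana pod, so Kibana is
# reachable at http://localhost:5601 while the command runs.
forward_kibana() {
  pod=$(kubectl get pods | awk '$1 ~ /^kibana/ {print $1; exit}')
  [ -n "$pod" ] || { echo "no kibana pod found" >&2; return 1; }
  kubectl port-forward "$pod" 5601:80
}
```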