Monitoring your Kubernetes master node using DaemonSets

As the name implies, DaemonSets are the ideal choice for logging
agents and monitoring tools you have running on your cluster. Introduced
in Kubernetes 1.1, these objects function almost identically to the
ReplicationControllers that you’re familiar with, with the notable
difference that each node will receive exactly one pod based on your
pod template.

ReplicationControllers and Deployments are not designed to run
cluster-wide daemons, such as log collectors. The pods they create are
scheduled wherever the scheduler sees fit, meaning there is no
guarantee that each node receives exactly one pod.

DaemonSets solve exactly this problem. However, the kube-up scripts
for AWS do not register the master’s kubelet with the Kubernetes API
server, meaning DaemonSets won’t be spun up on your master node. I
believe this is an odd default, but fortunately the kubelet has a few
flags you can enable on your master to support DaemonSets.

Registering your master with the Kube API server

Note: I’m assuming you use the kube-up script included in
Kubernetes to spin up your cluster on AWS or a similar non-GCE cloud
provider with SSH access to your master node. I’m using Debian Jessie
but any systemd-based OS should work the same way.
Start off by SSHing into your master node using the same PEM file you used to spin up the cluster:

$ ssh -i ~/.ssh/kubernetes.pem admin@

Once you’re on the master, you’ll need its cluster-local IP
address so we can point the kubelet at the correct API server:

$ hostname -i

We can’t use localhost because the API server’s certificate isn’t valid for that name.
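To avoid retyping the address in the steps below, you can stash it in a shell variable. This is just a convenience sketch; MASTER_LOCAL_IP is a name I’m introducing here, not something the kube-up scripts define:

```shell
# Capture the cluster-local IP reported by `hostname -i` (first address only).
# MASTER_LOCAL_IP is an illustrative variable name, not part of kube-up.
MASTER_LOCAL_IP=$(hostname -i 2>/dev/null | awk '{print $1}')
echo "kubelet will talk to the API server at https://${MASTER_LOCAL_IP}"
```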

Next, we’re going to edit the master kubelet’s configuration file, adding a few new flags:

$ sudo vim /etc/sysconfig/kubelet

The last line will look something like this, where the existing flags
inside the quotes vary between cluster versions:

DAEMON_ARGS="..."
These are the configuration flags passed to the master node’s
kubelet. We’re going to add three flags to this list, described below.
Modify the line to look like this, keeping whatever flags are already
inside the quotes:

DAEMON_ARGS="... --api-servers=MASTER_LOCAL_IP --register-node=true --register-schedulable=false"
  • --api-servers=MASTER_LOCAL_IP tells the kubelet where the Kubernetes API server is located, in this case on the local instance
  • --register-node=true will ask the kubelet to register as a node with the API server on startup
  • --register-schedulable=false makes the master node unschedulable for any pods except for DaemonSets (awesome!)

Go ahead and save the kubelet configuration file.
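If you’d rather script the edit than open vim, the change can be made with sed. The sketch below runs against a temp file so it’s safe to try anywhere; the starting DAEMON_ARGS contents and the IP are placeholders I’m assuming, so on a real master you’d point the sed at /etc/sysconfig/kubelet with sudo and your own address:

```shell
#!/bin/sh
# Sketch: append the three registration flags to the kubelet's DAEMON_ARGS line.
# The initial file contents and the IP below are illustrative placeholders.
MASTER_LOCAL_IP=172.20.0.9

conf=$(mktemp)
echo 'DAEMON_ARGS="--config=/etc/kubernetes/manifests"' > "$conf"

# Append the new flags inside the existing quotes, keeping prior flags intact.
sed -i "s|^DAEMON_ARGS=\"\(.*\)\"\$|DAEMON_ARGS=\"\1 --api-servers=${MASTER_LOCAL_IP} --register-node=true --register-schedulable=false\"|" "$conf"

updated=$(cat "$conf")
echo "$updated"
rm -f "$conf"
```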

There’s one more modification we’ll need to make to the kubelet so
that it can register with the API server: give it the Kubernetes API
certificates via a kubeconfig file. The easiest way to get the correct
kubeconfig settings for your master node is by SSHing into any of your
minion nodes. For the default AWS setup:

$ sudo cat /mnt/ephemeral/kubernetes/kubelet/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kubelet
...

Copy the contents of that file from a minion node to the same location on your master node, save, and you’re good to go.
Finally, restart your master node’s kubelet:

$ sudo systemctl restart kubelet

You can see if it worked by running:

$ sudo journalctl -f -n 100 -u kubelet

As long as you don’t see an error message saying that your kubelet
crashed, you’re all set! If there was an issue, double-check your
configuration flags and restart the kubelet again.

Nodes take a little while to register, so give it five minutes or so.
On your local computer, you can list all of the nodes using kubectl to
make sure it worked:

$ kubectl get nodes
172-20-0- Ready 5d
172-20-0- Ready 5d
172-20-0- Ready 5d
172-20-0- Ready 5d
172-20-0- Ready 5d
172-20-0- Ready,SchedulingDisabled 5d

Notice the final node with scheduling disabled. That’s your master
kubelet registered as a Kubernetes node! You can get started scheduling
DaemonSets right away and they’ll automatically be provisioned on the
master node as well as each of the minions.
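To get you going, here’s a minimal DaemonSet sketch. The names, labels, and image are illustrative placeholders, and `extensions/v1beta1` is the API group DaemonSets live in as of this writing:

```yaml
# Illustrative example: run one log-collector pod on every node.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

Save it as log-collector.yaml, run `kubectl create -f log-collector.yaml`, and a pod should appear on every node, master included.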

Bonus: Some fun DaemonSets for you to use

We’ve got some utilities containerized over at the Pavlov GitHub.
In particular, we’ve open sourced our fluentd container, which handles
all of our log collection and ships the logs to Amazon S3. We’ve included an
example Kubernetes object file to make it really easy to get up and
running. Give it a whirl and let us know what you think!

I’m Alex Kern, co-founder and CTO of Pavlov. We make an intelligent
warehouse for your customer data. We’re currently in a limited release
while we on-board companies and developers. If you found this helpful,
recommend it to a friend. 🙂