Limitations of EKS

On Sep 5, 2018 AWS announced EKS availability in eu-west-1 (Ireland). So I checked it out; this post covers some of the limitations.

Pod limits per instance type

The EKS CNI networking plugin (hosted on GitHub) uses elastic network interfaces (ENIs), attaching them to the EKS worker node instances. This allows the instance to have more than one IP, and each pod on the node gets one of those IPs.

There are limits to the number of ENIs you can attach to each instance type, and how many IPs that ENI can have. Each pod gets an IP, including those running in the kube-system namespace, such as kube-dns, kube-proxy, aws-node etc.

When testing the cluster setup with very small instance types, like t2.micro or t2.small, you can end up running out of IPs and the networking setup will fail.
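
As a rough illustration of why small instances run out so quickly, here's a back-of-the-envelope calculation in shell. The formula and the example ENI figures are assumptions taken from the EC2 ENI limits table, so check the numbers for your instance type:

#!/bin/bash
# Approximate max pods per node: ENIs x (IPv4 addresses per ENI - 1) + 2
# (one address per ENI is used by the node itself; the +2 roughly covers
# pods on host networking such as aws-node and kube-proxy).
enis=3          # e.g. a t2.small-class instance supports 3 ENIs...
ips_per_eni=4   # ...with 4 IPv4 addresses each
echo $(( enis * (ips_per_eni - 1) + 2 ))   # prints 11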

To compensate for this, AWS released an improved bootstrap script. This script automatically sets the maximum number of pods that kubelet will allow on the node, but many of the guides around have not been updated for this.

Make sure you’re using this in the userdata part of the worker node launch config:

#!/bin/bash -xe
/etc/eks/bootstrap.sh <cluster-name>

This defaults the script's --use-max-pods flag to true.
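
A quick way to check it took effect is to look at the pod capacity kubelet registered for the node (the node name below is just a placeholder):

kubectl get nodes
kubectl get node <node-name> -o jsonpath='{.status.capacity.pods}'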

AWS imposed limitations

There are some limitations listed in the AWS docs.

Type                                    Limit    Description
Hard limit                              5 max    Number of security groups the master control plane can be part of
Soft limit (per account, changeable)    3 max    Number of EKS clusters
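
For example, you can see which security groups your cluster's control plane is using with the AWS CLI (the cluster name is a placeholder):

aws eks describe-cluster --name <cluster-name> \
  --query 'cluster.resourcesVpcConfig.securityGroupIds'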

CNI networking plugin not updated

When creating an EKS cluster, the aws-node daemonset is automatically run. This is the networking plugin that uses ENIs. EKS does not currently update this daemonset automatically; you have to update the aws-node daemonset yourself.
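
A quick way to see which version you're running, and roughly what a manual update looks like; the manifest URL here is illustrative and the exact path depends on the release you want:

kubectl describe daemonset aws-node -n kube-system | grep Image
# Then apply the daemonset manifest published with the newer release, e.g.:
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/v1.2/aws-k8s-cni.yaml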

Pods hostPort is ignored

The EKS CNI networking plugin currently ignores hostPort. To expose the port of a pod to the host, that pod must have hostNetwork: true and use the containerPort.
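
A minimal sketch of the workaround, using a plain nginx pod (the pod name and image are just for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostnetwork
spec:
  hostNetwork: true        # pod shares the node's network namespace
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80    # reachable on the node's IP, port 80
EOF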

Update 26th Sept: The PR adding hostPort support has been merged. No tagged release yet, but building that commit works. It’s included for the v1.2 milestone.

Update 27th Sept: v1.2.0 has been released 🎉, but container networking is not automatically updated.

Dashboard tutorial uses deprecated metrics backend

There’s an AWS tutorial on deploying the Kubernetes dashboard, but currently the dashboard doesn’t support the new metrics-server and relies on heapster and influxdb.

Following the tutorial will get you working metrics in the dashboard. If you want to use metrics-server instead (support for it is being added to the dashboard), remember to expose port 443 on the worker nodes so the masters can communicate with the metrics-server.
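
A sketch of that rule with the AWS CLI, assuming you know the worker node and control plane security group IDs (both placeholders here):

# Allow the control plane to reach metrics-server on the workers over 443
aws ec2 authorize-security-group-ingress \
  --group-id <worker-node-sg-id> \
  --protocol tcp --port 443 \
  --source-group <control-plane-sg-id>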

Container logs are not shipped to CloudWatch

There’s no logging infrastructure that comes with EKS; it must all be set up manually. This is unlike ECS, where it’s trivial to use the awslogs Docker log driver and ship logs to CloudWatch.

This article explains how to achieve the same thing on EKS.

TL;DR

EKS is great: it manages a lot for you and comes with networking, DNS, etc. out of the box. But:

  • Use the bootstrap script in user data to limit pods per node
  • Don’t use loads of master security groups
  • Update CNI networking plugin yourself
  • Ship your own logs
  • Use the metrics server and remember port 443
  • Don’t expect hostPort to work, use hostNetwork: true