The Canonical Distribution of Kubernetes
Overview
This is a Kubernetes cluster that includes logging, monitoring, and operational
knowledge. It comprises the following components and features:
- Kubernetes (automated deployment, operations, and scaling)
- Three-node Kubernetes cluster with one master and two worker nodes.
- TLS used for communication between nodes for security.
- A CNI plugin (e.g., Flannel)
- A load balancer for HA kubernetes-master (Experimental)
- Optional Ingress Controller (on worker)
- Optional Dashboard addon (on master) including Heapster for cluster monitoring
- EasyRSA
- Performs the role of a certificate authority serving self-signed certificates
to the requesting units of the cluster.
- Etcd (distributed key-value store)
- Three-node cluster for reliability.
Usage
Installation has been automated via conjure-up:
sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up canonical-kubernetes
conjure-up will prompt you for deployment options (AWS, GCE, Azure, etc.) and credentials.
This bundle is intended for multi-node deployments; for individual developer
deployments, use the smaller
kubernetes-core bundle via conjure-up kubernetes-core.
Proxy configuration
If you are operating behind a proxy (i.e., your charms are running in a
limited-egress environment and cannot reach IP addresses external to their
network), you will need to configure your model appropriately before deploying
the Kubernetes bundle.
First, configure your model's http-proxy and https-proxy settings with your
proxy (here we use squid.internal:3128 as an example):
$ juju model-config http-proxy=http://squid.internal:3128 https-proxy=https://squid.internal:3128
Because services often need to reach machines on their own network (including
themselves), you will also need to add localhost to the no-proxy model
configuration setting, along with any internal subnets you're using. The
following example includes two subnets:
$ juju model-config no-proxy=localhost,10.5.5.0/24,10.246.64.0/21
After deploying the bundle, you need to configure the kubernetes-worker charm
to use your proxy:
$ juju config kubernetes-worker http_proxy=http://squid.internal:3128 https_proxy=https://squid.internal:3128
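To confirm the settings took effect, you can filter the standard configuration listings, for example:
juju model-config | grep -i proxy
juju config kubernetes-worker | grep -i proxy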
Deploy the bundle
juju deploy canonical-kubernetes
This will deploy the Canonical Distribution of Kubernetes offering with default
constraints. This is useful for lab environments; for real-world use, however,
you should provide kubernetes-worker units with more CPU and memory.
Note: If you want to deploy this bundle locally on your laptop, see the
section about conjure-up under Alternate deployment methods. A default
deployment via juju will not properly adjust the AppArmor profile needed to run
Kubernetes in LXD. At this time, conjure-up is a necessary intermediate
deployment mechanism.
You can increase the constraints to fit your needs by editing the
bundle.yaml
and removing the # comment character from the constraint lines, then deploying the edited bundle:
juju deploy ./bundle.yaml
Note: If you're operating behind a proxy, remember to set the
kubernetes-worker
proxy configuration options as described in the Proxy configuration section
above.
This bundle exposes the kubeapi-load-balancer and kubernetes-worker charms by
default, so they are accessible through their public addresses.
If you would like to remove external access, unexpose the applications:
juju unexpose kubeapi-load-balancer
juju unexpose kubernetes-worker
To get the status of the deployment, run juju status. For continuous updates,
combine it with watch.
watch -c juju status --color
Alternate deployment methods
Deploy with your own binaries
In order to support restricted-network deployments, the charms in this bundle
support
juju resources.
This allows you to juju attach the resources built for the architecture of
your cloud.
juju attach kubernetes-master kubernetes=~/path/to/kubernetes-master.tar.gz
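Resource names can vary between charm revisions, and the worker units typically need their own archive as well; the kubernetes-worker resource name below is an assumption modelled on the master example above, so list the expected resources first (on older Juju clients the listing command is juju list-resources):
juju resources kubernetes-master
juju resources kubernetes-worker
juju attach kubernetes-worker kubernetes=~/path/to/kubernetes-worker.tar.gz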
Interacting with the Kubernetes cluster
After the cluster is deployed, you may assume control over the Kubernetes
cluster from any kubernetes-master or kubernetes-worker node.
To download the credentials and client application to your local workstation:
Create the kubectl config directory.
mkdir -p ~/.kube
Copy the kubeconfig file to the default location.
juju scp kubernetes-master/0:config ~/.kube/config
Fetch a binary for the architecture you have deployed. If your client is a
different architecture, you will need to get the appropriate kubectl binary
through other means.
juju scp kubernetes-master/0:kubectl ./kubectl
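Depending on how the file is transferred, the copied binary may not be marked executable on your workstation; if so, set the executable bit before using it:
chmod +x ./kubectl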
Query the cluster.
./kubectl cluster-info
Accessing the Kubernetes dashboard
The Kubernetes dashboard addon is installed by default, along with Heapster,
Grafana and InfluxDB for cluster monitoring. The dashboard addons can be
enabled or disabled by setting the enable-dashboard-addons config on the
kubernetes-master application:
juju config kubernetes-master enable-dashboard-addons=true
To access the dashboard, you may establish a secure tunnel to your cluster with
the following command:
./kubectl proxy
By default, this establishes a proxy between your local machine and the
kubernetes-master unit. To reach the Kubernetes dashboard, visit
http://localhost:8001/ui
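The proxy listens on 127.0.0.1:8001 by default; if that port is already in use, you can pick another and adjust the dashboard URL accordingly, for example:
./kubectl proxy --port=8002
and then visit http://localhost:8002/ui instead.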
Control the cluster
kubectl is the command line utility to interact with a Kubernetes cluster.
Minimal getting started
To check the state of the cluster:
./kubectl cluster-info
List all nodes in the cluster:
./kubectl get nodes
Now you can run pods inside the Kubernetes cluster:
./kubectl create -f example.yaml
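example.yaml is a placeholder for your own workload definition. As a minimal sketch, assuming a single nginx pod, it could contain something like the following (piped straight into kubectl here for convenience):
cat <<EOF | ./kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF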
List all pods in the cluster:
./kubectl get pods
List all services in the cluster:
./kubectl get services
For expanded information on kubectl beyond what this README provides, please
see the
kubectl overview
which contains practical examples and an API reference.
Additionally, if you need to manage multiple clusters, there is more information
about configuring kubectl in the
kubectl config guide.
Using Ingress
The kubernetes-worker charm supports deploying an NGINX ingress controller.
Ingress allows access from the Internet to containers inside the cluster
running web services.
First, allow Internet access to the kubernetes-worker charm with the
following Juju command:
juju expose kubernetes-worker
In Kubernetes, workloads are declared using pod, service, and ingress
definitions. An ingress controller is provided to you by default, deployed into
the default namespace of the
cluster. If one is not available, you may deploy this with:
juju config kubernetes-worker ingress=true
Ingress resources are DNS mappings to your containers, routed through
endpoints.
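For illustration only (this manifest is not part of the bundle), an ingress resource for a hypothetical service named my-service listening on port 80 might look like the following, using the extensions/v1beta1 API current at the time of this bundle:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress
spec:
  rules:
  - host: my-service.example.com
    http:
      paths:
      - backend:
          serviceName: my-service
          servicePort: 80
EOF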
As an example for users unfamiliar with Kubernetes, we packaged an action to
both deploy an example and clean itself up.
To deploy 5 replicas of the microbot web application inside the Kubernetes
cluster run the following command:
juju run-action kubernetes-worker/0 microbot replicas=5
This action performs the following steps:
- It creates a deployment titled 'microbots' comprised of 5 replicas defined
during the run of the action. It also creates a service named 'microbots'
which binds an 'endpoint', using all 5 of the 'microbots' pods.
- Finally, it will create an ingress resource, which points at a
xip.io domain to simulate a proper DNS service.
Running the packaged example
You can run a Juju action to create an example microbot web application:
$ juju run-action kubernetes-worker/0 microbot replicas=3
Action queued with id: db7cc72b-5f35-4a4d-877c-284c4b776eb8
$ juju show-action-output db7cc72b-5f35-4a4d-877c-284c4b776eb8
results:
address: microbot.104.198.77.197.xip.io
status: completed
timing:
completed: 2016-09-26 20:42:42 +0000 UTC
enqueued: 2016-09-26 20:42:39 +0000 UTC
started: 2016-09-26 20:42:41 +0000 UTC
Note: Your FQDN will be different and contain the address of the cloud
instance.
At this point, you can inspect the cluster to observe the workload coming
online.
List the pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-kh1dt 1/1 Running 0 1h
microbot-1855935831-58shp 1/1 Running 0 1h
microbot-1855935831-9d16f 1/1 Running 0 1h
microbot-1855935831-l5rt8 1/1 Running 0 1h
nginx-ingress-controller-hv5c2 1/1 Running 0 1h
List the services and endpoints
$ kubectl get services,endpoints
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/default-http-backend 10.1.225.82 <none> 80/TCP 1h
svc/kubernetes 10.1.0.1 <none> 443/TCP 1h
svc/microbot 10.1.44.173 <none> 80/TCP 1h
NAME ENDPOINTS AGE
ep/default-http-backend 10.1.68.2:80 1h
ep/kubernetes 172.31.31.139:6443 1h
ep/microbot 10.1.20.3:80,10.1.68.3:80,10.1.7.4:80 1h
List the ingress resources
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
microbot-ingress microbot.52.38.62.235.xip.io 172.31.26.109 80 1h
When all the pods are listed as Running and the endpoint has more than one host,
you are ready to visit the address in the hosts section of the ingress listing.
It is normal to see a 502/503 error during initial application turnup.
As you refresh the page, you will be greeted with a microbot web page, serving
from one of the microbot replica pods. Refreshing will show you another
microbot with a different hostname, as the requests are load balanced across
the replicas.
Clean up example
There is also an action to clean up the microbot application. When you are
done using it, you can remove it from the cluster with
one Juju action:
juju run-action kubernetes-worker/0 microbot delete=true
If you no longer need Internet access to your workers remember to unexpose the
kubernetes-worker charm:
juju unexpose kubernetes-worker
To learn more about
Kubernetes Ingress
and how to tune the Ingress Controller beyond the defaults (such as TLS and
WebSocket support), view the
nginx-ingress-controller
project on GitHub.
Scale out Usage
Any of the applications can be scaled out post-deployment. The charms
update the status messages with progress, so it is recommended to run:
watch -c juju status --color
Scaling kubernetes-worker
The kubernetes-worker nodes are the load-bearing units of a Kubernetes cluster.
By default pods are automatically spread throughout the kubernetes-worker units
that you have deployed.
To add more kubernetes-worker units to the cluster:
juju add-unit kubernetes-worker
or specify machine constraints to create larger nodes:
juju set-constraints kubernetes-worker mem=32G cores=8
juju add-unit kubernetes-worker
Refer to the
machine constraints documentation
for other machine constraints that might be useful for the kubernetes-worker
units.
Scaling Etcd
Etcd is used as a key-value store for the Kubernetes cluster. For reliability,
the bundle defaults to three instances in this cluster.
For more scalability, we recommend between 3 and 9 etcd nodes. If you want to
add more nodes:
juju add-unit etcd
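For example, to grow the default three-member cluster to five members in one step:
juju add-unit -n 2 etcd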
The CoreOS etcd documentation has a chart for the
optimal cluster size
to determine fault tolerance.
Adding optional storage
The Canonical Distribution of Kubernetes allows you to connect with durable
storage devices such as Ceph. When paired with the
Juju Storage feature you
can add durable storage easily and across most public clouds.
Deploy a minimum of three ceph-mon and three ceph-osd charms.
juju deploy cs:ceph-mon -n 3
juju deploy cs:ceph-osd -n 3
Relate the charms:
juju add-relation ceph-mon ceph-osd
List the storage pools available to Juju for your cloud:
$ juju storage-pools
Name Provider Attrs
ebs ebs
ebs-ssd ebs volume-type=ssd
loop loop
rootfs rootfs
tmpfs tmpfs
Note: This listing is for the Amazon Web Services public cloud.
Different clouds may have different pool names.
Add a storage pool to the ceph-osd charm by NAME,SIZE,COUNT:
juju add-storage ceph-osd/0 osd-devices=ebs,10G,1
juju add-storage ceph-osd/1 osd-devices=ebs,10G,1
juju add-storage ceph-osd/2 osd-devices=ebs,10G,1
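You can confirm that the volumes were provisioned and attached with the standard storage listing (juju list-storage on older clients):
juju storage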
Next relate the storage cluster with the Kubernetes cluster:
juju add-relation kubernetes-master ceph-mon
We are now ready to enlist
Persistent Volumes
in Kubernetes which our workloads can consume via Persistent Volume (PV) claims.
juju run-action kubernetes-master/0 create-rbd-pv name=test size=50
This example created a "test" Rados Block Device (RBD) with a size of 50 MB.
Watch your Kubernetes cluster with the following command; you should see the PV
become enlisted and be marked as available:
$ watch kubectl get pv
NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
test 50M RWO Available 10s
To consume these Persistent Volumes, your pods will need an associated
Persistent Volume Claim, which is outside the scope of this README. See the
Persistent Volumes
documentation for more information.
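That said, as a minimal, hypothetical sketch of such a claim (the name is illustrative, and the requested size must fit within the PV created above):
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50M
EOF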
Known Limitations and Issues
The following are known issues and limitations with the bundle and charm code:
- Destroying the easyrsa charm will result in loss of public key
infrastructure (PKI).
- Deployment locally on LXD will require the use of conjure-up to tune
settings on the host's LXD installation to support Docker and other
componentry.
Kubernetes details
Flannel
Flannel is a virtual network that gives a subnet to each host for use with
container runtimes.
Configuration
iface The interface to bind the flannel SDN to. If this value is an empty
string or undefined, the code will attempt to find the default network adapter,
similar to the following command:
route | grep default | head -n 1 | awk {'print $8'}
cidr The network range the flannel SDN declares when establishing its
networking setup with etcd. Ensure this network range is not active on the VLAN
you're deploying to, as overlapping ranges will cause collisions and odd
behavior; take care to select a suitable CIDR range to assign to flannel. Both
options can be set with juju config, as shown below.
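For example (the interface name and network range here are placeholders; substitute values appropriate to your environment):
juju config flannel iface=eth1
juju config flannel cidr=10.1.0.0/16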
Known Limitations
This subordinate does not support being co-located with other deployments of
the flannel subordinate (to gain 2 vlans on a single application). If you
require this support please file a bug.
This subordinate also leverages juju-resources, so it is currently only available
on juju 2.0+ controllers.
Further information
Elastic Monitoring
Scaling Elasticsearch
ElasticSearch is used to hold all the log data and server information logged by
Beats. You can add more Elasticsearch nodes by using the Juju command:
juju add-unit elasticsearch
Accessing the Kibana dashboard
The Kibana dashboard can display real-time graphs and charts of the details of
the cluster. The Beats charms send metrics to Elasticsearch, and Kibana
displays the data with graphs and charts.
Get the charm's public address from the juju status command.
- Access Kibana by browser: http://KIBANA_IP_ADDRESS/
- Select the index pattern that you want as default from the left menu.
- Click the green star button to make this index a default.
- Select "Dashboard" from the Kibana header.
- Click the open folder icon to Load a Saved Dashboard.
- Select the "Topbeat Dashboard" from the left menu.

Bundle configuration
This bundle is composed of the following applications:
- kubeapi-load-balancer
- filebeat
  - logpath: /var/lib/docker/containers/*/*.log /var/log/*.log /var/log/*/*.log
- kubernetes-master
- kibana
  - dashboards: beats
- topbeat
- elasticsearch
- etcd
- kubernetes-worker
- easyrsa
- flannel