Kubernetes is an open-source platform for deploying, scaling, and operating
application containers across a cluster of hosts. Kubernetes is portable
in that it works with public, private, and hybrid clouds. It is extensible
through a pluggable infrastructure, and self-healing in that it will
automatically restart and reschedule containers on healthy nodes if a node
ever goes away.
The Kubernetes project was started by Google in 2014, combining the
experience of running production workloads with best practices from the
community.
The Kubernetes project defines some new terms that may be unfamiliar to users
or operators. For more information please refer to the concept guide in the
getting started guide.
This charm is an encapsulation of the Kubernetes master processes and the
operations to run on any cloud for the entire lifecycle of the cluster.
This charm is built from other charm layers using the Juju reactive framework.
The other layers each focus on a specific subset of operations, making this
layer specific to operations of the Kubernetes master processes.
This charm is not fully functional when deployed by itself. It requires other
charms to model a complete Kubernetes cluster. A Kubernetes cluster needs a
distributed key value store such as Etcd and the
kubernetes-worker charm which delivers the Kubernetes node services. A cluster
requires a Software Defined Network (SDN) and Transport Layer Security (TLS) so
the components in a cluster communicate securely.
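As a sketch of that modeling (the charm and endpoint names follow the Canonical Kubernetes bundles and may differ between charm revisions; a published bundle normally does this wiring for you), a minimal cluster can be related by hand:

```sh
# Deploy the core charms.
juju deploy kubernetes-master
juju deploy kubernetes-worker
juju deploy etcd
juju deploy easyrsa          # provides TLS certificates
juju deploy flannel          # provides the SDN

# Wire the components together.
juju add-relation kubernetes-master:certificates easyrsa:client
juju add-relation kubernetes-worker:certificates easyrsa:client
juju add-relation kubernetes-master:etcd etcd:db
juju add-relation flannel:etcd etcd:db
juju add-relation flannel:cni kubernetes-master:cni
juju add-relation flannel:cni kubernetes-worker:cni
juju add-relation kubernetes-master:kube-control kubernetes-worker:kube-control
```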
The kubernetes-master charm takes advantage of the Juju Resources
feature to deliver the Kubernetes software.
In deployments on public clouds the Charm Store provides the resource to the
charm automatically with no user intervention. Some environments with strict
firewall rules may not be able to contact the Charm Store. In these
network-restricted environments the resource can be uploaded to the model by
the Juju operator.
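For example (the resource and file names here are illustrative; `juju resources kubernetes-master` lists the names your charm revision actually expects):

```sh
# Show which resources the charm expects, then attach a snap that was
# downloaded on a machine with store access.
juju resources kubernetes-master
juju attach kubernetes-master kube-apiserver=./kube-apiserver.snap
```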
The Kubernetes resources used by this charm are snap packages. When not
specified during deployment, these resources come from the public store. By
default, the snapd daemon will refresh all snaps installed from the store
four (4) times per day. A charm configuration option is provided for operators
to control this refresh frequency.
NOTE: this is a global configuration option and will affect the refresh
time for all snaps installed on a system.
```sh
## refresh kubernetes-master snaps every tuesday
juju config kubernetes-master snapd_refresh="tue"

## refresh snaps at 11pm on the last (5th) friday of the month
juju config kubernetes-master snapd_refresh="fri5,23:00"

## delay the refresh as long as possible
juju config kubernetes-master snapd_refresh="max"

## use the system default refresh timer
juju config kubernetes-master snapd_refresh=""
```
For more information on the possible values for
snapd_refresh, see the
refresh.timer section in the system options documentation.
This charm supports some configuration options to set up a Kubernetes cluster
that works in your environment:
The DNS domain name to use for the Kubernetes cluster.
Enables the installation of the Kubernetes dashboard, Heapster, Grafana, and
InfluxDB.
Enable RBAC and Node authorisation.
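These options are set with `juju config`; for example (the `dns_domain` option name is an assumption — `juju config kubernetes-master` prints the authoritative list):

```sh
# List all options with their current values.
juju config kubernetes-master

# Set the cluster DNS domain (option name assumed).
juju config kubernetes-master dns_domain=cluster.local
```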
DNS for the cluster
The DNS add-on allows pods to have DNS names in addition to IP addresses.
The Kubernetes cluster DNS server (based on the SkyDNS library) supports
forward lookups (A records), service lookups (SRV records), and reverse IP
address lookups (PTR records). More information about cluster DNS can be
obtained from the Kubernetes DNS admin guide.
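For example, a Service named my-service in namespace my-ns is resolvable as my-service.my-ns.svc followed by the cluster domain. One way to check resolution from inside the cluster (the pod name and image are illustrative):

```sh
# Run a throwaway busybox pod and resolve the built-in kubernetes service.
kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```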
The kubernetes-master charm models a few one-time operations, called
Juju actions, that can be run by operators.
This action creates a RADOS Block Device (RBD) in Ceph and defines a Persistent
Volume in Kubernetes so the containers can use durable storage. This action
requires a relation to the ceph-mon charm before it can create the volume.
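A sketch of invoking it (the parameter names are assumptions; `juju actions kubernetes-master` and the action descriptions give the actual schema):

```sh
# Create a 50 MB RBD-backed PersistentVolume named "test"
# (parameter names are illustrative).
juju run-action kubernetes-master/0 create-rbd-pv name=test size=50
```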
This action restarts the master processes (kube-apiserver,
kube-controller-manager, and kube-scheduler) when the user needs a restart.
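Invoking it is a single command against a master unit (the unit number is assumed):

```sh
# Restart the master services on unit 0.
juju run-action kubernetes-master/0 restart
```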
The kubernetes-master charm is free and open source software created
by the containers team at Canonical.
Canonical also offers enterprise support and customization services. Please
refer to the Kubernetes product page
for more details.
- (string) Allow kube-apiserver to run in privileged mode. Supported values are "true", "false", and "auto". If "true", kube-apiserver will run in privileged mode by default. If "false", kube-apiserver will never run in privileged mode. If "auto", kube-apiserver will not run in privileged mode by default, but will switch to privileged mode if gpu hardware is detected on a worker node.
- (string) Space separated list of flags and key=value pairs that will be passed as arguments to kube-scheduler. For example a value like this: runtime-config=batch/v2alpha1=true profiling=true will result in kube-scheduler being run with the following options: --runtime-config=batch/v2alpha1=true --profiling=true
- (string) Comma separated authorization modes. Allowed values are "RBAC", "Node", "Webhook", "ABAC", "AlwaysDeny" and "AlwaysAllow".
- (string) Load the nvidia device plugin daemonset. Supported values are "auto" and "false". When "auto", the daemonset will be loaded only if GPUs are detected. When "false" the nvidia device plugin will not be loaded.
- (string) The local domain for cluster DNS.
- (string) Audit policy passed to kube-apiserver via --audit-policy-file. For more info, please refer to the upstream documentation at https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
```yaml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # Don't log read-only requests from the apiserver
  - level: None
    users: ["system:apiserver"]
    verbs: ["get", "list", "watch"]
  # Don't log kube-proxy watches
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - resources: ["endpoints", "services"]
  # Don't log nodes getting their own status
  - level: None
    userGroups: ["system:nodes"]
    verbs: ["get"]
    resources:
      - resources: ["nodes"]
  # Don't log kube-controller-manager and kube-scheduler getting endpoints
  - level: None
    users: ["system:unsecured"]
    namespaces: ["kube-system"]
    verbs: ["get"]
    resources:
      - resources: ["endpoints"]
  # Log everything else at the Request level.
  - level: Request
    omitStages:
      - RequestReceived
```
- (string) Password to be used for admin user (leave empty for random password).
- (string) Audit webhook config passed to kube-apiserver via --audit-webhook-config-file. For more info, please refer to the upstream documentation at https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
- (string) Snap channel to install Kubernetes master services from
- (string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup
- (string) Space separated list of flags and key=value pairs that will be passed as arguments to kube-apiserver. For example a value like this: runtime-config=batch/v2alpha1=true profiling=true will result in kube-apiserver being run with the following options: --runtime-config=batch/v2alpha1=true --profiling=true
- (string) How often snapd handles updates for installed snaps. Setting an empty string will check 4x per day. Set to "max" to delay the refresh as long as possible. You may also set a custom string as described in the 'refresh.timer' section here: https://forum.snapcraft.io/t/system-options/87
- (string) Space-separated list of extra SAN entries to add to the x509 certificate created for the master nodes.
- (string) HTTP/HTTPS web proxy for Snappy to use when accessing the snap store.
- (string) CIDR to use for Kubernetes services. Cannot be changed after deployment.
- (string) Used by the nrpe subordinate charms. A string that will be prepended to the instance name to set the host name in nagios, so the hostname would be something like: juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.
- (string) The address of a Snap Store Proxy to use for snaps e.g. http://snap-proxy.example.com
- (string) Specify the docker registry to use when applying addons
- (string) Space separated list of flags and key=value pairs that will be passed as arguments to kube-controller-manager. For example a value like this: runtime-config=batch/v2alpha1=true profiling=true will result in kube-controller-manager being run with the following options: --runtime-config=batch/v2alpha1=true --profiling=true
- (boolean) Deploy the Kubernetes Dashboard and Heapster addons
- (boolean) Deploy kube-dns addon
- (string) The storage backend for kube-apiserver persistence. Can be "etcd2", "etcd3", or "auto". Auto mode will select etcd3 on new installations, or etcd2 on upgrades.
- (boolean) When true, master nodes will not be upgraded until the user triggers it manually by running the upgrade action.
- (boolean) If true the metrics server for Kubernetes will be deployed onto the cluster.
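Each of the options above is set the same way with `juju config`. As an illustration (the `api-extra-args` option name is an assumption based on the Canonical Kubernetes charms):

```sh
# Pass extra flags through to kube-apiserver.
juju config kubernetes-master \
  api-extra-args="runtime-config=batch/v2alpha1=true profiling=true"
```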