Overview

Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.

This charm deploys a Ceph monitor cluster.

Usage

Boot things up by using:

juju deploy -n 3 ceph-mon

By default the ceph-mon cluster will not bootstrap until 3 service units have
been deployed and started; this is to ensure that a quorum is achieved prior to
adding storage devices.
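
You can watch the units come up and the monitor cluster bootstrap with
(the application name simply acts as a status filter):

juju status ceph-mon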

Actions

This charm supports pausing and resuming Ceph's health functions across the
cluster, for example while doing maintenance on a machine. To pause or resume,
run one of:

juju action do --unit ceph-mon/0 pause-health
juju action do --unit ceph-mon/0 resume-health

Scale Out Usage

You can scale out storage capacity and add an object gateway by deploying the
Ceph OSD and Ceph RadosGW charms alongside this charm.
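
As a sketch only (assuming a Juju 2.x client; the unit count and device path
are illustrative, and block devices are supplied through the ceph-osd charm's
osd-devices option):

juju deploy -n 3 ceph-osd --config osd-devices=/dev/sdb
juju add-relation ceph-osd ceph-mon

juju deploy ceph-radosgw
juju add-relation ceph-radosgw ceph-mon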

Rolling Upgrades

The ceph-mon and ceph-osd charms have the ability to initiate a rolling
upgrade, which is triggered by changing the source configuration option. To
perform a rolling upgrade, first set source for ceph-mon and watch
juju status. Once the monitor cluster has been upgraded, set source for
ceph-osd and again watch juju status for output. The monitors and OSDs will
sort themselves into a known order and upgrade one by one. As each server
upgrades, the upgrade code stops all of the monitor or OSD processes on that
server, applies the update and then restarts them. The juju status output
will show which previous server each unit is waiting on.

Supported Upgrade Paths

Currently the following upgrade paths are supported using
the Ubuntu Cloud Archive:
- trusty-firefly -> trusty-hammer
- trusty-hammer -> trusty-jewel

Firefly ships in Trusty; Hammer is available in Trusty-Juno (end of life),
Trusty-Kilo and Trusty-Liberty; Jewel is available in Trusty-Mitaka.

For example, if the current source setting is cloud:trusty-liberty, changing
it to cloud:trusty-mitaka will initiate a rolling upgrade of the monitor
cluster from Hammer to Jewel.
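
With a Juju 2.x client (older clients use juju set rather than juju config),
that upgrade could be triggered with:

juju config ceph-mon source=cloud:trusty-mitaka

and, once juju status shows that the monitor cluster upgrade has completed:

juju config ceph-osd source=cloud:trusty-mitaka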

Edge cases

There is an edge case in the upgrade code: if the previous node never starts
upgrading itself, the rolling upgrade can hang forever. If you notice this has
happened it can be fixed by setting the appropriate key in the Ceph monitor
cluster. The monitor cluster will have keys that look like
ceph-mon_ip-ceph-mon-0_1484680239.573482_start and
ceph-mon_ip-ceph-mon-0_1484680274.181742_stop. Each server looks for the stop
key of the previous server to indicate that it upgraded successfully and that
it is safe to take itself down. If the stop key is not present, the server
will wait 10 minutes, then consider that server dead and move on.
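
Assuming these keys live in the monitors' config-key store, as the key names
suggest (a sketch only; the exact key name and value to set depend on your
charm version and on what the stuck unit reports in juju status), they can be
inspected and, if necessary, created from any ceph-mon unit:

juju ssh ceph-mon/0 sudo ceph config-key list
juju ssh ceph-mon/0 sudo ceph config-key put <missing-stop-key> <value>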

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

Network traffic can be bound to specific network spaces using the public (front-side) and cluster (back-side) bindings:

juju deploy ceph-mon --bind "public=data-space cluster=cluster-space"

Alternatively, these can also be provided as part of a Juju native bundle configuration:

ceph-mon:
  charm: cs:xenial/ceph-mon
  num_units: 1
  bindings:
    public: data-space
    cluster: cluster-space

Please refer to the Ceph Network Reference for details on how using these options affects network traffic within a Ceph deployment.

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using ceph-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.

NOTE: The monitor-hosts field is only used to migrate existing clusters to a juju managed solution and should be left blank otherwise.

Contact Information

Report bugs on Launchpad


Technical Footnotes

This charm uses the new-style Ceph deployment as reverse-engineered from the
Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected
a different strategy to form the monitor cluster. Since we don't know the
names or addresses of the machines in advance, we use the relation-joined
hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster a quorum forms quickly, and OSD bringup proceeds.
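
As an illustrative sketch only (the charm writes more options than shown here,
and the fsid and monitor addresses come from the actual deployment), the
resulting [global] section looks something like:

[global]
fsid = <cluster fsid>
mon host = 10.0.0.1 10.0.0.2 10.0.0.3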

See the Ceph documentation for more information on monitor cluster deployment strategies and pitfalls.

Configuration

sysctl
(string)
YAML-formatted associative array of sysctl key/value pairs to be set
persistently. By default we set pid_max, max_map_count and threads-max to a
high value to avoid problems with large numbers (>20) of OSDs recovering.
Very large clusters should set those values even higher (e.g. the maximum for
kernel.pid_max is 4194303).
Default: { kernel.pid_max: 2097152, vm.max_map_count: 524288, kernel.threads-max: 2097152 }
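
For example, to raise kernel.pid_max further on a deployed application (a
sketch assuming a Juju 2.x client; note that the value you set replaces the
default map above, so carry over any defaults you still want):

juju config ceph-mon sysctl="{ kernel.pid_max: 4194303, vm.max_map_count: 524288, kernel.threads-max: 2097152 }"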

expected-osd-count
(int)
Number of OSDs expected to be deployed in the cluster. This value is used
for calculating the number of placement groups on pool creation. The
number of placement groups for new pools is based on the actual number
of OSDs in the cluster or the expected-osd-count, whichever is greater.
A value of 0 will cause the charm to only consider the actual number of
OSDs in the cluster.

nagios_servicegroups
(string)
A comma-separated list of nagios servicegroups. If left empty, the
nagios_context will be used as the servicegroup.

default-rbd-features
(int)
Restrict the rbd features used to the specified level. If set, this will
inform clients that they should set the config value `rbd default
features`, for example:
.
  rbd default features = 1
.
This needs to be set to 1 when deploying a cloud with the nova-lxd
hypervisor.

monitor-hosts
(string)
A space-separated list of ceph mon hosts to use. This field is only used
to migrate an existing cluster to a juju-managed solution and should
otherwise be left unset.

use-syslog
(boolean)
If set to True, supporting services will log to syslog.

use-direct-io
(boolean)
Configure use of direct IO for OSD journals.
Default: True

source
(string)
Optional configuration to support use of additional sources such as:
.
  - ppa:myteam/ppa
  - cloud:xenial-proposed/ocata
  - http://my.archive.com/ubuntu main
.
The last option should be used in conjunction with the key configuration
option.

monitor-secret
(string)
The Ceph secret key used by Ceph monitors. This value will become the
mon.key. To generate a suitable value use:
.
  ceph-authtool /dev/stdout --name=mon. --gen-key
.
If left empty, a secret key will be generated.
.
NOTE: Changing this configuration after deployment is not supported and
new service units will not be able to join the cluster.

prefer-ipv6
(boolean)
If True, enables IPv6 support. The charm will expect network interfaces
to be configured with an IPv6 address. If set to False (the default),
IPv4 is expected.
.
NOTE: these charms do not currently support the IPv6 privacy extension.
In order for this charm to function correctly, the privacy extension must
be disabled and a non-temporary address must be configured/available on
your network interface.

auth-supported
(string)
Which authentication flavour to use.
.
Valid options are "cephx" and "none". If "none" is specified,
keys will still be created and deployed so that cephx can be
enabled later.
Default: cephx

customize-failure-domain
(boolean)
Setting this to true will tell Ceph to replicate across Juju's
Availability Zones instead of specifically by host.

ceph-public-network
(string)
The IP address and netmask of the public (front-side) network (e.g.,
192.168.0.0/24).
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.

key
(string)
Key ID to import to the apt keyring to support use with arbitrary source
configuration from outside of Launchpad archives or PPAs.

config-flags
(string)
User-provided Ceph configuration. Supports a string representation of
a python dictionary where each top-level key represents a section in
the ceph.conf template. You may only use sections supported in the
template.
.
WARNING: this is not the recommended way to configure the underlying
services that this charm installs and is used at the user's own risk.
This option is mainly provided as a stop-gap for users that either
want to test the effect of modifying some config or who have found
a critical bug in the way the charm has configured their services
and need it fixed immediately. We ask that whenever this is used,
the user consider opening a bug on this charm at
http://bugs.launchpad.net/charms providing an explanation of why the
config was needed so that we may consider it for inclusion as a
natively supported config in the charm.
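
An illustrative sketch of the expected format (the section and option shown
here are examples only; only sections supported by the charm's template may
be used):

juju config ceph-mon config-flags="{'global': {'debug mon': '1/5'}}"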

                        
fsid
(string)
The unique identifier (fsid) of the Ceph cluster.
.
To generate a suitable value use `uuidgen`.
If left empty, an fsid will be generated.
.
NOTE: Changing this configuration after deployment is not supported and
new service units will not be able to join the cluster.

loglevel
(int)
Mon and OSD debug level. Max is 20.
Default: 1

pgs-per-osd
(int)
The number of placement groups per OSD to target. It is important to
properly size the number of placement groups per OSD as too many
or too few placement groups per OSD may cause resource constraints and
performance degradation. This value comes from the recommendation of
the Ceph placement group calculator (http://ceph.com/pgcalc/) and
recommended values are:
.
100 - If the cluster OSD count is not expected to increase in the
      foreseeable future.
200 - If the cluster OSD count is expected to increase (up to 2x) in the
      foreseeable future.
300 - If the cluster OSD count is expected to increase between 2x and 3x
      in the foreseeable future.
Default: 100
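
As a rough worked example (an approximation of the calculator's approach; the
charm's exact calculation may differ): with 30 OSDs, the default target of
100 placement groups per OSD and 3-way replication, a single pool holding all
of the data would be sized at roughly (30 x 100) / 3 = 1000 placement groups,
which is then rounded to a nearby power of two (1024).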

nagios_context
(string)
Used by the nrpe-external-master subordinate charm.
A string that will be prepended to instance name to set the hostname
in nagios. So for instance the hostname would be something like:
.
    juju-myservice-0
.
If you're running multiple environments with the same services in them
this allows you to differentiate between them.
Default: juju

harden
(string)
Apply system hardening. Supports a space-delimited list of modules
to run. Supported modules currently include os, ssh, apache and mysql.

monitor-count
(int)
Number of ceph-mon units to wait for before attempting to bootstrap the
monitor cluster. For production clusters the default value of 3 ceph-mon
units is normally a good choice.
.
For test and development environments you can enable single-unit
deployment by setting this to 1.
.
NOTE: To establish quorum and enable partition tolerance an odd number of
ceph-mon units is required.
Default: 3
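
For a throwaway test environment, a single-unit cluster can be deployed with
(assuming a Juju 2.x client):

juju deploy ceph-mon -n 1 --config monitor-count=1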

ceph-cluster-network
(string)
The IP address and netmask of the cluster (back-side) network (e.g.,
192.168.0.0/24).
.
If multiple networks are to be used, a space-delimited list of a.b.c.d/x
can be provided.