Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
This charm deploys a Ceph cluster.
The ceph charm has two pieces of mandatory configuration for which no
defaults are provided:

    fsid:
        uuid specific to a ceph cluster used to ensure that different
        clusters don't get mixed up - use `uuid` to generate one.

    monitor-secret:
        a ceph generated key used by the daemons that manage the cluster
        to control security. You can use the ceph-authtool command to
        generate one:

            ceph-authtool /dev/stdout --name=mon. --gen-key
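For illustration, generating both values might look like the following;
the uuid and key shown are made-up examples, not values to reuse:

    $ uuid
    0f6cbe56-6a43-11e2-8ba0-0018fe64dcc4    # example output only
    $ ceph-authtool /dev/stdout --name=mon. --gen-key
    [mon.]
            key = AQBAex9RwJHwIRAAnZC7S2UFp9+xnt4zDlXxnw==    # example output only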
These two pieces of configuration must NOT be changed post bootstrap; attempting
to do this will cause a reconfiguration error and new service units will not join
the existing ceph cluster.
The charm also supports specification of the storage devices to use in the ceph
cluster:

    osd-devices:
        A list of devices that the charm will attempt to detect, initialise and
        activate as ceph storage.

        This can be a superset of the actual storage devices presented to
        each service unit and can be changed post ceph bootstrap using
        `juju set`.
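For example, to change the device list on a running deployment using the
juju 1.x `juju set` syntax (the service name `ceph` assumes the deployment
command shown below):

    juju set ceph osd-devices="/dev/vdb /dev/vdc /dev/vdd /dev/vde /dev/vdf"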
At a minimum you must provide a juju config file during initial deployment
with the fsid and monitor-secret options (contents of ceph.yaml below).
Specifying the osd-devices to use is also a good idea.
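For example, ceph.yaml might contain the following; the fsid and
monitor-secret values are placeholders to be replaced with values generated
as described above:

    ceph:
      fsid: <uuid generated with `uuid`>
      monitor-secret: <key generated with ceph-authtool>
      osd-devices: /dev/vdb /dev/vdc /dev/vdd /dev/vde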
Boot things up by using:

    juju deploy -n 3 --config ceph.yaml ceph
By default the ceph cluster will not bootstrap until 3 service units have been
deployed and started; this is to ensure that a quorum is achieved prior to
adding storage devices.
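Progress can be watched with the standard juju tooling (not anything
charm-specific), for example:

    juju status ceph

Once all three units report started, the monitor cluster should have formed
quorum and OSD bringup will proceed.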
Authors:
    Paul Collins <firstname.lastname@example.org>
    James Page <email@example.com>

Report bugs at: http://bugs.launchpad.net/charms/+source/ceph/+filebug
This charm is currently deliberately inflexible and potentially destructive.
It is designed to deploy on exactly three machines. Each machine will run mon
and osd.
This charm uses the new-style Ceph deployment as reverse-engineered from the
Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected
a different strategy to form the monitor cluster. Since we don't know the
names *or* addresses of the machines in advance, we use the relation-joined
hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster a quorum forms quickly, and OSD bringup proceeds.
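The resulting ceph.conf stanza looks roughly like this (the addresses are
illustrative; in practice they are the three unit addresses gathered via
the relation):

    [global]
        mon host = 192.168.0.10 192.168.0.11 192.168.0.12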
The osds use so-called "OSD hotplugging". ceph-disk-prepare is used to create
the filesystems with a special GPT partition type. udev is set up to mount
such filesystems and start the osd daemons as their storage becomes visible to
the system (or after "udevadm trigger").
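For reference, the manual equivalent of what the charm and udev automate is
roughly the following sketch (the device name is illustrative):

    ceph-disk-prepare /dev/vdb                              # create the GPT-tagged partition and OSD filesystem
    udevadm trigger --subsystem-match=block --action=add    # replay block device events so udev mounts it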
The Chef cookbook above performs some extra steps to generate an OSD
bootstrapping key and propagate it to the other nodes in the cluster. Since
all OSDs run on nodes that also run mon, we don't need this and did not
implement it.
See http://ceph.com/docs/master/dev/mon-bootstrap/ for more information on Ceph
monitor cluster deployment strategies and pitfalls.
2013/04/25  Marco Ceppi      Added icon.svg (revno 57)
2013/04/22  Jorge O. Castro  Add categories
2013/03/18  James Page       This adds the python-ceph package to the list of
                             packages installed by this charm, so that the
                             ceph (revno 55)
2013/02/08  James Page       Add support for Ceph Bobtail LTS:
                             - XFS and BTRFS disk formats for OSDs.
                             - Separate journal device (revno 54)
2013/02/08  James Page       Paul Collins 2013-01-28: use ceph.list, not
                             quantum.list
2012/11/22  James Page       Updated default source to cloud-archive for
                             precise charm branch (revno 52)
2012/11/22  Juan L. Negron   1) Support use of cloud: prefix to pull ceph from
                             the Ubuntu cloud archive.
                             2) Better filesystem ha (revno 51)
2012/10/19  James Page       Added is_leader to ceph (revno 50)
2012/10/18  James Page       Merged changes from pjdc including cephx
                             configuration support and better arbitrary
                             repository handling (revno 49)