ceph

Description

Ceph is a distributed storage and network file system designed to
provide excellent performance, reliability, and scalability. This
package contains all server daemons and management tools for creating,
running, and administering a Ceph storage cluster.


This is a rudimentary start at a workable, elastic Ceph charm.

To deploy, simply deploy it as a service, and add units. Currently all
nodes run every Ceph component (mds, osd, and mon). Because the
monitors need a strict majority to form a quorum, you should not try
to use the cluster until it has reached an odd number of machines.
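For example, assuming the service is named "ceph", deploying and then
growing it to three units might look like:

juju deploy ceph
juju add-unit ceph
juju add-unit ceph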

Because I haven't worked out how to run mkcephfs automatically, one
manual step is required.

First deploy a single unit, SSH to it, and run:

sudo mkcephfs -a -c /etc/ceph/ceph.conf

It should create a Ceph filesystem with its data stored on /mnt. On
EC2 instances, /mnt is automatically a large ephemeral drive formatted
with ext4, and should perform reasonably well.
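Once mkcephfs completes and the daemons are up, a quick sanity check
(a suggestion, not a step the charm performs for you) is to ask the
cluster for its status from that same node:

sudo ceph -s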

After this, use add-unit (or remove-unit) to grow or shrink the
cluster. Use the run-* configuration flags and the remote-(mds|osd|mon)
relations to relate one service to another; see the sketch below.
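As a hypothetical sketch (the service names and relation wiring here
are assumptions based on the remote-* pattern above), a second service
running only OSDs could be joined to the first like so:

juju deploy ceph ceph-osd
juju set ceph-osd run-mon=no
juju set ceph-osd run-mds=no
juju add-relation ceph-osd:remote-mon ceph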

When you are not adding units, it's probably best to disable root ssh
with:

juju set name-of-service root-ssh=no
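Turn it back on before running mkcephfs or adding more units, since
those operations depend on it:

juju set name-of-service root-ssh=yes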

Once done, one should be able to mount the Ceph filesystem using any
of the service unit IPs.
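As a rough sketch, assuming cephx authentication is not enabled and
10.0.0.1 is one of the unit IPs, a kernel-client mount looks like
(6789 is the default monitor port):

sudo mkdir -p /mnt/ceph
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/ceph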

Configuration

osd-journal-size
  (int) Size of each node's OSD journal, in megabytes.
  Default: 512

run-mon
  (string) Set to "yes" to run all members of this service as monitors.
  Default: yes

run-osd
  (string) Set to "yes" to run all members of this service as OSDs.
  Default: yes

run-mds
  (string) Set to "yes" to run all members of this service as metadata
  servers.
  Default: yes

root-ssh
  (string) Allow all nodes to ssh as root to all other nodes. This
  sounds a bit risky, but it's needed for mkcephfs, so only turn it on
  while doing mkcephfs, then turn it back off.
  Default: yes

rados-port
  (int) Port to listen on for radosgw requests. 0 means do not set up
  a radosgw on this service.
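As an illustrative sketch, these options can also be set at deploy
time with a config file; the filename and values here are
hypothetical:

cat > ceph.yaml <<EOF
ceph:
  osd-journal-size: 1024
  rados-port: 80
EOF

juju deploy --config ceph.yaml ceph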