This is a rudimentary start at a workable elastic Ceph charm.

To deploy, simply deploy it as a service and add units. All nodes
currently run all Ceph components (mon, mds, and osd). This means you
should not try to use the cluster until it has reached an odd number of
units, so that the monitors can form a quorum.
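As a sketch (the service name 'ceph' and the local charm path are
placeholders; adjust them, and the flags, to your juju version):

```shell
# Deploy the charm as a service, then add units until the count is
# odd so the monitors can form a quorum.
juju deploy --repository . local:ceph ceph
juju add-unit ceph
juju add-unit ceph
```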
Because mkcephfs has not yet been automated by the charm, first deploy a
single unit, SSH to it, and run:

  sudo mkcephfs -a -c /etc/ceph/ceph.conf
It should create a ceph filesystem with data stored on /mnt. On EC2
instances, this is automatically a large ephemeral drive with ext4,
and should perform reasonably well.
After this, use add-unit and remove-unit to grow or shrink the cluster.
Use the 'run-xxx' config flags and the remote-(mds|osd|mon) relations to
relate one service to another.
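For example (the exact flag and relation names here are assumptions based
on the conventions above; check the charm's config.yaml and metadata.yaml
for the real ones):

```shell
# Hypothetical: run only the osd component on this service.
juju set name-of-service run-osd=yes run-mds=no
# Hypothetical: join a second service to the monitors via remote-mon.
juju add-relation other-service:remote-mon name-of-service
```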
When you are not adding units, it's probably best to disable the root
ssh login, which is only needed by mkcephfs:

  juju set name-of-service root-ssh=no
Once done, one should be able to mount the Ceph filesystem using any of
the service unit IPs.
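For instance, with the kernel Ceph client installed (10.0.0.1 stands in
for a unit's address, 6789 is the default monitor port, and
authentication options may be needed depending on the cluster's
configuration):

```shell
# Mount the cluster filesystem from any service unit's address.
sudo mkdir -p /mnt/ceph
sudo mount -t ceph 10.0.0.1:6789:/ /mnt/ceph
```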
|2012/01/28 Mark Mims strong config types
|2011/10/20 Clint Byrum add osds dynamically (revno 15)
|2011/10/20 Clint Byrum works for mon scaling. Not so much for mds/osd (revno 14)
|2011/10/19 Clint Byrum adding mon departed (revno 13)
|2011/10/19 Clint Byrum adds mons (revno 12)
|2011/10/19 Clint Byrum communicate mon dir with tar/base64, still need to track pending mons (revno 11)
|2011/10/15 Clint Byrum getting closer, fixing mechanical problems, still need to time mkcephfs or coordinate it properly (revno 10)
|2011/10/15 Clint Byrum more dynamic mon processing, refactors of local handling (revno 9)
|2011/10/15 Clint Byrum adding remote/server relations (revno 8)