Openstack Charmers Next Ceph

  • By OpenStack Charmers - Testing Charms
  • Cloud
Channel        Revision  Published    Runs on
latest/stable  1259      09 Feb 2022  Ubuntu 17.10, 17.04, 16.04, 14.04
latest/edge    688       19 Mar 2021  Ubuntu 15.10
juju deploy openstack-charmers-next-ceph

Platform: Ubuntu 17.10 | 17.04 | 16.04 | 14.04

Actions (example invocations for the most common actions are collected after this list)

  • add-disk

    Add disk(s) to Ceph

    Params
    • bucket string

      The name of the bucket in Ceph to add these devices into

    • osd-devices string

      The devices to format and set up as osd volumes.

    Required

    osd-devices

  • create-erasure-profile

    Create a new erasure code profile to use on a pool.

    Params
    • coding-chunks integer

      The number of coding chunks, i.e. the number of additional chunks computed by the encoding functions. If there are 2 coding chunks, it means 2 OSDs can be out without losing data.

    • data-chunks integer

      The number of data chunks, i.e. the number of chunks into which the original object is divided. For instance, if K = 2, a 10KB object will be divided into K objects of 5KB each.

    • durability-estimator integer

      The number of parity chunks each of which includes each data chunk in its calculation range. The number is used as a durability estimator. For instance, if c=2, 2 OSDs can be down without losing data.

    • failure-domain string

      Setting failure-domain=host will create a CRUSH ruleset that ensures no two chunks are stored on the same host.

    • locality-chunks integer

      Group the coding and data chunks into sets of size locality. For instance, for k=4 and m=2, when locality=3, two groups of three are created. Each set can be recovered without reading chunks from another set.

    • name string

      The name of the profile

    • plugin string

      The erasure plugin to use for this profile. See http://docs.ceph.com/docs/master/rados/operations/erasure-code-profile/ for more details

    Required

    name, data-chunks, coding-chunks

  • create-pool

    Creates a pool

    Params
    • erasure-profile-name string

      The name of the erasure coding profile to use for this pool. Note this profile must exist before calling create-pool

    • name string

      The name of the pool

    • pool-type string

      The pool type, which may be either 'replicated' (to recover from lost OSDs by keeping multiple copies of the objects) or 'erasure' (to get a kind of generalized RAID5 capability).

    • profile-name string

      The CRUSH profile to use for this pool. The ruleset must exist first.

    • replicas integer

      For a replicated pool, this is the number of replicas to store of each object.

    Required

    name

  • delete-erasure-profile

    Deletes an erasure code profile.

    Params
    • name string

      The name of the profile

    Required

    name

  • delete-pool

    Deletes the named pool

    Params
    • pool-name string

      The name of the pool

    Required

    pool-name

  • get-erasure-profile

    Display an erasure code profile.

    Params
    • name string

      The name of the profile

    Required

    name

  • list-disks

    List the unmounted disks on the specified unit

  • list-erasure-profiles

    List the names of all erasure code profiles

  • list-pools

    List your cluster’s pools

  • pause

    CAUTION - Sets the local osd units in the charm to 'out', but does not stop the osds. Unless the cluster is set to noout (see below), this removes them from the ceph cluster and forces ceph to migrate the PGs to other OSDs in the cluster. See the following.

    http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-the-osd "Do not let your cluster reach its full ratio when removing an OSD. Removing OSDs could cause the cluster to reach or exceed its full ratio." Also note that for small clusters you may encounter the corner case where some PGs remain stuck in the active+remapped state. Refer to the above link on how to resolve this.

    The pause-health action can be used before pausing the ceph units to stop the cluster rebalancing the data off this unit. pause-health sets 'noout' on the cluster such that it will not try to rebalance the data across the remaining units. See the maintenance sequence example after this list.

    It is up to the user of the charm to determine whether pause-health should be used as it depends on whether the osd is being paused for maintenance or to remove it from the cluster completely.

  • pause-health

    Pause ceph health operations across the entire ceph cluster

  • pool-get

    Get a value for the pool

    Params
    • key string

      Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#get-pool-values

    • pool-name string

      The pool to get this variable from.

    Required

    key, pool-name

  • pool-set

    Set a value for the pool

    Params
    • key string

      Any valid Ceph key from http://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values

    • pool-name string

      The pool to set this variable on.

    • value string

      The value to set

    Required

    key, value, pool-name

  • pool-statistics

    Show a pool’s utilization statistics

  • remove-pool-snapshot

    Remove a pool snapshot

    Params
    • pool-name string

      The name of the pool

    • snapshot-name string

      The name of the snapshot

    Required

    snapshot-name, pool-name

  • rename-pool

    Rename a pool

    Params
    • new-name string

      The new name of the pool

    • pool-name string

      The name of the pool

    Required

    pool-name, new-name

  • resume

    Set the local osd units in the charm to 'in'. Note that the pause option does NOT stop the osd processes.

  • resume-health

    Resume ceph health operations across the entire ceph cluster

  • set-pool-max-bytes

    Set pool quotas for the maximum number of bytes.

    Params
    • max integer

      The maximum number of bytes to allow in the pool

    • pool-name string

      The name of the pool

    Required

    pool-name, max

  • snapshot-pool

    Snapshot a pool

    Params
    • pool-name string

      The name of the pool

    • snapshot-name string

      The name of the snapshot

    Required

    snapshot-name, pool-name
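
Example invocations

The commands below are a minimal sketch of how these actions can be run with the Juju 2.x 'juju run-action' command; they are illustrative, not part of the charm itself. They assume the application was deployed under the name ceph (for example 'juju deploy openstack-charmers-next-ceph ceph'), so the first unit is ceph/0, and that the client is recent enough to support the --wait flag. Unit names, device paths, pool names and profile names are all placeholders to adapt to your deployment.

Listing unmounted disks on a unit and adding one as an OSD (the device paths and the bucket name fast-ssd are assumptions):

  juju run-action --wait ceph/0 list-disks
  juju run-action --wait ceph/0 add-disk osd-devices=/dev/vdb
  juju run-action --wait ceph/0 add-disk osd-devices=/dev/vdc bucket=fast-ssd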
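
Creating an erasure code profile; the profile name, the jerasure plugin choice and the chunk counts are illustrative only:

  juju run-action --wait ceph/0 create-erasure-profile name=myprofile plugin=jerasure data-chunks=4 coding-chunks=2 failure-domain=host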
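
Creating a replicated pool and an erasure-coded pool; as noted above, the erasure profile must already exist before create-pool is called:

  juju run-action --wait ceph/0 create-pool name=mypool replicas=3
  juju run-action --wait ceph/0 create-pool name=my-ec-pool pool-type=erasure erasure-profile-name=myprofile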
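
Listing, inspecting and deleting erasure code profiles:

  juju run-action --wait ceph/0 list-erasure-profiles
  juju run-action --wait ceph/0 get-erasure-profile name=myprofile
  juju run-action --wait ceph/0 delete-erasure-profile name=myprofile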
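
Routine pool housekeeping (listing pools, checking utilization, renaming and deleting a pool):

  juju run-action --wait ceph/0 list-pools
  juju run-action --wait ceph/0 pool-statistics
  juju run-action --wait ceph/0 rename-pool pool-name=mypool new-name=newpool
  juju run-action --wait ceph/0 delete-pool pool-name=newpool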
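
Reading and changing a pool value with pool-get and pool-set; the key 'size' is one of the keys documented at the Ceph pool operations pages linked above:

  juju run-action --wait ceph/0 pool-get pool-name=mypool key=size
  juju run-action --wait ceph/0 pool-set pool-name=mypool key=size value=3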
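
Applying a byte quota to a pool with set-pool-max-bytes; the 10 GiB figure (10737418240 bytes) is arbitrary:

  juju run-action --wait ceph/0 set-pool-max-bytes pool-name=mypool max=10737418240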
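
Creating and removing a pool snapshot; pool and snapshot names are placeholders:

  juju run-action --wait ceph/0 snapshot-pool pool-name=mypool snapshot-name=mysnap
  juju run-action --wait ceph/0 remove-pool-snapshot pool-name=mypool snapshot-name=mysnap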
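
A possible maintenance sequence for a single unit, following the caution in the pause description: set noout first so the cluster does not rebalance, pause the unit, carry out the maintenance, then reverse the steps. Whether pause-health is appropriate at all depends on whether the OSDs are being removed permanently, as noted above.

  juju run-action --wait ceph/0 pause-health
  juju run-action --wait ceph/0 pause
  (carry out maintenance on the unit)
  juju run-action --wait ceph/0 resume
  juju run-action --wait ceph/0 resume-health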