cinder #303

Description

Cinder is the block storage service for OpenStack.

Overview

This charm provides the Cinder volume service for OpenStack. It is intended to
be used alongside the other OpenStack components, starting with the Folsom
release.

Cinder is made up of 3 separate services: an API service, a scheduler and a
volume service. This charm allows them to be deployed in different
combinations, depending on user preference and requirements.

This charm was developed to support deploying Folsom on both
Ubuntu Quantal and Ubuntu Precise. Since Cinder is only available for
Ubuntu 12.04 via the Ubuntu Cloud Archive, deploying this charm to a
Precise machine will by default install Cinder and its dependencies from
the Cloud Archive.

Usage

Cinder may be deployed in a number of ways. This charm focuses on 3 main
configurations. All require the existence of the other core OpenStack
services deployed via Juju charms, specifically: mysql, rabbitmq-server,
keystone and nova-cloud-controller. The following assumes these services
have already been deployed.
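For reference, a minimal sketch of deploying those prerequisites might look
like the following (unit counts and charm options are omitted here and will
vary per deployment):

juju deploy mysql
juju deploy rabbitmq-server
juju deploy keystone
juju deploy nova-cloud-controller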

Basic, all-in-one using local storage and iSCSI

The api server, scheduler and volume service are all deployed into the same
unit. Local storage will be initialized as an LVM physical device, and a volume
group initialized. Instance volumes will be created locally as logical volumes
and exported to instances via iSCSI. This is ideal for small-scale deployments
or testing:

cat >cinder.cfg <<END
cinder:
    block-device: sdc
    overwrite: true
END
juju deploy --config=cinder.cfg cinder
juju add-relation cinder keystone
juju add-relation cinder mysql
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller

Separate volume units for scale out, using local storage and iSCSI

Separating the volume service from the API service allows the storage pool
to easily scale without the added complexity that accompanies load-balancing
the API server. When local storage on the volume servers is exhausted, we can
simply add-unit to expand our capacity. Future requests to allocate volumes
will be distributed across the pool of volume servers according to the
availability of storage space.

cat >cinder.cfg <<END
cinder-api:
    enabled-services: api, scheduler
cinder-volume:
    enabled-services: volume
    block-device: sdc
    overwrite: true
END
juju deploy --config=cinder.cfg cinder cinder-api
juju deploy --config=cinder.cfg cinder cinder-volume
juju add-relation cinder-api mysql
juju add-relation cinder-api rabbitmq-server
juju add-relation cinder-api keystone
juju add-relation cinder-api nova-cloud-controller
juju add-relation cinder-volume mysql
juju add-relation cinder-volume rabbitmq-server

# When more storage is needed, simply add more volume servers.
juju add-unit cinder-volume

All-in-one using Ceph-backed RBD volumes

All 3 services can be deployed to the same unit, but instead of relying
on local storage to back volumes, an external Ceph cluster is used. This
allows scalability and redundancy needs to be satisfied, with Cinder's RBD
driver used to create, export and connect volumes to instances. This assumes
a functioning Ceph cluster has already been deployed using the official Ceph
charm and a relation exists between the Ceph service and the nova-compute
service.

cat >cinder.cfg <<END
cinder:
    block-device: None
END
juju deploy --config=cinder.cfg cinder
juju add-relation cinder ceph
juju add-relation cinder keystone
juju add-relation cinder mysql
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
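If the Ceph cluster and its relation to nova-compute are not already in
place, they might be established along the following lines (a sketch only;
the fsid, monitor-secret and osd-devices values are placeholders and the
unit count will vary):

cat >ceph.cfg <<END
ceph:
    fsid: <a valid uuid>
    monitor-secret: <a ceph-generated secret>
    osd-devices: /dev/sdb
END
juju deploy -n 3 --config=ceph.cfg ceph
juju add-relation ceph nova-compute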

Configuration

The default values of most config options should work for typical
deployments.

Users should be aware of the following options in particular (a combined
example is shown after the list):

openstack-origin: Allows Cinder to be installed from a specific apt repository.
See config.yaml for a list of supported sources.

openstack-origin-git: Allows Cinder to be installed from source.
See config.yaml for a list of supported sources.

block-device: When using local storage, a block device should be specified to
back an LVM volume group. It's important this device exists on
all nodes that the service may be deployed to.

overwrite: Whether or not to wipe local storage of data that may prevent
it from being initialized as an LVM physical device. This includes
filesystems and partition tables. Use with CAUTION.

enabled-services: Can be used to separate cinder services between
service units (see previous section).
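As an illustration, several of these options might be combined in a single
deployment config (the values here are hypothetical):

cat >cinder.cfg <<END
cinder:
    openstack-origin: cloud:precise-folsom
    block-device: sdb sdc
    overwrite: true
END
juju deploy --config=cinder.cfg cinder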

HA/Clustering

There are two mutually exclusive high availability options: using virtual
IP(s) or DNS. In both cases, a relationship to hacluster is required, which
provides the corosync back end for the HA functionality.

To use virtual IP(s) the clustered nodes must be on the same subnet such that
the VIP is a valid IP on the subnet for one of the node's interfaces and each
node has an interface in said subnet. The VIP becomes a highly-available API
endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP
HA. If multiple networks are being used, a VIP should be provided for each
network, separated by spaces. Optionally, vip_iface or vip_cidr may be
specified.
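For example, a VIP-based HA deployment might be assembled as follows (the
address and the hacluster application name are hypothetical):

cat >ha.cfg <<END
cinder:
    vip: 10.5.100.1
END
juju deploy --config=ha.cfg cinder
juju deploy hacluster cinder-hacluster
juju add-relation cinder cinder-hacluster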

To use DNS high availability there are several prerequisites. However, DNS HA
does not require the clustered nodes to be on the same subnet.
Currently the DNS HA feature is only available for MAAS 2.0 or greater
environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must
have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s)
must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and at least one
of 'os-public-hostname', 'os-internal-hostname' or 'os-admin-hostname' must
be set in order to use DNS HA. One or more of the above hostnames may be set.
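For example, a DNS HA configuration might look like this in a deployment
config (the hostname is hypothetical; remember that 'vip' must not be set at
the same time):

cinder:
    dns-ha: true
    os-public-hostname: cinder.example.com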

The charm will throw an exception in the following circumstances:
- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and none of the os-{admin,internal,public}-hostname(s)
  are set

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.

API endpoints can be bound to distinct network spaces supporting the network separation of public, internal and admin endpoints.

Access to the underlying MySQL instance can also be bound to a specific space using the shared-db relation.

To use this feature, use the --bind option when deploying the charm:

juju deploy cinder --bind "public=public-space internal=internal-space admin=admin-space shared-db=internal-space"

Alternatively, these can also be provided as part of a Juju native bundle configuration:

cinder:
  charm: cs:xenial/cinder
  num_units: 1
  bindings:
    public: public-space
    admin: admin-space
    internal: internal-space
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.

NOTE: Existing deployments using os-*-network configuration options will continue to function; these options are preferred over any network space binding provided if set.

Configuration options

volume-group
(string) Name of volume group to create and store Cinder volumes.
Default: cinder-volumes

ssl_key
(string) SSL key to use with certificate specified as ssl_cert.

vip_iface
(string) Default network interface to use for HA vip when it cannot be automatically determined.
Default: eth0

enabled-services
(string) If splitting cinder services between units, define which services to install and configure.
Default: all

use-internal-endpoints
(boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True this option will configure services to use internal endpoints where possible.

os-admin-network
(string) The IP address and netmask of the OpenStack Admin network (e.g. 192.168.0.0/24). This network will be used for admin endpoints.

haproxy-server-timeout
(int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 30000ms is used.

remove-missing
(boolean) If True, charm will attempt to remove missing physical volumes from the volume group, if logical volumes are not allocated on them.

vip
(string) Virtual IP(s) to use to front API services in HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces.

worker-multiplier
(float) The CPU core multiplier to use when configuring worker processes for Cinder. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.

overwrite
(string) If true, charm will attempt to overwrite block devices containing previous filesystems or LVM, assuming it is not in use.
Default: false

use-syslog
(boolean) Setting this to True will allow supporting services to log to syslog.

verbose
(boolean) Enable verbose logging.

remove-missing-force
(boolean) If True, charm will attempt to remove missing physical volumes from the volume group, even when logical volumes are allocated on them. This option overrides 'remove-missing' when set.

haproxy-queue-timeout
(int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 5000ms is used.

ssl_cert
(string) SSL certificate to install and use for API ports. Setting this value and ssl_key will enable reverse proxying, point Cinder's entry in the Keystone catalog to use https, and override any certificate and key issued by Keystone (if it is configured to do so).

prefer-ipv6
(boolean) If True enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default) IPv4 is expected. NOTE: these charms do not currently support IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.

os-public-network
(string) The IP address and netmask of the OpenStack Public network (e.g. 192.168.0.0/24). This network will be used for public endpoints.

ha-mcastport
(int) Default multicast port number that will be used to communicate between HA Cluster nodes.
Default: 5454

volume-usage-audit-period
(string) Time period for which to generate volume usages. The options are hour, day, month, or year.
Default: month

ha-bindiface
(string) Default network interface on which the HA cluster will bind to communicate with the other members of the HA Cluster.
Default: eth0

nagios_servicegroups
(string) A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.

haproxy-client-timeout
(int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 30000ms is used.

rabbit-user
(string) Username to request access on rabbitmq-server.
Default: cinder

os-public-hostname
(string) The hostname or address of the public endpoints created for cinder in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'cinder.example.com' with ssl enabled will create two public endpoints for cinder: https://cinder.example.com:443/v2/$(tenant_id)s and https://cinder.example.com:443/v3/$(tenant_id)s

action-managed-upgrade
(boolean) If True enables openstack upgrades for this charm via juju actions. You will still need to set openstack-origin to the new repository but instead of an upgrade running automatically across all units, it will wait for you to execute the openstack-upgrade action for this charm on each unit. If False it will revert to existing behavior of upgrading all units on config change.

os-admin-hostname
(string) The hostname or address of the admin endpoints created for cinder in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'cinder.admin.example.com' with ssl enabled will create two admin endpoints for cinder: https://cinder.admin.example.com:443/v2/$(tenant_id)s and https://cinder.admin.example.com:443/v3/$(tenant_id)s

block-device
(string) The block devices on which to create the LVM volume group. May be set to None for deployments that will not need local storage (e.g. Ceph/RBD-backed volumes). This can also be a space-delimited list of block devices to attempt to use in the cinder LVM volume group - each block device detected will be added to the available physical volumes in the volume group. May be set to the path and size of a local file (/path/to/file.img|$sizeG), which will be created and used as a loopback device (for testing only). $sizeG defaults to 5G.
Default: sdb
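For example, a loopback file suitable for testing could be configured with
the following (the path and size are illustrative):

juju set cinder block-device="/var/lib/cinder/cinder.img|10G"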
api-listening-port
(int) OpenStack Volume API listening port.
Default: 8776

config-flags
(string) Comma-separated list of key=value config flags. These values will be placed in the cinder.conf [DEFAULT] section.
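For instance (the flag names here are illustrative only):

juju set cinder config-flags="default_volume_type=lvm,osapi_volume_workers=4"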
dns-ha
(boolean) Use DNS HA with MAAS 2.0. Note if this is set do not set vip settings below.

glance-api-version
(int) Newer storage drivers may require the v2 Glance API to perform certain actions, e.g. the RBD driver requires this to support COW cloning of images. This option will default to v1 for backwards compatibility with older glance services.
Default: 1

openstack-origin
(string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Ubuntu Cloud Archive, e.g.
cloud:<series>-<openstack-release>
cloud:<series>-<openstack-release>/updates
cloud:<series>-<openstack-release>/staging
cloud:<series>-<openstack-release>/proposed
See https://wiki.ubuntu.com/OpenStack/CloudArchive for info on which cloud archives are available and supported. NOTE: updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade unless action-managed-upgrade is set to True.
Default: distro
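For example, to install from an Ubuntu Cloud Archive pocket (the release
named here is illustrative):

juju set cinder openstack-origin=cloud:trusty-mitaka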
ceph-osd-replication-count
(int) This value dictates the number of replicas ceph must make of any object it stores within the cinder rbd pool. Of course, this only applies if using Ceph as a backend store. Note that once the cinder rbd pool has been created, changing this value will not have any effect (although the configuration of a pool can always be changed within ceph itself or via the charm used to deploy ceph).
Default: 3

os-internal-network
(string) The IP address and netmask of the OpenStack Internal network (e.g. 192.168.0.0/24). This network will be used for internal endpoints.

database
(string) Database to request access to.
Default: cinder

openstack-origin-git
(string) Specifies a default OpenStack release name, or a YAML dictionary listing the git repositories to install from. The default OpenStack release name may be one of the following, where the corresponding OpenStack github branch will be used: liberty, mitaka, newton, master. The YAML must minimally include requirements and cinder repositories, and may also include repositories for other dependencies:
repositories:
  - {name: requirements, repository: 'git://github.com/openstack/requirements', branch: master}
  - {name: cinder, repository: 'git://github.com/openstack/cinder', branch: master}
release: master

region
(string) OpenStack Region
Default: RegionOne

nagios_context
(string) Used by the nrpe-external-master subordinate charm. A string that will be prepended to instance name to set the host name in nagios. So for instance the hostname would be something like 'juju-myservice-0'. If you are running multiple environments with the same services in them this allows you to differentiate between them.
Default: juju

ssl_ca
(string) SSL CA to use with the certificate and key provided - this is only required if you are providing a privately signed ssl_cert and ssl_key.

harden
(string) Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.

restrict-ceph-pools
(boolean) Cinder can optionally restrict the key it asks Ceph for to only be able to access the pools it needs.

rabbit-vhost
(string) RabbitMQ virtual host to request access on rabbitmq-server.
Default: openstack

debug
(boolean) Enable debug logging.

os-internal-hostname
(string) The hostname or address of the internal endpoints created for cinder in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'cinder.internal.example.com' with ssl enabled will create two internal endpoints for cinder: https://cinder.internal.example.com:443/v2/$(tenant_id)s and https://cinder.internal.example.com:443/v3/$(tenant_id)s

haproxy-connect-timeout
(int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 5000ms is used.

database-user
(string) Username to request database access.
Default: cinder

vip_cidr
(int) Default CIDR netmask to use for HA vip when it cannot be automatically determined.
Default: 24

ephemeral-unmount
(string) Cloud instances provide ephemeral storage which is normally mounted on /mnt. Providing this option will force an unmount of the ephemeral device so that it can be used as a Cinder storage device. This is useful for testing purposes (cloud deployment is not a typical use case).