The OpenStack Shared File Systems service (Manila) provides a set of services
for managing shared file systems in a multi-tenant cloud environment. The
service is analogous to the block-based storage management provided by the
OpenStack Block Storage (Cinder) project. With the Shared File Systems
service, you can create a remote file system, mount it on your instances, and
then read and write data from your instances to and from that file system.
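The create/mount/use lifecycle described above can be sketched with the manila CLI. This is illustrative only: the share name, size and mount point are examples, and the export location must be taken from the command output.

```shell
# Illustrative only: create a 1 GiB NFS share, then mount it from an instance.
manila create NFS 1 --name demo-share   # 'demo-share' is an example name
manila list                             # wait until the share is 'available'
manila show demo-share                  # note the export location path
# On an instance with network access to the share (substitute the real
# export location reported by 'manila show'):
sudo mount -t nfs <export-location> /mnt
```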
This is a pre-release charm for testing.
This charm provides the Manila shared file service for an OpenStack Cloud. It
installs a single instance that, on its own, can't be used.
In order to use the manila charm, a suitable backend charm is needed to
configure a share backend. At the time of writing (Dec 2016), the only
backend charm available for testing is the generic backend charm,
'manila-generic'. It configures a generic file-share backend that implements
an NFS server, which in turn uses a Cinder block storage backend to provide
the share instances.
Without a backend subordinate charm related to the manila charm there will be
no manila backends configured, and the manila charm will remain stuck in the
blocked state.
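This blocked state can be observed with juju status (the exact workload status message varies by charm revision, so none is shown here):

```shell
# The manila application's workload status remains 'blocked' until a
# backend subordinate (e.g. manila-generic) is related to it.
juju status manila
```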
Manila share backends are configured using subordinate charms
It's necessary to be able to configure a share backend independently of the
main charm, so plugin (subordinate) charms are used to configure each
backend. Multiple backend charms can be related to the manila charm, allowing
a single manila (Juju) application to support multiple share backends.
Essentially, a plugin needs to be able to:
- configure its section in manila.conf, along with any network plugins that
  it needs (assuming it is a share backend that manages its own share
  instances); and
- ensure that the relevant services are restarted.
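The backend wiring described above can be sketched as follows with Juju 2.x (the charm URLs match those used in the bundle later in this document; the possibility of further backends is hypothetical for now, since only manila-generic exists):

```shell
# Deploy the principal charm and the generic backend subordinate, then
# relate them so the backend section is written into manila.conf.
juju deploy cs:~openstack-charmers/xenial/manila
juju deploy cs:~openstack-charmers/xenial/manila-generic
juju add-relation manila manila-generic
# Additional backend subordinate charms, once available, would be
# related to the same manila application in the same way.
```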
This pre-release of manila provides (in the charm store):
- charm-manila: the main charm,
- interface-manila-plugin: the interface for plugging in the generic
  backend (and other interfaces),
- charm-manila-generic: the plugin for configuring the generic backend.
The backend charm provides a piece of the manila.conf configuration file,
containing the sections necessary to configure its backend. This mostly
concerns the share level, rather than the API level.
Manila (plus manila-generic) relies on services from the mysql/percona,
rabbitmq-server and keystone charms, and on a storage backend charm. The
following YAML file will create a small, unconfigured OpenStack system with
the necessary components to start testing with Manila. Note that it targets
the 'next' OpenStack charms, which are essentially 'edge' charms.
# vim: set ts=2 et:
# Juju 2.0 deploy bundle for development ('next') charms
# UOSCI relies on this for OS-on-OS deployment testing
series: xenial
automatically-retry-hooks: False
services:
  mysql:
    charm: cs:~openstack-charmers/xenial/percona-cluster
    num_units: 1
    constraints: mem=1G
    options:
      dataset-size: 50%
      root-password: mysql
  rabbitmq-server:
    charm: cs:~openstack-charmers/xenial/rabbitmq-server
    num_units: 1
    constraints: mem=1G
  keystone:
    charm: cs:~openstack-charmers/xenial/keystone
    num_units: 1
    constraints: mem=1G
    options:
      admin-password: openstack
      admin-token: ubuntutesting
      preferred-api-version: "2"
  glance:
    charm: cs:~openstack-charmers/xenial/glance
    num_units: 1
    constraints: mem=1G
  nova-cloud-controller:
    charm: cs:~openstack-charmers/xenial/nova-cloud-controller
    num_units: 1
    constraints: mem=1G
    options:
      network-manager: Neutron
  nova-compute:
    charm: cs:~openstack-charmers/xenial/nova-compute
    num_units: 1
    constraints: mem=4G
  neutron-gateway:
    charm: cs:~openstack-charmers/xenial/neutron-gateway
    num_units: 1
    constraints: mem=1G
    options:
      bridge-mappings: physnet1:br-ex
      instance-mtu: 1300
  neutron-api:
    charm: cs:~openstack-charmers/xenial/neutron-api
    num_units: 1
    constraints: mem=1G
    options:
      neutron-security-groups: True
      flat-network-providers: physnet1
  neutron-openvswitch:
    charm: cs:~openstack-charmers/xenial/neutron-openvswitch
  cinder:
    charm: cs:~openstack-charmers/xenial/cinder
    num_units: 1
    constraints: mem=1G
    options:
      block-device: vdb
      glance-api-version: 2
      overwrite: 'true'
      ephemeral-unmount: /mnt
  manila:
    charm: cs:~openstack-charmers/xenial/manila
    num_units: 1
    options:
      debug: True
  manila-generic:
    charm: cs:~openstack-charmers/xenial/manila-generic
    options:
      debug: True
relations:
  - [ keystone, mysql ]
  - [ manila, mysql ]
  - [ manila, rabbitmq-server ]
  - [ manila, keystone ]
  - [ manila, manila-generic ]
  - [ glance, keystone ]
  - [ glance, mysql ]
  - [ glance, "cinder:image-service" ]
  - [ nova-compute, "rabbitmq-server:amqp" ]
  - [ nova-compute, glance ]
  - [ nova-cloud-controller, rabbitmq-server ]
  - [ nova-cloud-controller, mysql ]
  - [ nova-cloud-controller, keystone ]
  - [ nova-cloud-controller, glance ]
  - [ nova-cloud-controller, nova-compute ]
  - [ cinder, keystone ]
  - [ cinder, mysql ]
  - [ cinder, rabbitmq-server ]
  - [ cinder, nova-cloud-controller ]
  - [ "neutron-gateway:amqp", "rabbitmq-server:amqp" ]
  - [ neutron-gateway, nova-cloud-controller ]
  - [ neutron-api, mysql ]
  - [ neutron-api, rabbitmq-server ]
  - [ neutron-api, nova-cloud-controller ]
  - [ neutron-api, neutron-openvswitch ]
  - [ neutron-api, keystone ]
  - [ neutron-api, neutron-gateway ]
  - [ neutron-openvswitch, nova-compute ]
  - [ neutron-openvswitch, rabbitmq-server ]
  - [ neutron-openvswitch, manila ]
and then (with juju 2.x):
juju deploy manila.yaml
Note that this OpenStack system will need to be configured (in terms of
networking, images, etc.) before testing can commence.
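A minimal sketch of that initial configuration, assuming the era-appropriate neutron and glance CLI clients and an admin credentials file are available; every name, path and CIDR here is an example, not something defined by this charm:

```shell
# Illustrative post-deployment setup; adjust names/CIDRs to your environment.
source ~/novarc    # admin credentials for this cloud (example path)
# External flat network matching the bundle's physnet1 provider:
neutron net-create ext_net --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create ext_net 10.5.0.0/24 --name ext_subnet --disable-dhcp
# A guest image for test instances:
glance image-create --name xenial --disk-format qcow2 \
    --container-format bare --file xenial-server-cloudimg-amd64-disk1.img
```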
Please report bugs on Launchpad.
For general questions please refer to the OpenStack Charm Guide.
- (string) SSL key to use with certificate specified as ssl_cert.
- (string) Default network interface to use for HA vip when it cannot be automatically determined.
- (boolean) Enable verbose logging
- (int) Server timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
- (string) Virtual IP(s) to use to front API services in HA configuration. If multiple networks are being used, a VIP should be provided for each network, separated by spaces.
- (float) The CPU core multiplier to use when configuring worker processes. By default, the number of workers for each daemon is set to twice the number of CPU cores a service unit has. When deployed in a LXD container, this default value will be capped to 4 workers unless this configuration option is set.
- (boolean) Setting this to True will allow supporting services to log to syslog.
- (int) Queue timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
- (string) SSL certificate to install and use for API ports. Setting this value and ssl_key will enable reverse proxying, point the service's entry in the Keystone catalog to use https, and override any certificate and key issued by Keystone (if it is configured to do so).
- (string) The IP address and netmask of the OpenStack Public network (e.g. 192.168.0.0/24). This network will be used for public endpoints.
- (string) Username used to access rabbitmq queue
- (string) The IP address and netmask of the OpenStack Admin network (e.g. 192.168.0.0/24). This network will be used for admin endpoints.
- (int) Client timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 90000ms is used.
- (string) The hostname or address of the public endpoints created in the keystone identity provider. This value will be used for public endpoints. For example, an os-public-hostname set to 'api-public.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-public.example.com:9696/
- (string) The hostname or address of the admin endpoints created in the keystone identity provider. This value will be used for admin endpoints. For example, an os-admin-hostname set to 'api-admin.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-admin.example.com:9696/
- (string) The 'default_share_type' must match the configured default_share_type set up in manila using 'manila type-create'.
- (boolean) OpenStack mostly defaults to using public endpoints for internal communication between services. If set to True, this option will configure services to use internal endpoints where possible.
- (string) The default backend for this manila set. Must be one of the 'share-backends' or the charm will block.
- (boolean) Use DNS HA with MAAS 2.0. Note if this is set do not set vip settings below.
- (string) Repository from which to install. May be one of the following: distro (default), ppa:somecustom/ppa, a deb url sources entry, or a supported Cloud Archive release pocket. Supported Cloud Archive sources include: cloud:precise-folsom, cloud:precise-folsom/updates, cloud:precise-folsom/staging, cloud:precise-folsom/proposed. Note that updating this setting to a source that is known to provide a later version of OpenStack will trigger a software upgrade.
- (string) The IP address and netmask of the OpenStack Internal network (e.g. 192.168.0.0/24). This network will be used for internal endpoints.
- (string) Database name for Manila
- (string) OpenStack Region
- (string) The share protocols that the backends will be able to provide, as a space-delimited list. The default, 'NFS CIFS', is good for the generic backend; other backends may not support both NFS and CIFS.
- (string) SSL CA to use with the certificate and key provided - this is only required if you are providing a privately signed ssl_cert and ssl_key.
- (string) Rabbitmq vhost
- (boolean) Enable debug logging
- (string) The hostname or address of the internal endpoints created in the keystone identity provider. This value will be used for internal endpoints. For example, an os-internal-hostname set to 'api-internal.example.com' with ssl enabled will create the following endpoint for neutron-api: https://api-internal.example.com:9696/
- (int) Connect timeout configuration in ms for haproxy, used in HA configurations. If not provided, default value of 9000ms is used.
- (string) Username for Manila database access
- (int) Default CIDR netmask to use for HA vip when it cannot be automatically determined.
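Options such as those listed above can be inspected and set at runtime. As a sketch (the option names here are inferred from the descriptions above and the bundle; verify them against the charm's config.yaml, and note that 'juju config' is Juju 2.1+, with 'juju set-config'/'juju get-config' on earlier 2.x):

```shell
# Set illustrative options on the deployed manila application.
juju config manila debug=true
juju config manila share-protocols="NFS CIFS"
juju config manila    # show the application's current configuration
```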