nova-compute #321

OpenStack Compute, codenamed Nova, is a cloud computing fabric controller. In
addition to its "native" API (the OpenStack API), it also supports the Amazon
EC2 API.

This charm provides the Nova Compute hypervisor service and should be deployed
directly to physical servers.


This charm provides Nova Compute, the OpenStack compute service. Its target
platform is Ubuntu (preferably LTS) + OpenStack.
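
As a minimal deployment sketch (the unit count is illustrative; in a typical
MAAS-backed cloud each unit lands on a physical server):

  juju deploy -n 3 nova-compute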


The following interfaces are provided:

  • cloud-compute - Used to relate with one or more of
    nova-cloud-controller, glance, ceph, cinder, mysql, ceilometer-agent,
    rabbitmq-server and neutron.

  • nrpe-external-master - Used to generate Nagios checks.
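
For example (a sketch; the application names assume the related charms were
deployed under their default names):

  juju add-relation nova-compute nova-cloud-controller
  juju add-relation nova-compute rabbitmq-server
  juju add-relation nova-compute glance
  juju add-relation nova-compute ceph    # only when using Ceph-backed storage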


Nova compute only requires database access if using nova-network. If using
Neutron, no direct database access is required and the shared-db relation need
not be added.
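
A minimal sketch for a nova-network based deployment, assuming a MySQL
application deployed as 'mysql' (with Neutron this relation is simply
omitted):

  juju add-relation nova-compute:shared-db mysql:shared-db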


This charm supports nova-network (legacy) and Neutron networking.


This charm supports a number of different storage backends depending on
your hypervisor type and storage relations.

NFV support

This charm (in conjunction with the nova-cloud-controller and neutron-api charms)
supports use of nova-compute nodes configured for use in Telco NFV deployments;
specifically the following configuration options (yaml excerpt):

  hugepages: 60%
  vcpu-pin-set: "^0,^2"
  reserved-host-memory: 1024
  pci-passthrough-whitelist: {"vendor_id":"1137","product_id":"0071","address":"*:0a:00.*","physical_network":"physnet1"}

In this example, compute nodes will be configured to use 60% of available RAM
for hugepages (decreasing memory fragmentation in virtual machines and
improving performance), and Nova will be configured to reserve CPU cores 0 and
2 and 1024M of RAM for host usage, and to treat the supplied PCI device
whitelist as the set of PCI devices consumable by virtual machines, including
any mapping to underlying provider network names (used for SR-IOV VF/PF port
scheduling with Nova and Neutron's SR-IOV support).
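
These options can be set at deploy time or later; for example (a sketch using
Juju 2.x syntax and the values from the excerpt above):

  juju config nova-compute hugepages="60%" vcpu-pin-set="^0,^2" \
      reserved-host-memory=1024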

The vcpu-pin-set configuration option is a comma-separated list of physical
CPU numbers that virtual CPUs can be allocated to by default. Each element
should be either a single CPU number, a range of CPU numbers, or a caret
followed by a CPU number to be excluded from a previous range. For example:

vcpu-pin-set: "4-12,^8,15"

The pci-passthrough-whitelist configuration must be specified as follows:

A JSON dictionary which describes a whitelisted PCI device. It should take
the following format:

["vendor_id": "<id>",] ["product_id": "<id>",]
["address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]" |
"devname": "PCI Device Name",]
{"tag": "<tag_value>",}

where '[' indicates zero or one occurrences, '{' indicates zero or multiple
occurrences, and '|' indicates mutually exclusive options. Note that any
missing fields are automatically wildcarded. Valid examples are:

pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet"}

pci-passthrough-whitelist: {"address":"*:0a:00.*"}

pci-passthrough-whitelist: {"address":":0a:00.", "physical_network":"physnet1"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071"}

pci-passthrough-whitelist: {"vendor_id":"1137", "product_id":"0071", "address": "0000:0a:00.1", "physical_network":"physnet1"}

The following is invalid, as it specifies mutually exclusive options:

pci-passthrough-whitelist: {"devname":"eth0", "physical_network":"physnet", "address":"*:0a:00.*"}

A JSON list of JSON dictionaries corresponding to the above format. For
example:

pci-passthrough-whitelist: [{"product_id":"0001", "vendor_id":"8086"}, {"product_id":"0002", "vendor_id":"8086"}]

The OpenStack advanced networking documentation
provides further details on whitelist configuration and how to create instances
with Neutron ports wired to SR-IOV devices.
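
As a sketch of the instance side (network, flavor, image and port identifiers
are illustrative placeholders), an SR-IOV capable Neutron port is created with
vnic_type 'direct' and then attached to a new instance:

  # 'physnet1-net' is assumed to be a provider network mapped to physnet1
  openstack port create --network physnet1-net --vnic-type direct sriov-port0
  openstack server create --flavor <flavor> --image <image> \
      --nic port-id=<port-uuid> nfv-instance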


Configuration options

Experimental: enable an AppArmor profile. Valid settings: 'complain',
'enforce' or 'disable'. AppArmor is disabled by default.

YAML-formatted associative array of sysctl values, e.g.:
'{ kernel.pid_max : 4194303 }'

Sets the vcpu_pin_set option in nova.conf, which defines which pCPUs
instance vCPUs can or cannot use. For example, '^0,^2' reserves two
CPUs for the host.

Bridge interface to be configured.

Set to 'host-model' to clone the host CPU feature flags; to
'host-passthrough' to use the host CPU model exactly; to 'custom' to
use a named CPU model; to 'none' to not set any CPU model. If
virt_type='kvm|qemu', it will default to 'host-model', otherwise it will
default to 'none'. Defaults to 'host-passthrough' for ppc64el, ppc64le
if no value is set.

Only used when migration-auth-type is set to ssh.
Full path to the authorized_keys file; can be useful for systems with a
non-default AuthorizedKeysFile location. It will be formatted using the
following variables:
  homedir - user's home directory
  username - username

Network interface on which to build the bridge.

Full path to the Nova configuration file.

Virtualization flavor. Supported flavors are: kvm, xen, uml, lxc, qemu.
NOTE: Changing the virtualization flavor after deployment is not supported.

Default compute node availability zone.
This option determines the availability zone to be used when it is not
specified in the VM creation request. If this option is not set, the
default availability zone 'nova' is used.
NOTE: Availability zones must be created manually using the
'openstack aggregate create' command.

Sets the pci_passthrough_whitelist option in nova.conf, which allows PCI
passthrough of specific devices to VMs.
Example applications: GPU processing, SR-IOV networking, etc.
NOTE: For PCI passthrough to work, IOMMU must be enabled on the machine
deployed to. This can be accomplished by setting kernel parameters on
capable machines in MAAS, tagging them and using these tags as
constraints in the model.
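
As a minimal sketch of that workflow (the MAAS profile, tag name and kernel
options are assumptions for an Intel host), IOMMU can be enabled via a MAAS
tag and the tag then used as a Juju constraint:

  # create a tag carrying the IOMMU kernel options and apply it to a machine
  maas $PROFILE tags create name=iommu \
      comment='Enable IOMMU' kernel_opts='intel_iommu=on iommu=pt'
  maas $PROFILE tag update-nodes iommu add=<system-id>
  # use the tag as a placement constraint for nova-compute units
  juju deploy nova-compute --constraints tags=iommu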

Setting this to True will allow supporting services to log to syslog.

Netmask to be assigned to the bridge interface.

Enable verbose logging.

Tell Nova which libvirt image backend to use. Supported backends are rbd,
lvm and qcow2. If no backend is specified, the Nova default (qcow2) is
used. Note that the rbd image backend is only supported with >= Juno.

Defines a relative weighting of the pool as a percentage of the total
amount of data in the Ceph cluster. This effectively weights the number
of placement groups for the pool created to be appropriately portioned
to the amount of data expected. For example, if the ephemeral volumes
for the OpenStack compute instances are expected to take up 20% of the
overall configuration then this value would be specified as 20. Note -
it is important to choose an appropriate value for the pool weight as
this directly affects the number of placement groups which will be
created for the pool. The number of placement groups for a pool can
only be increased, never decreased - so it is important to identify the
percent of data that will likely reside in the pool.

Username used to access the rabbitmq queue.

Set to 1 to enable KSM, 0 to disable KSM, and AUTO to use the default.
Please note that the AUTO value works for qemu 2.2+ (> Kilo); older
releases will be set to 1 by default.

IP to be assigned to the bridge interface.

Specific cachemodes to use for different disk types.

If True, enables IPv6 support. The charm will expect network interfaces
to be configured with an IPv6 address. If set to False (default), IPv4
is expected.
NOTE: these charms do not currently support the IPv6 privacy extension. In
order for this charm to function correctly, the privacy extension must be
disabled and a non-temporary address must be configured/available on
your network interface.

A comma-separated list of nagios servicegroups. If left empty, the
nagios_context will be used as the servicegroup.

Enable instance resizing.
NOTE: This also enables passwordless SSH access for user 'nova' between
compute hosts.

Configure libvirt or lxd for live migration.
Live migration support for lxd is still considered experimental.
NOTE: This also enables passwordless SSH access for user 'root' between
compute hosts.

This option determines whether to start guests that were running
before the host rebooted.

If True, enables OpenStack upgrades for this charm via juju actions.
You will still need to set openstack-origin to the new repository, but
instead of an upgrade running automatically across all units, it will
wait for you to execute the openstack-upgrade action for this charm on
each unit. If False, it will revert to the existing behaviour of upgrading
all units on config change.
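
A minimal sketch of the resulting per-unit workflow, assuming this option has
been set to True (the unit name, the target pocket and the Juju 2.x CLI syntax
are illustrative):

  juju config nova-compute openstack-origin=cloud:trusty-mitaka
  juju run-action nova-compute/0 openstack-upgrade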

Repository from which to install. May be one of the following:
distro (default), ppa:somecustom/ppa, a deb url sources entry or a
supported Ubuntu Cloud Archive (UCA) release pocket.
Supported UCA sources include:
For series=Precise we support UCA for openstack-release=
   * icehouse
For series=Trusty we support UCA for openstack-release=
   * juno
   * kilo
   * ...
NOTE: updating this setting to a source that is known to provide
a later version of OpenStack will trigger a software upgrade.
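
For example (a sketch using Juju 2.x syntax; the UCA pocket shown assumes a
Trusty deployment moving to Kilo):

  juju config nova-compute openstack-origin=cloud:trusty-kilo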

OpenStack mostly defaults to using public endpoints for
internal communication between services. If set to True, this option will
configure services to use internal endpoints where possible.

Whether to run nova-api and nova-network on the compute nodes.

Comma-separated list of key=value config flags. These values will be
placed in the nova.conf [DEFAULT] section.
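
As a sketch (the option name 'config-flags' and the flag value shown are
assumptions, not taken from this page):

  # 'config-flags' is assumed to be the option exposing these flags;
  # cpu_allocation_ratio is an illustrative nova.conf [DEFAULT] setting
  juju config nova-compute config-flags="cpu_allocation_ratio=4.0"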

TCP authentication scheme for libvirt live migration. Available options
include ssh.

This value dictates the number of replicas ceph must make of any
object it stores within the nova rbd pool. Of course, this only
applies if using Ceph as a backend store. Note that once the nova
rbd pool has been created, changing this value will not have any
effect (although it can be changed in ceph by manually configuring
your ceph cluster).

The IP address and netmask of the OpenStack internal network. This
network will be used to bind the vncproxy client.

Path used for storing Nova instances data; if left empty, the default
location is used.

Nova database name.

Specifies a default OpenStack release name, or a YAML dictionary
listing the git repositories to install from.
The default OpenStack release name may be one of the following, where
the corresponding OpenStack github branch will be used:
  * liberty
  * mitaka
  * newton
  * master
The YAML must minimally include requirements, neutron, and nova
repositories, and may also include repositories for other dependencies:
  repositories:
    - {name: requirements,
       repository: 'git://',
       branch: master}
    - {name: neutron,
       repository: 'git://',
       branch: master}
    - {name: nova,
       repository: 'git://',
       branch: master}
  release: master

Enable/disable the rbd client cache. Leaving this value unset will result in
the default Ceph rbd client settings being used (rbd cache is enabled by
default for Ceph >= Giant). Supported values here are "enabled" or
"disabled".

Used by the nrpe-external-master subordinate charm. A string that will be
prepended to the instance name to set the host name in nagios.
If you're running multiple environments with the same services in them,
this allows you to differentiate between them.

The percentage of system memory to use for hugepages, e.g. '10%', or the
total number of 2M hugepages, e.g. "1024".
For a systemd system (wily and later) the preferred approach is to enable
hugepages via kernel parameters set in MAAS; systemd will mount them.
NOTE: For hugepages to work it must be enabled on the machine deployed
to. This can be accomplished by setting kernel parameters on capable
machines in MAAS, tagging them and using these tags as constraints in the
model.
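
As a minimal sketch, kernel parameters along the following lines (page size
and counts are illustrative) could be applied to capable machines, for example
via a MAAS tag's kernel_opts:

  hugepagesz=2M hugepages=8192 transparent_hugepage=never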

RBD pool to use with the Nova libvirt RBDImageBackend. Only required when you
have libvirt-image-backend set to 'rbd'.

Apply system hardening. Supports a space-delimited list of modules
to run. Supported modules currently include os, ssh, apache and mysql.

Optionally restrict Ceph key permissions to access pools as required.

Rabbitmq vhost.

Amount of memory in MB to reserve for the host. Defaults to 512MB.

Set to a named libvirt CPU model (see names listed in
/usr/share/libvirt/cpu_map.xml). Only has effect if cpu_mode='custom' and
virt_type='kvm|qemu'.

Enable debug logging.

Username for database access.