Neutron Gateway

Neutron is a virtual network service for OpenStack, and a part of
Netstack. Just as OpenStack Nova provides an API to dynamically
request and configure virtual servers, Neutron provides an API to
dynamically request and configure virtual networks. These networks
connect "interfaces" from other OpenStack services (e.g., virtual NICs
from Nova VMs). The Neutron API supports extensions to provide
advanced network capabilities (e.g., QoS, ACLs, and network monitoring).

This charm provides central Neutron networking services as part
of a Neutron-based OpenStack deployment.


Neutron provides flexible software defined networking (SDN) for OpenStack.

This charm is designed to be used in conjunction with the rest of the OpenStack
related charms in the charm store to virtualize the network that Nova Compute
instances plug into.

It's designed as a replacement for nova-network; however, it does not yet
support all of the features of nova-network (such as multihost), so it may
not be suitable for all deployments.

Neutron supports a rich plugin/extension framework for proprietary networking
solutions and supports (in core) Nicira NVP, NEC, Cisco and others.

See the upstream Neutron documentation
for more details.


In order to use Neutron with OpenStack, you will need to deploy the
nova-compute and nova-cloud-controller charms with the network-manager
configuration set to 'Neutron':

    network-manager: Neutron

This decision must be made prior to deploying OpenStack with Juju, as
Neutron support is baked into these charms from install onwards:

    juju deploy nova-compute
    juju deploy --config config.yaml nova-cloud-controller
    juju add-relation nova-compute nova-cloud-controller
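The config.yaml referenced above only needs to carry the network-manager
setting shown earlier; a minimal sketch (any other options in your
deployment's config are unrelated to this choice):

    nova-cloud-controller:
      network-manager: Neutron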

The Neutron Gateway can then be added to the deployment:

    juju deploy neutron-gateway
    juju add-relation neutron-gateway mysql
    juju add-relation neutron-gateway rabbitmq-server
    juju add-relation neutron-gateway nova-cloud-controller

The gateway provides two key services: L3 network routing and DHCP services.

These are both required in a fully functional Neutron OpenStack deployment.

See the upstream Neutron multi-extnet documentation for more details.

Configuration Options

Port Configuration

All network types (internal, external) are configured with the bridge-mappings
and data-port options, together with the flat-network-providers configuration
option of the neutron-api charm. Once deployed, you can configure the network
specifics using neutron net-create.

If the device name is not consistent between hosts, you can specify the same
bridge multiple times with MAC addresses instead of interface names. The charm
will loop through the list and configure the first matching interface.
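For example, a sketch of a data-port value listing the same bridge twice with
MAC addresses (the MAC addresses here are illustrative placeholders):

    bridge-mappings:  physnet1:br-ex
    data-port:        br-ex:52:54:00:aa:bb:cc br-ex:52:54:00:dd:ee:ff

On each unit, the charm attaches whichever of the listed MACs resolves to a
local interface.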

Basic configuration of a single external network, typically used for floating
IP addresses, combined with a GRE private network:

    bridge-mappings:         physnet1:br-ex
    data-port:               br-ex:eth1
    flat-network-providers:  physnet1

    neutron net-create --provider:network_type flat \
        --provider:physical_network physnet1 --router:external=true \
        external
    neutron router-gateway-set provider external

Alternative configuration with two networks, where the internal private
network is directly connected to the gateway with public IP addresses but a
floating IP address range is also offered.

    bridge-mappings:         physnet1:br-data external:br-ex
    data-port:               br-data:eth1 br-ex:eth2
    flat-network-providers:  physnet1 external

Alternative configuration with two external networks, one for public instance
addresses and one for floating IP addresses. Both networks are on the same
physical network connection (but they might be on different VLANs, that is
configured later using neutron net-create).

    bridge-mappings:         physnet1:br-data
    data-port:               br-data:eth1
    flat-network-providers:  physnet1

    neutron net-create --provider:network_type vlan \
        --provider:segmentation_id 400 \
        --provider:physical_network physnet1 --shared external
    neutron net-create --provider:network_type vlan \
        --provider:segmentation_id 401 \
        --provider:physical_network physnet1 --shared --router:external=true \
        floating
    neutron router-gateway-set provider floating

This replaces the previous system of using ext-port, which always created a
bridge called br-ex for external networks and was used implicitly by external
routers.

Instance MTU

When using the Open vSwitch plugin with GRE tunnels, the default MTU of 1500
can cause packet fragmentation due to GRE overhead. One solution is to
increase the MTU on physical hosts and network equipment. When this is not
possible or practical, the charm's instance-mtu option can be used to reduce
instance MTU via DHCP.

    juju set neutron-gateway instance-mtu=1400

The upstream OpenStack documentation recommends an MTU value of 1400.
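As a rough sketch of where the headroom goes: GRE encapsulation adds an outer
IPv4 header and a GRE header to every packet (header sizes below assume IPv4
and a base GRE header with no optional fields; 1400 simply leaves extra margin):

```python
# Rough sketch: payload left inside a 1500-byte physical MTU once the
# extra headers of a GRE-encapsulated packet are accounted for.
OUTER_IP_HEADER = 20   # outer IPv4 header added by the tunnel
GRE_HEADER = 4         # base GRE header (no optional key/sequence fields)

def max_inner_mtu(physical_mtu: int = 1500) -> int:
    """Largest inner-packet size that avoids fragmentation over GRE."""
    return physical_mtu - OUTER_IP_HEADER - GRE_HEADER

print(max_inner_mtu())  # 1476; 1400 is recommended to leave extra margin
```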

Note that this option was added in Havana and will be ignored in older releases.


                            Experimental: enable the AppArmor profile. Valid settings: 'complain',
'enforce' or 'disable'. AppArmor is disabled by default.

                            YAML-formatted associative array of sysctl key/value pairs to be set
persistently, e.g. '{ kernel.pid_max : 4194303 }'.

                            The IP address and netmask of the OpenStack Data network. This network
will be used for tenant network traffic in overlay networks.
                            Enable metadata on an isolated network (no router ports).

                            If True, enables Pacemaker to monitor the neutron-ha-monitor daemon on
every neutron-gateway unit. The daemon detects the status of neutron
agents, reschedules resources hosted on failed agents, detects local
errors, releases resources when the network is unreachable, and performs
any necessary recovery tasks. This feature targets releases earlier than
Juno, which do not natively support HA in Neutron itself.

                            Setting this to True will allow supporting services to log to syslog.

                            Optional configuration controlling how the L3 agent option
handle_internal_only_routers is configured:
all    => Set to true everywhere
none   => Set to false everywhere
leader => Set to true on one node (the leader) and false everywhere else
Use leader or none when configuring multiple floating pools.
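The three modes above reduce to a simple mapping from mode and leadership to a
per-unit boolean; a minimal sketch (the function name is illustrative, not the
charm's actual code):

```python
# Minimal sketch of the all/none/leader modes described above.
# Not the charm's actual implementation; names are illustrative.
def handle_internal_only_routers(mode: str, is_leader: bool) -> bool:
    """Per-unit value for the L3 agent's handle_internal_only_routers."""
    if mode == "all":
        return True          # true on every unit
    if mode == "none":
        return False         # false on every unit
    if mode == "leader":
        return is_leader     # true only on the leader unit
    raise ValueError(f"unknown mode: {mode!r}")

print(handle_internal_only_routers("leader", is_leader=True))   # True
print(handle_internal_only_routers("leader", is_leader=False))  # False
```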

                            Configure DHCP services to provide MTU configuration to instances
within the cloud. This is useful in deployments where it's not
possible to increase the MTU on switches and physical servers to
accommodate the packet overhead of using GRE tunnels.

                            The CPU core multiplier to use when configuring worker processes for
neutron and nova-metadata-api. By default, the number of workers for
each daemon is set to twice the number of CPU cores a service unit has.
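The default described above amounts to workers = multiplier x cores; a minimal
sketch, assuming a default multiplier of 2 (the helper name is illustrative,
and the charm's actual computation may differ in details such as caps):

```python
# Sketch of the worker-count rule described above: by default each
# daemon gets twice as many workers as the unit has CPU cores.
def worker_count(cpu_cores: int, multiplier: float = 2.0) -> int:
    """Number of worker processes for a daemon on a unit."""
    return max(1, int(cpu_cores * multiplier))

print(worker_count(4))                  # 8 with the default multiplier
print(worker_count(4, multiplier=0.5))  # 2 when scaled down
```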

                            Enable verbose logging.
                            RabbitMQ Nova Virtual Host
                            RabbitMQ Virtual Host
                            Default multicast port number that will be used to communicate between
HA Cluster nodes.

                            RabbitMQ Nova user
                            Default network interface on which the HA cluster will bind to
communicate with the other members of the HA cluster.

                            A comma-separated list of Nagios servicegroups.
If left empty, the nagios_context will be used as the servicegroup.

                            Comma-separated list of key=value config flags with the additional
dhcp options for neutron dnsmasq.

                            RabbitMQ user
                            If True, enables OpenStack upgrades for this charm via Juju actions.
You will still need to set openstack-origin to the new repository, but
instead of an upgrade running automatically across all units, it will
wait for you to execute the openstack-upgrade action for this charm on
each unit. If False, it will revert to the existing behavior of upgrading
all units on config change.

                            Space-delimited list of bridge:port mappings. Ports will be added to
their corresponding bridge. The bridges will allow usage of flat or
VLAN network types with Neutron and should match those defined in the
neutron-api charm. Ports provided can be the name or MAC address of the
interface to be added to the bridge. If MAC addresses are used, you may
provide multiple bridge:mac entries for the same bridge so as to be able
to configure multiple units. In this case the charm will run through the
provided MAC addresses for each bridge until it finds one it can resolve
to an interface name.
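The resolution loop described above can be sketched as follows (the interface
table and all names are illustrative; the charm's real code inspects the
host's actual network devices):

```python
# Sketch of resolving bridge:port entries to local interface names, as
# described above. local_ifaces is a stand-in for real host state.
def resolve_data_ports(data_port: str, local_ifaces: dict[str, str]) -> dict[str, str]:
    """Map each bridge to the first entry that resolves locally.

    data_port: space-delimited "bridge:port" pairs, where port is an
    interface name or a MAC address.
    local_ifaces: MAC address -> interface name for this host.
    """
    resolved: dict[str, str] = {}
    for entry in data_port.split():
        bridge, port = entry.split(":", 1)
        if bridge in resolved:
            continue  # first successful match wins
        if port in local_ifaces.values():   # already an interface name
            resolved[bridge] = port
        elif port.lower() in local_ifaces:  # a MAC we can resolve here
            resolved[bridge] = local_ifaces[port.lower()]
    return resolved

ifaces = {"52:54:00:aa:bb:cc": "eth1"}  # illustrative host state
print(resolve_data_ports("br-ex:52:54:00:dd:ee:ff br-ex:52:54:00:aa:bb:cc", ifaces))
# {'br-ex': 'eth1'}
```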

                            Optional configuration to support use of the Linux router.
Note that this is used only for the Cisco n1kv plugin.

                            Space-delimited list of <physical_network>:<vlan_min>:<vlan_max> or
<physical_network>, specifying physical_network names usable for VLAN
provider and tenant networks, as well as the ranges of VLAN tags on each
that are available for allocation to tenant networks.

                            Deprecated: use bridge-mappings and data-port to create a network
which can be used for external connectivity. You can call the network
external and the bridge br-ex by convention, but neither is required.

Space-delimited list of external ports to use for routing of instance
traffic to the external public network. Valid values are either MAC
addresses (in which case only MAC addresses for interfaces without an IP
address already assigned will be used) or interface names (e.g. eth0).

                            Repository from which to install.  May be one of the following:
distro (default), ppa:somecustom/ppa, a deb url sources entry,
or a supported Cloud Archive release pocket.

Supported Cloud Archive sources include:


For series=Precise we support cloud archives for openstack-release:
   * icehouse

For series=Trusty we support cloud archives for openstack-release:
   * juno
   * kilo
   * liberty
   * mitaka
   * newton

NOTE: updating this setting to a source that is known to provide
a later version of OpenStack will trigger a software upgrade.

                            Network configuration plugin to use for Neutron.
Supported values include:

  ovs - ML2 + Open vSwitch
  nsx - VMware NSX
  n1kv - Cisco N1kv
  ovs-odl - ML2 + Open vSwitch with OpenDaylight Controller

                            Specifies a default OpenStack release name, or a YAML dictionary
listing the git repositories to install from.

The default OpenStack release name may be one of the following, where
the corresponding OpenStack GitHub branch will be used:
  * liberty
  * mitaka
  * newton
  * master

The YAML must minimally include requirements, neutron-fwaas,
neutron-lbaas, neutron-vpnaas, and neutron repositories, and may
also include repositories for other dependencies:
repositories:
  - {name: requirements,
     repository: 'git://',
     branch: master}
  - {name: neutron-fwaas,
     repository: 'git://',
     branch: master}
  - {name: neutron-lbaas,
     repository: 'git://',
     branch: master}
  - {name: neutron-vpnaas,
     repository: 'git://',
     branch: master}
  - {name: neutron,
     repository: 'git://',
     branch: master}
release: master

                            Optional configuration to set the external-network-id. Only needed when
configuring multiple external networks and should be used in conjunction
with run-internal-router.

                            Used by the nrpe-external-master subordinate charm.
A string that will be prepended to the instance name to set the host
name in Nagios. If you're running multiple environments with the same
services in them, this allows you to differentiate between them.

                            Space-delimited list of Neutron flat network providers.

                            Apply system hardening. Supports a space-delimited list of modules
to run. Supported modules currently include os, ssh, apache and mysql.

                            The metadata network is used by solutions which do not leverage the l3
agent for providing access to the metadata service.

                            Enable debug logging.
                            Space-separated list of ML2 data bridge mappings with format
physical_network:bridge (e.g. physnet1:br-data).