percona-cluster

Percona XtraDB Cluster provides an active/active, MySQL-compatible
alternative implemented using the Galera synchronous replication extensions.

Overview

Percona XtraDB Cluster is a high availability and high scalability solution
for MySQL clustering. Percona XtraDB Cluster integrates Percona Server with
the Galera library of MySQL high availability solutions in a single product
package, enabling you to create a cost-effective MySQL cluster.

This charm deploys Percona XtraDB Cluster onto Ubuntu.

Usage

WARNING: It is critical that you follow the bootstrap process detailed in
this document in order to end up with a running Active/Active Percona Cluster.

Proxy Configuration

If you are deploying this charm on MAAS or in an environment without direct
access to the internet, you will need to allow access to repo.percona.com, as
the charm installs packages directly from the Percona repositories. If you
are using squid-deb-proxy, follow the steps below:

echo "repo.percona.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/40-percona
sudo service squid-deb-proxy restart

Deployment

The first service unit deployed acts as the seed node for the rest of the
cluster; in order for the cluster to function correctly, the same MySQL passwords
must be used across all nodes:

cat > percona.yaml << EOF
percona-cluster:
    root-password: my-root-password
    sst-password: my-sst-password
EOF

Once you have created this file, you can deploy the first seed unit:

juju deploy --config percona.yaml percona-cluster

Once this node is fully operational, you can add extra units one at a time to
the deployment:

juju add-unit percona-cluster

A minimum cluster size of three units is recommended.

Memory Configuration

Percona Cluster is extremely memory-sensitive. Setting memory values too low
will give poor performance. Setting them too high will create problems that are
very difficult to diagnose. Please take time to evaluate these settings for
each deployment environment rather than copying and pasting bundle
configurations.

The Percona Cluster charm needs to be deployable in small, low-memory
development environments as well as high-performance production environments.
The charm's opinionated configuration defaults favor the developer environment
in order to ease initial testing. Production environments need to consider
carefully the memory requirements of the hardware or cloud in use. Consult a
MySQL memory calculator [2] to understand the implications of the values.

Between the 5.5 and 5.6 releases a significant default changed: the
performance schema [1] defaults to on for 5.6 and later. When on, it allocates
up front all the memory that would be required to handle max-connections plus
several other memory settings. With 5.5, memory was allocated at run time as
needed.

The charm now makes the performance schema configurable and defaults it to off
(False). With the performance schema turned off, memory is allocated as needed
during run time. It is important to understand that this can lead to run-time
memory exhaustion if the configuration values are set too high. Consult a
MySQL memory calculator [2] to understand the implications of the values.

In particular, consider the max-connections setting: this value is a balance
between connection exhaustion and memory exhaustion. Occasionally connection
exhaustion occurs in large production HA clouds with max-connections less than
2000. The common practice became to set max-connections unrealistically high,
near 10k or 20k. In the move to 5.6 on Xenial this became a problem, as
Percona would fail to start up or behave erratically as memory exhaustion
occurred on the host due to the performance schema being turned on. Even with
the default now turned off, this value should be carefully considered against
the production requirements and resources available.
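
For example, a production deployment might size these options explicitly via
juju config (the values shown are purely illustrative, not recommendations):

juju config percona-cluster performance-schema=True max-connections=2000 innodb-buffer-pool-size=4G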

[1] http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-6.html#mysqld-5-6-6-performance-schema
[2] http://www.mysqlcalculator.com/

HA/Clustering

When more than one unit of the charm is deployed with the hacluster charm
the percona charm will bring up an Active/Active cluster. The process of
clustering the units together takes some time. Due to the nature of
asynchronous hook execution it is possible client relationship hooks may
be executed before the cluster is complete. In some cases, this can lead
to client charm errors.

To guarantee client relation hooks will not be executed until clustering is
completed, use the min-cluster-size configuration setting:

juju deploy -n 3 percona-cluster
juju config percona-cluster min-cluster-size=3

When min-cluster-size is not set, the charm will still cluster; however,
there are no guarantees that client relation hooks will not execute before
clustering is complete.

Single unit deployments behave as expected.
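
Once clustering is complete, client charms can be related in the usual way;
for example, over the shared-db interface (the keystone application here is
purely illustrative):

juju add-relation keystone:shared-db percona-cluster:shared-db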

There are two mutually exclusive high availability options: using virtual
IP(s) or DNS. In both cases, a relationship to hacluster is required which
provides the corosync back end HA functionality.

To use virtual IP(s) the clustered nodes must be on the same subnet such that
the VIP is a valid IP on the subnet for one of the node's interfaces and each
node has an interface in said subnet. The VIP becomes a highly-available API
endpoint.

At a minimum, the config option 'vip' must be set in order to use virtual IP
HA. If multiple networks are being used, a VIP should be provided for each
network, separated by spaces. Optionally, vip_iface or vip_cidr may be
specified.
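
For example, a minimal virtual IP setup might look like the following (the
VIP shown is illustrative):

juju config percona-cluster vip=10.0.0.100
juju deploy hacluster percona-hacluster
juju add-relation percona-cluster percona-hacluster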

To use DNS high availability there are several prerequisites. However, DNS HA
does not require the clustered nodes to be on the same subnet.
Currently the DNS HA feature is only available for MAAS 2.0 or greater
environments. MAAS 2.0 requires Juju 2.0 or greater. The clustered nodes must
have static or "reserved" IP addresses registered in MAAS. The DNS hostname(s)
must be pre-registered in MAAS before use with DNS HA.

At a minimum, the config option 'dns-ha' must be set to true and
'os-access-hostname' must be set in order to use DNS HA.
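
For example (the hostname is illustrative and must be pre-registered in MAAS):

juju config percona-cluster dns-ha=true os-access-hostname=percona.example.maas
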
The charm will throw an exception in the following circumstances:
- If neither 'vip' nor 'dns-ha' is set and the charm is related to hacluster
- If both 'vip' and 'dns-ha' are set, as they are mutually exclusive
- If 'dns-ha' is set and 'os-access-hostname' is not set

Network Space support

This charm supports the use of Juju Network Spaces, allowing the charm to be
bound to network space configurations managed directly by Juju. This is only
supported with Juju 2.0 and above.

You can ensure that database connections are bound to a specific network
space by binding the appropriate interfaces:

juju deploy percona-cluster --bind "shared-db=internal-space"

Alternatively, these can also be provided as part of a Juju native bundle
configuration:

percona-cluster:
  charm: cs:xenial/percona-cluster
  num_units: 1
  bindings:
    shared-db: internal-space

NOTE: Spaces must be configured in the underlying provider prior to
attempting to use them.

NOTE: Existing deployments using the access-network configuration option
will continue to function; if set, this option is preferred over any network
space binding provided.
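
For example, a legacy deployment might have been configured with:

juju config percona-cluster access-network=192.168.0.0/24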

Limitations

Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads
and writes are channelled through a single service unit and synchronously
replicated to other nodes in the cluster; reads/writes are as slow as the
slowest node you have in your deployment.

Configuration

vip_iface (string)
    Network interface on which to place the Virtual IP.
    Default: eth0

binlogs-path (string)
    Location on the filesystem where binlogs are going to be placed.
    The default mimics what the mysql-common package would do for mysql.
    Make sure you do not put binlogs inside the mysql datadir
    (/var/lib/mysql/)!
    Default: /var/log/mysql/mysql-bin.log

binlogs-max-size (string)
    Sets the max_binlog_size mysql configuration option, which will limit
    the size of the binary log files. The server will automatically rotate
    binlogs after they grow bigger than this value. Keep in mind that
    transactions are never split between binary logs, so binary logs might
    get larger than the configured value.
    Default: 100M

access-network (string)
    The IP address and netmask of the 'access' network
    (e.g. 192.168.0.0/24). This network will be used for access to
    database services.

vip (string)
    Virtual IP to use to front Percona XtraDB Cluster in an active/active
    HA configuration.

table-open-cache (int)
    Sets table_open_cache (formerly known as table_cache) for mysql.
    Default: 2048

cluster-network (string)
    The IP address and netmask of the cluster (replication) network
    (e.g. 192.168.0.0/24). This network will be used for wsrep_cluster
    replication.

source (string)
    Package install location for Percona XtraDB Cluster (defaults to
    distro for >= 14.04).

ha-mcastport (int)
    Default multicast port number that will be used to communicate between
    HA Cluster nodes.
    Default: 5490

binlogs-expire-days (int)
    Sets the expire_logs_days mysql configuration option, which will make
    the mysql server automatically remove logs older than the configured
    number of days.
    Default: 10

enable-binlogs (boolean)
    Turns on MySQL binary logs. The placement of the logs is controlled
    with the binlogs-path config option.

prefer-ipv6 (boolean)
    If True, enables IPv6 support. The charm will expect network
    interfaces to be configured with an IPv6 address. If set to False
    (default), IPv4 is expected.
    NOTE: these charms do not currently support the IPv6 privacy
    extension. In order for this charm to function correctly, the privacy
    extension must be disabled and a non-temporary address must be
    configured/available on your network interface.

nagios_servicegroups (string)
    A comma-separated list of nagios servicegroups. If left empty, the
    nagios_context will be used as the servicegroup.

performance-schema (boolean)
    The performance schema attempts to automatically size the values of
    several of its parameters at server startup if they are not set
    explicitly. When set to on (True), memory is allocated at startup
    time. The implication of this is that any memory-related charm config
    options, such as max-connections and innodb-buffer-pool-size, must be
    explicitly set for the environment percona is running in, or percona
    may fail to start. Defaults to off (False), giving 5.5-like behavior.
    The implication of this is that one can set configuration values that
    could lead to memory exhaustion during run time, as memory is not
    allocated at startup time.

innodb-file-per-table (boolean)
    Turns on the innodb_file_per_table option, which will make MySQL put
    each InnoDB table into a separate .ibd file. Existing InnoDB tables
    will remain in the ibdata1 file; a full dump/import is needed to get
    rid of a large ibdata1 file.
    Default: True

sst-password (string)
    Re-sync account password for new cluster nodes; must be configured
    pre-deployment for Active-Active clusters.

lp1366997-workaround (boolean)
    Adds two config options (wsrep_drupal_282555_workaround and
    wsrep_retry_autocommit) as a workaround for the Percona primary key
    bug (see LP 1366997).

key (string)
    Key ID to import to the apt keyring to support use with arbitrary
    source configuration from outside of Launchpad archives or PPAs.

innodb-buffer-pool-size (string)
    By default this value will be set to 50% of total system memory, but
    it can also be set to any specific value for the system. Supported
    suffixes include K/M/G/T. If suffixed with %, one will get that
    percentage of total system memory allocated.

max-connections (int)
    Maximum connections to allow. A value of -1 means use the server's
    compiled-in default. This is not typically that useful, so the charm
    will configure PXC with a default max-connections value of 600.
    Note: connections take up memory resources, either at startup time
    with performance-schema=True or during run time with
    performance-schema=False. This value is a balance between connection
    exhaustion and memory exhaustion. Consult a MySQL memory calculator
    like http://www.mysqlcalculator.com/ to understand the memory
    resources consumed by connections. See also performance-schema.
    Default: 600

dns-ha (boolean)
    Use DNS HA with MAAS 2.0. Note: if this is set, do not set the vip
    settings below.

dataset-size (string)
    (DEPRECATED - use innodb-buffer-pool-size) How much data should be
    kept in memory in the DB. This will be used to tune settings in the
    database server appropriately. Supported suffixes include K/M/G/T. If
    suffixed with %, one will get that percentage of RAM allocated to the
    dataset.

root-password (string)
    Root password for MySQL access; must be configured pre-deployment for
    Active-Active clusters.

vip_cidr (int)
    Netmask that will be used for the Virtual IP.
    Default: 24

nagios_context (string)
    Used by the nrpe-external-master subordinate charm. A string that
    will be prepended to the instance name to set the host name in
    nagios, e.g. juju-myservice-0. If you're running multiple
    environments with the same services in them, this allows you to
    differentiate between them.
    Default: juju

wait-timeout (int)
    The number of seconds the server waits for activity on a
    noninteractive connection before closing it. -1 means use the
    server's compiled-in default.
    Default: -1

os-access-hostname (string)
    The hostname or address of the access endpoint for percona-cluster.

harden (string)
    Apply system hardening. Supports a space-delimited list of modules to
    run. Supported modules currently include os, ssh, apache and mysql.

sst-method (string)
    Percona method for taking the State Snapshot Transfer (SST); can be:
    'rsync', 'xtrabackup', 'xtrabackup-v2', 'mysqldump', 'skip' - see
    https://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-system-index.html#wsrep_sst_method
    Default: xtrabackup-v2

min-cluster-size (int)
    Minimum number of units expected to exist before the charm will
    attempt to bootstrap the percona cluster. If no value is provided,
    this setting is ignored.

ha-bindiface (string)
    Default network interface on which the HA cluster will bind for
    communication with the other members of the HA Cluster.
    Default: eth0