Percona XtraDB Cluster provides an active/active, MySQL-compatible
alternative implemented using the Galera synchronous replication
extensions.
Percona XtraDB Cluster is a high-availability and high-scalability solution for
MySQL clustering. It integrates Percona Server with the Galera library of MySQL
high-availability solutions in a single product package, enabling you to create
a cost-effective MySQL cluster.
This charm deploys Percona XtraDB Cluster onto Ubuntu.
WARNING: It is critical that you follow the bootstrap process detailed in this
document in order to end up with a running active/active Percona cluster.
If you are deploying this charm on MAAS or in an environment without direct
access to the internet, you will need to allow access to repo.percona.com,
as the charm installs packages directly from the Percona repositories. If you
are using squid-deb-proxy, follow the steps below:
    echo "repo.percona.com" | sudo tee /etc/squid-deb-proxy/mirror-dstdomain.acl.d/40-percona
    sudo service squid-deb-proxy restart
The first service unit deployed acts as the seed node for the rest of the
cluster; in order for the cluster to function correctly, the same MySQL passwords
must be used across all nodes:
    cat > percona.yaml << EOF
    percona-cluster:
      root-password: my-root-password
      sst-password: my-sst-password
    EOF
Once you have created this file, you can deploy the first seed unit:
    juju deploy --config percona.yaml percona-cluster
Once this node is fully operational, you can add extra units to the cluster,
one at a time:

    juju add-unit percona-cluster
A minimum cluster size of three units is recommended.
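Taken together, the steps above amount to the following sketch (this assumes a
bootstrapped Juju environment and the percona.yaml file created earlier; the
-n flag is shorthand for adding several units in one call):

```shell
# Deploy the seed unit with the shared passwords.
juju deploy --config percona.yaml percona-cluster
# Wait until the seed unit is fully operational, then grow the cluster
# to the recommended minimum of three units.
juju add-unit -n 2 percona-cluster
```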
In order to access the cluster, use the hacluster charm to provide a single IP
address:

    juju set percona-cluster vip=10.0.3.200
    juju deploy hacluster
    juju add-relation hacluster percona-cluster
Clients can then access the cluster using the VIP provided. This VIP will be
passed to any related services, e.g.:

    juju add-relation keystone percona-cluster
Network Space support
This charm supports the use of Juju Network Spaces, allowing the charm to be bound to network space configurations managed directly by Juju. This is only supported with Juju 2.0 and above.
You can ensure that database connections are bound to a specific network space by binding the appropriate interfaces:
    juju deploy percona-cluster --bind "shared-db=internal-space"
Alternatively, these bindings can be provided as part of a Juju native bundle configuration:
    percona-cluster:
      charm: cs:xenial/percona-cluster
      num_units: 1
      bindings:
        shared-db: internal-space
NOTE: Spaces must be configured in the underlying provider prior to attempting to use them.
NOTE: Existing deployments using the access-network configuration option will continue to function; if set, this option takes precedence over any network space binding.
Note that Percona XtraDB Cluster is not a 'scale-out' MySQL solution; reads
and writes are channelled through a single service unit and synchronously
replicated to other nodes in the cluster; reads/writes are as slow as the
slowest node you have in your deployment.
Network interface on which to place the Virtual IP.
The IP address and netmask of the 'access' network (e.g. 192.168.0.0/24). This network will be used for access to database services.
Virtual IP to use to front Percona XtraDB Cluster in active/active HA configuration
Package install location for Percona XtraDB Cluster (defaults to distro for >= 14.04)
Default multicast port number that will be used to communicate between HA Cluster nodes.
Default network interface to which the HA cluster will bind for communication with the other members of the HA cluster.
Percona method for taking the State Snapshot Transfer (SST), can be: 'rsync', 'xtrabackup', 'xtrabackup-v2', 'mysqldump', 'skip' - see https://www.percona.com/doc/percona-xtradb-cluster/5.5/wsrep-system-index.html#wsrep_sst_method
Root password for MySQL access; must be configured pre-deployment for Active-Active clusters.
A comma-separated list of nagios servicegroups. If left empty, the nagios_context will be used as the servicegroup.
Sets table_open_cache (formerly known as table_cache) in MySQL.
Turns on the innodb_file_per_table option, which makes MySQL put each InnoDB table into a separate .ibd file. Existing InnoDB tables will remain in the ibdata1 file - a full dump/import is needed to get rid of a large ibdata1 file.
Re-sync account password for new cluster nodes; must be configured pre-deployment for Active-Active clusters.
Adds two config options (wsrep_drupal_282555_workaround and wsrep_retry_autocommit) as a workaround for Percona Primary Key bug (see LP 1366997).
Key ID to import to the apt keyring to support use with arbitrary source configuration from outside of Launchpad archives or PPAs.
By default this value is set to 50% of total system memory, but it can also be set to any specific value. Supported suffixes include K/M/G/T. If suffixed with %, that percentage of total system memory will be allocated.
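To illustrate how the suffix and percentage forms resolve to a byte count, here is a minimal shell sketch; the helper name and the 8 GiB memory figure are hypothetical and not part of the charm:

```shell
# Hypothetical helper resolving a buffer-pool-size value such as "2G" or
# "50%" into bytes, mirroring the suffix/percentage rules described above.
total_kb=8388608   # assumed system memory: 8 GiB (as MemTotal kB in /proc/meminfo)

resolve_pool_size() {
  case "$1" in
    *%) echo $(( total_kb * 1024 * ${1%\%} / 100 )) ;;
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *T) echo $(( ${1%T} * 1024 * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;   # plain byte count, passed through unchanged
  esac
}

resolve_pool_size 50%   # → 4294967296 (half of 8 GiB)
resolve_pool_size 2G    # → 2147483648
```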
Maximum connections to allow. A value of -1 means use the server's compiled-in default.
(DEPRECATED - use innodb-buffer-pool-size) How much data should be kept in memory in the DB. This will be used to tune settings in the database server appropriately. Supported suffixes include K/M/G/T. If suffixed with %, one will get that percentage of RAM allocated to the dataset.
Netmask that will be used for the Virtual IP.
Used by the nrpe-external-master subordinate charm. A string that will be prepended to the instance name to set the host name in Nagios, e.g. juju-myservice-0. If you're running multiple environments with the same services in them, this allows you to differentiate between them.
The number of seconds the server waits for activity on a non-interactive connection before closing it. A value of -1 means use the server's compiled-in default.
Apply system hardening. Supports a space-delimited list of modules to run. Supported modules currently include os, ssh, apache and mysql.
Minimum number of units expected to exist before charm will attempt to bootstrap percona cluster. If no value is provided this setting is ignored.
If True, enables IPv6 support. The charm will expect network interfaces to be configured with an IPv6 address. If set to False (default), IPv4 is expected. NOTE: these charms do not currently support the IPv6 privacy extension. In order for this charm to function correctly, the privacy extension must be disabled and a non-temporary address must be configured/available on your network interface.
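To check in advance whether the privacy extension is active on an interface, something like the following can be used; this is a sketch assuming the standard Linux sysctl layout, and the interface name is hypothetical:

```shell
# Read use_tempaddr for an interface: 0 means the IPv6 privacy extension
# is off (what this charm requires); 1 or 2 mean temporary addresses are
# in use. The interface name below is an assumption.
iface=eth0
f="/proc/sys/net/ipv6/conf/$iface/use_tempaddr"
if [ -r "$f" ]; then
  status=$(cat "$f")
else
  status="unknown (no IPv6 sysctl entry for $iface)"
fi
echo "use_tempaddr: $status"
```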