MariaDB Galera Cluster on Red Hat OpenShift/Kubernetes
In this blog post we present a solution that lets you operate a MariaDB Galera Cluster on Kubernetes or compatible platforms (e.g. Red Hat OpenShift Container Platform, CoreOS Tectonic or Canonical’s Kubernetes). It allows cloud-native applications running on Kubernetes and the databases they require to run on the same infrastructure, managed with the same tools.
The functionality is based on a Kubernetes feature called PetSets, an API designed for stateful applications and services.
What are PetSets
During the last year Kubernetes has developed from yet another container management tool into the single project pushing the idea of running containers in production forward. Because of its architecture it was, until now, not easy to run stateful services and applications on top of Kubernetes. An early proposal for so-called “nominal services” addressed this, but was never realized because of other priorities.
The release of Kubernetes v1.3 provides a first glimpse of a possible solution. PetSets, a new API in the Alpha state, are tailored to the requirements of stateful applications and services. Pods in a PetSet receive a unique identity with a numeric index (e.g. app-0, app-1, …) that stays consistent over the lifetime of the pod. Additionally, Persistent Volumes stay attached to a pod even if it is migrated to another host. Because each PetSet must be assigned a Service, this Service can be queried for information about the pods in the PetSet. That makes it possible, for example, to automate the bootstrapping of a cluster, or to change the configuration of the cluster at runtime whenever its status changes (scale up, scale down, maintenance or outage of a host, …).
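To make the shape of such a definition concrete, below is a trimmed sketch of a headless Service and a PetSet using the Alpha API (apps/v1alpha1, as shipped with Kubernetes v1.3/v1.4). The names, image and storage size are illustrative placeholders, not the exact definitions from our repository:

apiVersion: v1
kind: Service
metadata:
  name: galera
  labels:
    app: mariadb
spec:
  clusterIP: None          # headless: DNS exposes the pets via an SRV record
  selector:
    app: mariadb
  ports:
  - name: mysql
    port: 3306
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: mariadb
spec:
  serviceName: galera      # each pet gets mariadb-<index>.galera.<namespace>.svc.cluster.local
  replicas: 3
  template:
    metadata:
      labels:
        app: mariadb
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.1        # placeholder image
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:            # one Persistent Volume per pet, reattached wherever the pet runs
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi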
How can MariaDB Galera Cluster be run on Kubernetes
To operate a MariaDB Galera Cluster on top of Kubernetes, the two crucial tasks are bootstrapping the cluster and constructing the configuration based on the pods currently in the PetSet. In our case those tasks are handled by two init containers: the galera-init image packages the tools to perform the bootstrap, and the second container runs the peer-finder binary, which queries the SRV record of the Kubernetes Service and generates the configuration for the MariaDB Galera Cluster based on the current members of the PetSet.
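In Kubernetes v1.3/v1.4 init containers are themselves an Alpha feature and are declared through an annotation on the pod template. The following sketch shows how the two init containers could be wired up; the image names and the on-start script path are assumptions for illustration, not the exact values from our repository (the namespace lf2 matches the DNS names shown further below):

  template:
    metadata:
      annotations:
        # Alpha syntax for init containers: a JSON array in an annotation
        pod.alpha.kubernetes.io/init-containers: '[
          {
            "name": "galera-init",
            "image": "galera-init",
            "command": ["/install.sh"]
          },
          {
            "name": "peer-finder",
            "image": "galera-init",
            "command": [
              "peer-finder",
              "-service=galera",
              "-ns=lf2",
              "-on-start=/usr/bin/configure-galera.sh"
            ]
          }
        ]'

peer-finder resolves the SRV record of the galera Service and pipes the hostnames of the current pets into the script given with -on-start, which then writes the Galera configuration.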
When the first pet starts,
wsrep_cluster_address=gcomm://
is used and the pod bootstraps the cluster automatically. Each subsequently started pet adds the hostnames it receives from the SRV record of the Service to wsrep_cluster_address and automatically joins the cluster. Below is an example of what the configuration looks like after the second pet has started.
wsrep_cluster_address=gcomm://mariadb-0.galera.lf2.svc.cluster.local,mariadb-1.galera.lf2.svc.cluster.local
wsrep_cluster_name=mariadb
wsrep_node_address=mariadb-1.galera.lf2.svc.cluster.local
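For comparison, and inferring from the naming scheme above, the configuration generated on the first pet when it bootstraps the cluster would look roughly like this:
wsrep_cluster_address=gcomm://
wsrep_cluster_name=mariadb
wsrep_node_address=mariadb-0.galera.lf2.svc.cluster.local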
The sources for running MariaDB Galera Cluster on your own Kubernetes or Red Hat OpenShift Container Platform infrastructure are available on GitHub: https://github.com/adfinis-sygroup/openshift-mariadb-galera
Limitations of PetSets in Alpha
The full list of limitations of PetSets in the Alpha state is available in the Kubernetes documentation. They concern multiple areas, but mostly tasks that currently require manual intervention. For example, it is only possible to increase the number of replicas by updating the replicas parameter in the PetSet definition (see the snippet below); decreasing it is not possible. Updating the PetSet itself by editing its definition is not possible either, so to roll out a new image version you need to create a new PetSet and join it with the old cluster.
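As an illustration, a scale-up from three to four pets is done by changing just that one field in the PetSet definition (a sketch based on the definition above):

spec:
  replicas: 4   # raised from 3; Kubernetes starts mariadb-3. Lowering this value is not supported in Alpha.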
What does the future hold for PetSets
With the release of Kubernetes v1.5, PetSets will leave the Alpha state and be promoted to Beta. This is an important step, because the external API of a feature in the Beta state no longer changes. The release of Kubernetes v1.5 is planned for the 9th of December 2016.
Future work
We’ll follow up on this post as soon as Kubernetes v1.5 has been released, since some YAML definitions will need to be updated, and we’ll provide further instructions on how to test this on your own infrastructure.