
MySQL/MariaDB HA: Galera Cluster vs. DRBD replication

20 August 2016
Trademark notice

DRBD® and LINBIT® are trademarks or registered trademarks of LINBIT in Austria, the United States, and other countries.

Other names mentioned in this document may be trademarks or registered trademarks of their respective owners.

License information

This is a commercial document from LINBIT and Adfinis SyGroup. Distribution terms apply to this document; for more information, please visit http://links.linbit.com/t-and-c.

  1. About
  2. Introduction
    2.1. DRBD Overview
    2.2. Galera Cluster Overview
  3. Comparison
    3.1. Network Traffic
    3.2. Commit Latency
    3.3. Replication
    3.4. Load Balancing
    3.5. Failover
    3.6. Resynchronization
  4. Summary
  5. Further documentation

1. About

The DRBD software is a Linux-kernel block-level replication facility that is widely used as a shared-nothing cluster building block. It has been included in the vanilla kernel since 2.6.33, and most distributions ship the necessary userspace utilities. Furthermore, many distributions provide extra packages with newer DRBD versions than the one included in their kernel package.

DRBD can replicate over multiple network transports, and in (currently) three modes: from synchronous for local HA clusters to asynchronous for pushing data to a disaster-recovery site.

DRBD is developed and supported worldwide by LINBIT; support covers most distributions and architectures, with several SLA levels up to 24/7 email and phone availability.

Galera Cluster is a synchronous multi-master database cluster; it provides High-Availability by replicating transactions to all nodes in the cluster. By removing the overhead introduced by a two-phase commit and moving to a certification-based replication mechanism, the solution allows near-linear scalability while still maintaining High-Availability and consistency.

Galera is developed by Codership and fully integrated and supported in solutions by MariaDB. Adfinis SyGroup is a MariaDB partner and provides assistance in implementing, monitoring, and maintaining MariaDB-based infrastructures.

2. Introduction

This Tech Guide compares two different High-Availability solutions for MySQL databases; one is a block-device based replication solution, the other extends MariaDB internals to provide synchronous replication.

A few differences will be shown, advantages and disadvantages discussed.

2.1. DRBD Overview

DRBD is a block-device based replication solution, i.e. it will simply ensure that a range of storage blocks (a partition, hard disk, logical volume, etc.) is the same on two (or, with DRBD 9, more) nodes.

This means that it is completely independent of the application using that storage, and even the specific filesystem doesn’t matter – it works equally well with XFS, ext4, BTRFS, and so on.

DRBD is typically used via TCP/IP connections; with DRBD 9 an RDMA transport is available, too, which reduces the network latency and therefore raises the number of available IOPS quite a bit.
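As a sketch only, and assuming the DRBD 9 net-section transport option (verify against the DRBD 9 User's Guide for your version), selecting the RDMA transport for a resource could look like this:

    resource r0 {
        net {
            transport "rdma";   # assumed DRBD 9 option; the default transport is TCP
        }
        # ... "on <host>" sections as usual ...
    }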

2.2. Galera Cluster Overview

Galera Cluster works within the MariaDB binary. Via configuration settings, the mysqld binary loads the Galera shared library, which enables network communication and replication to other mysqld processes on remote nodes.

Currently Galera Cluster is only compatible with the InnoDB storage engine, because only this engine provides the required transaction support. Support for further storage engines will be possible as soon as they support transactions.
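To illustrate how the library is loaded, a minimal configuration excerpt could look as follows; the library path, cluster name, and node addresses are examples only and vary by distribution and version:

    [mysqld]
    binlog_format            = ROW       # Galera replicates row-based write-sets
    default_storage_engine   = InnoDB    # only InnoDB is currently supported
    innodb_autoinc_lock_mode = 2         # required for multi-master operation

    wsrep_on              = ON
    wsrep_provider        = /usr/lib/galera/libgalera_smm.so
    wsrep_cluster_name    = "example_cluster"
    wsrep_cluster_address = "gcomm://10.0.0.1,10.0.0.2,10.0.0.3"

The last line lists the peers a node tries to contact when joining the cluster.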

3. Comparison

3.1. Network Traffic

As DRBD is completely unaware of the filesystem and application stack above, it will simply replicate all writes to the remote node(s); this means application data, transaction logs, and indices, as well as filesystem metadata (e.g. the filesystem journal, inodes, directories).

Galera Cluster will just send the logical changes, i.e. the contents of the transaction packed into a Galera write-set, over the network. An UPDATE statement involving thousands of rows will produce a write-set about the size of the updated records; there is no further overhead for indices or transaction logs.

The Galera Cluster communication can use either unicast (TCP) or multicast (UDP) connections. Multicast is especially well suited for big environments to further reduce network traffic.
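Which of the two is used is controlled through Galera provider options. As a sketch (the multicast address is an example only), group communication could be switched to UDP multicast like this:

    [mysqld]
    # Hypothetical excerpt: use UDP multicast instead of TCP unicast
    wsrep_provider_options = "gmcast.mcast_addr=239.192.0.11"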

3.2. Commit Latency

With DRBD there is only one active Master for a given database; so, as soon as the final disk write for the COMMIT is done, DRBD can deliver an acknowledgement to the application. Depending on the storage stack[1] this might be below 100 μsec.

In Galera Cluster the contents of a transaction are broadcast to every node in the cluster. As soon as the client commits the transaction on one node, this node broadcasts the write-set to the other nodes and they acknowledge its receipt. Each node then performs a certification of the write-set and commits the transaction locally. The originating node acknowledges the transaction to the client once the local certification has succeeded.

Additional latency arises only during the broadcast step and equals the longest round-trip time to any node in the cluster. For deployments within the same colocation facility this is normally below 400 μsec.

3.3. Replication

DRBD supports synchronous and asynchronous replication; the latter is useful for disaster-recovery across long distances. In that case there’s a separate product (the DRBD Proxy), which supports compression of the replication data stream, and so reduces the amount of bandwidth required.
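In DRBD terms these are the replication protocols: protocol C is synchronous, protocol B memory-synchronous, and protocol A asynchronous. A minimal sketch of a two-node resource configured for synchronous replication might look as follows; the resource name, host names, devices, and addresses are placeholders, not taken from this document:

    resource r0 {
        protocol C;        # synchronous: COMMIT acknowledged once data is on stable storage on both nodes
        # protocol B;      # memory-synchronous: acknowledged once the peer has received the data
        # protocol A;      # asynchronous: acknowledged after the local write; typical for DR links

        on alpha {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on bravo {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }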

Galera Cluster can only be used synchronously, but standard MariaDB asynchronous replication slaves can be attached to each Galera Cluster node. As there is additional latency associated with each commit, there is a limit to the number of transactions that can be processed in WAN deployments. A rule of thumb for the maximum number of transactions is 1/RTT trx/s; with a round-trip time of 100 ms, for example, a single connection cannot commit more than about 10 transactions per second.

3.4. Load Balancing

DRBD is typically used in an Active/Passive setup, i.e. each DRBD resource is only active on one node[2]. That means that only one node has access to the filesystem storing a database[3]; this node has to do all the statement parsing, data fetching, query planning, and writing.

Galera Cluster is a pure Multi-Master solution: each node provides its full resources. The only performance impact is caused by broadcasting the transactions to all nodes. Each node can be used for read-only queries, so read performance scales linearly. Because of optimistic locking some degree of write scalability can be achieved, but this depends on the application structure[4]; at best you will be able to increase write performance by about 15%.

3.5. Failover

In an HA environment, problems have to be planned for ahead of time, too.

If the active node in a DRBD environment goes down (for whatever reason), the cluster stack (typically Pacemaker with Heartbeat or Corosync) has to detect the problem and switch the services over to another node. In the worst case this will entail a filesystem check, a database recovery, and then the time required to get the caches hot again[5].
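As an illustration, the Pacemaker part of such a setup might be sketched as follows (crmsh syntax; the resource names and values are placeholders modeled on common DRBD/Pacemaker examples, not taken from this document):

    # Hypothetical crm configure excerpt: a master/slave DRBD resource
    primitive p_drbd_mysql ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=29s role=Master \
        op monitor interval=31s role=Slave
    ms ms_drbd_mysql p_drbd_mysql \
        meta master-max=1 master-node-max=1 \
             clone-max=2 clone-node-max=1 notify=true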

In Galera Cluster, when a single node goes down, the remaining nodes in the cluster continue working without interruption. A client currently connected to the failing node would retry the connection via a load balancer[6], but other than that would not notice any interruption. When the crashed node is started up again, it might need a filesystem check and database recovery too, and would not be available for load-balancing purposes during that time.

3.6. Resynchronization

After a crash, the failed node has to make sure it gets the latest data again.

DRBD, being in the block-device layer, keeps a bitmap of dirty blocks and will simply copy them over as soon as the DRBD connection is established again after the crash. This copying is done in on-disk order; performance is limited only by the storage and network hardware. With a 10 GBit network (a theoretical maximum of 1.25 GByte/sec) and FusionIO cards you should be able to drive 1.2 GByte/sec.

Galera Cluster has two ways of updating data on a joining node. If the node was already a member of the cluster before and only left the cluster for a short period of time[7], it tries to perform an Incremental State Transfer (IST) by pulling the missing changes from the write-set cache of another node in the cluster.

If no other node in the cluster can provide the required changes from its write-set cache, or a new node joins the cluster, a Snapshot State Transfer (SST) will be performed. This means that all the data of the database will be transferred to the joining node. Galera selects a so-called Donor node, which is going to be the source of the transfer. As being a Donor can have a serious impact on performance, Donor nodes are often excluded from load balancing to ensure consistent read and write performance within the cluster.
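Both mechanisms can be influenced through configuration. As a sketch (the cache size and method are examples only, not recommendations from this document), the size of the write-set cache determines how long a node may be away and still qualify for an IST, while the SST method selects how a full copy is taken:

    [mysqld]
    wsrep_provider_options = "gcache.size=2G"   # larger cache widens the IST window
    wsrep_sst_method       = rsync              # method used for a full-copy SST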

4. Summary

Here is a concluding table, based on the discussion above.

                    | DRBD                                               | Galera Cluster                                     | Advantage
  Network traffic   | all changed disk blocks                            | only transaction contents                          | Galera
  Latency           | μsec to msec, depends on storage system            | msec, because of userspace/kernel transitions      | DRBD
  Replication       | synchronous or asynchronous (disaster recovery)    | synchronous (asynchronous slaves can be attached)  | DRBD/Galera
  Load Balancing    | block-level data can be read from other nodes      | full Multi-Master                                  | Galera
  Failover          | via cluster stack; seconds to minutes of downtime  | other nodes continue without interruption          | Galera
  Resynchronization | only changed disk blocks, in block-device order    | IST/SST                                            |

5. Further documentation

The DRBD Project Page. Located at http://drbd.linbit.org, with lots of information, including a Users’ Guide that weighs in (at the last count) at 172 pages in PDF format; one of the most extensive project documentations in the Open Source world!

The LINBIT Home Page. Starting at http://www.linbit.com, this answers all questions about paid support from the developers. An overview of supported platforms, SLAs, and prices is at http://www.linbit.com/en/products-and-services/drbd-support/pricing?id=358

The Adfinis SyGroup Home Page. “We are the Linux Engineers” is the proud slogan on https://adfinis.com. With more than 15 years of Linux experience, Adfinis SyGroup is the first address for Linux services in Switzerland.

The MariaDB Home Page. The MariaDB Corporation https://mariadb.com is the main driving force behind the development of MariaDB and provides support and assistance for MariaDB products together with its partners.


  1. RAID controller with BBU, FusionIO cards, some SSDs

  2. Although having an Active/Active cluster by spreading the active resources across the nodes is recommended.

  3. Splitting one database doesn’t work, but you can run multiple mysql processes in a DRBD cluster, each with its own database and separate service-IP-address.

  4. E.g., database hot spots.

  5. All these can be mitigated by a good storage system. One of our customers was able to reduce this time from approx. 45 minutes to 30 seconds by switching the storage from hard disks to FusionIO cards.

  6. E.g., MariaDB MaxScale

  7. For example, because of a reboot or upgrade and subsequent restart of MariaDB.