{question}
Which is better to use for data resiliency: backups, or some form of replication like high availability or having a DR cluster?
{question}
{answer}
This depends on your priorities, and typically you want a combination of them. All of these options allow you to store a redundant copy of your data. The differences lie in how you switch over to that copy and how usable it is. Consider what is most important to you:
- Speed and ease of transition to the other copy of the data during failure scenarios.
- Ensuring the data copy is up to date.
- Cost of storing the data copy.
- Purpose of the cluster (prod, dev, qa, testing, etc.).
Production Cluster Considerations
For a production cluster, SingleStore Support strongly recommends enabling high availability and taking periodic backups. You might also consider setting up a second cluster for DR replication.
High Availability
- Immediate, automatic transition.
- Data is up to date.
- Requires twice as much disk/memory space for storing data copy in the same cluster.
- Great for maintaining production online.
High availability protects you from the loss of a single leaf or a few leaves. In high availability, database partitions are replicated between leaves of a single cluster. Each leaf holds master partitions, which serve content and respond to queries, and an equal count of replica partitions, which are neither writable nor readable and serve only as a copy of the data. Data is written to master partitions and immediately and automatically replicated to their replica partitions. This is accomplished at the host level by sending snapshots and transaction logs between leaves, without affecting your database workload. This process ensures that the data copy is always as close to up to date as possible, especially on SingleStore versions 7.0+ where fast sync replication is supported.
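You can inspect this layout on a running cluster from the master aggregator. A minimal sketch, assuming a hypothetical database named `app_db`:

```sql
-- Confirm the cluster is running with high availability (redundancy level 2).
SHOW VARIABLES LIKE 'redundancy_level';

-- List every partition of the database, including which leaf hosts it
-- and whether that copy is currently a master or a replica.
SHOW PARTITIONS ON app_db;
```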
If a leaf node goes down, the master aggregator detects it within a couple of seconds via failed heartbeats, which automatically triggers failover. That leaf is removed from the cluster metadata so queries are no longer routed to it. For any master partitions on that leaf, their replica partitions elsewhere in the cluster are promoted to master and serve content while the leaf is down. A few queries may fail in the window between the failure occurring and failover completing. However, failover requires no interaction by the DBA; it is all automatic.
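You can observe this detection from the master aggregator; a short sketch (output columns vary by version):

```sql
-- Each leaf's state (e.g., online, offline, attaching) reflects the
-- heartbeat-based detection described above, along with its pairing.
SHOW LEAVES;

-- A cluster-wide view of node and database states during a failover.
SHOW CLUSTER STATUS;
```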
If the failed leaf can recover independently, it is automatically failed back once it comes back online; this can account for a small amount of downtime as well. In detail: if necessary, the leaf is restarted by the cluster management system and recovers any rowstore tables into memory. When it reconnects, the master aggregator automatically attaches its partitions and then attaches the leaf. As a final step, an auto rebalance is performed. This is different from a rebalance a user might trigger, which can move partitions around the cluster. With an auto rebalance, the master aggregator promotes some replica partitions and demotes others to ensure that each leaf holds half master partitions and half replica partitions, balancing the query workload across leaves.
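For contrast, these are the user-triggered counterparts mentioned above; a sketch assuming the same hypothetical `app_db`:

```sql
-- Recreate any missing replica partitions (for example, after a leaf is
-- permanently lost) without reorganizing the rest of the cluster.
RESTORE REDUNDANCY ON app_db;

-- A user-triggered rebalance: unlike an auto rebalance, this may move
-- partitions between leaves to even out data placement.
REBALANCE PARTITIONS ON app_db;
```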
In the rare case that a leaf cannot recover on its own, the cluster would continue in this state where one leaf is offline, and the copies of its partitions serve the content. Once the database administrator or other team can fix the issue and bring the leaf online, it will automatically rejoin the cluster.
The downside is that an entire second copy of the data is stored locally on the same cluster. When you enable high availability, there must be at least 50% memory and disk available on leaves to store this extra data.
Note that if both copies of a partition are on offline leaves, there can still be cluster downtime.
Additionally, there is a newer form of replication in which leaves are not paired. Instead, there is a configurable fanout level, which sets several leaves as the targets for the replica partitions from each leaf.
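On versions that support it, this mode is controlled by an engine variable; the sketch below assumes the `leaf_failover_fanout` variable introduced around SingleStore 7.0 (check your version's documentation for the exact name and accepted values):

```sql
-- Run on the master aggregator. 'paired' is the classic mode described
-- above; 'load_balanced' spreads each leaf's replica partitions across
-- several target leaves instead of a single pair.
SET GLOBAL leaf_failover_fanout = 'load_balanced';
```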
For more information, check here.
Backups
- Slow, manual transition.
- Data up through the previous day or previous week.
- Low cost for storage.
- Backup data for all types of clusters.
Backups are a safety measure for restoring the cluster in case of a major cluster-wide issue. In some cases, restoring from a backup can be faster or easier than performing intricate cluster surgery to return nodes to health or diagnosing data issues. Restoring requires manual intervention to drop the database in question, move the backup files into place (if necessary), and restore the database.
It is a best practice to always have a recent backup of each database, no matter what type of cluster it is. For example, you could take nightly backups or take backups on weekends. Be sure to take backups during a low-traffic window: though the cluster is online during a backup, the backup consumes many threads across the cluster and generates significant disk I/O while writing the backup files. There is also a brief blocking period between when you start a backup and when the backup can start executing. This occurs if any queries are running when you start the backup: for consistency, the backup waits for those queries to finish before it can start, and while it waits, it also prevents new queries from running. This is similar to the behavior of ALTER-type statements. To minimize the impact on your workload, take backups at a low-traffic time when there are few distributed or long-running queries.
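A minimal sketch of such a scheduled backup, assuming a hypothetical database `app_db` and an NFS mount at `/mnt/backups` reachable from every node:

```sql
-- Run from an aggregator during a low-traffic window (e.g., via cron).
-- Each node writes the backup files for its own partitions to this path,
-- so the location must be writable by every leaf.
BACKUP DATABASE app_db TO "/mnt/backups/app_db_nightly";
```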
As a best practice, SingleStore Support also recommends taking backups before any maintenance, such as cluster maintenance for upgrades or host maintenance for deploying an OS patch. This provides you a safe, healthy point to restore to if there is an issue.
Unlike other resiliency methods, backup storage is more adaptable. If desired, you can take backups to the default location in the leaf data directories. However, you can also take a backup to an external location like an NFS mount or S3, or simply move the backup files off the leaves after the backup completes. Since columnstore tables are already significantly compressed, backups are not much smaller than the data itself (only file descriptors are compressed). However, since backups can be stored off the cluster, you can put them in cold storage, which typically lowers the cost compared to high-access storage.
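For example, backing up directly to an object store so no files remain on the leaves; a sketch with a hypothetical bucket name and placeholder credentials:

```sql
BACKUP DATABASE app_db TO S3 "my-backup-bucket/app_db"
    CONFIG '{"region": "us-east-1"}'
    CREDENTIALS '{"aws_access_key_id": "YOUR_KEY_ID",
                  "aws_secret_access_key": "YOUR_SECRET"}';
```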
You can also move the backup files to a location accessible to a different SingleStore cluster and restore the database there. This is a way to copy the data to multiple clusters. The other cluster can have a different topology (fewer or more leaves); however, the restored database will have the same count of partitions and store the same amount of data.
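Restoring on the other cluster is the same pattern in reverse; a sketch with the same hypothetical names:

```sql
-- With the backup files in place (local path, NFS, or S3):
RESTORE DATABASE app_db FROM "/mnt/backups/app_db_nightly";

-- Or restore straight from the object store:
RESTORE DATABASE app_db FROM S3 "my-backup-bucket/app_db"
    CONFIG '{"region": "us-east-1"}'
    CREDENTIALS '{"aws_access_key_id": "YOUR_KEY_ID",
                  "aws_secret_access_key": "YOUR_SECRET"}';
```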
For more information and configuration, check here.
Disaster Recovery Cluster Replication
- Fast but manual transition.
- Data is almost as up-to-date as high availability.
- Expensive to keep a second cluster.
- Can serve as QA/dev/testing/analytics in downtime.
Disaster Recovery (DR) Cluster Replication protects from loss of an entire cluster and provides a second cluster with up-to-date data, ready to accept your workload right away. For example, if there is a power outage at your data center, or an AWS EC2 region goes offline, it is ideal to have the workload replicated to another cluster in a different data center or AWS region. Additionally, the workload can be redirected to a DR cluster while you resolve an issue or perform maintenance on the primary cluster.
In DR/cluster replication, database partitions are replicated from one cluster to another. Similar to high availability (HA), replication is performed at the host level by copying snapshots and transaction logs to the secondary database, and it is structured to impact the primary database as little as possible. Replication is initiated from the secondary cluster, and in case of failure of the primary, replication is stopped from the secondary. When you stop replication, the secondary database comes online as a regular read/write database; direct your workload to this cluster instead to prevent service interruptions.
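Starting replication is a single command on the secondary cluster's master aggregator. A sketch assuming a hypothetical database `app_db`, a primary master aggregator at `primary-ma.example.com:3306`, and a replication user with sufficient privileges on the primary:

```sql
-- Run on the SECONDARY cluster's master aggregator. This pulls snapshots
-- and transaction logs from the primary and keeps replaying new logs.
REPLICATE DATABASE app_db FROM repl_user:'repl_password'@'primary-ma.example.com':3306;
```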
Since data is always actively replicating from the primary database to the secondary, it is kept up to date. There is a potential for lag between the two clusters if there are slow disks in the cluster or network latency. However, cluster replication should be considered nearly up to date, slightly behind high availability, since there is typically more physical distance to traverse.
It is also simple to maintain cluster replication. You can restart nodes or entire clusters, and replication will resume afterwards. The primary cluster is not impacted whether the secondary is replicating or not. If you restart the secondary cluster, it checks its replay position against the primary on startup and replicates any data it is missing. If you restart the primary cluster, there is no data to replicate, so the secondary cluster waits; when the primary cluster starts up again, the secondary re-establishes its connection, confirms the replay position, and continues replicating.
Note that if you need to temporarily pause cluster replication and then start it again, you must use the command `PAUSE REPLICATING`, which allows you to run `CONTINUE REPLICATING` afterwards. Once replication has been stopped with `STOP REPLICATING`, it cannot be restarted. In that case, to return to the original state of replication, the database must be dropped, and then the original `REPLICATE DATABASE` command can be run again.
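The lifecycle in command form, assuming the same hypothetical `app_db`:

```sql
-- Temporarily suspend replication (e.g., for network maintenance) ...
PAUSE REPLICATING app_db;
-- ... and later resume it from where it left off.
CONTINUE REPLICATING app_db;

-- Permanently end replication: app_db becomes a normal read/write
-- database. This cannot be undone; to replicate again, drop app_db
-- and re-run the original REPLICATE DATABASE command.
STOP REPLICATING app_db;
```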
One downside to consider is the cost of maintaining an entire separate cluster. The secondary cluster does not have to have the same topology as the primary: it could have fewer aggregators, fewer leaves, and no high availability enabled. The only requirement is that the secondary cluster has enough disk and memory to store the same data set as the primary (it will also have the same count of partitions). A configuration like that could be considered an extra backup, covering all your bases while you repair issues on the primary cluster. However, it may not offer the same performance and may not be resilient against its own failures, such as if the secondary cluster had a leaf go offline. If you fully provision a secondary cluster with the same resources as the primary, it can be used for a few purposes, detailed below.
Analytics Database
During replication, the primary database behaves as expected, and it can be used for reads and writes. In an improvement over HA, whose replica partitions cannot be read at all, the replicated partitions in the secondary database are readable (though read-only). This enables the first non-DR use of the secondary database: an analytics database. In general, there are two types of workloads: transactional and analytical. Transactional workloads are typically composed of lots of small reads and writes touching a few rows. Analytical workloads, on the other hand, are large reads that generally scan a large part of the data and aggregate the results, for example, for reports or dashboards. By separating the analytical workload onto its own cluster, you remove the impact of the large, long-running reads from the primary cluster, where they can delay the start of DDL operations or compete with transactional queries for cluster resources.
Data Movement
Cluster replication is more than DR. It can also be a lightweight way to copy current production data to other clusters. Simply configure permissions, start replication, and once the second cluster has caught up (or replicated as much data as you need), stop replication. The secondary database comes online as a normal database, ready to serve reads and writes.
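A sketch of that workflow, using the same hypothetical names as above:

```sql
-- On the target cluster: start pulling data from production.
REPLICATE DATABASE app_db FROM repl_user:'repl_password'@'prod-ma.example.com':3306;

-- Watch progress; the database's state shows when it has caught up
-- (exact columns and state names vary by version).
SHOW DATABASES EXTENDED;

-- Once enough data has arrived, detach from production. app_db is now
-- an independent read/write copy on this cluster.
STOP REPLICATING app_db;
```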
Two-Way Replication
As previously described, replication is performed at the database level by copying snapshots and transaction logs and replaying them on the target cluster. Therefore, you could have two clusters, cluster_one and cluster_two. On cluster_one, you might have primary database A, which replicates to cluster_two, where it is a secondary database. On cluster_two, you could have primary database B, which replicates to cluster_one, where it is a secondary database. This is very similar to how leaves are paired and replicate between each other for high availability.
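A sketch of this arrangement, with hypothetical master aggregator host names for each cluster (`db_a` and `db_b` standing in for databases A and B):

```sql
-- On cluster_two's master aggregator: pull database A from cluster_one.
REPLICATE DATABASE db_a FROM repl_user:'repl_password'@'cluster-one-ma.example.com':3306;

-- On cluster_one's master aggregator: pull database B from cluster_two.
REPLICATE DATABASE db_b FROM repl_user:'repl_password'@'cluster-two-ma.example.com':3306;
```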
In this case, if a cluster-level outage occurs, only one of the databases experiences an issue: the database whose primary was on the failed cluster. This means stopping replication for only that one database and redirecting your app for only that one database.
In this case, it is best to have identical resources and topology in both clusters. This ensures the same performance on both clusters since they will both be handled as online production clusters.
For more information, check here.
{answer}