{question}
How to manage SingleStore DB using Toolbox in case we lose the MA server?
{question}
{answer}
SingleStore Toolbox is essentially an SSH frontend for memsqlctl, interfacing with the individual nodes. Toolbox is not needed to keep the cluster running, but it allows us to manage the cluster, and it can be configured on a standalone host to manage a cluster.
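As a rough illustration of what Toolbox does under the hood, each Toolbox command essentially SSHes into a registered host and drives memsqlctl there. The host, SSH user, and key path below are illustrative only; Toolbox manages these sessions for us:
# Roughly what Toolbox does on a registered host when we ask it about nodes
$ ssh -i /tmp/ssh_private_key.pem admin@10.0.3.78 "sudo memsqlctl list-nodes"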
This article explains how to configure Toolbox in case we lose the Master Aggregator (MA) server.
The answer is straightforward: install SingleStore Toolbox on a surviving aggregator server (or any other server) and register the cluster hosts to manage them.
We can even run Toolbox on multiple nodes simultaneously. For example, if we run Toolbox on both the MA and a child aggregator (CA), we can manage the cluster from either of those nodes.
Important Note: If changes are made to the cluster via the Toolbox on the MA, such as registering or unregistering a host, make sure to register or unregister that host on the CA's Toolbox as well. A Toolbox instance does not share its information with other, separate Toolbox instances. Avoid using both Toolbox instances at the same time; manage the cluster with only one Toolbox. If we lose the Toolbox server along with the MA node, we can then use the Toolbox on the CA to manage the cluster.
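For example, if a new host is registered via the Toolbox on the MA, the same registration has to be repeated on the CA's Toolbox (a sketch; the host IP and key path below are illustrative):
# On the MA's Toolbox, when the new host is added:
$ sdb-toolbox-config register-host --host admin@10.0.3.200:22 --identity-file /tmp/ssh_private_key.pem
# Repeat the same registration on the CA's Toolbox host:
$ sdb-toolbox-config register-host --host admin@10.0.3.200:22 --identity-file /tmp/ssh_private_key.pem
# Likewise, if a host is removed from the cluster, unregister it on both Toolbox instances:
$ sdb-toolbox-config unregister-host --host 10.0.3.200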
Steps to be performed on the aggregator node, or on any node from which we want to manage the cluster, in case we lose the MA host:
1. Install Toolbox on the server from which you want to manage the cluster. This can be an aggregator node or any other node. Click here to learn about how to install the SingleStore Toolbox.
2. Register the hosts of the cluster with the new Toolbox (a short sketch follows this list). Click here to learn about how to register hosts.
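For instance, on a managed (sudo) deployment the two steps above look roughly like the following. The package manager, package name (it may be memsql-toolbox on older releases), SSH user, and key path are assumptions; follow the linked documentation for your platform:
# Step 1: install Toolbox from the SingleStore repository (RHEL/CentOS example)
$ sudo yum install -y singlestoredb-toolbox
# Step 2: register each cluster host with the new Toolbox instance
$ sdb-toolbox-config register-host --host admin@10.0.3.115:22 --identity-file /tmp/ssh_private_key.pem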
For example, let's take a cluster with 3 nodes: 1 MA, 1 CA, and 1 leaf.
From the current MA, we can see three nodes:
$ sdb-admin list-nodes
+------------+------------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+------------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 28613E8F5B | Master | 10.0.3.146 | 3306 | Running | True | 7.3.8 | Online | | 0.0.0.0 |
| 9E22875E43 | Aggregator | 10.0.3.115 | 3306 | Running | True | 7.3.8 | Online | | 0.0.0.0 |
| D2930A4677 | Leaf | 10.0.3.78 | 3307 | Running | True | 7.3.8 | Online | 1 | 0.0.0.0 |
+------------+------------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
In case we lose the Master Aggregator: let's say we lost the master node 10.0.3.146 and we are promoting the child aggregator to be our new master node. Click here to learn more about how to set an aggregator to be the master. Our new master is the 10.0.3.115 node. Since the Toolbox that was on the MA server is also lost, let's configure Toolbox on the new master node to manage the cluster.
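Since the Toolbox on the lost MA is no longer available, the promotion itself is typically done with SQL directly against the surviving child aggregator. A minimal sketch, assuming a MySQL-compatible client and illustrative credentials (see the linked documentation for the full procedure):
# Connect to the child aggregator (10.0.3.115) and promote it to master:
$ mysql -h 10.0.3.115 -P 3306 -u root -p -e "AGGREGATOR SET AS MASTER;"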
From New Master Node:
Currently, this server has no Toolbox installed to manage the running cluster:
$ sdb-admin list-nodes
-bash: sdb-admin: command not found
Click here to learn about how to install the Toolbox. Once the Toolbox is installed, we need to register the current cluster hosts as shown below. The same step needs to be repeated for all N hosts, each with its corresponding --identity-file.
Let's register the current master and leaf node on this new toolbox.
Commands:
$ sdb-toolbox-config register-host --host admin@10.0.3.115:22 --identity-file /tmp/ssh_private_key.pem
$ sdb-toolbox-config register-host --host admin@10.0.3.78:22 --identity-file /tmp/ssh_private_key.pem
To learn about the register-host command, click here.
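As a quick sanity check, we can confirm that both hosts are now registered with this Toolbox instance before listing nodes:
# List the hosts registered with this Toolbox instance
$ sdb-toolbox-config list-hosts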
Now we can manage the cluster from this new Toolbox instance:
$ sdb-admin list-nodes
+------------+--------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 9E22875E43 | Master | 10.0.3.115 | 3306 | Running | True | 7.3.8 | Online | | 0.0.0.0 |
| D2930A4677 | Leaf | 10.0.3.78 | 3307 | Running | True | 7.3.8 | Online | 1 | 0.0.0.0 |
+------------+--------+------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
Note: If it is a non-sudo install using the tarball method, we need to use the --tar-install-dir flag while registering hosts to point to the node install directory. Click here to learn about the tarball deployment of the cluster.
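A sketch of such a registration, assuming a hypothetical tarball install directory of /home/admin/singlestoredb (adjust to the actual unpack location on the host):
# Register a host from a non-sudo tarball deployment, pointing at its install directory:
$ sdb-toolbox-config register-host --host admin@10.0.3.78:22 --identity-file /tmp/ssh_private_key.pem --tar-install-dir /home/admin/singlestoredb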
{answer}