{question}
[ 6.370497] memsqld[1079]: segfault at 7ffdd8e32ff8 ip 00007f19b7536377(...)
{question}
{answer}
If you're using an RHEL-like distro, you should be on SingleStore version 8.1.41 or 8.5.15 or higher. We've seen a problem on RHEL-based distros (e.g. CentOS, Rocky Linux, Oracle Linux, etc.) that results in SingleStore segfaulting on startup. That issue is fixed in the versions mentioned above.
For example, it'll fail like you're seeing in dmesg:
[ 160.191986] memsqld[3064]: segfault at 7ffff83e0ff8 ip 00007f5574a33126 sp 00007ffff83e1000 error 6 in libsaverbp.so[7f5574a2c000+f000] likely on CPU 1 (core 1, socket 0)
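If you want to confirm a host is hitting this, the segfault shows up in the kernel log. A minimal check (assuming memsqld is the process name, as in the messages above):
sudo dmesg -T | grep -i 'memsqld.*segfault'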
For reference, other distro and kernel version combinations we've seen this happen on:
RHEL 8
4.18.0-553.el8_10.x86_64
RHEL 9
5.14.0-427.13.1.el9_4.x86_64
5.14.0-427.18.1.el9_4.x86_64
Rocky Linux 9
5.14.0-162.6.1.el9_1.0.1.x86_64
CentOS 9
5.14.0-391.el9.x86_64
Oracle Enterprise Linux 9
5.15.0-206.153.7.el9uek.x86_64
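To check what a host is running before upgrading, standard Linux commands are enough (nothing SingleStore-specific here):
uname -r            # kernel version
cat /etc/os-release # distro name and version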
Below are the upgrade steps for RPM or tar-based installs, depending on how you installed SingleStore. Please let me know if you have any questions about the steps below.
If the cluster is already affected, you'll need to do an offline upgrade. The steps to do so are as follows:
RPM Installation
1. Stop all nodes with sdb-admin stop-node --all and confirm everything is shut down with sdb-admin list-nodes:
ubuntu@AZEROTH:/tmp$ sdb-admin list-nodes
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 3F313471E7 | Master | 172.31.32.103 | 3306 | Stopped | False | 8.5.11 | Unknown | | 0.0.0.0 |
| E97C62143F | Leaf | 172.31.32.103 | 3307 | Stopped | False | 8.5.11 | Unknown | | 0.0.0.0 |
| 5DFA68A57E | Leaf | 172.31.32.103 | 3308 | Stopped | False | 8.5.11 | Unknown | | 0.0.0.0 |
| 11CD3CC534 | Leaf | 172.31.32.103 | 3309 | Stopped | False | 8.5.11 | Unknown | | 0.0.0.0 |
| 49E853BC90 | Leaf | 172.31.32.103 | 3310 | Stopped | False | 8.5.11 | Unknown | | 0.0.0.0 |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
2. Manually download the SingleStore rpm package you want (8.1.41+ or 8.5.15+) and transfer it to each host in the cluster.
Then, execute the installation using the package manager appropriate for your distribution, e.g.:
sudo yum install /path/to/singlestoredb-server.xxx.rpm
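If you have several hosts, a rough sketch of the copy-and-install loop (the host names and rpm filename here are placeholders; substitute your own):
# placeholder hosts and package name -- adjust to your cluster
for h in host1 host2 host3; do
  scp singlestoredb-server-8.x.x.rpm "$h":/tmp/
  ssh "$h" 'sudo yum install -y /tmp/singlestoredb-server-8.x.x.rpm'
done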
3. Run sdb-deploy list-versions and the version you installed in step 2 should show up as "Yes" in the Active column.
4. Turn the cluster back on with sdb-admin start-node --all and it should come back online on the version you installed (now marked active):
ubuntu@AZEROTH:/tmp$ sdb-admin list-nodes
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 3F313471E7 | Master | 172.31.32.103 | 3306 | Stopped | False | 8.5.14 | Unknown | | 0.0.0.0 |
| E97C62143F | Leaf | 172.31.32.103 | 3307 | Stopped | False | 8.5.14 | Unknown | | 0.0.0.0 |
| 5DFA68A57E | Leaf | 172.31.32.103 | 3308 | Stopped | False | 8.5.14 | Unknown | | 0.0.0.0 |
| 11CD3CC534 | Leaf | 172.31.32.103 | 3309 | Stopped | False | 8.5.14 | Unknown | | 0.0.0.0 |
| 49E853BC90 | Leaf | 172.31.32.103 | 3310 | Stopped | False | 8.5.14 | Unknown | | 0.0.0.0 |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
5. Housekeeping - To uninstall old versions of SingleStore you can run sdb-deploy uninstall --version X.X.X --all
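For the example cluster above, which was on 8.5.11 before the upgrade, that would be (only run this once you're satisfied the new version is working):
sdb-deploy uninstall --version 8.5.11 --all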
Tar Installation
1. Stop all nodes with sdb-admin stop-node --all and confirm everything is shut down with sdb-admin list-nodes.
2. Manually download version 8.1.41/8.5.15 or higher of the SingleStore tar package and transfer it to each host in the cluster (or to someplace all hosts have access to). Then run sdb-deploy install --file-path /path/to/8.x.x.tar.gz, e.g.:
[ec2-user@ip-172-31-23-252 tmp]$ sdb-deploy install --file-path /tmp/memsql-server-8.1.44-98c100dce9.x86_64.tar.gz
+-------+---------------+------------+
| Index | Hostname | Local Host |
+-------+---------------+------------+
| 1 | 172.31.23.252 | Yes |
| 2 | 172.31.28.139 | No |
| 3 | All Hosts | |
+-------+---------------+------------+
Select host(s): 3
Toolbox will perform the following actions:
· Install singlestoredb-server 8.1.44-98c100dce9 on 172.31.23.252
· Install singlestoredb-server 8.1.44-98c100dce9 on 172.31.28.139
Would you like to continue? [y/N]: y
WARNING: The cluster already has an active (or "running") version of SingleStore installed. A new version may be installed, but it will not be set as the active version. To install a new version and make it the active version, upgrade the cluster by running the 'sdb-deploy upgrade --preinstall-path' command.
✓ Installed memsql-server-8.1.44-98c100dce9 on host 172.31.23.252 (1/2)
✓ Installed memsql-server-8.1.44-98c100dce9 on host 172.31.28.139 (2/2)
✓ Successfully installed on 2 hosts
3. Now locate your packages.hcl file and set "current = true" on the new version you just installed in the previous step, e.g.:
[ec2-user@ip-172-31-23-252 tmp]$ sudo find / -name packages.hcl
/home/ec2-user/memsql/packages.hcl
[ec2-user@ip-172-31-23-252 tmp]$ sudo vi /home/ec2-user/memsql/packages.hcl
Change this:
version = 1
package {
path = "memsql-server-8.1.41-abf78215b3"
current = true
}
package {
path = "memsql-server-8.1.44-98c100dce9"
}
to this:
version = 1
package {
path = "memsql-server-8.1.41-abf78215b3"
}
package {
path = "memsql-server-8.1.44-98c100dce9"
current = true
}
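A quick sanity check that the right package is now marked current (path taken from the example above; yours may differ):
grep -B2 'current = true' /home/ec2-user/memsql/packages.hcl
# should print the package block whose path is the newly installed version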
4. Run sdb-deploy list-versions and it should show the newest version marked as "Yes" in the Active column:
[ec2-user@ip-172-31-23-252 tmp]$ sdb-deploy list-versions
+---------------+-------------------------------------------------------+---------+--------+
| Host | Package | Version | Active |
+---------------+-------------------------------------------------------+---------+--------+
| 172.31.23.252 | /home/ec2-user/memsql/memsql-server-8.1.41-abf78215b3 | 8.1.41 | No |
| 172.31.23.252 | /home/ec2-user/memsql/memsql-server-8.1.44-98c100dce9 | 8.1.44 | Yes | <-----------
| 172.31.28.139 | /home/ec2-user/memsql/memsql-server-8.1.41-abf78215b3 | 8.1.41 | Yes |
| 172.31.28.139 | /home/ec2-user/memsql/memsql-server-8.1.44-98c100dce9 | 8.1.44 | No |
+---------------+-------------------------------------------------------+---------+--------+
5. Repeat the process of updating the packages.hcl file on every host in the cluster (in the output above, 172.31.28.139 still has the old version active); a quick way to check each host is sketched below.
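If you'd rather not log in to each host by hand, a small sketch using the example hosts and path from above (adjust both for your cluster):
for h in 172.31.23.252 172.31.28.139; do
  echo "== $h =="
  ssh "$h" "grep -B1 'current = true' /home/ec2-user/memsql/packages.hcl"
done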
6. Now if you run sdb-admin list-nodes you should see all nodes are still offline, but the Version column shows the new version from step 2, e.g.:
[ec2-user@ip-172-31-23-252 tmp]$ sdb-admin list-nodes
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 758F93E56B | Master | 172.31.23.252 | 3306 | Stopped | False | 8.1.44 | Unknown | | 0.0.0.0 |
| DF62A768C7 | Leaf | 172.31.23.252 | 3307 | Stopped | False | 8.1.44 | Unknown | | 0.0.0.0 |
| EDB92B2C3C | Leaf | 172.31.28.139 | 3307 | Stopped | False | 8.1.44 | Unknown | | 0.0.0.0 |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
7. You can now start the cluster with sdb-admin start-node --all -y
[ec2-user@ip-172-31-23-252 tmp]$ sdb-admin start-node --all -y
Toolbox is about to perform the following actions:
· Start all nodes in the cluster
Would you like to continue? [Y/n]:
Automatically selected yes, non-interactive mode enabled
✓ Successfully connected to host 172.31.23.252
✓ Successfully connected to host 172.31.28.139
...
[ec2-user@ip-172-31-23-252 tmp]$ sdb-admin list-nodes
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| MemSQL ID | Role | Host | Port | Process State | Connectable? | Version | Recovery State | Availability Group | Bind Address |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
| 758F93E56B | Master | 172.31.23.252 | 3306 | Running | True | 8.1.44 | Online | | 0.0.0.0 |
| DF62A768C7 | Leaf | 172.31.23.252 | 3307 | Running | True | 8.1.44 | Online | 1 | 0.0.0.0 |
| EDB92B2C3C | Leaf | 172.31.28.139 | 3307 | Running | True | 8.1.44 | Online | 1 | 0.0.0.0 |
+------------+--------+---------------+------+---------------+--------------+---------+----------------+--------------------+--------------+
8. Housekeeping - To uninstall old versions of SingleStore you can run sdb-deploy uninstall --version X.X.X --all
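For the tar example above, where 8.1.41 was the old install, that would be (again, only after confirming the cluster is healthy on the new version):
sdb-deploy uninstall --version 8.1.41 --all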
{answer}