Install a single-node TiDB database cluster on CentOS 8

Welcome to today’s article on how to install a single-node TiDB database cluster on a CentOS 8 Linux server. TiDB is a MySQL-compatible, open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. Its key features are high availability, horizontal scalability, and strong consistency. The database solution covers OLTP (Online Transaction Processing), OLAP (Online Analytical Processing), and HTAP services.

This setup runs on a single node and is suitable for lab and dev environments. Do not use it for production, which requires a high-availability cluster with at least three machines. See the official TiDB documentation for production setup requirements and recommendations, and check the release notes to learn about new software features.

Install a single-node TiDB database cluster on CentOS 8

This setup is done on a server that meets the following hardware and software requirements (a quick way to verify them is shown after the list):

  • Operating system: CentOS 8 (64-bit)
  • Memory: 16 GB
  • CPU: 8 cores or more
  • Disk space: 50 GB or more
  • SSH access as a superuser
  • Internet access on the server
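You can quickly verify these requirements from the shell before proceeding; the commands below are standard Linux tools and are only an optional check.

# Optional pre-checks: OS release, CPU cores, memory, and free disk space
cat /etc/os-release
nproc
free -h
df -h /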

If you plan to put heavier workloads on other components (such as PD, TiKV, TiFlash, TiCDC, and the monitoring stack), these minimum requirements may not be sufficient. Review the recommendations in the documentation before sizing the server for a specific component.

Step 1: Update the server

Before we start installing the TiDB database on CentOS 8, please log in to the computer and perform a system update.

sudo dnf -y update

Reboot the system after upgrading.

sudo systemctl reboot

Step 2: Disable system swap and firewalld

TiDB needs sufficient memory for its operations, and swapping degrades performance, so it is recommended to permanently disable system swap.

echo "vm.swappiness = 0" | sudo tee -a /etc/sysctl.conf
sudo swapoff -a && sudo swapon -a
sudo sysctl -p
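To confirm the change took effect, you can check that no swap is in use and that the swappiness value is now 0:

free -h                # the Swap line should show 0B used
sysctl vm.swappiness   # should report vm.swappiness = 0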

In a TiDB cluster, the access ports between nodes must be open so that read and write requests, data heartbeats, and other information can be transmitted. For this lab setup, I suggest disabling firewalld. First check whether it is running:

sudo firewall-cmd --state
sudo systemctl status firewalld.service
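If the output shows firewalld running, you can stop and disable it for this lab setup:

sudo systemctl disable --now firewalld.service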

If you would rather keep the firewall enabled and open the required ports, check the network port requirements document.
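For a single-node deployment all inter-component traffic stays on the loopback interface, so as a sketch you usually only need to open the ports you access from outside the host; the list below is an assumption based on the ports used later in this guide, so verify it against the network port requirements document.

sudo firewall-cmd --permanent --add-port=4000/tcp   # MySQL protocol (TiDB)
sudo firewall-cmd --permanent --add-port=2379/tcp   # PD / TiDB Dashboard
sudo firewall-cmd --permanent --add-port=3000/tcp   # Grafana
sudo firewall-cmd --permanent --add-port=9090/tcp   # Prometheus
sudo firewall-cmd --reload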

Step 3: Download and install TiUP

The next step is to download the TiUP installer script to the CentOS 8 machine.

curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh -o tiup_installer.sh

Make the script executable.

chmod +x tiup_installer.sh

Ensure that the tar package is installed.

sudo yum -y install tar

Execute the script to start the installation.

sudo ./tiup_installer.sh

Execution output:

WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Set mirror to https://tiup-mirrors.pingcap.com success
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================

Source the updated bash configuration file.

source /root/.bash_profile
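You can confirm the tiup binary is now on your PATH:

tiup --version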

The next step is to install the cluster components of TiUP:

# tiup cluster
The component `cluster` is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.1.2-linux-amd64.tar.gz 9.87 MiB / 9.87 MiB 100.00% 9.28 MiB p/s
Starting component `cluster`:
Deploy a TiDB cluster for production

If the TiUP cluster component is already installed on the machine, update it to the latest version:

# tiup update --self && tiup update cluster
download https://tiup-mirrors.pingcap.com/tiup-v1.1.2-linux-amd64.tar.gz 4.32 MiB / 4.32 MiB 100.00% 4.91 MiB p/s
Updated successfully!
component cluster version v1.1.2 is already installed
Updated successfully!

Step 4: Create and start a local TiDB cluster

Since TiUP needs to simulate a multi-machine deployment on a single machine, it is recommended to increase the connection limit of the sshd service.

# vi /etc/ssh/sshd_config
MaxSessions 30
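If you prefer a non-interactive edit, you can append the setting from the shell instead (skip this if you already edited the file above):

sudo grep -q '^MaxSessions' /etc/ssh/sshd_config || echo 'MaxSessions 30' | sudo tee -a /etc/ssh/sshd_config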

After making the changes, restart the sshd service.

sudo systemctl restart sshd

Create a topology configuration file named tidb-topology.yaml.
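The heredoc below is a minimal single-host topology sketch based on the layout of the TiDB quick start template; it matches the components deployed later in this guide (one PD, one TiDB, three TiKV instances, TiFlash, Prometheus, and Grafana). Adjust hosts, ports, and directories for your own environment.

# Minimal single-host topology sketch (modeled on the TiDB quick start template)
cat > tidb-topology.yaml <<EOF
global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# Monitoring agents installed on every host.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   log.slow-threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 127.0.0.1

tidb_servers:
 - host: 127.0.0.1

tikv_servers:
 - host: 127.0.0.1
   port: 20160
   status_port: 20180
 - host: 127.0.0.1
   port: 20161
   status_port: 20181
 - host: 127.0.0.1
   port: 20162
   status_port: 20182

tiflash_servers:
 - host: 127.0.0.1

monitoring_servers:
 - host: 127.0.0.1

grafana_servers:
 - host: 127.0.0.1
EOF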

where:

  • user: "tidb": Use the tidb system user (created automatically during deployment) to perform internal management of the cluster. By default, port 22 is used to log in to the target machine via SSH.
  • replication.enable-placement-rules: This PD parameter must be set to ensure that TiFlash runs normally.
  • host: The IP address of the target machine.

Run the cluster deployment command:

tiup cluster deploy <cluster-name> <tidb-version> ./tidb-topology.yaml --user root -p

replace:

  • <cluster-name>: The name you want to give the cluster.
  • <tidb-version>: The TiDB cluster version. Use the following command to list all supported TiDB versions:
# tiup list tidb

I will use the latest version returned by the above command:

# tiup cluster deploy local-tidb  v4.0.6 ./tidb-topology.yaml --user root -p

Press the "y" key and provide the root user's password to complete the deployment:

Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]:  y
Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
......

You should see the TiDB component being downloaded.

Input SSH password:
+ Generate SSH keys ... Done
+ Download TiDB components
  - Download pd:v4.0.6 (linux/amd64) ... Done
  - Download tikv:v4.0.6 (linux/amd64) ... Done
  - Download tidb:v4.0.6 (linux/amd64) ... Done
  - Download tiflash:v4.0.6 (linux/amd64) ... Done
  - Download prometheus:v4.0.6 (linux/amd64) ... Done
  - Download grafana:v4.0.6 (linux/amd64) ... Done
  - Download node_exporter:v0.17.0 (linux/amd64) ... Done
  - Download blackbox_exporter:v0.12.0 (linux/amd64) ... Done
+ Initialize target host environments
  - Prepare 127.0.0.1:22 ... Done
+ Copy files
  - Copy pd -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tikv -> 127.0.0.1 ... Done
  - Copy tidb -> 127.0.0.1 ... Done
  - Copy tiflash -> 127.0.0.1 ... Done
  - Copy prometheus -> 127.0.0.1 ... Done
  - Copy grafana -> 127.0.0.1 ... Done
  - Copy node_exporter -> 127.0.0.1 ... Done
  - Copy blackbox_exporter -> 127.0.0.1 ... Done
+ Check status
Deployed cluster `local-tidb` successfully, you can start the cluster via `tiup cluster start local-tidb`

Start the cluster:

# tiup cluster start local-tidb

Sample output:

....
Starting component pd
	Starting instance pd 127.0.0.1:2379
	Start pd 127.0.0.1:2379 success
Starting component node_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component blackbox_exporter
	Starting instance 127.0.0.1
	Start 127.0.0.1 success
Starting component tikv
	Starting instance tikv 127.0.0.1:20162
	Starting instance tikv 127.0.0.1:20160
	Starting instance tikv 127.0.0.1:20161
	Start tikv 127.0.0.1:20161 success
	Start tikv 127.0.0.1:20162 success
	Start tikv 127.0.0.1:20160 success
Starting component tidb
	Starting instance tidb 127.0.0.1:4000
	Start tidb 127.0.0.1:4000 success
....

Step 5: Access the TiDB cluster

To view the list of currently deployed clusters:

# tiup cluster list
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster list
Name        User  Version  Path                                             PrivateKey
----        ----  -------  ----                                             ----------
local-tidb  tidb  v4.0.6   /root/.tiup/storage/cluster/clusters/local-tidb  /root/.tiup/storage/cluster/clusters/local-tidb/ssh/id_rsa

To view the topology and status of the cluster:

# tiup cluster display local-tidb
Starting component `cluster`: /root/.tiup/components/cluster/v1.1.2/tiup-cluster display local-tidb
tidb Cluster: local-tidb
tidb Version: v4.0.6
ID               Role        Host       Ports                            OS/Arch       Status    Data Dir                    Deploy Dir
--               ----        ----       -----                            -------       ------    --------                    ----------
127.0.0.1:3000   grafana     127.0.0.1  3000                             linux/x86_64  inactive  -                           /tidb-deploy/grafana-3000
127.0.0.1:2379   pd          127.0.0.1  2379/2380                        linux/x86_64  Up|L|UI   /tidb-data/pd-2379          /tidb-deploy/pd-2379
127.0.0.1:9090   prometheus  127.0.0.1  9090                             linux/x86_64  inactive  /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
127.0.0.1:4000   tidb        127.0.0.1  4000/10080                       linux/x86_64  Up        -                           /tidb-deploy/tidb-4000
127.0.0.1:9000   tiflash     127.0.0.1  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
127.0.0.1:20160  tikv        127.0.0.1  20160/20180                      linux/x86_64  Up        /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
127.0.0.1:20161  tikv        127.0.0.1  20161/20181                      linux/x86_64  Up        /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
127.0.0.1:20162  tikv        127.0.0.1  20162/20182                      linux/x86_64  Up        /tidb-data/tikv-20162       /tidb-deploy/tikv-20162

Once started, you can use the mysql command line client tool to access the TiDB cluster.

# yum install mariadb -y
# mysql -h 127.0.0.1 -P 4000 -u root
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.25-TiDB-v4.0.6 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> SELECT VERSION();
+--------------------+
| VERSION()          |
+--------------------+
| 5.7.25-TiDB-v4.0.6 |
+--------------------+
1 row in set (0.001 sec)

MySQL [(none)]> EXIT
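As a quick smoke test, you can also create a database and a table and read a row back; the database and table names used here (test_db, t1) are arbitrary examples:

# Example smoke test: test_db and t1 are placeholder names
mysql -h 127.0.0.1 -P 4000 -u root -e "CREATE DATABASE IF NOT EXISTS test_db; CREATE TABLE IF NOT EXISTS test_db.t1 (id INT PRIMARY KEY, note VARCHAR(50)); REPLACE INTO test_db.t1 VALUES (1, 'hello tidb'); SELECT * FROM test_db.t1;"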

Dashboard access:

  • Grafana monitoring dashboard: http://{grafana-ip}:3000. The default username and password are both admin.
  • TiDB Dashboard: http://{pd-ip}:2379/dashboard. The default username is root and the password is empty. A quick reachability check is shown below.
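From the server itself, you can confirm that both web interfaces are listening before opening them in a browser; these are just plain HTTP checks:

curl -sI http://127.0.0.1:3000 | head -n 1              # Grafana
curl -sI http://127.0.0.1:2379/dashboard/ | head -n 1   # TiDB Dashboard (served by PD)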

What's next

You now have a single-node TiDB cluster running on CentOS 8 for lab and development work. For production-grade, multi-node deployments, consult the official TiDB documentation.
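When you are done with the lab, the same tiup cluster tool can stop the cluster or remove it entirely. Note that the destroy subcommand deletes the cluster's data, so only use it when the deployment is no longer needed:

tiup cluster stop local-tidb       # stop all components, keep data
tiup cluster destroy local-tidb    # remove the deployment and its data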
