[TiDB] Checking a Table's Region Information

Introduction

I ran the SHOW TABLE REGIONS statement at each of the following points and checked the Region information:

    • Before adding data
    • After adding data
    • After TiKV terminated abnormally

Confirming the Test Environment

I verified the behavior using TiDB v7.0.0 in the following environment:

[root@tisim ~]# tiup cluster display demo-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster display demo-cluster
Cluster type:       tidb
Cluster name:       demo-cluster
Cluster version:    v7.0.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.3.171:2379/dashboard
Grafana URL:        http://192.168.3.171:3000
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.3.171:3000   grafana     192.168.3.171  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.3.171:2379   pd          192.168.3.171  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.3.171:9090   prometheus  192.168.3.171  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.3.171:4000   tidb        192.168.3.171  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.3.171:9000   tiflash     192.168.3.171  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.3.171:20160  tikv        192.168.3.171  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.3.171:20161  tikv        192.168.3.171  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.3.171:20162  tikv        192.168.3.171  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8

Checking Region Information

Use the SHOW TABLE REGIONS statement to check the Region information of the trips table.

mysql> SHOW TABLE trips REGIONS;
+-----------+---------------+--------------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
| REGION_ID | START_KEY     | END_KEY            | LEADER_ID | LEADER_STORE_ID | PEERS            | SCATTERING | WRITTEN_BYTES | READ_BYTES | APPROXIMATE_SIZE(MB) | APPROXIMATE_KEYS | SCHEDULING_CONSTRAINTS | SCHEDULING_STATE |
+-----------+---------------+--------------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
|      3005 | t_94_         | t_94_r_182668      |      3007 |               1 | 3006, 3007, 3008 |          0 |            27 |          0 |                   39 |           177442 |                        |                  |
|      3009 | t_94_r_182668 | t_94_r_595336      |      3011 |               1 | 3010, 3011, 3012 |          0 |            39 |          0 |                   37 |           231736 |                        |                  |
|        10 | t_94_r_595336 | t_281474976710654_ |       156 |               1 | 11, 156, 204     |          0 |      35241357 |  149414338 |                   37 |           231736 |                        |                  |
+-----------+---------------+--------------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
3 rows in set (0.00 sec)

You can filter the output to specific rows with a WHERE condition. Here I run it for the row whose region_id is 3009.

mysql> SHOW TABLE trips REGIONS WHERE region_id = 3009 \G
*************************** 1. row ***************************
             REGION_ID: 3009
             START_KEY: t_94_r_182668
               END_KEY: t_94_r_595336
             LEADER_ID: 3011
       LEADER_STORE_ID: 1
                 PEERS: 3010, 3011, 3012
            SCATTERING: 0
         WRITTEN_BYTES: 0
            READ_BYTES: 0
  APPROXIMATE_SIZE(MB): 60
      APPROXIMATE_KEYS: 388612
SCHEDULING_CONSTRAINTS:
      SCHEDULING_STATE:
1 row in set (0.00 sec)

The Region information for region_id 3009 tells us the following:

    • Since start_key is t_94_r_182668 and end_key is t_94_r_595336, this Region covers row keys from 182668 to 595336. The prefix t_94 indicates that the table ID is 94.
    • Since leader_id is 3011 and the peers are 3010, 3011, and 3012, peer 3011 is the leader (which handles reads and writes), and peers 3010 and 3012 are its replicas.
    • Since leader_store_id is 1, peer 3011 is stored on the TiKV whose store ID is 1.
    • The approximate_size(MB) value of 60 means this Region holds roughly 60 MB of data.
    • The approximate_keys value of 388612 means this Region holds roughly 388,612 keys.

Each Region has a size limit (96 MiB by default); when a Region grows beyond that limit, it is split into two Regions.
Regions are replicated with a default replication factor of 3.
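
If you want to check these defaults on your own cluster, TiDB's SHOW CONFIG statement can read TiKV and PD settings; the key names below are the standard ones for the Region split size and the replica count (a sketch only, output omitted):

mysql> SHOW CONFIG WHERE type = 'tikv' AND name = 'coprocessor.region-split-size';
mysql> SHOW CONFIG WHERE type = 'pd' AND name = 'replication.max-replicas';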

From this state, I inserted more data. Let's check the Region information after the data was added.

mysql> SHOW TABLE trips REGIONS;
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
| REGION_ID | START_KEY      | END_KEY        | LEADER_ID | LEADER_STORE_ID | PEERS            | SCATTERING | WRITTEN_BYTES | READ_BYTES | APPROXIMATE_SIZE(MB) | APPROXIMATE_KEYS | SCHEDULING_CONSTRAINTS | SCHEDULING_STATE |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
|      3005 | t_94_          | t_94_r_182668  |      3007 |               1 | 3006, 3007, 3008 |          0 |             0 |          0 |                   28 |           182223 |                        |                  |
|      3013 | t_94_r_182668  | t_94_r_842765  |      3015 |               1 | 3014, 3015, 3016 |          0 |            39 |          0 |                   41 |           243353 |                        |                  |
|      3017 | t_94_r_842765  | t_94_r_1245906 |      3019 |               1 | 3018, 3019, 3020 |          0 |             0 |          0 |                   21 |           123626 |                        |                  |
|      3009 | t_94_r_1245906 | 78000000       |      3011 |               1 | 3010, 3011, 3012 |          0 |     161824266 |          0 |                   21 |           123626 |                        |                  |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
4 rows in set (0.01 sec)

Before the data was added, Region 3009 covered the key range from 182,668 to 595,336. After the data was added, that Region was split: we can confirm that the new Regions 3013 and 3017 now cover keys in that range, while Region 3009 now covers the tail of the table (keys from 1,245,906 onward).
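
The splits above happened automatically as the table grew. For reference, TiDB can also split a table's key range ahead of time with the SPLIT TABLE statement, which is handy before a bulk load; a minimal sketch against this table's rowid range (the bounds and Region count here are illustrative):

mysql> SPLIT TABLE trips BETWEEN (0) AND (2000000) REGIONS 4;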

Checking the Behavior When TiKV Terminates Abnormally

I will scale out by one TiKV node and then, in the resulting 4-node TiKV cluster, test what happens when one TiKV node is forcibly stopped.

You can follow the procedure just by reading the commands below.

Following the TiUP procedure, I scaled out the TiDB cluster. First, prepare a topology file for the new TiKV node.

[root@tisim ~]# cat scale-out.yml
tikv_servers:
 - host: 192.168.3.171
   port: 20163
   status_port: 20183
   config:
     server.labels: { host: "logic-host-4" }

If, as in my environment, you run multiple TiKV instances on a single host, take care to avoid port conflicts with the existing TiKV instances.
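
One simple way to see which ports are already in use on the host before picking new ones (the grep pattern only covers the 2016x/2018x ranges used by this cluster):

[root@tisim ~]# ss -tlnp | grep -E ':2016[0-9]|:2018[0-9]'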

I run the pre-checks first.

[root@tisim ~]# tiup cluster check demo-cluster scale-out.yml --cluster --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster check demo-cluster scale-out.yml --cluster --user root -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 192.168.3.171 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.3.171 OS info ... Done
+ Download necessary tools
  - Downloading check tools for linux/amd64 ... Done
+ Collect basic system information
+ Collect basic system information
  - Getting system info of 192.168.3.171:22 ... Done
+ Check time zone
  - Checking node 192.168.3.171 ... Done

+ Check system requirements
+ Check system requirements
+ Check system requirements
  - Checking node 192.168.3.171 ... Done
  - Checking node 192.168.3.171 ... Done
+ Cleanup check files
  - Cleanup check files on 192.168.3.171:22 ... Done
Node           Check         Result  Message
----           -----         ------  -------
192.168.3.171  thp           Pass    THP is disabled
192.168.3.171  cpu-cores     Pass    number of CPU cores / threads: 4
192.168.3.171  memory        Pass    memory size is 0MB
192.168.3.171  network       Pass    network speed of enp0s3 is 1000MB
192.168.3.171  selinux       Pass    SELinux is disabled
192.168.3.171  command       Pass    numactl: policy: default
192.168.3.171  os-version    Pass    OS is CentOS Linux 7 (Core) 7.9.2009
192.168.3.171  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.3.171  disk          Warn    mount point / does not have 'noatime' option set
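
For reference, tiup can also try to fix failed check items automatically via the --apply flag (not used here; not every item can be auto-fixed):

[root@tisim ~]# tiup cluster check demo-cluster scale-out.yml --cluster --apply --user root -p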

There are a couple of warnings, but I will ignore them and proceed with the scale-out.

[root@tisim ~]# tiup cluster scale-out demo-cluster scale-out.yml -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster scale-out demo-cluster scale-out.yml -p
Input SSH password:

+ Detect CPU Arch Name
  - Detecting node 192.168.3.171 Arch info ... Done

+ Detect CPU OS Name
  - Detecting node 192.168.3.171 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    demo-cluster
Cluster version: v7.0.0
Role  Host           Ports        OS/Arch       Directories
----  ----           -----        -------       -----------
tikv  192.168.3.171  20163/20183  linux/x86_64  /tidb-deploy/tikv-20163,/tidb-data/tikv-20163
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
Do you want to continue? [y/N]: (default=N)

Enter y.

Do you want to continue? [y/N]: (default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/demo-cluster/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/demo-cluster/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ [Parallel] - UserSSH: user=tidb, host=192.168.3.171
+ Download TiDB components
  - Download tikv:v7.0.0 (linux/amd64) ... Done
+ Initialize target host environments
+ Deploy TiDB instance
  - Deploy instance tikv -> 192.168.3.171:20163 ... Done
+ Copy certificate to remote host
+ Generate scale-out config
  - Generate scale-out config tikv -> 192.168.3.171:20163 ... Done
+ Init monitor config
Enabling component tikv
        Enabling instance 192.168.3.171:20163
        Enable instance 192.168.3.171:20163 success
Enabling component node_exporter
        Enabling instance 192.168.3.171
        Enable 192.168.3.171 success
Enabling component blackbox_exporter
        Enabling instance 192.168.3.171
        Enable 192.168.3.171 success
+ [ Serial ] - Save meta
+ [ Serial ] - Start new instances
Starting component tikv
        Starting instance 192.168.3.171:20163
        Start instance 192.168.3.171:20163 success
Starting component node_exporter
        Starting instance 192.168.3.171
        Start 192.168.3.171 success
Starting component blackbox_exporter
        Starting instance 192.168.3.171
        Start 192.168.3.171 success
+ Refresh components conifgs
  - Generate config pd -> 192.168.3.171:2379 ... Done
  - Generate config tikv -> 192.168.3.171:20160 ... Done
  - Generate config tikv -> 192.168.3.171:20161 ... Done
  - Generate config tikv -> 192.168.3.171:20162 ... Done
  - Generate config tikv -> 192.168.3.171:20163 ... Done
  - Generate config tidb -> 192.168.3.171:4000 ... Done
  - Generate config tiflash -> 192.168.3.171:9000 ... Done
  - Generate config prometheus -> 192.168.3.171:9090 ... Done
  - Generate config grafana -> 192.168.3.171:3000 ... Done
+ Reload prometheus and grafana
  - Reload prometheus -> 192.168.3.171:9090 ... Done
  - Reload grafana -> 192.168.3.171:3000 ... Done
+ [ Serial ] - UpdateTopology: cluster=demo-cluster
Scaled cluster `demo-cluster` out successfully

Check the topology after the scale-out. We can confirm that tikv-20163 has been added as expected.

[root@tisim ~]# tiup cluster display demo-cluster
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.12.1/tiup-cluster display demo-cluster
Cluster type:       tidb
Cluster name:       demo-cluster
Cluster version:    v7.0.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.3.171:2379/dashboard
Grafana URL:        http://192.168.3.171:3000
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.3.171:3000   grafana     192.168.3.171  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.3.171:2379   pd          192.168.3.171  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.3.171:9090   prometheus  192.168.3.171  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.3.171:4000   tidb        192.168.3.171  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.3.171:9000   tiflash     192.168.3.171  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.3.171:20160  tikv        192.168.3.171  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.3.171:20161  tikv        192.168.3.171  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.3.171:20162  tikv        192.168.3.171  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
192.168.3.171:20163  tikv        192.168.3.171  20163/20183                      linux/x86_64  Up       /tidb-data/tikv-20163       /tidb-deploy/tikv-20163
Total nodes: 9
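
The new store can also be confirmed from PD's point of view with pd-ctl; a sketch using the ctl component that matches the cluster version (output omitted):

[root@tisim ~]# tiup ctl:v7.0.0 pd -u http://192.168.3.171:2379 store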

Forcibly Stopping TiKV

Check the Region information of the trips table.

mysql> SHOW TABLE trips REGIONS;
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
| REGION_ID | START_KEY      | END_KEY        | LEADER_ID | LEADER_STORE_ID | PEERS            | SCATTERING | WRITTEN_BYTES | READ_BYTES | APPROXIMATE_SIZE(MB) | APPROXIMATE_KEYS | SCHEDULING_CONSTRAINTS | SCHEDULING_STATE |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
|      3005 | 72000001       | t_94_r_182668  |      3008 |               2 | 3006, 3008, 3029 |          0 |           502 |     108038 |                   32 |           252517 |                        |                  |
|      3013 | t_94_r_182668  | t_94_r_842765  |      3015 |               1 | 3014, 3015, 3016 |          0 |             0 |          0 |                   99 |           641578 |                        |                  |
|      3017 | t_94_r_842765  | t_94_r_1245906 |      3019 |               1 | 3018, 3019, 3023 |          0 |           155 |          0 |                   65 |           423135 |                        |                  |
|      3009 | t_94_r_1245906 | 78000000       |      3011 |               1 | 3011, 3012, 3025 |          0 |           140 |          0 |                  108 |           530975 |                        |                  |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
4 rows in set (0.01 sec)

I want to forcibly stop the TiKV that currently holds the leaders with leader_store_id 1. To identify which TiKV instance has store ID 1, I check information_schema.tikv_store_status.

mysql> SELECT store_id, address FROM information_schema.tikv_store_status;
+----------+---------------------+
| store_id | address             |
+----------+---------------------+
|        1 | 192.168.3.171:20160 |
|        2 | 192.168.3.171:20162 |
|        3 | 192.168.3.171:20161 |
|      114 | 192.168.3.171:3930  |
|     3021 | 192.168.3.171:20163 |
+----------+---------------------+
5 rows in set (0.00 sec)

From the output above, the TiKV with store_id 1 is at 192.168.3.171:20160, so I stop that process.

[root@tisim ~]# ps -ef |grep tikv-20160|grep -v grep
tidb      1121     1  2 14:44 ?        00:04:06 bin/tikv-server --addr 0.0.0.0:20160 --advertise-addr 192.168.3.171:20160 --status-addr 0.0.0.0:20180 --advertise-status-addr 192.168.3.171:20180 --pd 192.168.3.171:2379 --data-dir /tidb-data/tikv-20160 --config conf/tikv.toml --log-file /tidb-deploy/tikv-20160/log/tikv.log
[root@tisim ~]# kill -9 1121

Check the cluster status: tikv-20160 is now Disconnected.

[root@tisim ~]# tiup cluster display demo-cluster
(output omitted)
ID                   Role        Host           Ports                            OS/Arch       Status        Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------        --------                    ----------
192.168.3.171:20160  tikv        192.168.3.171  20160/20180                      linux/x86_64  Disconnected  /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.3.171:20161  tikv        192.168.3.171  20161/20181                      linux/x86_64  Up            /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.3.171:20162  tikv        192.168.3.171  20162/20182                      linux/x86_64  Up            /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
192.168.3.171:20163  tikv        192.168.3.171  20163/20183                      linux/x86_64  Up            /tidb-data/tikv-20163       /tidb-deploy/tikv-20163

Checking the Region information again, the leader_id and leader_store_id values show that no leader remains on store 1: for each affected Region, one of the replica peers has been promoted to leader.

mysql> SHOW TABLE trips REGIONS;
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
| REGION_ID | START_KEY      | END_KEY        | LEADER_ID | LEADER_STORE_ID | PEERS            | SCATTERING | WRITTEN_BYTES | READ_BYTES | APPROXIMATE_SIZE(MB) | APPROXIMATE_KEYS | SCHEDULING_CONSTRAINTS | SCHEDULING_STATE |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
|      3005 | 72000001       | t_94_r_182668  |      3008 |               2 | 3006, 3008, 3029 |          0 |           635 |     109725 |                   32 |           252517 |                        |                  |
|      3013 | t_94_r_182668  | t_94_r_842765  |      3014 |               3 | 3014, 3015, 3016 |          0 |            39 |          0 |                  102 |           665224 |                        |                  |
|      3017 | t_94_r_842765  | t_94_r_1245906 |      3023 |            3021 | 3018, 3019, 3023 |          0 |            39 |          0 |                   62 |           403141 |                        |                  |
|      3009 | t_94_r_1245906 | 78000000       |      3012 |               2 | 3011, 3012, 3025 |          0 |             0 |     558135 |                   79 |           505124 |                        |                  |
+-----------+----------------+----------------+-----------+-----------------+------------------+------------+---------------+------------+----------------------+------------------+------------------------+------------------+
4 rows in set (0.01 sec)
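
As a cross-check, the peer-to-store mapping, including which peer is currently the leader, can also be queried from information_schema.tikv_region_peers; a sketch for the four Regions above (output omitted):

mysql> SELECT region_id, peer_id, store_id, is_leader FROM information_schema.tikv_region_peers WHERE region_id IN (3005, 3009, 3013, 3017);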

Incidentally, the tikv-20160 process killed earlier was restarted automatically. Below is an excerpt from /var/log/messages.

May 13 17:58:41 tisim systemd: tikv-20160.service: main process exited, code=killed, status=9/KILL
May 13 17:58:41 tisim systemd: Unit tikv-20160.service entered failed state.
May 13 17:58:41 tisim systemd: tikv-20160.service failed.
May 13 17:58:56 tisim systemd: tikv-20160.service holdoff time over, scheduling restart.
May 13 17:58:56 tisim systemd: Stopped tikv service.
May 13 17:58:56 tisim systemd: Started tikv service.
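
This automatic recovery comes from the systemd unit that tiup deploys for each TiKV instance; its restart policy can be inspected on the host, for example (a sketch):

[root@tisim ~]# systemctl cat tikv-20160.service | grep -iE 'restart'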

Conclusion

I hope this article has been helpful. In it, we confirmed the following:

    • Region information shown by the SHOW TABLE REGIONS statement
    • Scaling out TiKV with tiup
    • Region reassignment when TiKV terminates abnormally