GBase 8c Cluster Installation and Deployment: A Complete Guide

Preface

GBase 8c is a multi-model distributed database developed in-house by GBASE (Tianjin Nanda General Data Technology). Intelligent optimization, intelligent operations, and intelligent security give it high performance, high availability, elastic scaling, and strong security.
It supports multiple storage engines (row, column, and in-memory), multiple deployment topologies (standalone, primary/standby, and distributed), and Oracle, PostgreSQL, and MySQL compatibility modes. It can be deployed on physical machines, virtual machines, containers, and private or public clouds, providing secure, stable, and reliable data storage and management for core systems in key industries, internet-facing business systems, and government and enterprise systems. This article walks through a distributed installation.

Components

(figure: GBase 8c component overview)

Deployment Modes

Three deployment modes are supported: single-node, primary/standby, and distributed:

(figure: comparison of the three deployment modes)

This walkthrough uses the distributed mode.

Pre-installation Preparation

Machine Preparation

A distributed deployment requires three machines, each with 4 GB of RAM (and 8 GB of swap) and a 50 GB disk. The installation plan is:

Hostname   IP address    OS          Roles
gbase8c1   10.10.10.34   CentOS 7.9  gha_server (HA service), dcs (distributed configuration store), gtm (global transaction manager), coordinator
gbase8c2   10.10.10.35   CentOS 7.9  datanode1 (data node), dcs (distributed configuration store)
gbase8c3   10.10.10.36   CentOS 7.9  datanode2 (data node), dcs (distributed configuration store)

With 4 GB of RAM, swap must be 8 GB or the installation will fail; check with free -m:

[root@gbase8c1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:          15546        6210        3880        1581        5454        7564
Swap:          8147           0        8147
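
If you want to verify the sizing up front, a minimal pre-flight sketch (reading SwapTotal from /proc/meminfo; the 8 GiB threshold is the requirement stated above, not something the installer checks for you):

```shell
#!/bin/sh
# Pre-flight swap check: a 4 GB-RAM host needs at least 8 GB of swap.
# /proc/meminfo reports SwapTotal in kB; compare against an 8 GiB threshold.
required_kb=$((8 * 1024 * 1024))          # 8 GiB expressed in kB
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "${swap_kb:-0}" -ge "$required_kb" ]; then
    echo "OK: swap ${swap_kb} kB meets the 8 GiB requirement"
else
    echo "WARN: swap ${swap_kb:-0} kB is below 8 GiB; enlarge swap before installing"
fi
```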

Disable the Firewall

-- Disable the firewall on all three nodes

# systemctl stop firewalld
# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

Disable SELinux

-- Disable it on all three nodes

#### Set the SELINUX value in /etc/selinux/config to disabled
# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# setenforce 0    ### apply immediately, without waiting for a reboot

Tune Kernel Parameters

cat >> /etc/sysctl.conf <<EOF
kernel.sem = 40960 2048000 40960 20480
EOF

# apply the change
sysctl -p
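
The four kernel.sem fields are, in order, SEMMSL (max semaphores per set), SEMMNS (system-wide max semaphores), SEMOPM (max operations per semop call), and SEMMNI (max semaphore sets). To confirm that sysctl -p took effect, you can read the live values back:

```shell
# Read back the live semaphore limits; the four fields of /proc/sys/kernel/sem
# map to SEMMSL SEMMNS SEMOPM SEMMNI respectively.
awk '{printf "SEMMSL=%s SEMMNS=%s SEMOPM=%s SEMMNI=%s\n", $1, $2, $3, $4}' /proc/sys/kernel/sem
```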

Install Dependencies

yum install bison flex libaio-devel lsb_release patch ncurses-devel  bzip2 openssl -y

Update the hosts File

-- Run on all three nodes

cat >> /etc/hosts <<EOF
10.10.10.34	 gbase8c1	
10.10.10.35	 gbase8c2
10.10.10.36	 gbase8c3
EOF
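
Optionally, confirm that the new entries resolve on each node (the hostnames below are the ones just added; until /etc/hosts is updated they will not resolve):

```shell
#!/bin/sh
# Check each cluster hostname against the local resolver (reads /etc/hosts).
for h in gbase8c1 gbase8c2 gbase8c3; do
    if getent hosts "$h" >/dev/null; then
        echo "$h resolves"
    else
        echo "$h does not resolve yet -- re-check /etc/hosts"
    fi
done
```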

Create the gbase User

-- Create it on all three nodes

# useradd gbase
# echo "Gbase@123" |passwd gbase --stdin
Changing password for user gbase.
passwd: all authentication tokens updated successfully.

Configure sudo

-- Add the gbase user to the sudoers list on all three nodes, so that later installation and configuration steps do not require the root user.

# visudo             ### below "root ALL=(ALL) ALL", add "gbase ALL=(ALL) NOPASSWD:ALL"
root    ALL=(ALL)       ALL
gbase   ALL=(ALL)       NOPASSWD:ALL

With this in place, the database installation can proceed as the gbase user.
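
A quick way to confirm the sudoers entry took effect (run as gbase): with -n, sudo refuses to prompt for a password, so the check only succeeds when NOPASSWD is active (or when run as root). A hedged sketch:

```shell
#!/bin/sh
# Non-interactive sudo probe: succeeds only if passwordless sudo is configured.
if sudo -n true 2>/dev/null; then
    status="ok"
else
    status="missing"
fi
echo "passwordless sudo: $status"
```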

Configure Passwordless SSH Trust

-- Passwordless SSH trust must be configured for the gbase user on all three nodes:

-- Generate a key pair on each of the three nodes
[gbase@gbase8c1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/gbase/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/gbase/.ssh/id_rsa.
Your public key has been saved in /home/gbase/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:NRoY6os+re20Bu5pDt6/RWdsBMR7groGzs9DYbQoL2I gbase@gbase8c1
The key's randomart image is:
+---[RSA 2048]----+
|      +o         |
|   . . +.        |
|  o o o o.o      |
|.. = . oo= .     |
|... +  .S=       |
|oEo+ .. +        |
|*o+o+  .         |
|.=*Bo..          |
| =OO*o.          |
+----[SHA256]-----+
$ ssh-copy-id gbase@gbase8c1
$ ssh-copy-id gbase@gbase8c2
$ ssh-copy-id gbase@gbase8c3

-- Verify the trust
[gbase@gbase8c1 ~]$ ssh gbase@gbase8c1 date
Fri Mar  7 21:10:53 CST 2025
[gbase@gbase8c1 ~]$ ssh gbase@gbase8c2 date
Fri Mar  7 21:10:55 CST 2025
[gbase@gbase8c1 ~]$ ssh gbase@gbase8c3 date
Fri Mar  7 21:10:58 CST 2025

Configure NTP

-- The clocks of the three nodes must stay synchronized. Pick one node as the NTP server (usually the GTM primary node); the other two nodes sync their time against it.

[root@gbase8c1 ~]# vi /etc/ntp.conf
### add the local IP
restrict 10.10.10.34 nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
## comment out the following lines
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
## add the following two lines (ntpd directives are lowercase)
server 127.127.1.0
fudge  127.127.1.0 stratum 10

-- Start the ntp service
[root@gbase8c1 ~]# systemctl start ntpd
[root@gbase8c1 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2025-03-10 23:43:36 CST; 4s ago
  Process: 29446 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 29447 (ntpd)
   CGroup: /docker/e7ff60899b159f0e16156801bae5649ccb06983ff489abe0cdd252941cb2fcfa/system.slice/ntpd.service
           └─29447 /usr/sbin/ntpd -u ntp:ntp -g
           ‣ 29447 /usr/sbin/ntpd -u ntp:ntp -g

Mar 10 23:43:36 gbase8c1 ntpd[29447]: ntp_io: estimated max descriptors: 1048576, initial socket boundary: 16
Mar 10 23:43:36 gbase8c1 ntpd[29447]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Mar 10 23:43:36 gbase8c1 ntpd[29447]: Listen and drop on 1 v6wildcard :: UDP 123
Mar 10 23:43:36 gbase8c1 ntpd[29447]: Listen normally on 2 lo 127.0.0.1 UDP 123
Mar 10 23:43:36 gbase8c1 ntpd[29447]: Listen normally on 3 eth0 10.10.10.34 UDP 123
Mar 10 23:43:36 gbase8c1 ntpd[29447]: Listening on routing socket on fd #20 for interface updates
Mar 10 23:43:36 gbase8c1 ntpd[29447]: 0.0.0.0 c016 06 restart
Mar 10 23:43:36 gbase8c1 ntpd[29447]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Mar 10 23:43:36 gbase8c1 ntpd[29447]: 0.0.0.0 c011 01 freq_not_set
Mar 10 23:43:37 gbase8c1 ntpd[29447]: 0.0.0.0 c514 04 freq_mode

-- Configure ntp on the other two nodes

[root@gbase8c2 ~]# vi /etc/ntp.conf
restrict 10.10.10.35 nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 10.10.10.34
# do not add a "fudge" line here: fudge is only valid for local reference
# clocks (127.127.x.x). The stray "Fudge 10.10.10.34 stratum 10" that was
# here caused the "syntax error in /etc/ntp.conf line 27" in the log below.
[root@gbase8c2 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2025-03-10 23:48:31 CST; 4s ago
  Process: 5063 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 5064 (ntpd)
   CGroup: /docker/b3100ed9ec209749c11bcf42afdb298d47c32b875c9fe6156af5e448eecf42ef/system.slice/ntpd.service
           └─5064 /usr/sbin/ntpd -u ntp:ntp -g
           ‣ 5064 /usr/sbin/ntpd -u ntp:ntp -g

Mar 10 23:48:31 gbase8c2 ntpd[5064]: syntax error in /etc/ntp.conf line 27, column 1
Mar 10 23:48:31 gbase8c2 ntpd[5064]: ntp_io: estimated max descriptors: 1048576, initial socket boundary: 16
Mar 10 23:48:31 gbase8c2 ntpd[5064]: Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
Mar 10 23:48:31 gbase8c2 ntpd[5064]: Listen and drop on 1 v6wildcard :: UDP 123
Mar 10 23:48:31 gbase8c2 ntpd[5064]: Listen normally on 2 lo 127.0.0.1 UDP 123
Mar 10 23:48:31 gbase8c2 ntpd[5064]: Listen normally on 3 eth0 10.10.10.35 UDP 123
Mar 10 23:48:31 gbase8c2 ntpd[5064]: Listening on routing socket on fd #20 for interface updates
Mar 10 23:48:31 gbase8c2 ntpd[5064]: 0.0.0.0 c016 06 restart
Mar 10 23:48:31 gbase8c2 ntpd[5064]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Mar 10 23:48:31 gbase8c2 ntpd[5064]: 0.0.0.0 c011 01 freq_not_set

### Configure node 3 in the same way

Cluster Installation

Upload the Installation Package

Package download: https://www.gbase.cn/download/gbase-8c?category=DOCUMENT

-- Upload the package to /home/gbase/gbase_package on the primary node (10.10.10.34)

# su - gbase
$ mkdir /home/gbase/gbase_package
### copy GBase8cV5_S3.0.0B114_centos7.8_x86_64.tar.gz into /home/gbase/gbase_package

Extract the Package

[gbase@gbase8c1 gbase_package]$ tar -xf GBase8cV5_S3.0.0B114_centos7.8_x86_64.tar.gz 
[gbase@gbase8c1 gbase_package]$ ls -lrt
total 521404
-rw-rw-r-- 1 gbase gbase 163047726 Nov  7  2023 GBase8cV5_S3.0.0B114_CentOS_x86_64.tar.bz2
-rw-rw-r-- 1 gbase gbase        65 Nov  7  2023 GBase8cV5_S3.0.0B114_CentOS_x86_64.sha256
-rw------- 1 gbase gbase    383797 Nov  7  2023 upgrade_sql.tar.gz
-rw------- 1 gbase gbase        65 Nov  7  2023 upgrade_sql.sha256
-rw-rw-r-- 1 gbase gbase   1036193 Nov  7  2023 GBase8cV5_S3.0.0B114_CentOS_x86_64_pgpool.tar.gz
-rw-rw-r-- 1 gbase gbase 103175364 Nov  7  2023 GBase8cV5_S3.0.0B114_CentOS_x86_64_om.tar.gz
-rw-rw-r-- 1 gbase gbase        65 Nov  7  2023 GBase8cV5_S3.0.0B114_CentOS_x86_64_om.sha256
-rw-r--r-- 1 gbase gbase 266255210 Mar  7 17:10 GBase8cV5_S3.0.0B114_centos7.8_x86_64.tar.gz

## extract GBase8cV5_S3.0.0B114_CentOS_x86_64_om.tar.gz as well
[gbase@gbase8c1 gbase_package]$ tar -xf GBase8cV5_S3.0.0B114_CentOS_x86_64_om.tar.gz 

Edit gbase.yml

Edit the cluster deployment file gbase.yml (the file name must match the cluster name passed with -c later) as follows:

[gbase@gbase8c1 ~]$ vi /home/gbase/gbase_package/gbase.yml
gha_server:
  - gha_server1:
      host: 10.10.10.34
      port: 20001
dcs:
  - host: 10.10.10.34
    port: 2379
  - host: 10.10.10.35
    port: 2379
  - host: 10.10.10.36
    port: 2379
gtm:
  - gtm1:
      host: 10.10.10.34
      agent_host: 10.10.10.34
      role: primary
      port: 6666
      agent_port: 8001
      work_dir: /home/gbase/data/gtm/gtm1

coordinator:
  - cn1:
      host: 10.10.10.34
      agent_host: 10.10.10.34
      role: primary
      port: 5432
      agent_port: 8003
      work_dir: /home/gbase/data/coord/cn1
datanode:
  - dn1:
      - dn1_1:
          host: 10.10.10.35
          agent_host: 10.10.10.35
          role: primary
          port: 15432
          agent_port: 8005
          work_dir: /home/gbase/data/dn1/dn1_1
  - dn2:
      - dn2_1:
          host: 10.10.10.36
          agent_host: 10.10.10.36
          role: primary
          port: 20010
          agent_port: 8007
          work_dir: /home/gbase/data/dn2/dn2_1
env:
  # cluster_type allowed values: multiple-nodes, single-inst, default is multiple-nodes
  cluster_type: multiple-nodes
  pkg_path: /home/gbase/gbase_package # directory containing the installation packages
  prefix: /home/gbase/gbase_db # runtime directory
  version: V5_S3.0.0B114
  user: gbase
  port: 22
# constant:
#  virtual_ip: 100.0.1.254/24

Notes on the yml configuration file:

gha_server -- cluster manager, similar in role to Patroni
dcs -- cluster state store, similar in role to etcd
gtm -- global transaction manager
coordinator -- coordinator node
datanode -- data node

• host: the IP that data-plane nodes (CN, DN) connect to
• port: the node's listen port
• agent_host: the IP that the control plane connects to
• role: the node's role (e.g. primary); a required parameter for gtm, cn, and dn nodes
• agent_port: high-availability agent port
• work_dir: directory where the node's data is stored
• cluster_type: cluster type; for a distributed deployment the value is multiple-nodes
• pkg_path: directory containing the installation packages; must be owned by gbase
• prefix: runtime directory; must be owned by gbase
• version: package version; only the trailing digits need adjusting (B114 for this package)
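
Before running the installer, a quick sanity check can catch a typo'd or incomplete deployment file. A hypothetical sketch, not part of the GBase tooling (the key list and default path are assumptions based on the file edited above):

```shell
#!/bin/sh
# check_yml greps a gbase.yml for the top-level sections and env keys that
# this guide's distributed deployment sets; returns nonzero if any is absent.
check_yml() {
    conf=$1
    missing=0
    for key in gha_server dcs gtm coordinator datanode \
               cluster_type pkg_path prefix version; do
        grep -q "${key}:" "$conf" 2>/dev/null || { echo "missing key: $key"; missing=1; }
    done
    return $missing
}

# Default path matches the location used earlier in this guide.
if check_yml "${1:-/home/gbase/gbase_package/gbase.yml}"; then
    echo "deployment file looks complete"
else
    echo "deployment file incomplete or unreadable"
fi
```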

Run the Installation Script

[gbase@gbase8c1 ~]$ cd /home/gbase/gbase_package/script
[gbase@gbase8c1 script]$ 
[gbase@gbase8c1 script]$ ./gha_ctl install -c gbase -p /home/gbase/gbase_package

{
    "ret":0,
    "msg":"Success"
}

"msg":"Success" indicates that the deployment succeeded.

# -c specifies the cluster name; optional, default gbase; it must match the configuration file name (gbase.yml)
# -p specifies the directory holding the configuration file; default /tmp

Cluster Management

Check Cluster Status

[root@gbase8c1 gbase_package]# su - gbase
Last login: Fri Mar  7 21:35:48 CST 2025 on pts/0
[gbase@gbase8c1 ~]$ gha_ctl monitor -l http://10.10.10.34:2379
{
    "cluster": "gbase",
    "version": "V5_S3.0.0B114",
    "server": [
        {
            "name": "gha_server1",
            "host": "10.10.10.34",
            "port": "20001",
            "state": "running",
            "isLeader": true
        }
    ],
    "gtm": [
        {
            "name": "gtm1",
            "host": "10.10.10.34",
            "port": "6666",
            "workDir": "/home/gbase/data/gtm/gtm1",
            "agentPort": "8001",
            "state": "running",
            "role": "primary",
            "agentHost": "10.10.10.34"
        }
    ],
    "coordinator": [
        {
            "name": "cn1",
            "host": "10.10.10.34",
            "port": "5432",
            "workDir": "/home/gbase/data/coord/cn1",
            "agentPort": "8003",
            "state": "running",
            "role": "primary",
            "agentHost": "10.10.10.34",
            "central": true
        }
    ],
    "datanode": {
        "dn1": [
            {
                "name": "dn1_1",
                "host": "10.10.10.35",
                "port": "15432",
                "workDir": "/home/gbase/data/dn1/dn1_1",
                "agentPort": "8005",
                "state": "running",
                "role": "primary",
                "agentHost": "10.10.10.35"
            }
        ],
        "dn2": [
            {
                "name": "dn2_1",
                "host": "10.10.10.36",
                "port": "20010",
                "workDir": "/home/gbase/data/dn2/dn2_1",
                "agentPort": "8007",
                "state": "running",
                "role": "primary",
                "agentHost": "10.10.10.36"
            }
        ]
    },
    "dcs": {
        "clusterState": "healthy",
        "members": [
            {
                "url": "http://10.10.10.36:2379",
                "id": "1ee8d2017f324082",
                "name": "node_2",
                "isLeader": false,
                "state": "healthy"
            },
            {
                "url": "http://10.10.10.34:2379",
                "id": "9c5365ebdda29888",
                "name": "node_0",
                "isLeader": true,
                "state": "healthy"
            },
            {
                "url": "http://10.10.10.35:2379",
                "id": "e0dea71e4a2e0936",
                "name": "node_1",
                "isLeader": false,
                "state": "healthy"
            }
        ]
    }
}

Or, in table form with -H:
[gbase@gbase8c1 ~]$ gha_ctl monitor -l http://10.10.10.34:2379 -H
+----+-------------+-------------+-------+---------+--------+
| No |     name    |     host    |  port |  state  | leader |
+----+-------------+-------------+-------+---------+--------+
| 0  | gha_server1 | 10.10.10.34 | 20001 | running |  True  |
+----+-------------+-------------+-------+---------+--------+
+----+------+-------------+------+---------------------------+---------+---------+
| No | name |     host    | port |          work_dir         |  state  |   role  |
+----+------+-------------+------+---------------------------+---------+---------+
| 0  | gtm1 | 10.10.10.34 | 6666 | /home/gbase/data/gtm/gtm1 | running | primary |
+----+------+-------------+------+---------------------------+---------+---------+
+----+------+-------------+------+----------------------------+---------+---------+
| No | name |     host    | port |          work_dir          |  state  |   role  |
+----+------+-------------+------+----------------------------+---------+---------+
| 0  | cn1  | 10.10.10.34 | 5432 | /home/gbase/data/coord/cn1 | running | primary |
+----+------+-------------+------+----------------------------+---------+---------+
+----+-------+-------+-------------+-------+----------------------------+---------+---------+
| No | group |  name |     host    |  port |          work_dir          |  state  |   role  |
+----+-------+-------+-------------+-------+----------------------------+---------+---------+
| 0  |  dn1  | dn1_1 | 10.10.10.35 | 15432 | /home/gbase/data/dn1/dn1_1 | running | primary |
| 1  |  dn2  | dn2_1 | 10.10.10.36 | 20010 | /home/gbase/data/dn2/dn2_1 | running | primary |
+----+-------+-------+-------------+-------+----------------------------+---------+---------+
+----+-------------------------+--------+---------+----------+
| No |           url           |  name  |  state  | isLeader |
+----+-------------------------+--------+---------+----------+
| 0  | http://10.10.10.36:2379 | node_2 | healthy |  False   |
| 1  | http://10.10.10.34:2379 | node_0 | healthy |   True   |
| 2  | http://10.10.10.35:2379 | node_1 | healthy |  False   |
+----+-------------------------+--------+---------+----------+

Stopping and Starting the Database

-- Stop the database service

[gbase@gbase8c1 ~]$ gha_ctl stop all -l http://10.10.10.34:2379
{
    "ret":0,
    "msg":"Success"
}

-- Start the database service

[gbase@gbase8c1 ~]$ gha_ctl start all -l http://10.10.10.34:2379
{
    "ret":0,
    "msg":"Success"
}
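
After a restart, it can take a moment for every component to come back. The polling loop below is an illustrative sketch (not part of the GBase tooling) built on the gha_ctl monitor output shown earlier: it waits until no component reports a state other than running/healthy. The command string and retry count are assumptions for this deployment.

```shell
#!/bin/sh
# wait_running polls a command that prints the monitor output and succeeds
# once every "state" field reads running or healthy.
wait_running() {
    cmd=$1                # command that prints the monitor output
    tries=${2:-10}        # number of 1-second polls before giving up
    i=0
    while [ "$i" -lt "$tries" ]; do
        out=$($cmd 2>/dev/null)
        if echo "$out" | grep -q '"state"' \
           && ! echo "$out" | grep '"state"' | grep -Eqv 'running|healthy'; then
            echo "cluster is up"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "cluster not fully running after $tries checks"
    return 1
}

# Typical use on the gha_server node:
# wait_running 'gha_ctl monitor -l http://10.10.10.34:2379' 30
```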

Original article by kirin; if you republish it, please credit the source: https://blog.ytso.com/tech/bigdata/318113.html
