This article walks through the process of setting up a Greenplum cluster. The steps are straightforward and practical; follow along to build a working environment.
Environment
This environment consists of four virtual machines: one master and three segment nodes, with the standby master placed on the third segment host (gps3). Hostnames: gpms, gps1, gps2, gps3.
Versions: RHEL 7.3 with Greenplum 5.16.
Preparation
--System parameters
cat <<EOF >>/etc/sysctl.conf
#add by xyy for greenplum 20181016
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 500 1024000 200 4096
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 10000
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
vm.overcommit_memory = 2
vm.swappiness = 10
vm.dirty_expire_centisecs = 500
vm.dirty_writeback_centisecs = 100
vm.dirty_background_ratio = 0
vm.dirty_ratio = 0
vm.dirty_background_bytes = 1610612736
vm.dirty_bytes = 4294967296
EOF

--Resource limits
vi /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
* soft core unlimited
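The shmmax/shmall values above are fixed numbers sized for a small test VM. A common rule of thumb is to derive them from physical RAM instead (roughly half of RAM); this is a sketch, assuming a Linux host with /proc/meminfo:

```shell
# Sketch: derive kernel.shmmax (bytes) and kernel.shmall (pages) from
# physical RAM rather than hard-coding them. Assumes Linux (/proc/meminfo).
page_size=$(getconf PAGE_SIZE)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shmmax=$((mem_kb * 1024 / 2))             # half of RAM, in bytes
shmall=$((mem_kb * 1024 / 2 / page_size)) # the same amount, in pages
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```

After appending values like these, run `sysctl -p` as root to load them without a reboot.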
Create the gpadmin user, directories, and hosts entries (on every node)
groupdel gpadmin
userdel gpadmin
groupadd -g 530 gpadmin
useradd -g 530 -u 530 -m -d /home/gpadmin -s /bin/bash gpadmin
chown -R gpadmin:gpadmin /home/gpadmin
passwd gpadmin
mkdir /opt/greenplum
chown -R gpadmin:gpadmin /opt/greenplum

--hosts
192.168.80.161 gpms
192.168.80.162 gps1
192.168.80.163 gps2
192.168.80.164 gps3
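The /etc/hosts lines can be generated from a single name:ip list, which avoids typos when the node count grows. A minimal sketch using the IPs and hostnames from above:

```shell
# Sketch: emit "/etc/hosts"-style lines from one name:ip list.
# Names and addresses are the ones used in this article.
set -- gpms:192.168.80.161 gps1:192.168.80.162 gps2:192.168.80.163 gps3:192.168.80.164
for pair in "$@"; do
  name=${pair%%:*}   # text before the colon
  ip=${pair#*:}      # text after the colon
  printf '%s %s\n' "$ip" "$name"
done
```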
Installing on the master node
su - gpadmin
--run the installer and choose /opt/greenplum/greenplum-db as the install path
./greenplum-db-5.16.0-rhel7-x86_64.bin
source /opt/greenplum/greenplum-db/greenplum_path.sh

[gpadmin@gptest conf]$ pwd
/home/gpadmin/conf
[gpadmin@gptest conf]$ cat hostlist
gpms
gps1
gps2
gps3
[gpadmin@gptest conf]$ cat seg_hosts
gps1
gps2
gps3
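Note that seg_hosts is simply hostlist minus the master, so it can be generated rather than maintained by hand. A sketch using scratch files in place of the real ones:

```shell
# Sketch: derive seg_hosts from hostlist by dropping the master entry.
# A temp file stands in for /home/gpadmin/conf/hostlist here.
hostlist=$(mktemp)
printf '%s\n' gpms gps1 gps2 gps3 > "$hostlist"
grep -v '^gpms$' "$hostlist"   # redirect this to seg_hosts in practice
rm -f "$hostlist"
```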
Setting up SSH trust, then packaging and distributing the install
--SSH trust between all hosts
gpssh-exkeys -f hostlist
--run commands on all hosts at once
gpssh -f hostlist
--package the install and copy it to the segment hosts
tar -cvf gp5.6.tar greenplum-db-5.16.0/
gpscp -f /home/gpadmin/conf/seg_hosts gp5.6.tar =:/opt/greenplum/
gpssh -f seg_hosts
cd /opt/gr*
tar -xvf gp5.6.tar
ln -s greenplum-db-5.16.0 greenplum-db
--create the data directories on all hosts
gpssh -f hostlist
mkdir -p /home/gpadmin/gpdata/gpmaster
mkdir -p /home/gpadmin/gpdata/gpdatap1
mkdir -p /home/gpadmin/gpdata/gpdatap2
mkdir -p /home/gpadmin/gpdata/gpdatam1
mkdir -p /home/gpadmin/gpdata/gpdatam2
--environment variables
echo "source /opt/greenplum/greenplum-db/greenplum_path.sh" >> /home/gpadmin/.bash_profile
echo "export MASTER_DATA_DIRECTORY=/home/gpadmin/gpdata/gpmaster/gpseg-1" >> /home/gpadmin/.bash_profile
echo "export PGPORT=2345" >> /home/gpadmin/.bash_profile
echo "export PGDATABASE=testdb" >> /home/gpadmin/.bash_profile
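Running the echo-append step twice leaves duplicate lines in .bash_profile. A small grep guard makes the step re-runnable; a sketch on a temp file standing in for /home/gpadmin/.bash_profile:

```shell
# Sketch: append a line to the profile only if it is not already there,
# so re-running the setup step is a no-op. Temp file used for illustration.
profile=$(mktemp)
append_once() {
  grep -qxF "$1" "$profile" || echo "$1" >> "$profile"
}
append_once 'export PGPORT=2345'
append_once 'export PGPORT=2345'   # second call does nothing
grep -c . "$profile"               # line count is still 1
rm -f "$profile"
```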
Initializing the database
cd /opt/greenplum/greenplum-db/docs/cli_help/gpconfigs
[gpadmin@gptest conf]$ vi gpinitsystem_config
[gpadmin@gptest conf]$ cat gpinitsystem_config | grep -v '#' | grep -v '^$'
ARRAY_NAME="Greenplum Data Platform"
#segment name prefix
SEG_PREFIX=gpseg
#primary base port
PORT_BASE=33000
#primary data directories
declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1 /home/gpadmin/gpdata/gpdatap2)
#master host
MASTER_HOSTNAME=gpms
#master data directory
MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster
MASTER_PORT=2345
TRUSTED_SHELL=/usr/bin/ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
#mirror base port
MIRROR_PORT_BASE=43000
#primary segment replication base port
REPLICATION_PORT_BASE=34000
#mirror segment replication base port
MIRROR_REPLICATION_PORT_BASE=44000
#mirror segment data directories
declare -a MIRROR_DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatam1 /home/gpadmin/gpdata/gpdatam2)

--initialize the cluster (gps3 becomes the standby master)
gpinitsystem -c gpinitsystem_config -h seg_hosts -s gps3 -S
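With two primary directories per host in DATA_DIRECTORY, gpinitsystem assigns consecutive ports upward from each base. A sketch of the expected per-host listen ports, using the bases from the config above (assuming the standard consecutive-port assignment):

```shell
# Sketch: print the ports each segment instance on one host listens on,
# given the base ports from gpinitsystem_config.
PORT_BASE=33000
MIRROR_PORT_BASE=43000
SEGS_PER_HOST=2            # two entries in DATA_DIRECTORY
i=0
while [ "$i" -lt "$SEGS_PER_HOST" ]; do
  echo "slot $i: primary port $((PORT_BASE + i)), mirror port $((MIRROR_PORT_BASE + i))"
  i=$((i + 1))
done
```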
Reference commands: creating databases and related operations
--create filespace/tablespace
select * from pg_filespace;
create tablespace tbs_siling filespace siling_fs;
select a.spcname,b.fsname from pg_tablespace a,pg_filespace b where a.spcfsoid=b.oid;

--create a database and a user, and grant privileges
create database testdb tablespace tbs_siling;
create user testuser password 'testuser';
grant all on database testdb to testuser;
select rolname,oid from pg_roles;

--set the user's default tablespace and grant access to it
alter user testuser set default_tablespace='tbs_siling';
grant all on tablespace tbs_siling to testuser;

--create a schema and grant access
create schema siling_mode;
grant all on schema siling_mode to testuser;

--start/stop the database
gpstart -a
gpstop -a

--remote connections
--change the password
alter role gpadmin with password 'gpadmin';
--pg_hba.conf entry, then reload the configuration
host all all 192.168.80.0/24 md5
gpstop -u
psql -h 192.168.80.161 -d testdb -p 2345

--Greenplum data is distributed across all segments. When a query runs, the master returns rows in the order it receives them, and the order in which each segment's rows arrive at the master is random, so the row order of an unsorted SELECT is also random.
select gp_segment_id, count(*) from test2020 group by gp_segment_id;

--cluster node status. mode: s = synchronized, r = resynchronizing, c = change tracking (not synchronized). status: u = up, d = down
select * from gp_segment_configuration;
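The one-letter mode/status codes from gp_segment_configuration are easy to misread in a wide result set; a tiny helper can expand them when post-processing query output (a sketch, using the letter meanings listed above):

```shell
# Sketch: expand the single-letter mode/status codes reported by
# gp_segment_configuration into readable labels.
decode() {
  case "$1" in
    s) echo "synchronized" ;;
    r) echo "resynchronizing" ;;
    c) echo "change tracking (not synchronized)" ;;
    u) echo "up" ;;
    d) echo "down" ;;
    *) echo "unknown" ;;
  esac
}
decode s   # synchronized
decode d   # down
```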
That completes the Greenplum cluster setup; try the steps in your own environment.
Original article by kepupublish. If republishing, please credit the source: https://blog.ytso.com/229815.html