Environment Preparation
Four virtual machines:
- 192.168.2.38 (manager node)
- 192.168.2.81 (worker node)
- 192.168.2.100 (worker node)
- 192.168.2.102 (worker node)
Time Synchronization
Run the following on every machine:
yum install -y ntp
# sync against ntp1.aliyun.com every day at 12:00 and write the time back to the hardware clock
cat <<EOF >> /var/spool/cron/root
00 12 * * * /usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w
EOF
## view the scheduled task
crontab -l
## run the sync manually once
/usr/sbin/ntpdate -u ntp1.aliyun.com && /usr/sbin/hwclock -w
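As a quick sanity check after the manual sync, you can compare the system clock with the hardware clock on each node (a minimal check using standard tools, not part of the original post):
date          # system clock
hwclock -r    # hardware clock; both should agree after ntpdate + hwclock -w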
Docker
Install Docker
curl -sSL https://get.daocloud.io/docker | sh
Start Docker
sudo systemctl start docker
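Optionally, enable Docker to start on boot and confirm the daemon is running before continuing (standard systemd/Docker commands, shown here as a quick check):
sudo systemctl enable docker
sudo systemctl status docker
docker version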
Setting Up the Swarm Cluster
Open the firewall ports (required by Swarm)
- On the manager node, open port 2377:
firewall-cmd --zone=public --add-port=2377/tcp --permanent
- On all nodes, open the following ports:
firewall-cmd --zone=public --add-port=7946/tcp --permanent
firewall-cmd --zone=public --add-port=7946/udp --permanent
firewall-cmd --zone=public --add-port=4789/tcp --permanent
firewall-cmd --zone=public --add-port=4789/udp --permanent
- On all nodes, reload the firewall and restart Docker:
firewall-cmd --reload
systemctl restart docker
- For convenience, you can simply disable the firewall instead.
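If you take that shortcut, on a firewalld-based system like the one used here it means running the following on every node (note this trades security for convenience):
systemctl stop firewalld
systemctl disable firewalld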
Create the Swarm
docker swarm init --advertise-addr your_manager_ip
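With 192.168.2.38 as the manager node from the list above, the concrete command is:
docker swarm init --advertise-addr 192.168.2.38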
View the worker join token
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-51b7t8whxn8j6mdjt5perjmec9u8qguxq8tern9nill737pra2-ejc5nw5f90oz6xldcbmrl2ztu 192.168.2.38:2377
[root@manager ~]#
Join the Swarm
Run the join command on each worker node:
docker swarm join --token SWMTKN-1-51b7t8whxn8j6mdjt5perjmec9u8qguxq8tern9nill737pra2-ejc5nw5f90oz6xldcbmrl2ztu 192.168.2.38:2377
# list the nodes
docker node ls
Service Constraints
Add labels
sudo docker node update --label-add redis1=true <manager node name>
sudo docker node update --label-add redis2=true <worker node name>
sudo docker node update --label-add redis3=true <worker node name>
sudo docker node update --label-add redis4=true <worker node name>
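The node names are the HOSTNAME values shown by docker node ls. To confirm a label was applied, you can inspect the node (the node name below is a placeholder):
docker node inspect <node-name> --format '{{ json .Spec.Labels }}'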
Single-Host Cluster
Drawback: all of the containers run on one machine, so if that machine goes down, the entire cluster goes down with it.
Create the containers
Tip: you could wrap the commands below in a startup script, but since this approach is rarely used, the script is omitted here.
docker create --name redis-node1 --net host -v /data/redis-data/node1:/data redis --cluster-enabled yes --cluster-config-file nodes-node-1.conf --port 6379
docker create --name redis-node2 --net host -v /data/redis-data/node2:/data redis --cluster-enabled yes --cluster-config-file nodes-node-2.conf --port 6380
docker create --name redis-node3 --net host -v /data/redis-data/node3:/data redis --cluster-enabled yes --cluster-config-file nodes-node-3.conf --port 6381
docker create --name redis-node4 --net host -v /data/redis-data/node4:/data redis --cluster-enabled yes --cluster-config-file nodes-node-4.conf --port 6382
docker create --name redis-node5 --net host -v /data/redis-data/node5:/data redis --cluster-enabled yes --cluster-config-file nodes-node-5.conf --port 6383
docker create --name redis-node6 --net host -v /data/redis-data/node6:/data redis --cluster-enabled yes --cluster-config-file nodes-node-6.conf --port 6384
Start the containers
docker start redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-node6
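Before creating the cluster, it is worth checking that all six containers are actually up:
docker ps --filter name=redis-node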
Enter a container and create the cluster
# enter one of the nodes
docker exec -it redis-node1 /bin/bash
# create the cluster
redis-cli --cluster create 192.168.2.38:6379 192.168.2.38:6380 192.168.2.38:6381 192.168.2.38:6382 192.168.2.38:6383 192.168.2.38:6384 --cluster-replicas 1
# --cluster-replicas 1 means a 1:1 ratio, i.e. one replica per master
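Once the create command finishes, the cluster can be verified from inside the same container:
# still inside redis-node1
redis-cli -p 6379 cluster info    # expect cluster_state:ok and cluster_known_nodes:6
redis-cli -p 6379 cluster nodes   # shows which replicas follow which masters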
Distributed Cluster
A Redis cluster needs at least three master nodes, so here we build a three-master, three-replica cluster. Since there are only four machines, the first three nodes are placed on the same machine in the compose file.
Deployment
On the manager node of the Swarm cluster, create the working directory and the compose file:
mkdir /root/redis-swarm
cd /root/redis-swarm
vi docker-compose.yml
docker-compose.yml
Notes:
- The first six services are the Redis nodes. The last service, redis-start, exists only to create the cluster with the redis-cli client; it stops automatically once the cluster has been built.
- redis-start has to wait for the six Redis nodes to come up before it can create the cluster, which is what the wait-for-it.sh script is for.
- Because redis-cli --cluster create does not support network aliases, a separate script, redis-start.sh, resolves the node addresses (a sketch of both scripts follows the compose file below).
These same files can also be used for a single-host deployment: just start it without Swarm and comment out the network driver line (driver: overlay) in docker-compose.yml.
version: '3.7'
services:
  redis-node1:
    image: redis
    hostname: redis-node1
    ports:
      - 6379:6379
    networks:
      - redis-swarm
    volumes:
      - "node1:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-1.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager
  redis-node2:
    image: redis
    hostname: redis-node2
    ports:
      - 6380:6379
    networks:
      - redis-swarm
    volumes:
      - "node2:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-2.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager
  redis-node3:
    image: redis
    hostname: redis-node3
    ports:
      - 6381:6379
    networks:
      - redis-swarm
    volumes:
      - "node3:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-3.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.role==manager
  redis-node4:
    image: redis
    hostname: redis-node4
    ports:
      - 6382:6379
    networks:
      - redis-swarm
    volumes:
      - "node4:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-4.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis2==true
  redis-node5:
    image: redis
    hostname: redis-node5
    ports:
      - 6383:6379
    networks:
      - redis-swarm
    volumes:
      - "node5:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-5.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis3==true
  redis-node6:
    image: redis
    hostname: redis-node6
    ports:
      - 6384:6379
    networks:
      - redis-swarm
    volumes:
      - "node6:/data"
    command: redis-server --cluster-enabled yes --cluster-config-file nodes-node-6.conf
    deploy:
      mode: replicated
      replicas: 1
      resources:
        limits:
          # cpus: '0.001'
          memory: 5120M
        reservations:
          # cpus: '0.001'
          memory: 512M
      placement:
        constraints:
          - node.labels.redis4==true
  redis-start:
    image: redis
    hostname: redis-start
    networks:
      - redis-swarm
    volumes:
      - "$PWD/start:/redis-start"
    depends_on:
      - redis-node1
      - redis-node2
      - redis-node3
      - redis-node4
      - redis-node5
      - redis-node6
    command: /bin/bash -c "chmod 777 /redis-start/redis-start.sh && chmod 777 /redis-start/wait-for-it.sh && /redis-start/redis-start.sh"
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 5
      placement:
        constraints:
          - node.role==manager
networks:
  redis-swarm:
    driver: overlay
volumes:
  node1:
  node2:
  node3:
  node4:
  node5:
  node6:
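The two helper scripts mounted at $PWD/start are referenced above but not listed in the post. Below is a minimal sketch of how they could be prepared on the manager node, assuming wait-for-it.sh is fetched from its public repository and that redis-start.sh simply resolves each service name to an IP before calling redis-cli --cluster create; the resolution logic and the --cluster-yes flag are assumptions, not the author's original script.
mkdir -p /root/redis-swarm/start
cd /root/redis-swarm/start

# wait-for-it.sh: the widely used helper script
curl -fsSL -o wait-for-it.sh https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh

# redis-start.sh (sketch): wait for all six nodes, resolve their overlay IPs,
# then create the three-master / three-replica cluster non-interactively
cat > redis-start.sh <<'EOF'
#!/bin/bash
ADDRS=""
for n in redis-node1 redis-node2 redis-node3 redis-node4 redis-node5 redis-node6; do
  /redis-start/wait-for-it.sh "$n:6379" --timeout=120 -- echo "$n is up"
  # in Swarm, tasks.<service> resolves to the task (container) IP rather than the service VIP;
  # outside Swarm fall back to the plain service name
  line=$( { getent hosts "tasks.$n" || getent hosts "$n"; } | head -n 1 )
  ADDRS="$ADDRS ${line%% *}:6379"
done
# --cluster-yes skips the interactive confirmation prompt
redis-cli --cluster create $ADDRS --cluster-replicas 1 --cluster-yes
EOF
chmod +x wait-for-it.sh redis-start.sh
With the scripts in place, deploy the stack from /root/redis-swarm with something like docker stack deploy -c docker-compose.yml redis (the stack name redis is arbitrary). docker service ls should then show the seven services, and redis-start stops on its own once the cluster has been created, as described in the notes above.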