28 - Setting Up a Keepalived + LVS + Nginx High-Availability Load-Balancing Cluster


Architecture diagram

[Figure: Keepalived + LVS + Nginx cluster architecture]

Setting up Keepalived + LVS

To build the master/backup architecture, create one more virtual machine at 192.168.247.139.

Install Keepalived on 138 and 139 and register it as a system service, but do not modify the configuration file yet.
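A minimal sketch of that install step, assuming CentOS 7 hosts where the keepalived package comes from the base repositories (adjust the package manager for other distributions):

# install Keepalived
yum install -y keepalived

# register it with systemd so it starts on boot
systemctl enable keepalived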

Edit the Keepalived configuration file on the master LVS (138):

vi /etc/keepalived/keepalived.conf

Configuration file:

! Configuration File for keepalived

global_defs {
   router_id LVS_138
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.150
    }
}

# IP + port for cluster access; the port matches Nginx, both are 80
virtual_server 192.168.247.150 80 {
    # health-check interval, in seconds
    delay_loop 6
    # load-balancing algorithm: round robin
    lb_algo rr
    # LVS forwarding mode: DR
    lb_kind DR
    # session persistence timeout, in seconds
    persistence_timeout 5
    # protocol
    protocol TCP

    # real servers behind the balancer, i.e. the actual IPs of the Nginx nodes
    real_server 192.168.247.136 80 {
        weight 1
        # health check
        TCP_CHECK {
            # port to check: 80
            connect_port 80
            # connect timeout: 2s
            connect_timeout 2
            # number of retries: 2
            nb_get_retry 2
            # delay between retries: 3s
            delay_before_retry 3
        }
    }
    real_server 192.168.247.137 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 2
            nb_get_retry 2
            delay_before_retry 3
        }

    }
}
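Reminder: DR mode only works if every Nginx real server holds the VIP on its loopback interface and suppresses ARP replies for it. A sketch of that real-server setup, assuming net-tools (ifconfig) is installed and that 136/137 were configured this way in the earlier LVS section:

# on each real server (136 and 137): bind the VIP to loopback
ifconfig lo:1 192.168.247.150 netmask 255.255.255.255 broadcast 192.168.247.150

# make sure only the director answers ARP requests for the VIP
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce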

Clear the LVS rules left over from the previous manual setup, so that Keepalived can manage them itself:

ipvsadm -C

Restart Keepalived:

systemctl restart keepalived
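After the restart, you can confirm the VIP is bound to the master's interface (a quick check, assuming the ens33 interface name from the config):

ip addr show ens33 | grep 192.168.247.150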

Check the LVS forwarding table:

[root@localhost etc]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.247.150:80 rr persistent 5
  -> 192.168.247.136:80           Route   1      0          0
  -> 192.168.247.137:80           Route   1      0          0
[root@localhost etc]#

The virtual server configuration is now in place.

Edit the Keepalived configuration file on the backup LVS (139):

vi /etc/keepalived/keepalived.conf

Configuration file:

! Configuration File for keepalived
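! identical to the master config except for router_id, state and priority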

global_defs {
   router_id LVS_139
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.247.150
    }
}

virtual_server 192.168.247.150 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 5
    protocol TCP

    real_server 192.168.247.136 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 2
            nb_get_retry 2
            delay_before_retry 3
        }
    }
    real_server 192.168.247.137 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 2
            nb_get_retry 2
            delay_before_retry 3
        }

    }
}

Restart Keepalived:

systemctl restart keepalived

Check the LVS forwarding table:

[root@localhost keepalived]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.247.150:80 rr persistent 5
  -> 192.168.247.136:80           Route   1      0          0
  -> 192.168.247.137:80           Route   1      0          0
[root@localhost keepalived]#

The same configuration appears on the backup as well.
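Note that the backup builds the same IPVS table, but it must not hold the VIP while the master is alive. A quick sanity check on 139 (again assuming ens33); 192.168.247.150 should not be listed:

ip addr show ens33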

Testing high availability

Access test

[Screenshot: browsing to http://192.168.247.150]

The VIP is reachable.
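The same test from a shell, as a small sketch assuming curl is installed on a client in the 192.168.247.0/24 network. Note that because of persistence_timeout 5, repeated requests from one client stick to the same real server for 5 seconds:

# three requests in a row; all should hit the same backend within the persistence window
for i in 1 2 3; do curl -s http://192.168.247.150; sleep 1; done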

Test: stopping Keepalived

Manually stop Keepalived on the master LVS:

systemctl stop keepalived

Visit the VIP again:

[Screenshot: the page is still served after the master goes down]

The VIP has automatically failed over to the backup LVS.
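To watch the transition from the backup's side, assuming logs go to the systemd journal, tail Keepalived's log; a line like "(VI_1) Entering MASTER STATE" should appear when it takes over the VIP:

journalctl -u keepalived -f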

Now restore the master LVS:

systemctl start keepalived

[Screenshot: the VIP bound back to the master LVS]

The VIP 192.168.247.150 rebinds to the master LVS node: with default VRRP preemption, the master's higher priority (100 vs 80) wins the election again.

Test: stopping Nginx

[Screenshot: page currently served by the master Nginx]

At the moment, requests are served by the master Nginx.

Manually stop Nginx on 136 (run from the Nginx sbin directory):

./nginx -s quit

Test access again:

[Screenshot: page now served by the backup Nginx]

Only the backup Nginx (137) is reachable now.

[Screenshot: requests keep landing on the 137 node]

The LVS persistence timeout (5 s) has long expired, yet requests still go straight to the 137 node. Check the forwarding rules:

[Screenshot: ipvsadm -Ln output showing only the 137 entry]

Because 136 has been stopped, the health check removed it, and the forwarding table now holds only the 137 node.

Manually restore Nginx on 136:

./nginx

[Screenshot: ipvsadm -Ln output with 136 listed again]

The forwarding table picks the 136 node back up: the health check works.
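To observe TCP_CHECK removing and re-adding real servers in real time, a simple helper, assuming the watch utility is available:

watch -n 1 ipvsadm -Ln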

With that, the Keepalived + LVS + Nginx high-availability load-balanced cluster is complete.
