K8s multi-node deployment ——> load balancing with Nginx ——> dashboard UI
Important: before starting this lab, a single-master k8s cluster must already be deployed.
See my previous post: https://blog.csdn.net/JarryZho/article/details/104193913
Environment:
Related packages and documents:
Link: https://pan.baidu.com/s/1l4vVCkZ03la-VpIFXSz1dA
Extraction code: rg99
Load balancing with Nginx:
lb1: 192.168.18.147/24 mini-2
lb2: 192.168.18.133/24 mini-3
Master nodes:
master1: 192.168.18.128/24 CentOS 7-3
master2: 192.168.18.132/24 mini-1
Node (worker) nodes:
node1: 192.168.18.148/24 CentOS 7-4
node2: 192.168.18.145/24 CentOS 7-5
VRRP floating address (VIP): 192.168.18.100
Multi-master cluster architecture diagram:
—— Deploying master2 ——
Step 1: first stop the firewall service on master2 (and set SELinux to permissive)
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
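Note: setenforce 0 only disables SELinux enforcement until the next reboot. If you want the change to persist, something like the following can be used (an optional extra step, not part of the original procedure; it assumes the default SELINUX=enforcing line in the config file):
[root@master2 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   #make the change survive reboots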
Step 2: on master1, copy the /opt/kubernetes directory to master2
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.132:/opt
The authenticity of host '192.168.18.132 (192.168.18.132)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.132' (ECDSA) to the list of known hosts.
root@192.168.18.132's password:
token.csv 100% 84 90.2KB/s 00:00
kube-apiserver 100% 934 960.7KB/s 00:00
kube-scheduler 100% 94 109.4KB/s 00:00
kube-controller-manager 100% 483 648.6KB/s 00:00
kube-apiserver 100% 184MB 82.9MB/s 00:02
kubectl 100% 55MB 81.5MB/s 00:00
kube-controller-manager 100% 155MB 70.6MB/s 00:02
kube-scheduler 100% 55MB 77.4MB/s 00:00
ca-key.pem 100% 1675 1.2MB/s 00:00
ca.pem 100% 1359 1.5MB/s 00:00
server-key.pem 100% 1675 1.2MB/s 00:00
server.pem 100% 1643 1.7MB/s 00:00
Step 3: copy the three component unit files kube-apiserver.service, kube-controller-manager.service and kube-scheduler.service from master1 to master2
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.132:/usr/lib/systemd/system/
root@192.168.18.132's password:
kube-apiserver.service 100% 282 286.6KB/s 00:00
kube-controller-manager.service 100% 317 223.9KB/s 00:00
kube-scheduler.service 100% 281 362.4KB/s 00:00
Step 4: on master2, change the IP addresses in the kube-apiserver configuration file
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver kube-controller-manager kube-scheduler token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.132 \
7 --advertise-address=192.168.18.132 \
#the IP addresses on lines 5 and 7 must be changed to master2's address
#when done, press Esc to leave insert mode, then type :wq to save and quit
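If you prefer a non-interactive edit, roughly the following sed sketch makes the same substitution; it assumes the flag names are exactly as copied from master1 and only rewrites the IP portion of the two flags:
[root@master2 cfg]# cp kube-apiserver kube-apiserver.bak    #keep a backup first
[root@master2 cfg]# sed -i -e 's#--bind-address=[0-9.]*#--bind-address=192.168.18.132#' \
                           -e 's#--advertise-address=[0-9.]*#--advertise-address=192.168.18.132#' kube-apiserver
[root@master2 cfg]# grep -n 'address' kube-apiserver        #both flags should now show 192.168.18.132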
Step 5: copy the existing etcd certificates from master1 for master2 to use
Important: master2 must have the etcd certificates, otherwise the apiserver service will not start
[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/
root@192.168.18.132's password:
etcd 100% 516 535.5KB/s 00:00
etcd 100% 18MB 90.6MB/s 00:00
etcdctl 100% 15MB 80.5MB/s 00:00
ca-key.pem 100% 1675 1.4MB/s 00:00
ca.pem 100% 1265 411.6KB/s 00:00
server-key.pem 100% 1679 2.0MB/s 00:00
server.pem 100% 1338 429.6KB/s 00:00
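As a quick sanity check on master2 (assuming the same /opt/etcd layout as on master1, i.e. cfg/, bin/ and ssl/ subdirectories), list what was copied over:
[root@master2 ~]# ls -R /opt/etcd
#the ssl/ subdirectory should contain ca.pem, server.pem and their keys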
Step 6: start the three component services on master2
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:17:07 CST; 58min ago
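The three start/enable pairs above can also be written as one short loop; this is only an equivalent shorthand for the commands already shown:
[root@master2 cfg]# for svc in kube-apiserver kube-controller-manager kube-scheduler; do systemctl start $svc && systemctl enable $svc; done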
Step 7: add the environment variable and make it take effect
[root@master2 cfg]# vim /etc/profile
#append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.18.145 Ready <none> 21h v1.12.3
192.168.18.148 Ready <none> 22h v1.12.3
#node1 and node2 are both visible, so they have joined the cluster
master2 is now fully deployed
—— Nginx load-balancer deployment ——
Note: nginx is used here for load balancing. Since version 1.9, nginx also supports layer-4 forwarding (load balancing) through the new stream module.
Multi-master principle:
Unlike the single-node setup, the key point of a multi-master cluster is that everything must point at one central address. When building the single-master cluster we already defined the VIP (192.168.18.100) and wrote it into the k8s-cert.sh script. The VIP fronts the apiserver, and both masters open their ports to accept apiserver requests from the nodes. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which then schedules it and forwards it to one of the masters, and that master issues the certificate to the node.
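A useful sanity check here (assuming the apiserver certificate generated by k8s-cert.sh still sits at /opt/kubernetes/ssl/server.pem, as in the single-master setup) is to confirm that the VIP is among the certificate's Subject Alternative Names; if it is missing, TLS connections through 192.168.18.100 will be rejected:
[root@master1 ~]# openssl x509 -in /opt/kubernetes/ssl/server.pem -noout -text | grep -A1 'Subject Alternative Name'
#the output should include IP Address:192.168.18.100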
Step 1: upload the two files keepalived.conf and nginx.sh into root's home directory on both lb1 and lb2
`lb1`
[root@lb1 ~]# ls
anaconda-ks.cfg keepalived.conf 公共 视频 文档 音乐
initial-setup-ks.cfg nginx.sh 模板 图片 下载 桌面
`lb2`
[root@lb2 ~]# ls
anaconda-ks.cfg keepalived.conf 公共 视频 文档 音乐
initial-setup-ks.cfg nginx.sh 模板 图片 下载 桌面
Step 2: on lb1 (192.168.18.147)
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository list`
[root@lb1 ~]# yum list
`Install nginx`
[root@lb1 ~]# yum install nginx -y
[root@lb1 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.18.128:6443; #master1's IP address
server 192.168.18.132:6443; #master2's IP address
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html index.html
[root@lb1 html]# vim index.html
14 <h2>Welcome to master nginx!</h2> #add "master" on line 14 to tell this page apart
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb1 ~]# systemctl start nginx
Verify in a browser: visiting 192.168.18.147 should bring up the master nginx page
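Besides the browser test, you can also confirm from the shell that nginx is listening on both port 80 (the test page) and port 6443 (the stream proxy); this assumes the stream block above was saved before nginx was started:
[root@lb1 ~]# ss -lntp | grep nginx
#expect LISTEN sockets on 0.0.0.0:80 and 0.0.0.0:6443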
Deploy the keepalived service
[root@lb1 html]# yum install keepalived -y
`Modify the configuration file`
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite ‘/etc/keepalived/keepalived.conf’? yes
#overwrite the default configuration installed with the package with the keepalived.conf uploaded earlier
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh" #line 18: change the directory to /etc/nginx/; the script itself is written later
23 interface ens33 #change eth0 to ens33; check the interface name with ifconfig
24 virtual_router_id 51 #VRRP router ID; it must be unique per instance
25 priority 100 #priority; the backup server will use 90
31 virtual_ipaddress {
32 192.168.18.100/24 #set the VIP to the previously chosen 192.168.18.100
#delete everything from line 38 onward
#when done, press Esc to leave insert mode, then type :wq to save and quit
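For reference, after these edits the relevant part of /etc/keepalived/keepalived.conf on lb1 should look roughly like the sketch below. Only the values edited above come from this walkthrough; advert_int, the authentication block and the vrrp_script/track_script wiring are assumed to come from the uploaded template, so verify them against your own file:
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER                 # BACKUP on lb2
    interface ens33
    virtual_router_id 51
    priority 100                 # 90 on lb2
    advert_int 1                 # assumed template default
    authentication {             # assumed template default
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.18.100/24
    }
    track_script {
        check_nginx
    }
}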
`Write the health-check script`
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the running nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, stop the keepalived service
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh #the script is now executable (shown in green)
[root@lb1 ~]# systemctl start keepalived
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
valid_lft 1370sec preferred_lft 1370sec
inet `192.168.18.100/24` scope global secondary ens33 #the floating address is currently on lb1
valid_lft forever preferred_lft forever
inet6 fe80::1cb1:b734:7f72:576f/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
Step 3: on lb2 (192.168.18.133)
[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0
[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository list`
[root@lb2 ~]# yum list
`Install nginx`
[root@lb2 ~]# yum install nginx -y
[root@lb2 ~]# vim /etc/nginx/nginx.conf
#insert the following below line 12
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
server 192.168.18.128:6443; #master1's IP address
server 192.168.18.132:6443; #master2's IP address
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 <h2>Welcome to backup nginx!</h2> #add "backup" on line 14 to tell this page apart
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb2 ~]# systemctl start nginx
Verify in a browser: visiting 192.168.18.133 should bring up the backup nginx page
Deploy the keepalived service
[root@lb2 ~]# yum install keepalived -y
`Modify the configuration file`
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite ‘/etc/keepalived/keepalived.conf’? yes
#overwrite the default configuration installed with the package with the keepalived.conf uploaded earlier
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh" #line 18: change the directory to /etc/nginx/; the script itself is written later
22 state BACKUP #line 22: change the role from MASTER to BACKUP
23 interface ens33 #change eth0 to ens33
24 virtual_router_id 51 #VRRP router ID; it must be unique per instance
25 priority 90 #priority; 90 on the backup server
31 virtual_ipaddress {
32 192.168.18.100/24 #set the VIP to the previously chosen 192.168.18.100
#delete everything from line 38 onward
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Write the health-check script`
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #count the running nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#if the count is 0, stop the keepalived service
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh #the script is now executable (shown in green)
[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
valid_lft 958sec preferred_lft 958sec
inet6 fe80::578f:4368:6a2c:80d7/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
#192.168.18.100 is not present here because the address is currently on lb1 (the MASTER)
Step 4: verify that the floating address fails over
`Stop the nginx service on lb1`
[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since 五 2020-02-07 12:16:39 CST; 1min 40s ago
#nginx is now stopped
`Check whether the keepalived service was stopped along with it`
[root@lb1 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: inactive (dead)
#keepalived has been stopped as well, which means the check_nginx.sh script ran successfully
[root@lb1 ~]# ps -ef |grep nginx |egrep -cv "grep|$$"
0
#the count is 0, so keepalived should be stopped
`Check whether the floating address is still present on lb1`
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
valid_lft 1771sec preferred_lft 1771sec
inet6 fe80::1cb1:b734:7f72:576f/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
#the floating address 192.168.18.100 has disappeared; if the hot standby works, it should have moved to lb2
`Now check lb2 for the floating address`
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
valid_lft 1656sec preferred_lft 1656sec
inet 192.168.18.100/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::578f:4368:6a2c:80d7/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
#the floating address 192.168.18.100 is now on lb2, so the active/standby hot backup works
Step 5: recovery
`Start nginx and keepalived again on lb1`
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived
`The floating address comes back to lb1`
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
valid_lft 1051sec preferred_lft 1051sec
inet 192.168.18.100/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::1cb1:b734:7f72:576f/64 scope link
valid_lft forever preferred_lft forever
inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
#and the floating address disappears from lb2 again
Step 6: from the host machine, use cmd to test whether the floating address is reachable
C:\Users\zhn>ping 192.168.18.100
Pinging 192.168.18.100 with 32 bytes of data:
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time=1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Ping statistics for 192.168.18.100:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms
#the ping succeeds, so the virtual IP is reachable
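Beyond ICMP, a quick way to check that the VIP actually proxies to the apiservers is to hit the Kubernetes version endpoint through it (a sketch; run it from any machine that can reach 192.168.18.100, and an Unauthorized JSON reply is acceptable here, since it still proves the TCP path through nginx works):
[root@lb1 ~]# curl -k https://192.168.18.100:6443/version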
Step 7: visiting 192.168.18.100 from the host machine should show the master nginx page we set up earlier, i.e. lb1
Step 8: modify the node configuration files so that they all point at the VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)
node1:
[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Self-check right after the replacement`
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig: server: https://192.168.18.100:6443
kubelet.kubeconfig: server: https://192.168.18.100:6443
kube-proxy.kubeconfig: server: https://192.168.18.100:6443
[root@node1 cfg]# systemctl restart kubelet.service
[root@node1 cfg]# systemctl restart kube-proxy.service
node2:
[root@node2 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5 server: https://192.168.18.100:6443 #line 5: change to the VIP address
#when done, press Esc to leave insert mode, then type :wq to save and quit
`Self-check right after the replacement`
[root@node2 ~]# cd /opt/kubernetes/cfg/
[root@node2 cfg]# grep 100 *
bootstrap.kubeconfig: server: https://192.168.18.100:6443
kubelet.kubeconfig: server: https://192.168.18.100:6443
kube-proxy.kubeconfig: server: https://192.168.18.100:6443
[root@node2 cfg]# systemctl restart kubelet.service
[root@node2 cfg]# systemctl restart kube-proxy.service
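Editing each file in vim works, but the same change can be scripted. A hedged sketch, assuming the server lines still point at master1's address 192.168.18.128 before the edit:
[root@node2 cfg]# sed -i 's#server: https://192.168.18.128:6443#server: https://192.168.18.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
[root@node2 cfg]# grep 'server:' *.kubeconfig    #all three should now show https://192.168.18.100:6443
[root@node2 cfg]# systemctl restart kubelet.service kube-proxy.service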
Step 9: on lb1, check the nginx k8s access log
[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.18.145 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.145 192.168.18.132:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.148 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.148 192.168.18.132:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
Step 10: on master1
`Test creating a pod`
[root@master1 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
`Check the status`
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-7hdfj 0/1 ContainerCreating 0 32s
#status ContainerCreating means the pod is still being created
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-7hdfj 1/1 Running 0 73s
#status Running means the pod has been created and is running
`Note: a log-permission issue`
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7hdfj)
#the logs cannot be viewed yet; permission has to be granted first
`Bind the cluster's anonymous user to the administrator role`
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj #no error is reported any more
`Check the pod network`
[root@master1 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-7hdfj 1/1 Running 0 20m 172.17.32.2 192.168.18.148 <none>
The pod can be reached directly from node1, the node on the corresponding network segment
[root@node1 ~]# curl 172.17.32.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h2>Welcome to nginx!</h2>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
#this is the default nginx page served from inside the container
The access generates a log entry, so back on master1 we can view the pod's log
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
#here you can see the access from node1 coming through the gateway address (172.17.32.1)
—— Creating the dashboard UI ——
Create the dashboard working directory on master1
[root@master1 ~]# cd k8s/
[root@master1 k8s]# mkdir dashboard
[root@master1 k8s]# cd dashboard/
#upload the dashboard page files into this directory
`The dashboard yaml files are now in place`
[root@master1 dashboard]# ls
dashboard-configmap.yaml dashboard-rbac.yaml dashboard-service.yaml
dashboard-controller.yaml dashboard-secret.yaml k8s-admin.yaml
`Create the resources; the order matters`
[root@master1 dashboard]# kubectl create -f dashboard-rbac.yaml #authorize access to the API
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@master1 dashboard]# kubectl create -f dashboard-secret.yaml #the secrets (certificates and keys)
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@master1 dashboard]# kubectl create -f dashboard-configmap.yaml #the application configuration
configmap/kubernetes-dashboard-settings created
[root@master1 dashboard]# kubectl create -f dashboard-controller.yaml #the controller
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@master1 dashboard]# kubectl create -f dashboard-service.yaml #the service that exposes it for access
service/kubernetes-dashboard created
`After creation, check them in the dedicated kube-system namespace`
[root@master1 dashboard]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-65f974f565-9qs8j 1/1 Running 0 3m27s
`Check how to access the dashboard`
[root@master1 dashboard]# kubectl get pods,svc -n kube-system
NAME READY STATUS RESTARTS AGE
pod/kubernetes-dashboard-65f974f565-9qs8j 1/1 Running 0 4m21s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes-dashboard NodePort 10.0.0.169 <none> 443:30001/TCP 4m15s
Verification: in a browser, enter the node IP with the NodePort, e.g. https://192.168.18.148:30001, to reach the dashboard:
Workaround for the problem that Google Chrome cannot open the page:
`On master1:`
[root@master1 dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
"CN": "Dashboard",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "NanJing",
"ST": "NanJing"
}
]
}
EOF
K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
#when done, press Esc to leave insert mode, then type :wq to save and quit
[root@master1 dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
2020/02/07 16:47:49 [INFO] generate received request
2020/02/07 16:47:49 [INFO] received CSR
2020/02/07 16:47:49 [INFO] generating key: rsa-2048
2020/02/07 16:47:49 [INFO] encoded CSR
2020/02/07 16:47:49 [INFO] signed certificate with serial number 612466244367800695250627555980294380133655299692
2020/02/07 16:47:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
secret "kubernetes-dashboard-certs" deleted
secret/kubernetes-dashboard-certs created
[root@master1 dashboard]# vim dashboard-controller.yaml
45 args:
46 # PLATFORM-SPECIFIC ARGS HERE
47 - --auto-generate-certificates
#insert the following below line 47
48 - --tls-key-file=dashboard-key.pem
49 - --tls-cert-file=dashboard.pem
#when done, press Esc to leave insert mode, then type :wq to save and quit
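For reference, the resulting args section of the dashboard container in dashboard-controller.yaml should look roughly like this (surrounding fields omitted):
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem
          - --tls-cert-file=dashboard.pem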
`Redeploy`
[root@master1 dashboard]# kubectl apply -f dashboard-controller.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
serviceaccount/kubernetes-dashboard configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/kubernetes-dashboard configured
#the browser will now offer: Proceed to 192.168.18.148 (unsafe)
`Generate the login token`
[root@master1 dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
`Record the secret name`
[root@master1 dashboard]# kubectl get secret -n kube-system
NAME TYPE DATA AGE
dashboard-admin-token-l9z5f kubernetes.io/service-account-token 3 30s
#dashboard-admin-token-l9z5f is the secret name used below to view the token
default-token-8hwtl kubernetes.io/service-account-token 3 2d3h
kubernetes-dashboard-certs Opaque 11 11m
kubernetes-dashboard-key-holder Opaque 2 26m
kubernetes-dashboard-token-crqvs kubernetes.io/service-account-token 3 25m
`View the token`
[root@master1 dashboard]# kubectl describe secret dashboard-admin-token-l9z5f -n kube-system
Name: dashboard-admin-token-l9z5f
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 115a70a5-4988-11ea-b617-000c2986f9b2
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbDl6NWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTE1YTcwYTUtNDk4OC0xMWVhLWI2MTctMDAwYzI5ODZmOWIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DdqS8xHxQYUw68NpqR1XIqQRgOFS3nsrfhjPe1pdqbt6PepAf1pOaDYTJ2cGtbA89J4v0go-6ZWc1BiwidMcthVv_LgXD9cD_5RXN_GoYqsEFFFgkzdyG0y4_BSowMCheS9tGCzuo-O-w_U5gPz3LGTwMRPyRbfEVDaS3Dign_b8SASD_56WkHkSGecI42t1Zct5h2Mnsam_qPhpfgMCzwxQ8l8_8XK6t5NK6orSwL9ozAmX5XGR9j4EL06OKy6al5hAHoB1k0srqT_mcj8Lngt7iq6VPuLVVAF7azAuItlL471VR5EMfvSCRrUG2nPiv44vjQPghnRYXMWS71_B5w
ca.crt: 1359 bytes
namespace: 11 bytes
#the whole token field is the token we need to copy
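Because the suffix of the secret name (l9z5f here) is generated randomly, a small one-liner can look the secret up and print only the token; a sketch, assuming the dashboard-admin service account created by k8s-admin.yaml:
[root@master1 dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | awk '/dashboard-admin-token/{print $1}') | awk '/^token:/{print $2}'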
Paste the token into the login page to reach the dashboard UI:
That is the complete multi-node K8s deployment, all the way through to the web UI!