Docker Network Connection Modes

Introduction to Network Modes

Docker networking supports 5 network modes:

  • none
  • bridge
  • host
  • container
  • network-name

Example: view the default networks

[root@ubuntu1804 ~]#docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
fe08e6d23c4c        bridge              bridge              local
cb64aa83626c        host                host                local
10619d45dcd4        none                null                local

Specifying the Network Mode

Newly created containers use bridge mode by default. When creating a container, the docker run command uses the following options to specify the network mode:

Format

docker run --network <mode>
docker run --net=<mode>

<mode> can be one of the following values:
none
bridge
host
container:<container name or container ID>
<custom network name>
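
For example, a minimal sketch of specifying the mode at run time (the alpine image and the container name c1 are only assumed placeholders):

#attach a new container to the host's network stack
docker run -it --rm --network host alpine sh
#share the network namespace of an existing container named c1
docker run -it --rm --network container:c1 alpine sh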

Bridge Network Mode

This is Docker's default mode: if no mode is specified, bridge mode is used, and it is also the most commonly used mode. Each container created in this mode is assigned its own IP address and other network information, and is connected to a virtual bridge through which it communicates with the outside world.

Containers in this mode can communicate with external networks: outbound access goes through SNAT, and DNAT can be configured so that external hosts can reach the containers, which is why this mode is also called NAT mode.

The host must have the ip_forward feature enabled.
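
A quick way to check and enable it (standard sysctl commands; persisting the setting in /etc/sysctl.conf is optional):

#check whether packet forwarding is enabled (1 = on)
cat /proc/sys/net/ipv4/ip_forward
#enable it for the running kernel
sysctl -w net.ipv4.ip_forward=1
#optionally make it persistent across reboots
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf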

Characteristics of the bridge network mode

- Network resource isolation: containers on different hosts cannot communicate directly; each uses its own independent network
- No manual configuration required: containers automatically get an IP address from 172.17.0.0/16 by default, and this range can be changed
- Can access external networks: outbound traffic leaves through the host's physical NIC via SNAT
- External hosts cannot access containers directly: DNAT (port publishing) can be configured to accept access from outside, as shown in the sketch after this list
- Lower performance: NAT translation adds extra overhead
- Tedious port management: every published container port must be mapped to a unique host port manually, otherwise port conflicts occur
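
A minimal sketch of publishing a container port through DNAT (the nginx image and the name web-dnat are only assumed examples; any service image works):

#publish container port 80 on host port 8080; docker adds the DNAT rule automatically
docker run -d --name web-dnat -p 8080:80 nginx
#the DNAT rule shows up in the DOCKER chain of the nat table
iptables -t nat -vnL DOCKER
#external hosts can now reach the container via <host IP>:8080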

Example: inspect the bridge network

[root@ubuntu1804 ~]#docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "fe08e6d23c4c9de00bdab479446f136c09537a1551aa62ff2c95f8cfcabd6357",
        "Created": "2020-01-31T16:11:32.718471804+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cdb5173003f52033c7c8183994cf763d2a64ff39c431431402fd8dedf4727393": {
                "Name": "server1",
                "EndpointID": "6977fb6f74b75014513c34296f1e23ff0197f81f3209bbf7fcd39ba8e9f54c0d",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@ubuntu1804 ~]#

Example: the host's network state

[root@ubuntu1804 ~]#cat /proc/sys/net/ipv4/ip_forward
1
[root@ubuntu1804 ~]#iptables -vnL -t nat
Chain PREROUTING (policy ACCEPT 245 packets, 29850 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   11   636 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 103 packets, 20705 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 144 packets, 10324 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 158 packets, 11500 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  125  7831 MASQUERADE  all  --  *      !docker0  172.17.0.0/16        0.0.0.0/0           

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    2   168 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0 

Example: accessing external networks via SNAT through the host's physical NIC

#Set up an httpd server on another host
[root@centos7 ~]#systemctl is-active httpd
active

#Start a container; bridge is the default network mode
[root@ubuntu1804 ~]#docker run -it  --rm  alpine:3.11 sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
166: eth0@if167: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
#The container can reach other hosts
/ # ping 10.0.0.7
PING 10.0.0.7 (10.0.0.7): 56 data bytes
64 bytes from 10.0.0.7: seq=0 ttl=63 time=0.764 ms
64 bytes from 10.0.0.7: seq=1 ttl=63 time=1.147 ms
^C
--- 10.0.0.7 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.764/0.955/1.147 ms
/ # ping www.baidu.com
PING www.baidu.com (61.135.169.125): 56 data bytes
64 bytes from 61.135.169.125: seq=0 ttl=127 time=5.182 ms
^C
--- www.baidu.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 5.182/5.182/5.182 ms

/ # traceroute 10.0.0.7
traceroute to 10.0.0.7 (10.0.0.7), 30 hops max, 46 byte packets
 1  172.17.0.1 (172.17.0.1)  0.008 ms  0.008 ms  0.007 ms
 2  10.0.0.7 (10.0.0.7)  0.255 ms  0.510 ms  0.798 ms
/ # wget -qO - 10.0.0.7
Website on 10.0.0.7
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.17.0.1      0.0.0.0         UG    0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

[root@centos7 ~]#curl 127.0.0.1
Website on 10.0.0.7

[root@centos7 ~]#tail  /var/log/httpd/access_log 
127.0.0.1 - - [01/Feb/2020:19:31:16 +0800] "GET / HTTP/1.1" 200 20 "-" "curl/7.29.0"
10.0.0.100 - - [01/Feb/2020:19:31:21 +0800] "GET / HTTP/1.1" 200 20 "-" "Wget"

Example: changing the default bridge subnet, method 1

[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6b:54d3/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e0:ef:72:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e0ff:feef:7205/64 scope link 
       valid_lft forever preferred_lft forever

[root@ubuntu1804 ~]#docker run -it --rm  alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
[root@ubuntu1804 ~]#vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock  --bip=10.100.0.1/24 
[root@ubuntu1804 ~]#systemctl daemon-reload
[root@ubuntu1804 ~]#systemctl restart docker
[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.101/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6b:54d3/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e0:ef:72:05 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.1/24 brd 10.100.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e0ff:feef:7205/64 scope link 
       valid_lft forever preferred_lft forever
[root@ubuntu1804 ~]#docker run -it --rm  alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
179: eth0@if180: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:0a:64:00:02 brd ff:ff:ff:ff:ff:ff
    inet 10.100.0.2/24 brd 10.100.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit
[root@ubuntu1804 ~]#docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "ace11446c233d1fef534d9c734bf3ab5524afdbe76934a1a0e64803d03d54f98",
        "Created": "2020-02-02T13:23:58.037630754+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.100.0.0/24",
                    "Gateway": "10.100.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@ubuntu1804 ~]#

Example: changing the bridge network configuration, method 2

[root@ubuntu1804 ~]#vim /etc/docker/daemon.json
{
  "bip": "172.30.0.1/24",
  "registry-mirrors": ["https://si7y70hh.mirror.aliyuncs.com"]
}
[root@ubuntu1804 ~]#systemctl daemon-reload 
[root@ubuntu1804 ~]#systemctl restart docker
[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:6b:54:d3 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6b:54d3/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:e0:ef:72:05 brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.1/24 brd 172.30.0.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:e0ff:feef:7205/64 scope link 
       valid_lft forever preferred_lft forever
[root@ubuntu1804 ~]#docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "b86b8a72f41c05a744016a1568aa448fa30c7ddf56c43ce652c3c33eca365168",
        "Created": "2020-02-02T13:46:12.8442733+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.30.0.1/24",
                    "Gateway": "172.30.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
[root@ubuntu1804 ~]#
[root@ubuntu1804 ~]#docker run -it --rm  alpine sh
latest: Pulling from library/alpine
c9b1b535fdd9: Pull complete 
Digest: sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
Status: Downloaded newer image for alpine:latest
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
10: eth0@if11: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:1e:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.30.0.2/24 brd 172.30.0.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # exit

Host Mode

If a container is started in host mode, it does not create its own virtual NIC; instead it uses the host's NIC and IP address directly. The IP information seen inside the container is therefore the host's, and the container can be accessed simply via the host IP plus the container's port. Resources other than the network, such as the filesystem and processes, remain isolated from the host.

Because this mode uses the host's network directly without NAT, it has the highest network performance. However, containers cannot use the same port, so it is suitable for services whose container ports are relatively fixed.

Characteristics of the host network mode:

- Specified with the --network host option
- Shares the host's network
- No network performance loss
- Network troubleshooting is relatively simple
- No network isolation between containers
- Network resource usage cannot be accounted for per container
- Difficult port management: port conflicts are likely
- Port mapping is not supported

Example:

#View the host's network settings
[root@ubuntu1804 ~]#ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:2ff:fe7f:a8c6  prefixlen 64  scopeid 0x20<link>
        ether 02:42:02:7f:a8:c6  txqueuelen 0  (Ethernet)
        RX packets 63072  bytes 152573158 (152.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56611  bytes 310696704 (310.6 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe34:df91  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:34:df:91  txqueuelen 1000  (Ethernet)
        RX packets 2029082  bytes 1200597401 (1.2 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7272209  bytes 11576969391 (11.5 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3533  bytes 320128 (320.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3533  bytes 320128 (320.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ubuntu1804 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

#Before starting the container, port 80/tcp on the host is not listening
[root@ubuntu1804 ~]#ss -ntl|grep :80
[root@ubuntu1804 ~]#
#Create a container in host mode
[root@ubuntu1804 ~]#docker run -d --network host --name web1 nginx-centos7-base:1.6.1
41fb5b8e41db26e63579a424df643d1f02e272dc75e76c11f4e313a443187ed1
#After the container is created, port 80/tcp on the host is listening
[root@ubuntu1804 ~]#ss -ntlp|grep :80
LISTEN   0         128                 0.0.0.0:80               0.0.0.0:*        users:(("nginx",pid=43762,fd=6),("nginx",pid=43737,fd=6))

[root@ubuntu1804 ~]#docker exec -it web1 bash
[root@ubuntu1804 /]# ifconfig 
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:2ff:fe7f:a8c6  prefixlen 64  scopeid 0x20<link>
        ether 02:42:02:7f:a8:c6  txqueuelen 0  (Ethernet)
        RX packets 63072  bytes 152573158 (145.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 56611  bytes 310696704 (296.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.100  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::20c:29ff:fe34:df91  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:34:df:91  txqueuelen 1000  (Ethernet)
        RX packets 2028984  bytes 1200589212 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7272137  bytes 11576960933 (10.7 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 3533  bytes 320128 (312.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3533  bytes 320128 (312.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
[root@ubuntu1804 /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0

[root@ubuntu1804 /]# curl 10.0.0.7
Website on 10.0.0.7
[root@ubuntu1804 /]#

#Check the access log on the remote host
[root@centos7 ~]#tail -n1 /var/log/httpd/access_log 
10.0.0.100 - - [01/Feb/2020:19:58:06 +0800] "GET / HTTP/1.1" 200 20 "-" "curl/7.29.0"

#The remote host can access the container's web service
[root@centos7 ~]#curl 10.0.0.100/app/
Test Page in app

Example: port mapping cannot be used in host mode

[root@ubuntu1804 ~]#ss -ntl|grep :81
[root@ubuntu1804 ~]#docker run -d --network host --name web2 -p 81:80  nginx-centos7-base:1.6.1
WARNING: Published ports are discarded when using host network mode
6b6a910d79d94b188f719bc6ad00c274acd76a4a2929212157cd49b5219d44ae
[root@ubuntu1804 ~]#docker ps -a
CONTAINER ID        IMAGE                      COMMAND                  CREATED              STATUS                     PORTS               NAMES
6b6a910d79d9        nginx-centos7-base:1.6.1   "/apps/nginx/sbin/ng…"   6 seconds ago        Exited (1) 2 seconds ago                       web2
b27c0fd28b40        nginx-centos7-base:1.6.1   "/apps/nginx/sbin/ng…"   About a minute ago   Up About a minute                              web1

Example: comparing the port mapping of the previous host-mode containers with a bridge-mode container

[root@ubuntu1804 ~]#docker port web1
[root@ubuntu1804 ~]#docker port web2
[root@ubuntu1804 ~]#docker run -d --network bridge -p 8001:80 --name web3 nginx-centos7-base:1.6.1
4095372b9a561704eac98ccef8041a80a2cdc2aa7b57d2798dec1a8dcb00c377
[root@ubuntu1804 ~]#docker port web3
80/tcp -> 0.0.0.0:8001

None Mode

In none mode, Docker does not perform any network configuration for the container: it has no NIC, no IP address and no routes, so by default it cannot communicate with the outside world. A NIC and IP configuration must be added manually (see the sketch after the example below), so this mode is rarely used.

Characteristics of none mode

- Specified with the --network none option
- No network functionality by default; cannot communicate with the outside world

Example: start a container in none mode

[root@ubuntu1804 ~]#docker run -d --network none -p 8001:80 --name web1-none nginx-centos7-base:1.6.1
5207dcbd0aeea88548819267d3751135e337035475cf3cd63a5e1be6599c0208
[root@ubuntu1804 ~]#docker ps 
CONTAINER ID        IMAGE                      COMMAND                  CREATED              STATUS              PORTS               NAMES
5207dcbd0aee        nginx-centos7-base:1.6.1   "/apps/nginx/sbin/ng…"   About a minute ago   Up About a minute                       web1-none

[root@ubuntu1804 ~]#docker port web1-none
[root@ubuntu1804 ~]#docker exec -it web1-none bash
[root@5207dcbd0aee /]# ifconfig -a
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@5207dcbd0aee /]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
[root@5207dcbd0aee /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN     
[root@5207dcbd0aee /]# ping www.baidu.com
ping: www.baidu.com: Name or service not known
[root@5207dcbd0aee /]# ping 172.17.0.1
connect: Network is unreachable
[root@5207dcbd0aee /]# 
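
As a rough illustration of manually adding a NIC to a none-mode container, the sketch below attaches a veth pair from the host to the web1-none container started above (the address 172.17.0.100/16 is only an assumed example and must not collide with other containers; run as root on the host):

#find the container's PID so its network namespace can be entered
pid=$(docker inspect -f '{{.State.Pid}}' web1-none)
#create a veth pair and attach one end to the docker0 bridge
ip link add veth-host type veth peer name veth-cont
ip link set veth-host master docker0 up
#move the other end into the container's network namespace
ip link set veth-cont netns $pid
#inside the container's namespace: rename the interface, bring it up, assign an IP and a default route
nsenter -t $pid -n ip link set veth-cont name eth0
nsenter -t $pid -n ip link set eth0 up
nsenter -t $pid -n ip addr add 172.17.0.100/16 dev eth0
nsenter -t $pid -n ip route add default via 172.17.0.1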

Container Mode

A container created in this mode shares a network with a specified, already existing container rather than with the host. The new container does not create its own NIC or configure its own IP; it shares the IP and port range of the specified container, so its ports must not conflict with those already used by that container. Apart from the network, resources such as the filesystem and process information remain isolated between the two containers, and processes in the two containers can communicate with each other through the lo interface.

Characteristics of container mode

- Specified with the --network container:<name or ID> option
- Isolated from the host's network namespace
- The containers share one network namespace
- Suitable for frequent network communication between containers
- Uses another container's network directly; rarely used

Example:

#Create the first container
[root@ubuntu1804 ~]#docker run -it --name server1 -p 80:80 alpine:3.11 sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:02  
          inet addr:172.17.0.2  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:766 (766.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

/ # netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
/ # 

#Run the following in another terminal
[root@ubuntu1804 ~]#docker ps 
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                NAMES
4d342fac169f        alpine:3.11         "sh"                29 seconds ago      Up 28 seconds       0.0.0.0:80->80/tcp   server1
[root@ubuntu1804 ~]#docker port server1
80/tcp -> 0.0.0.0:80

#The web service cannot be accessed yet
[root@ubuntu1804 ~]#curl 127.0.0.1/app/
curl: (52) Empty reply from server

#Create a second container using container network mode based on the first container
[root@ubuntu1804 ~]#docker run -d --name server2 --network container:server1 nginx-centos7-base:1.6.1
7db90f38590ade11e1c833a8b2175810c71b3f222753c5177bb8b05952f08a7b
#Now the web service can be accessed
[root@ubuntu1804 ~]#curl 127.0.0.1/app/
Test Page in app

[root@ubuntu1804 ~]#docker exec -it server2 bash
#Shares the same network as the first container
[root@4d342fac169f /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 29  bytes 2231 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 1366 (1.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 10  bytes 860 (860.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 10  bytes 860 (860.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@4d342fac169f /]# netstat -ntl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN  

#Can access external networks
[root@4d342fac169f /]# ping www.baidu.com
PING www.a.shifen.com (61.135.169.121) 56(84) bytes of data.
64 bytes from 61.135.169.121 (61.135.169.121): icmp_seq=1 ttl=127 time=3.99 ms
64 bytes from 61.135.169.121 (61.135.169.121): icmp_seq=2 ttl=127 time=5.03 ms
^C
--- www.a.shifen.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 3.999/4.514/5.030/0.519 ms
[root@4d342fac169f /]#

Custom Network Mode

In addition to the modes above, you can also define custom networks with your own subnet, gateway, and other settings.

Implementing a custom network

Create a custom network:

docker network create -d <mode> --subnet <CIDR> --gateway <gateway> <custom network name>

View custom network information:

docker network inspect <custom network name or network ID>

Use the custom network:

docker run --network <custom network name> <image name>

Delete a custom network:

docker network rm <custom network name or network ID>
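
A useful property of user-defined networks is that Docker's embedded DNS resolves container names on them, so containers can reach each other by name. A minimal sketch (the network name mynet, the container name app1, and the alpine image are only assumed examples):

docker network create -d bridge mynet
docker run -d --name app1 --network mynet alpine sleep 3600
#app1 is resolved by name through the embedded DNS of the user-defined network
docker run -it --rm --network mynet alpine ping -c 2 app1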

Case study: a custom network

[root@ubuntu1804 ~]#docker  network create -d bridge --subnet 172.27.0.0/16 --gateway 172.27.0.1 test-net
c90dee3b7937e007ed31a8d016a9e54c0174d0d26487b154db0aff04d9016d5b
[root@ubuntu1804 ~]#docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
cabde0b33c94        bridge              bridge              local
cb64aa83626c        host                host                local
10619d45dcd4        none                null                local
c90dee3b7937        test-net            bridge              local
[root@ubuntu1804 ~]#ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:34:df:91 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.100/24 brd 10.0.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe34:df91/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:9b:31:73:2b brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:9bff:fe31:732b/64 scope link 
       valid_lft forever preferred_lft forever
14: br-c90dee3b7937: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:58:7c:f0:93 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.1/16 brd 172.27.255.255 scope global br-c90dee3b7937
       valid_lft forever preferred_lft forever
    inet6 fe80::42:58ff:fe7c:f093/64 scope link 
       valid_lft forever preferred_lft forever
[root@ubuntu1804 ~]#route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.2        0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.27.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-c90dee3b7937

[root@ubuntu1804 ~]#docker run -it --rm --network test-net alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.27.0.1      0.0.0.0         UG    0      0        0 eth0
172.27.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0

Case study: communication between containers on a custom network and on the bridge network

Example: start two containers, one on the custom network and one on the default bridge network

[root@ubuntu1804 ~]#docker run -it --rm --network test-net alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
21: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:1b:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.27.0.2/16 brd 172.27.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ #

[root@ubuntu1804 ~]#docker run -it --rm  alpine sh
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping 172.27.0.2    #cannot ping the container on the custom network
PING 172.27.0.2 (172.27.0.2): 56 data bytes

Solving the problem that the containers cannot communicate

#Check the ip_forward setting
[root@ubuntu1804 ~]#cat /proc/sys/net/ipv4/ip_forward
1
[root@ubuntu1804 ~]#brctl  show
bridge name bridge id       STP enabled interfaces
br-c90dee3b7937     8000.0242587cf093   no      veth984a5b4
docker0     8000.02429b31732b   no      veth1a20128

[root@ubuntu1804 ~]#iptables -vnL
Chain INPUT (policy ACCEPT 1241 packets, 87490 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  859 72156 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
  859 72156 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-c90dee3b7937 !br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  br-c90dee3b7937 br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 1456 packets, 209K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  289 24276 DOCKER-ISOLATION-STAGE-2  all  --  br-c90dee3b7937 !br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0           
  570 47880 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
  570 47880 DROP       all  --  *      br-c90dee3b7937  0.0.0.0/0            0.0.0.0/0           
  289 24276 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  859 72156 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           
[root@ubuntu1804 ~]#

[root@ubuntu1804 ~]#iptables-save
# Generated by iptables-save v1.6.1 on Sun Feb  2 14:33:19 2020
*filter
:INPUT ACCEPT [1283:90246]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [1489:217126]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-c90dee3b7937 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-c90dee3b7937 -j DOCKER
-A FORWARD -i br-c90dee3b7937 ! -o br-c90dee3b7937 -j ACCEPT
-A FORWARD -i br-c90dee3b7937 -o br-c90dee3b7937 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-c90dee3b7937 ! -o br-c90dee3b7937 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-c90dee3b7937 -j DROP #note this rule
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP   #note this rule
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Sun Feb  2 14:33:19 2020
# Generated by iptables-save v1.6.1 on Sun Feb  2 14:33:19 2020
*nat
:PREROUTING ACCEPT [887:75032]
:INPUT ACCEPT [6:1028]
:OUTPUT ACCEPT [19:1444]
:POSTROUTING ACCEPT [19:1444]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.27.0.0/16 ! -o br-c90dee3b7937 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i br-c90dee3b7937 -j RETURN
-A DOCKER -i docker0 -j RETURN
COMMIT
# Completed on Sun Feb  2 14:33:19 2020

[root@ubuntu1804 ~]#iptables-save > iptables.rule
[root@ubuntu1804 ~]#vim iptables.rule
#Change the following two rules
-A DOCKER-ISOLATION-STAGE-2 -o br-c90dee3b7937 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j ACCEPT

[root@ubuntu1804 ~]#iptables-restore < iptables.rule

#Test again: now the two containers can reach each other
/ # ping 172.27.0.2
PING 172.27.0.2 (172.27.0.2): 56 data bytes
64 bytes from 172.27.0.2: seq=896 ttl=63 time=0.502 ms
64 bytes from 172.27.0.2: seq=897 ttl=63 time=0.467 ms
64 bytes from 172.27.0.2: seq=898 ttl=63 time=0.227 ms

/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=63 time=0.163 ms
64 bytes from 172.17.0.2: seq=1 ttl=63 time=0.232 ms
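
Note that dockerd regenerates these isolation rules when it restarts, so the change made with iptables-restore above is not persistent. An alternative sketch that avoids editing iptables is to attach the bridge-mode container to the custom network as a second network (docker network connect is a standard command; replace the placeholder with the real container name or ID):

#give the bridge-mode container an additional interface on test-net
docker network connect test-net <container name or ID>
#it can then reach the test-net containers directly over 172.27.0.0/16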
