Some usage details of the ipvlan CNI

Author: cloudFans | Published 2021-12-10 09:03

Option 1: do not assign an IP to eth1 (the ipvlan + whereabouts approach).
Option 2: assign an IP to eth1, but disable eth1's own data forwarding via the ipvlan (ipvl) subinterfaces (the terway approach).

How option 2 improves on option 1:
With option 2, pods can communicate with their own node directly, without detouring through the VPC gateway.
With option 1, all pod-to-node communication has to go through the VPC gateway (eth1 -> gw -> eth0).
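For concreteness, here is a minimal sketch of what option 1's CNI config could look like; the file name, subnet, and gateway are illustrative assumptions, not taken from a real cluster. The ipvlan plugin attaches pods as L2 slaves of eth1, and whereabouts hands out cluster-wide unique IPs, so eth1 itself never needs an address:

# Hypothetical /etc/cni/net.d/10-ipvlan.conf for option 1 (ipvlan + whereabouts).
cat <<'EOF' > /etc/cni/net.d/10-ipvlan.conf
{
    "cniVersion": "0.3.1",
    "name": "ipvlan-whereabouts",
    "type": "ipvlan",
    "master": "eth1",
    "mode": "l2",
    "ipam": {
        "type": "whereabouts",
        "range": "172.33.0.0/16",
        "gateway": "172.33.0.1"
    }
}
EOF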

Usage details of the primary NIC that terway's ipvlan subinterfaces depend on:

Once the primary NIC is declared as an ipvlan L2-mode master, that NIC can no longer send traffic outward itself.

On the node itself it still behaves normally:

  1. The primary NIC's IP can still be pinged.
  2. It acts as a "bridge" that lets all slave subinterfaces reach each other (see the sketch after this list).
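A minimal sketch of that behavior with plain iproute2, outside of any CNI; the namespace name and the 172.33.0.100/16 address are made up for illustration:

# eth1 becomes an ipvlan master the moment a slave is created on it.
ip netns add ns1
ip link add link eth1 name ipvl0 type ipvlan mode l2   # create an L2 slave of eth1
ip link set ipvl0 netns ns1                            # hand the slave to the namespace
ip netns exec ns1 ip addr add 172.33.0.100/16 dev ipvl0
ip netns exec ns1 ip link set ipvl0 up
# The master's IP remains pingable on the node, and slaves of the same master
# reach each other through it, but the master no longer sends traffic outward.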

So in the dual-NIC mode currently in use, eth1 still has its forwarding function before it is initialized into ipvlan mode, while eth0 is responsible for providing the VIP that the load balancer depends on.

eth0 and eth1 belong to two different subnets but to the same VPC, and connect to the same soft router. To guarantee that the traffic of the IPVS backend pods behind the VIP goes out via eth0, eth1 must be shut down before the ipvlan CNI subinterfaces are initialized, so that pod traffic is not forwarded by eth1.
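A rough sketch of that ordering; the config path matches the file queried in the session below, everything else is illustrative rather than terway's actual installer logic:

ip link set eth1 down                          # stop eth1 from forwarding pod/VIP traffic
# Only afterwards enable the CNI config that claims eth1 as the ipvlan master.
grep '"master"' /etc/cni/net.d/10-terway.conf  # expect: "master": "eth1"
ip route get 172.33.0.218                      # confirm the node reaches pods without eth1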

Going further:

What is the impact of downing eth1 once ipvlan subinterfaces have already been created on it?

After eth1 is shut down (demonstrated in the session below):

  1. ipvlan pods can no longer ping the gateway.
  2. The local "bridge" function still works: pods on the same node can still reach each other.
  3. eth1 cannot forward traffic outward (exactly the same as when it is up).


[root@(l2)k8s-master-1 ~]# grep eth1 /etc/cni/net.d/10-terway.conf && ip link set eth1 down
    "master": "eth1"
[root@(l2)k8s-master-1 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:00:00:83:c5:17 brd ff:ff:ff:ff:ff:ff
    inet 172.32.16.11/16 brd 172.32.255.255 scope global dynamic noprefixroute eth0
       valid_lft 85132245sec preferred_lft 85132245sec
    inet6 fe80::200:ff:fe83:c517/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1400 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:00:00:b2:8f:1b brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:00:00:9c:1c:c7 brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:00:00:90:05:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.5.208.208/24 brd 10.5.208.255 scope global dynamic noprefixroute eth3
       valid_lft 85132245sec preferred_lft 85132245sec
    inet6 fe80::47ae:2181:3ed0:2fdb/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:67:72:ec:1a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
7: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 4e:49:bf:d2:ab:fc brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.3/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.3.203/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.165.187/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
9: ipvl_3@eth1: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue state UNKNOWN group default 
    link/ether 00:00:00:b2:8f:1b brd ff:ff:ff:ff:ff:ff
    inet 172.33.192.10/32 scope host ipvl_3
       valid_lft forever preferred_lft forever
    inet6 fe80::2b2:8f1b/64 scope link 
       valid_lft forever preferred_lft forever
[root@(l2)k8s-master-1 ~]# 
[root@(l2)k8s-master-1 ~]# 
[root@(l2)k8s-master-1 ~]# kubectl  get pod -A -o wide | grep k8s-master-1
default       busybox1056-k8s-master-1               1/1     Running   0          12d     172.33.0.218   k8s-master-1   <none>           <none>
default       centos-sshd-cluster-41                 1/1     Running   0          6d17h   172.33.0.201   k8s-master-1   <none>           <none>
default       centos-sshd-cluster-43                 1/1     Running   0          6d17h   172.33.0.208   k8s-master-1   <none>           <none>
default       sshd-k8s-master-1                      1/1     Running   0          10d     172.33.0.211   k8s-master-1   <none>           <none>
kube-system   kube-apiserver-k8s-master-1            1/1     Running   6          14d     172.32.16.11   k8s-master-1   <none>           <none>
kube-system   kube-controller-manager-k8s-master-1   1/1     Running   11         14d     172.32.16.11   k8s-master-1   <none>           <none>
kube-system   kube-ovn-cni-zzqpz                     1/1     Running   3          13d     172.32.16.11   k8s-master-1   <none>           <none>
kube-system   kube-proxy-dbbgv                       1/1     Running   4          14d     172.32.16.11   k8s-master-1   <none>           <none>
kube-system   kube-scheduler-k8s-master-1            1/1     Running   12         14d     172.32.16.11   k8s-master-1   <none>           <none>
[root@(l2)k8s-master-1 ~]# ping 172.33.0.218
PING 172.33.0.218 (172.33.0.218) 56(84) bytes of data.
64 bytes from 172.33.0.218: icmp_seq=1 ttl=64 time=0.106 ms
64 bytes from 172.33.0.218: icmp_seq=2 ttl=64 time=0.072 ms
64 bytes from 172.33.0.218: icmp_seq=3 ttl=64 time=0.070 ms
64 bytes from 172.33.0.218: icmp_seq=4 ttl=64 time=0.082 ms
64 bytes from 172.33.0.218: icmp_seq=5 ttl=64 time=0.057 ms
^C
--- 172.33.0.218 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4108ms
rtt min/avg/max/mdev = 0.057/0.077/0.106/0.018 ms
[root@(l2)k8s-master-1 ~]# 
[root@(l2)k8s-master-1 ~]# 
[root@(l2)k8s-master-1 ~]# 
[root@(l2)k8s-master-1 ~]# kubectl exec -ti sshd-k8s-master-1  -- /bin/sh
sh-4.2# ping 114.114.114.114
PING 114.114.114.114 (114.114.114.114) 56(84) bytes of data.
^C
--- 114.114.114.114 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1052ms

sh-4.2# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.33.0.1      0.0.0.0         UG    0      0        0 eth0
172.33.0.0      0.0.0.0         255.255.0.0     U     0      0        0 eth0
172.33.192.10   0.0.0.0         255.255.255.255 UH    0      0        0 eth0
sh-4.2# ping 172.33.0.1
PING 172.33.0.1 (172.33.0.1) 56(84) bytes of data.
^C
--- 172.33.0.1 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1008ms

sh-4.2# ping 172.33.192.10
PING 172.33.192.10 (172.33.192.10) 56(84) bytes of data.
64 bytes from 172.33.192.10: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 172.33.192.10: icmp_seq=2 ttl=64 time=0.126 ms
^C
--- 172.33.192.10 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1048ms
rtt min/avg/max/mdev = 0.063/0.094/0.126/0.032 ms
sh-4.2# ping 172.33.0.218
PING 172.33.0.218 (172.33.0.218) 56(84) bytes of data.
64 bytes from 172.33.0.218: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.33.0.218: icmp_seq=2 ttl=64 time=0.153 ms
^C
--- 172.33.0.218 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1056ms
rtt min/avg/max/mdev = 0.058/0.105/0.153/0.048 ms
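Taken together, the session confirms the three points above: with eth1 down, a pod on an ipvlan subinterface cannot reach the external network (114.114.114.114) or its gateway (172.33.0.1), but it can still reach the node-side ipvlan slave address (172.33.192.10) and another pod on the same node (172.33.0.218), and the node can still ping the pod.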
