Why Service Exists
001 Service is introduced mainly to cope with the dynamic nature of Pods by providing a single, stable access entry point
002 Prevents clients from losing track of Pods by accurately locating the Pods that back the same service (service discovery)
003 Defines an access policy for a group of Pods (load balancing)
Relationship Between Pods and Services
001 A Service selects a group of Pods via labels
002 A Service uses iptables or IPVS to load-balance traffic across that group of Pods

Defining and Creating a Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: web
        image: nginx:1.15
------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 80        # Service port
    protocol: TCP   # protocol
    targetPort: 80  # container port
  selector:         # label selector
    app: nginx      # labels of the Pods to associate
  type: ClusterIP   # Service type
--------------------------------------------------------------------
[root@k8smaster service]# kubectl apply -f deploy.yaml
deployment.apps/web created
[root@k8smaster service]# kubectl get pod
NAME READY STATUS RESTARTS AGE
web-84b7cd6c7c-66bng 1/1 Running 0 4s
web-84b7cd6c7c-9w6zl 1/1 Running 0 4s
web-84b7cd6c7c-s55dr 1/1 Running 0 4s
[root@k8smaster service]# kubectl apply -f service.yaml
service/web created
[root@k8smaster service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15d
web ClusterIP 10.110.18.55 <none> 80/TCP 17s
[root@k8smaster service]# curl 10.110.18.55
[root@k8smaster service]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 192.168.153.21:6443 15d
web 10.244.249.25:80,10.244.249.26:80,10.244.249.30:80 41s
Three Common Service Types
ClusterIP
The default type. Allocates a stable virtual IP (VIP) that is reachable only from inside the cluster.
-----------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 80        # Service port
    protocol: TCP   # protocol
    targetPort: 80  # container port
  selector:         # label selector
    app: nginx      # labels of the Pods to associate
  type: ClusterIP   # Service type

NodePort
Overview
001 Opens a port on every node to expose the Service, making it reachable from outside the cluster
002 Also allocates a stable internal ClusterIP
003 Access address: <any NodeIP>:<NodePort>
004 Port range: 30000-32767
-----------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 80        # Service port
    protocol: TCP   # protocol
    targetPort: 80  # container port
    nodePort: 30009 # node port
  selector:         # label selector
    app: nginx      # labels of the Pods to associate
  type: NodePort    # Service type
-------------------------------------------------------------------------
[root@k8smaster service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
web NodePort 10.110.18.55 <none> 80:30009/TCP 8m29s

Accessing via Multiple Nodes
Put a public-facing load balancer in front of the nodes to give the project a single access entry point.
------------------------------------------------------------------------------
Load balancer (publicly reachable):
- Open source: nginx, LVS, HAProxy
- Public cloud: SLB
- Expose different services on different ports
- Route traffic by domain name
upstream demo {
    server 192.168.31.62:30008;
    server 192.168.31.63:30008;
}
upstream demo2 {
    server 192.168.31.62:30009;
    server 192.168.31.63:30009;
}
upstream demo3 {
    server 192.168.31.62:30010;
    server 192.168.31.63:30010;
}
server {
    server_name a.xxx.com;
    location / {
        proxy_pass http://demo;
    }
}
server {
    server_name b.xxx.com;
    location / {
        proxy_pass http://demo2;
    }
}
server {
    server_name c.xxx.com;
    location / {
        proxy_pass http://demo3;
    }
}

LoadBalancer
001 Similar to NodePort: opens a port on every node to expose the Service
002 Kubernetes asks the underlying cloud platform (e.g. Alibaba Cloud, Tencent Cloud, AWS) for a load balancer and automatically adds every node ([NodeIP]:[NodePort]) as a backend
003 Requires the corresponding cloud controller components to be deployed in the Kubernetes cluster
004 The cloud resources are created automatically
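A LoadBalancer manifest is nearly identical to the NodePort one; only the type changes. A sketch, assuming a cloud provider integration is installed in the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  ports:
  - port: 80        # Service port
    protocol: TCP   # protocol
    targetPort: 80  # container port
  selector:
    app: nginx
  type: LoadBalancer  # the cloud provider provisions an external LB and fills in EXTERNAL-IP
```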

Service Proxy Modes
Workflow Diagram

Network Modes
A Service is implemented by one of two proxy modes, iptables or IPVS; the mode determines how traffic is forwarded

iptables
Viewing the Rules
[root@k8smaster ~]# kubectl get pod -o wide -n kube-system | grep proxy
kube-proxy-npjg5 Running 192.168.153.21 k8smaster
kube-proxy-rbt6d Running 192.168.153.22 k8snode1
# iptables mode is used by default
[root@k8smaster ~]# kubectl logs kube-proxy-rbt6d -n kube-system
Unknown proxy mode "", assuming iptables proxy
# view the iptables rules
[root@k8smaster ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
web NodePort 10.110.18.55 <none> 80:30009/TCP 7h29m
[root@k8smaster ~]# iptables-save |grep web
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 30009 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 30009 -j KUBE-SVC-LOLE4ISW44XBNF3G
End-to-End Flow
001 A client accesses http://192.168.153.17:30009
kube-proxy accepts the traffic on port 30009
[root@k8smaster ~]# ss -antp|grep 30009
LISTEN *:30009 users:(("kube-proxy",pid=3297,fd=11))
002 The packet matches an iptables rule and is redirected to another chain, KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-NODEPORTS -p tcp -m comment --comment "default/web" -m tcp --dport 30009 -j KUBE-SVC-LOLE4ISW44XBNF3G
003 A group of rules, one per Pod backing the Service (this is where load balancing happens)
[root@k8smaster ~]# iptables-save |grep KUBE-SVC-LOLE4ISW44XBNF3G
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-CQWGGDFPBGMQSNZ2
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-U6JIICG74YE4P34Z
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-VDAQLCK6BQXNJPZU
004 DNAT rewrites the destination to a specific Pod
[root@k8smaster ~]# iptables-save |grep KUBE-SEP-CQWGGDFPBGMQSNZ2
-A KUBE-SEP-CQWGGDFPBGMQSNZ2 -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.249.19:80
[root@k8smaster ~]# iptables-save |grep KUBE-SEP-U6JIICG74YE4P34Z
-A KUBE-SEP-U6JIICG74YE4P34Z -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.249.33:80
[root@k8smaster ~]# iptables-save |grep KUBE-SEP-VDAQLCK6BQXNJPZU
-A KUBE-SEP-VDAQLCK6BQXNJPZU -p tcp -m comment --comment "default/web" -m tcp -j DNAT --to-destination 10.244.249.36:80
Forwarding for a ClusterIP works the same way as for NodePort: after the entry rule it goes back to step 003 above
--------------------------------------------------------------------------
001 Packet flow: client -> NodePort/ClusterIP (iptables/IPVS load-balancing rules) -> Pods distributed across the nodes
002 View the load-balancing rules: iptables-save | grep <SERVICE-NAME>
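The random --probability values in step 003 (0.33, then 0.5, then unconditional) are how iptables gets an even split from rules that are evaluated in order: each rule only sees the traffic that earlier rules passed over. A minimal Python sketch of that sequential matching (the Pod addresses are just the ones from the output above):

```python
import random
from collections import Counter

# Backend Pods, in the order their KUBE-SEP-* rules appear in the chain.
backends = ["10.244.249.19:80", "10.244.249.33:80", "10.244.249.36:80"]

def pick_backend():
    """Mimic the KUBE-SVC chain: try each rule in order with its probability."""
    n = len(backends)
    for i, pod in enumerate(backends):
        remaining = n - i                    # rules left, including this one
        if random.random() < 1 / remaining:  # 1/3, then 1/2, then 1 (always matches)
            return pod
    return backends[-1]

random.seed(0)
counts = Counter(pick_backend() for _ in range(30000))
# Each Pod receives roughly a third of the 30000 connections.
print(counts)
```

This is why the last KUBE-SEP jump has no probability match at all: by the time it is reached, it must take everything that remains.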
ipvs
Switching to IPVS Mode (kubeadm Clusters)
# edit the configuration
[root@k8smaster ~]# kubectl get configmap -n kube-system
......
kube-proxy 2 16d
[root@k8smaster ~]# kubectl edit configmap kube-proxy -n kube-system
mode: "ipvs"
# test: delete the kube-proxy Pod on k8snode1 so a new one is created
[root@k8smaster ~]# kubectl get pod -n kube-system -o wide
kube-proxy-rbt6d 192.168.153.22 k8snode1
[root@k8smaster ~]# kubectl delete pod kube-proxy-rbt6d -n kube-system
pod "kube-proxy-rbt6d" deleted
[root@k8smaster ~]# kubectl get pod -n kube-system -o wide
kube-proxy-4hrfv 192.168.153.22 k8snode1
# the newly created Pod now uses IPVS
[root@k8smaster ~]# kubectl logs kube-proxy-4hrfv -n kube-system
1 server_others.go:259] Using ipvs Proxier.
------------------------------------------------------------------------------
Note:
001 The kube-proxy configuration is stored as a ConfigMap
002 To make the change take effect cluster-wide, recreate the kube-proxy Pod on every node
Viewing the Rules
# install ipvsadm on k8snode1, then list the IPVS rules
[root@k8snode1 ~]# yum install ipvsadm -y
[root@k8snode1 ~]# ipvsadm -L -n
TCP 192.168.153.22:30009 rr
-> 10.244.249.19:80 Masq 1 0 0
-> 10.244.249.33:80 Masq 1 0 0
-> 10.244.249.36:80 Masq 1 0 0
iptables vs. IPVS
iptables:
• Flexible and feature-rich
• Rules are matched and updated by linear traversal, so latency grows with the number of rules
IPVS:
• Runs in the kernel and offers better performance
• Rich set of scheduling algorithms: rr, wrr, lc, wlc, ip hash
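As a quick illustration of the default rr (round-robin) policy seen in the ipvsadm output above, a toy Python sketch (the backend addresses are placeholders, not real servers):

```python
from itertools import cycle

# Placeholder backend addresses; IPVS would use the real server list.
backends = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]

rr = cycle(backends)  # round-robin: hand out the next backend on each connection
picks = [next(rr) for _ in range(6)]
print(picks)  # cycles through the three backends twice, in order
```

wrr (weighted round-robin) extends this by repeating each backend proportionally to its weight, and lc/wlc pick the backend with the fewest (weighted) active connections instead.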
CoreDNS
Key Concepts
001 A DNS server that Kubernetes uses by default, deployed as Pods inside the cluster
[root@k8smaster ~]# kubectl get pod -n kube-system -o wide |grep coredns
coredns-6d56c8448f-lhzmh 10.244.249.35 k8snode1
coredns-6d56c8448f-z6t9r 10.244.16.147 k8smaster
002 CoreDNS watches the Kubernetes API and creates a DNS record for every Service, enabling name resolution
003 CoreDNS YAML manifests (for binary installations):
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
004 ClusterIP A record format: <service-name>.<namespace-name>.svc.cluster.local
Example: my-svc.my-namespace.svc.cluster.local
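The record name is assembled mechanically from the Service name and namespace; a trivial Python helper (hypothetical, for illustration only):

```python
def service_fqdn(service: str, namespace: str) -> str:
    """Build the ClusterIP A-record name CoreDNS creates for a Service."""
    return f"{service}.{namespace}.svc.cluster.local"

print(service_fqdn("my-svc", "my-namespace"))  # my-svc.my-namespace.svc.cluster.local
print(service_fqdn("web", "default"))          # web.default.svc.cluster.local
```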
Example
[root@k8smaster ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d
web NodePort 10.110.18.55 <none> 80:30009/TCP 7h29m
[root@k8smaster ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16d
[root@k8smaster ~]# kubectl run -it dns-test --image=busybox:1.28.4 -- sh
# resolve web (default namespace)
/ # nslookup web
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: web
Address 1: 10.110.18.55 web.default.svc.cluster.local
# resolve kube-dns (kube-system namespace)
/ # nslookup kube-dns.kube-system
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
/ # ping web
PING web (10.110.18.55): 56 data bytes
64 bytes from 10.110.18.55: seq=0 ttl=64 time=0.074 ms
/ # ping kube-dns.kube-system
PING kube-dns.kube-system (10.96.0.10): 56 data bytes
64 bytes from 10.96.0.10: seq=0 ttl=64 time=0.170 ms