Kubernetes Service
Concept of Service
A Kubernetes Service defines an abstraction: a logical grouping of Pods and a policy by which to access them, commonly referred to as a microservice. The set of Pods targeted by a Service is usually determined by a Label Selector.
A Service can provide load balancing, but with the following restriction on its use:
● only layer 4 (TCP/UDP over IP) load balancing is provided; there is no layer 7 capability. Sometimes we need richer matching rules (for example, routing by HTTP host or URL path) to forward requests, and layer 4 load balancing cannot support that; a layer 7 sketch follows below
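For illustration, layer 7 rules are expressed with the Ingress API instead (introduced in v1.1, as noted later in this section). A minimal sketch, assuming an ingress controller such as ingress-nginx is installed; the host name and path rule are hypothetical:

# ingress-sketch.yaml: layer 7 routing that a plain Service cannot express
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: www.example.com        # hypothetical host
      http:
        paths:
          - path: /api             # hypothetical path rule
            pathType: Prefix
            backend:
              service:
                name: myapp        # the ClusterIP Service created below
                port:
                  number: 80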
Types of Service
There are four types of Service in K8s:
● ClusterIP: the default type; automatically assigns a virtual IP that is reachable only from within the cluster
● NodePort: on top of ClusterIP, binds a port for the Service on every node, so the Service can be reached at <NodeIP>:<NodePort>
● LoadBalancer: on top of NodePort, creates an external load balancer with the help of the cloud provider and forwards requests to <NodeIP>:<NodePort>
● ExternalName: brings a service that lives outside the cluster into the cluster, so it can be used directly from inside. No proxy of any kind is created; this is only supported by kube-dns of Kubernetes 1.7 or later
VIP and Service Proxy
In a Kubernetes cluster, every node runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services of every type except ExternalName. In Kubernetes v1.0 the proxy ran entirely in userspace. In Kubernetes v1.1 the iptables proxy was added, but it was not the default mode. Since Kubernetes v1.2 the iptables proxy has been the default. In Kubernetes v1.8.0-beta.0 the ipvs proxy was added.
The ipvs proxy mode has been generally available since Kubernetes v1.11, and it is the mode used by the cluster in this chapter.
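For reference, the proxy mode is selected through the kube-proxy configuration; on kubeadm clusters this typically lives in the kube-proxy ConfigMap in the kube-system namespace. A minimal sketch of the relevant fields:

# Excerpt of a KubeProxyConfiguration
# (view/edit with: kubectl edit configmap kube-proxy -n kube-system)
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # "" (version default), "iptables", or "ipvs"
ipvs:
  scheduler: "rr"   # one of the IPVS algorithms listed below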
In Kubernetes v1.0, Service is a "layer 4" (TCP/UDP over IP) construct. In Kubernetes v1.1, the Ingress API (beta) was added to represent "layer 7" (HTTP) services.
! Why not use round-robin DNS? Because many clients and resolvers cache DNS results and do not respect short TTLs, traffic would keep going to the same backend instead of being balanced; Kubernetes therefore relies on a VIP handled by the proxy instead.
Classification of proxy modes
1. userspace proxy mode
2. iptables proxy mode
3. ipvs proxy mode
In ipvs mode, kube-proxy watches the Kubernetes Service objects and Endpoints, calls the netlink interface to create IPVS rules accordingly, and periodically synchronizes the IPVS rules with the Service and Endpoints objects to ensure that the IPVS state matches the desired state. When the Service is accessed, traffic is redirected to one of the backend Pods.
Like the iptables mode, ipvs is based on netfilter hook functions, but it uses a hash table as the underlying data structure and works in kernel space. This means that ipvs can redirect traffic faster and has better performance when synchronizing proxy rules. In addition, ipvs provides more options for load-balancing algorithms, for example:
● rr: round robin
● lc: least connections
● dh: destination hashing
● sh: source hashing
● sed: shortest expected delay
● nq: never queue
Note: ipvs mode assumes that the IPVS kernel modules are installed on the node before kube-proxy runs. When kube-proxy starts in ipvs proxy mode, it verifies whether the IPVS modules are installed on the node; if they are not, kube-proxy falls back to iptables proxy mode.
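A sketch of preparing those modules on each node before kube-proxy starts; the exact module names vary by kernel version (nf_conntrack_ipv4 on older kernels, nf_conntrack on 4.19 and later):

# Load the IPVS kernel modules that kube-proxy checks for
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4   # use nf_conntrack on kernel >= 4.19
# Verify they are loaded
lsmod | grep -e ip_vs -e nf_conntrack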
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0
  -> 10.244.0.17:9153             Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6d13h
ClusterIP
ClusterIP mainly uses iptables on each node to forward the data sent to the ClusterIP's port to kube-proxy. kube-proxy then performs load balancing internally: it looks up the addresses and ports of the Pods behind the Service and forwards the data to the address and port of one of those Pods.
To implement this, the following components need to work together:
● apiserver: the user sends the command to create a Service to the apiserver through kubectl; after receiving the request, the apiserver stores the data in etcd
● kube-proxy: every Kubernetes node runs a process called kube-proxy, which is responsible for detecting changes to Services and Pods and writing the changed information into local iptables rules
● iptables: uses NAT and related techniques to forward the virtual-IP traffic to the endpoints (see the inspection sketch below)
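To see what kube-proxy actually writes when it runs in iptables mode, the NAT table can be listed on any node. A sketch; the KUBE-SVC-/KUBE-SEP- chain names carry cluster-specific hash suffixes, so the one below is a placeholder:

# List the entry chain that all Service traffic hits first
iptables -t nat -S KUBE-SERVICES | head
# Then follow one Service chain down to its per-endpoint chains
# (replace the hash suffix with one from your own output):
# iptables -t nat -S KUBE-SVC-XXXXXXXXXXXXXXXX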
Create the myapp-deploy.yaml file
vim myapp-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
        - name: myapp
          image: ikubernetes/myapp:v2
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
[root@k8s-master01 ~]# kubectl apply -f myapp-deploy.yaml
deployment.apps/myapp-deploy created
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          78s
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          78s
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          78s
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          113s   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          113s   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          113s   10.244.1.83   k8s-node01   <none>           <none>
[root@k8s-master01 ~]# curl 10.244.1.82
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Create the Service
vim myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp1      # these two labels must match the Pod labels above
    release: stabel  # (note the typo "myapp1": it is fixed below)
  ports:
    - name: http
      port: 80        # Service port
      targetPort: 80  # container port
[root@k8s-master01 ~]# kubectl apply -f myapp-service.yaml
service/myapp created
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d14h
myapp        ClusterIP   10.106.140.139   <none>        80/TCP    10s
[root@k8s-master01 ~]# curl 10.106.140.139
curl: (7) Failed connect to 10.106.140.139:80; Connection refused
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0
  -> 10.244.0.17:9153             Masq    1      0          0
TCP  10.106.140.139:80 rr         # empty: the selector matched no Pods
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
# Before applying the correction, delete the old Service first; the yaml file can be used to delete it
[root@k8s-master01 ~]# kubectl delete -f myapp-service.yaml
service "myapp" deleted
# This is why important manifest files should be saved in a known location
[root@k8s-master01 ~]# cd /usr/local/
[root@k8s-master01 local]# ls
apache-maven-3.6.3    bin  games    install-k8s   lib    libexec  share
apache-tomcat-9.0.30  etc  include  jdk1.8.0_231  lib64  sbin     src
[root@k8s-master01 local]# cd install-k8s/
[root@k8s-master01 install-k8s]# ls
core  plugin
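The empty backend list above can also be diagnosed without ipvsadm: the apiserver maintains an Endpoints object for each Service, and a selector that matches no Pods produces an empty one. A quick check, sketched with the names from this demo:

# An empty ENDPOINTS column means the selector matched no Pods
kubectl get endpoints myapp
# Compare the Service selector against the actual Pod labels
kubectl describe svc myapp | grep -i selector
kubectl get pod --show-labels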
Modify the Service definition:

vim myapp-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp       # these two labels now match the Pod labels above
    release: stabel
  ports:
    - name: http
      port: 80        # Service port
      targetPort: 80  # container port
[root@k8s-master01 ~]# kubectl apply -f myapp-service.yaml
service/myapp created
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   6d14h
myapp        ClusterIP   10.103.61.43   <none>        80/TCP    17s
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0
  -> 10.244.0.17:9153             Masq    1      0          0
TCP  10.103.61.43:80 rr           # the three Pods now appear as backends
  -> 10.244.1.82:80               Masq    1      0          0
  -> 10.244.1.83:80               Masq    1      0          0
  -> 10.244.2.38:80               Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0
  -> 10.244.0.17:53               Masq    1      0          0
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          18m   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          18m   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          18m   10.244.1.83   k8s-node01   <none>           <none>
# Requests rotate across the Pods: round-robin load balancing
[root@k8s-master01 ~]# curl 10.103.61.43
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-wdsbw
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-bn8sl
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-fztt9
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-wdsbw
Query process

In iptables mode, a packet to the ClusterIP traverses the NAT chains PREROUTING -> KUBE-SERVICES -> KUBE-SVC-* (one chain per Service) -> KUBE-SEP-* (one chain per endpoint). Start inspecting from:

iptables -t nat -nvL PREROUTING
Headless Service
Sometimes load balancing and a separate Service IP are neither needed nor wanted. In this case, you can create a headless Service by setting the cluster IP (spec.clusterIP) to "None". Such a Service is not allocated a ClusterIP, kube-proxy does not handle it, and the platform does not perform load balancing or routing for it.
vi myapp-svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
    - port: 80
      targetPort: 80

Query the A records through the cluster DNS:

[root@k8s-master mainfests]# dig -t A myapp-headless.default.svc.cluster.local. @10.96.0.10
[root@k8s-master01 ~]# kubectl apply -f myapp-svc-headless.yaml
service/myapp-headless created
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          26m
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          26m
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          26m
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP   6d14h
myapp            ClusterIP   10.103.61.43   <none>        80/TCP    9m31s
myapp-headless   ClusterIP   None           <none>        80/TCP    27s
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4kj2t                1/1     Running   7          6d14h
coredns-5c98db65d4-7zsr7                1/1     Running   7          6d14h
etcd-k8s-master01                       1/1     Running   8          6d14h
kube-apiserver-k8s-master01             1/1     Running   8          6d14h
kube-controller-manager-k8s-master01    1/1     Running   7          6d14h
kube-flannel-ds-amd64-5chsx             1/1     Running   8          6d12h
kube-flannel-ds-amd64-8bxpj             1/1     Running   8          6d12h
kube-flannel-ds-amd64-g4gh9             1/1     Running   7          6d13h
kube-proxy-cznqr                        1/1     Running   7          6d12h
kube-proxy-mcsdl                        1/1     Running   8          6d12h
kube-proxy-t7v46                        1/1     Running   7          6d14h
kube-scheduler-k8s-master01             1/1     Running   7          6d14h
# Install the DNS lookup tools (dig)
[root@k8s-master01 ~]# yum -y install bind-utils
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-4kj2t                1/1     Running   7          6d14h   10.244.0.16       k8s-master01   <none>           <none>
coredns-5c98db65d4-7zsr7                1/1     Running   7          6d14h   10.244.0.17       k8s-master01   <none>           <none>
etcd-k8s-master01                       1/1     Running   8          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01             1/1     Running   8          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01    1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-5chsx             1/1     Running   8          6d12h   192.168.192.129   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-8bxpj             1/1     Running   8          6d12h   192.168.192.130   k8s-node01     <none>           <none>
kube-flannel-ds-amd64-g4gh9             1/1     Running   7          6d13h   192.168.192.131   k8s-master01   <none>           <none>
kube-proxy-cznqr                        1/1     Running   7          6d12h   192.168.192.130   k8s-node01     <none>           <none>
kube-proxy-mcsdl                        1/1     Running   8          6d12h   192.168.192.129   k8s-node02     <none>           <none>
kube-proxy-t7v46                        1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01             1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
[root@k8s-master01 ~]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.0.16

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.0.16
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8817
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.82
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.83
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.38

;; Query time: 0 msec
;; SERVER: 10.244.0.16#53(10.244.0.16)
;; WHEN: Jun 10 13:23:04 CST 2022
;; MSG SIZE  rcvd: 237

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          35m   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          35m   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          35m   10.244.1.83   k8s-node01   <none>           <none>
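The same records can also be resolved from inside the cluster, without installing anything on the host. A sketch using a throwaway busybox Pod (tag 1.28 is a common choice because its nslookup behaves well):

# Run a temporary Pod and resolve the headless Service by name;
# every backing Pod IP is returned directly, with no VIP in between
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup myapp-headless.default.svc.cluster.local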
NodePort
The principle of NodePort is to open a port on every node, direct the traffic arriving at that port to kube-proxy, and have kube-proxy forward it to the corresponding Pods.
vi nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
    - name: http
      port: 80
      targetPort: 80
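If no nodePort is given, Kubernetes picks a free port from the 30000-32767 range (30585 in the run below). To pin it, the field can be set explicitly; a sketch, with 30080 as an arbitrary example value:

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
    - name: http
      port: 80        # Service port on the ClusterIP
      targetPort: 80  # container port
      nodePort: 30080 # pinned; must be in the allowed range (30000-32767 by default)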
Query process

NodePort rules are collected in the KUBE-NODEPORTS chain:

iptables -t nat -nvL KUBE-NODEPORTS
[root@k8s-master01 ~]# kubectl apply -f nodeport.yaml
service/myapp configured
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          42m
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          42m
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          42m
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP        6d14h
myapp            NodePort    10.103.61.43   <none>        80:30585/TCP   25m
myapp-headless   ClusterIP   None           <none>        80/TCP         15m
Open a browser and access any of the nodes:
● 192.168.192.131:30585
● 192.168.192.130:30585
● 192.168.192.129:30585
All nodes serve the Service.
kube-proxy is listening on the port on all three hosts:
[root@k8s-master01 ~]# netstat -anpt | grep :30585
tcp6    0    0 :::30585    :::*    LISTEN    2009/kube-proxy
On the master host:
[root@k8s-master01 ~]# ipvsadm -Ln | grep 192.168.192.131
TCP  192.168.192.131:30585 rr
  -> 192.168.192.131:6443         Masq    1      3          0
LoadBalancer (a paid cloud service)
loadBalancer and nodePort actually work the same way; the difference is that loadBalancer takes one step beyond nodePort, namely calling the cloud provider to create an LB (load balancer) that directs traffic to the nodes; see the sketch below.
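There is no cloud provider in this on-premises lab, so only a sketch is possible. On a managed cluster, a manifest like the following (the Service name is hypothetical) would provision an external LB whose address then appears in the EXTERNAL-IP column:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb        # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: myapp
    release: stabel
  ports:
    - name: http
      port: 80
      targetPort: 80
# kubectl get svc myapp-lb  ->  EXTERNAL-IP stays <pending> until the
# cloud provider actually allocates a load balancer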
ExternalName
This type of Service maps the Service to the contents of the externalName field (for example: hub.atguigu.com) by returning a CNAME record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or endpoints. Instead, for a service running outside the cluster, it provides access by returning an alias for the external service.
vi ex.yml
kind: Service
apiVersion: v1
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.atguigu.com
When the host my-service-1.default.svc.cluster.local (the pattern is svc_name.namespace.svc.cluster.local) is looked up, the cluster DNS service returns a CNAME record with the value hub.atguigu.com. Accessing this Service works the same way as accessing any other, except that the redirection happens at the DNS layer: no proxying or forwarding is performed.
[root@k8s-master01 ~]# kubectl create -f ex.yml
service/my-service-1 created
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes       ClusterIP      10.96.0.1      <none>            443/TCP        6d15h
my-service-1     ExternalName   <none>         hub.atguigu.com   <none>         19s
myapp            NodePort       10.103.61.43   <none>            80:30585/TCP   67m
myapp-headless   ClusterIP      None           <none>            80/TCP         58m
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-4kj2t                1/1     Running   7          6d15h   10.244.0.16       k8s-master01   <none>           <none>
coredns-5c98db65d4-7zsr7                1/1     Running   7          6d15h   10.244.0.17       k8s-master01   <none>           <none>
etcd-k8s-master01                       1/1     Running   8          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01             1/1     Running   8          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01    1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-5chsx             1/1     Running   8          6d13h   192.168.192.129   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-8bxpj             1/1     Running   8          6d13h   192.168.192.130   k8s-node01     <none>           <none>
kube-flannel-ds-amd64-g4gh9             1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-proxy-cznqr                        1/1     Running   7          6d13h   192.168.192.130   k8s-node01     <none>           <none>
kube-proxy-mcsdl                        1/1     Running   8          6d13h   192.168.192.129   k8s-node02     <none>           <none>
kube-proxy-t7v46                        1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01             1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
# 10.103.61.43 is the myapp Service VIP, not a DNS server, so this query times out
[root@k8s-master01 ~]# dig -t A my-service-1.default.svc.cluster.local. @10.103.61.43

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A my-service-1.default.svc.cluster.local. @10.103.61.43
;; global options: +cmd
;; connection timed out; no servers could be reached
# Query a CoreDNS Pod instead
[root@k8s-master01 ~]# dig -t A my-service-1.default.svc.cluster.local. @10.244.0.16

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A my-service-1.default.svc.cluster.local. @10.244.0.16
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30414
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service-1.default.svc.cluster.local. IN A

;; ANSWER SECTION:
my-service-1.default.svc.cluster.local. 30 IN CNAME hub.atguigu.com.

;; Query time: 16 msec
;; SERVER: 10.244.0.16#53(10.244.0.16)
;; WHEN: Jun 11 13:15:27 CST 2022
;; MSG SIZE  rcvd: 134