Ten years east of the river, ten years west of the river; never look down on the young and poor.
Never stop learning; keep improving.
Summary
- In Kubernetes, a Pod is the carrier of an application. An application can be reached through its Pod's IP, but Pod IPs are not fixed, so accessing a service directly by Pod IP is inconvenient.
- To solve this, Kubernetes provides the Service resource. A Service aggregates multiple Pods that provide the same service and exposes a unified entry address; by accessing the Service's entry address, you reach the Pods behind it.
- In many cases a Service is only a concept; what actually does the work is the kube-proxy process. A kube-proxy process runs on every Node. When a Service is created, its information is written to etcd through the API Server; kube-proxy discovers the change through its watch mechanism and converts the latest Service information into the corresponding access rules.
- kube-proxy currently supports three working modes:
  - userspace mode
  - iptables mode
  - ipvs mode
- ipvs mode works similarly to iptables: kube-proxy watches for Pod changes and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables and supports more load-balancing algorithms.
This article uses ipvs.
Enable ipvs on all three servers
- In Kubernetes, a Service has two proxy models: one based on iptables and the other based on ipvs. ipvs performs better than iptables, but to use it you must load the ipvs kernel modules manually.
- Install ipset and ipvsadm on each node:
yum -y install ipset ipvsadm
- Execute the following script on all nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
- Make the script executable, run it, and check that the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
- Check that the modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
With the ipvs kernel modules installed, switch kube-proxy to ipvs mode; otherwise it will fall back to iptables.
kubectl edit cm kube-proxy -n kube-system
Change the mode value to ipvs, then save with :wq.
Delete the existing kube-proxy Pods; the recreated Pods will pick up the ipvs configuration automatically:
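For reference, the relevant part of the kube-proxy ConfigMap looks roughly like the fragment below (a sketch; the real ConfigMap contains many more fields, and the scheduler field is optional):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# ...other fields omitted...
mode: "ipvs"        # change from the default "" (iptables) to ipvs
ipvs:
  scheduler: "rr"   # optional: ipvs scheduling algorithm, e.g. rr, lc, sh
```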
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
Test whether ipvs was enabled successfully
# Check that the ipvs rules are in place
ipvsadm -Ln
rr indicates the round-robin scheduling algorithm.
Service type
- Service resource list:
apiVersion: v1  # version
kind: Service  # type
metadata:  # metadata
  name:  # resource name
  namespace:  # namespace
spec:
  selector:  # label selector, determines which Pods the Service proxies
    app: nginx
  type: NodePort  # Service type, specifies how the Service is accessed
  clusterIP:  # virtual service IP address
  sessionAffinity:  # session affinity; supports ClientIP and None, default None; keeps a client's requests on the same Pod
  ports:  # port information
    - port: 8080  # Service port
      protocol: TCP  # protocol
      targetPort:  # Pod port
      nodePort:  # node port
spec.type Description:
- ClusterIP: the default value. It is a virtual IP automatically assigned by the kubernetes system and can only be accessed within the cluster.
- NodePort: exposes the Service to the outside through the port on the specified Node. Through this method, you can access the Service outside the cluster.
- LoadBalancer: use an external load balancer to complete the load distribution to the service. Note that this mode requires the support of an external cloud environment.
- ExternalName: maps the Service to an external DNS name, so that a service outside the cluster can be used directly from inside the cluster.
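As an illustration of the ExternalName type (the Service name and domain below are hypothetical), such a Service simply maps an external DNS name into the cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-external   # hypothetical name
  namespace: dev
spec:
  type: ExternalName
  externalName: www.example.com  # external domain the Service resolves to
```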
Service usage
- Before using the Service, first create three Pods with a Deployment. Note that the label app=nginx-pod must be set on the Pods.
- Create a deployment.yaml file as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
        - name: nginx
          image: nginx:1.17.1
          ports:
            - containerPort: 80
Create:
kubectl apply -f deployment.yaml
View:
kubectl get deploy,rs,pod -n dev -o wide
- To make the following tests easier, modify each Pod's index.html:
kubectl exec -it pod/pc-deployment-7d7dd5499b-9qnr7 -c nginx -n dev -- /bin/sh
# inside the pod:
echo "10.224.2.34" > /usr/share/nginx/html/index.html

kubectl exec -it pod/pc-deployment-7d7dd5499b-rl67g -c nginx -n dev -- /bin/sh
# inside the pod:
echo "10.224.1.49" > /usr/share/nginx/html/index.html

kubectl exec -it pod/pc-deployment-7d7dd5499b-smpt2 -c nginx -n dev -- /bin/sh
# inside the pod:
echo "10.224.1.50" > /usr/share/nginx/html/index.html
ClusterIP type Service
- Create a service-clusterip.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97  # Service IP; if omitted, one is assigned automatically
  type: ClusterIP
  ports:
    - port: 80        # Service port
      targetPort: 80  # Pod port
Create and view a service
kubectl create -f service-clusterip.yaml
kubectl get svc -n dev -o wide
kubectl describe svc service-clusterip -n dev
View the ipvs mapping rules:
ipvsadm -Ln
The output above shows that requests to 10.97.97.97 are forwarded to three IPs. These three IPs are the Endpoints, each corresponding to one Pod; traffic is ultimately forwarded to the individual Pod IPs.
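An Endpoints object with the same name as the Service records these Pod addresses. A sketch of what it might look like here (IPs illustrative, taken from the Pods above):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: service-clusterip   # same name as the Service
  namespace: dev
subsets:
  - addresses:
      - ip: 10.224.2.34   # Pod IPs (illustrative)
      - ip: 10.224.1.49
      - ip: 10.224.1.50
    ports:
      - port: 80
        protocol: TCP
```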
Load distribution policy
- A Service distributes incoming traffic to its backend Pods. Kubernetes currently provides two load-distribution strategies:
- If none is defined, the default kube-proxy policy is used, such as random or round-robin.
- Session affinity based on client IP: all requests from the same client are forwarded to a fixed Pod, which is friendly to traditional session-based authentication. Enable this mode by adding sessionAffinity: ClientIP to the spec.
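A minimal sketch of enabling session affinity on the ClusterIP Service from earlier (only the sessionAffinity field is added relative to that manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  sessionAffinity: ClientIP   # pin each client IP to a fixed Pod
  ports:
    - port: 80        # Service port
      targetPort: 80  # Pod port
```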
Headless Service
- In some scenarios, developers may not want to use the load balancing provided by the Service and prefer to control the load-balancing policy themselves. For this case, Kubernetes provides the Headless Service, which is not assigned a ClusterIP; it can only be accessed by querying the Service's domain name.
- Create a service-headliness.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None  # set clusterIP to None to create a headless Service
  type: ClusterIP
  ports:
    - port: 80        # Service port
      targetPort: 80  # Pod port
Create and view
kubectl create -f service-headliness.yaml
kubectl get svc service-headliness -n dev -o wide
Without a ClusterIP, how can the Service be accessed inside the cluster?
- To view Service details and pod:
kubectl describe svc service-headliness -n dev
kubectl get pod -n dev
Check DNS resolution to find the Service's domain name.
Enter a pod
kubectl exec -it pc-deployment-7d7dd5499b-9qnr7 -n dev -- /bin/bash
View the DNS configuration:
cat /etc/resolv.conf  # run inside the pod
- Query through the domain name of the Service:
dig @10.96.0.10 service-headliness.dev.svc.cluster.local
NodePort type Service
- In the previous case, the IP address of the created Service can only be accessed inside the cluster. If you want the Service to be exposed for external use, you need to use another type of Service, called a NodePort type Service. The working principle of NodePort is to map the Service port to a port of the Node, and then you can access the Service through NodeIP:NodePort.
- Create a service-nodeport.yaml file as follows:
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort  # Service type is NodePort
  ports:
    - port: 80         # Service port
      targetPort: 80   # Pod port
      nodePort: 30002  # node port to bind (default range 30000-32767); assigned automatically if not specified
Create and view
kubectl create -f service-nodeport.yaml
kubectl get svc service-nodeport -n dev -o wide
Visit: 192.168.36.38:30002