1, Overview of Kubernetes
1.1 Introduction to Kubernetes
1.1.1 What is Kubernetes and what is its role
Kubernetes (K8s) is used to automate the deployment, scaling, and management of containerized applications.
Kubernetes usually works together with Docker and orchestrates clusters of multiple hosts running Docker containers.
Official website address: https://Kubernetes.io
Chinese community: https://www.kubernetes.org.cn/docs
The goal of Kubernetes is to make deploying containerized applications simple and efficient. A core feature of Kubernetes is its ability to manage containers autonomously, ensuring that the containers in the cloud platform run as the user expects.
The following are Kubernetes-related features:
- Automatic bin packing
Automatically places containers based on their resource requirements and other constraints without sacrificing availability, and mixes critical and best-effort workloads to improve resource utilization and save resources.
- Horizontal scaling
Scale applications with a simple command, through the UI, or automatically based on CPU usage.
- Automated deployment and rollback
Kubernetes progressively rolls out changes to an application or its configuration while monitoring application health to ensure it does not terminate all instances at the same time. If something goes wrong, Kubernetes rolls the change back. It takes advantage of a growing ecosystem of deployment solutions.
- Storage orchestration
Automatically mounts the storage system of your choice, whether local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder or Flocker.
- Self-healing
Restarts containers that fail, replaces and reschedules containers when a node becomes unavailable, terminates containers that do not respond to the user-defined health check, and does not advertise them to clients until they are ready to serve.
- Service discovery and load balancing
There is no need to modify the application to use an unfamiliar service discovery mechanism. Kubernetes gives containers their own IP addresses and a single DNS name for a group of containers, and can load-balance across them.
- Secret and configuration management
Deploy and update secrets and application configuration without rebuilding your images and without exposing secrets in the stack configuration.
- Batch processing
In addition to services, Kubernetes can manage batch and CI workloads and, if necessary, replace failed containers (a Job sketch follows this list).
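As a hedged illustration of the batch-processing point, the following is a minimal sketch of a Kubernetes Job manifest; the name, image and command are placeholders for illustration and are not taken from this guide. A Job runs its Pods to completion and retries failed containers.
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-job                 # hypothetical name
spec:
  completions: 1               # run the workload to completion once
  backoffLimit: 4              # retry failed Pods up to 4 times
  template:
    spec:
      containers:
      - name: pi
        image: perl            # example image; computes pi and then exits
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(200)"]
      restartPolicy: Never
It would be created like any other manifest in this guide, for example with kubectl create -f pi-job.yaml.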
What can I do with Kubernetes?
Kubernetes is a new, leading solution for distributed architectures based on container technology.
Kubernetes is an open development platform (it is non-intrusive: existing containerized applications can be migrated to Kubernetes) and a new-generation platform for supporting distributed systems (with complete cluster management capabilities).
Kubernetes can run containerized applications on clusters of physical or virtual machines. It provides a container-centric infrastructure that satisfies common requirements for running applications in production, such as:
- Multiple processes working together
- Attaching storage systems
- Distributing secrets
- Application health monitoring
- Replication of application instances
- Pod auto-scaling
- Naming and discovery
- Load balancing
- Rolling updates
- Resource monitoring
- Log access
- Scheduling applications
- Authentication and authorization
Why use Kubernetes?
The most immediate benefit of using Kubernetes is that complex systems become much easier to develop and operate.
Second, Kubernetes fully embraces the microservice architecture (the core of microservices is splitting a huge monolithic application into many small, interconnected microservices; a microservice may be backed by multiple instance replicas, and the number of replicas can be adjusted dynamically as the system load changes).
Finally, the Kubernetes system architecture has strong horizontal scaling capability.
1.1.2 Kubernetes quick start
-
Environmental preparation
-
Turn off Centos firewall
systemctl disable firewalld
systemctl stop firewalld
-
Install etcd and Kubernetes software
yum update
yum install -y etcd kubernetes
-
Start service
systemctl start etcd
systemctl start docker
If Docker fails to start, edit /etc/sysconfig/selinux, change SELINUX to disabled, reboot the machine, and then start Docker again.
systemctl start kube-apiserver
systemctl start kube-controller-manager
systemctl start kube-scheduler
systemctl start kubelet
systemctl start kube-proxy
-
-
Configuration
-
Install a local private registry
docker pull registry
docker run -di -p 5000:5000 registry
Trust the private registry
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "insecure-registries": ["192.168.65.133:5000"]
}
systemctl daemon-reload
systemctl restart docker
Visit:
http://192.168.65.133:5000/v2/_catalog
-
Tomcat configuration
mkdir -p /usr/local/k8s
cd /usr/local/k8s
-
mytomcat.rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mytomcat
spec:
  replicas: 2
  selector:
    app: mytomcat
  template:
    metadata:
      labels:
        app: mytomcat
    spec:
      containers:
      - name: mytomcat
        image: tomcat:7-jre7
        ports:
        - containerPort: 8080
kubectl create -f mytomcat.rc.yaml
-
mytomcat.svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: mytomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: mytomcat
kubectl create -f mytomcat.svc.yaml
kubectl get svc
NAME         CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   10.254.0.1       <none>        443/TCP          6m
mytomcat     10.254.119.243   <nodes>       8080:30001/TCP   14s
kubectl get pods
No resources found
Normally the output looks like the following (the Pods are not yet started; the fix is shown below)
NAME             READY   STATUS              RESTARTS   AGE
mytomcat-cqbfh   0/1     ContainerCreating   0          7s
mytomcat-tnbjb   0/1     ContainerCreating   0          7s
-
-
-
Problem solving
-
docker pull failed
-
Solution 1
1,yum install -y rhsm
2,docker search pod-infrastructure
3,docker pull docker.io/tianyebj/pod-infrastructure
4,docker tag tianyebj/pod-infrastructure 192.168.65.133:5000/pod-infrastructure
5,docker push 192.168.65.133:5000/pod-infrastructure
6,vi /etc/kubernetes/kubelet
Modify KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.65.133:5000/pod-infrastructure:latest"
7. Restart service
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler
systemctl restart kubelet
systemctl restart kube-proxy
-
Solution 2
1,docker pull docker.io/kubernetes/pause
2,docker tag docker.io/kubernetes/pause:latest 192.168.65.133:5000/google_containers/pause-amd64.3.0
3,docker push 192.168.65.133:5000/google_containers/pause-amd64.3.0
4,vi /etc/kubernetes/kubelet
KUBELET_ARGS="--pod_infra_container_image=192.168.65.133:5000/google_containers/pause-amd64.3.0"
5. Restart kubelet service
systemctl restart kubelet
-
-
External network cannot access the service
Containers created in this k8s cluster can only be reached with curl on their own node; the port exposed by the container cannot be accessed from any other host.
Solution:
1,vim /etc/sysctl.conf
2,add net.ipv4.ip_forward=1 (apply it with sysctl -p)
3,cd /usr/local/k8s
4,kubectl replace -f mytomcat.rc.yaml
5,kubectl delete svc --all
6,kubectl create -f mytomcat.svc.yaml
7. Firewall port open
systemctl start firewalld
firewall-cmd --list-ports
firewall-cmd --state
firewall-cmd --zone=public --add-port=30001/tcp --permanent
firewall-cmd --reload
systemctl stop firewalld
-
No resources found when solving kubectl get pods
1,vim /etc/kubernetes/apiserver
2. Find KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota", remove ServiceAccount, save and exit
3,systemctl restart kube-apiserver
-
-
Browse test
http://192.168.65.133:30001
1.2 Basic architecture and common terms of Kubernetes
A Kubernetes cluster contains the node agent kubelet and the Master components (APIs, scheduler, etc.), all built on top of a distributed storage system.
The following figure shows the architecture of Kubernetes
In this architecture diagram, services are divided into those running on a worker node and those that make up the cluster-level control plane.
A Kubernetes node runs the services necessary for hosting application containers and is controlled by the Master.
Docker runs on every node and is responsible for downloading images and running containers.
Kubernetes is mainly composed of the following core components:
- etcd: saves the status of the entire cluster
- apiserver: provides a unique access to resource operations, and provides mechanisms such as authentication, authorization, access control, API registration and discovery
- controller-manager: responsible for maintaining the state of the cluster, such as fault detection, automatic scaling, rolling updates, etc.
- scheduler: responsible for resource scheduling, placing Pods onto the appropriate machines according to the configured scheduling policy
- kubelet: responsible for maintaining the life cycle of containers and managing Volume (CVI) and network (CNI)
- Container runtime: responsible for image management and the real running of Pod and container (CRI)
- kube-proxy: responsible for providing Service discovery and load balancing inside the cluster
In addition to the core components, there are some recommended add-ons:
- kube-dns: provides DNS services for the entire cluster
- Ingress Controller: provides external (Internet) access to services
- Heapster: provides resource usage monitoring
- Dashboard: provides a GUI
- Federation: provides clusters across availability zones
- Fluentd-Elasticsearch: provides cluster log collection, storage and query
Kubernetes' design concept and functions are actually a layered architecture similar to Linux
- Core layer: the core function of Kubernetes, which provides API for building high-level applications externally and plug-in application execution environment internally
- Application layer: Deployment (stateless applications, stateful applications, batch tasks, cluster applications, etc.) and routing (service discovery, DNS resolution, etc.)
- Management layer: system metrics (such as infrastructure, container and network metrics), automation (such as auto-scaling and dynamic provisioning) and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
- Interface layer: the kubectl command-line tool, client SDKs and cluster Federation
- Ecosystem: the ecosystem of large-scale container cluster management and scheduling above the interface layer, which can be divided into two categories
- Kubernetes external: log, monitoring, configuration management, CI, CD, Workflow, Faas, OTS application, ChatOps, etc
- Kubernetes internal: CRI, CNI, CVI, image warehouse, Cloud Provider, cluster configuration and management, etc
1.2.1 Cluster
Cluster is a collection of computing, storage and network resources. Kubernetes uses these resources to run various container based applications
The Kubernetes Cluster consists of Masters and Nodes, on which several Kubernetes services run.
1.2.2 Master
The Master is mainly responsible for scheduling, that is, deciding where to put applications for execution.
The Master runs a Linux system, which can be a physical machine or a virtual machine
The Master is the brain of the Kubernetes Cluster; the daemon services running on it include kube-apiserver, kube-scheduler, kube-controller-manager, etcd and the Pod network.
-
API Server(kube-apiserver)
The API Server provides an HTTP/HTTPS RESTful API, namely the Kubernetes API. It is the only entry point for CRUD and other operations on all resources in Kubernetes, and it is also the entry process for cluster control.
-
Scheduler(kube-scheduler)
The Scheduler is a process responsible for resource scheduling. It decides which Node to run the Pod on
-
Controller Manager(kube-controller-manager)
Automation control center for all resource objects. The Controller Manager is responsible for managing various resources of the Cluster to ensure that the resources are in the expected state
There are many kinds of controllers, such as the replication controller, endpoints controller, namespace controller, serviceaccount controller, etc.
Different controllers manage different resources; for example, the replication controller manages the lifecycles of Deployment, StatefulSet and DaemonSet objects, and the namespace controller manages Namespace resources.
-
etcd
etcd is responsible for saving the configuration of the Kubernetes Cluster and the state of its resources. etcd quickly notifies the relevant Kubernetes components when data changes.
-
Pod network
To enable Pods to communicate with each other, the Kubernetes Cluster must deploy a Pod network; flannel is one of the options.
1.2.3 Node
Machines in the Kubernetes cluster other than the Master are called Nodes. A Node is responsible for running container applications and is managed by the Master; it monitors and reports container status and manages container lifecycles according to the Master's requirements.
Nodes also run Linux and can be physical or virtual machines.
The following key processes are running on each Node
-
kubelet
Responsible for creating and starting the containers of the Pods assigned to the Node, and for working closely with the Master to implement basic cluster management functions
-
kube-proxy
An important component implementing the communication and load-balancing mechanism of Kubernetes Services
-
Docker Engine
The Docker engine, responsible for creating and managing the node's local containers
1.2.4 Pod
Pod is the smallest unit of Kubernetes and the most important and basic concept.
Each Pod contains one or more containers. The containers of the Pod as a whole are dispatched by the Master to run on a Node
Kubernetes assigns a unique IP address to each Pod, which is called Pod IP
Multiple containers in a Pod share the Pod IP address
In Kubernetes, a Pod container can communicate directly with a Pod container on another host
1.2.5 Service
Kubernetes Service defines the way for the outside world to access a group of specific pods. The Service has its own ip and port. The Service provides load balancing.
It is also one of the core resource objects of Kubernetes. In fact, each Service is a "micro Service" of the micro Service architecture we often mention
1.2.6 Replication Controller
Replication Controller (RC for short) is one of the core concepts in the Kubernetes system. It actually defines an expected scenario, that is, it declares that the number of replicas of a Pod meets a certain expected value at any time. All RC definitions include the following parts
- Number of replicas expected by Pod (replicas)
- Label selector for filtering target Pod
- When the number of copies of a Pod is less than the expected number, the Pod template used to create a new Pod
The following is a summary of some characteristics and functions of RC
-
In most cases, we define an RC to automatically control the creation process of Pod and the number of copies
-
RC includes a complete Pod definition template
-
RC realizes automatic control of Pod copy through Label Selector mechanism
-
By changing the number of Pod copies in RC, the capacity expansion and capacity reduction functions of Pod can be realized
-
By changing the image version in the Pod template in RC, the rolling upgrade function of Pod can be realized
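As a sketch of the last two points, scaling and rolling upgrades can be driven with kubectl; the RC name and image tag below are placeholders, and kubectl rolling-update only exists on the older kubectl versions that match the RC-based setup in this guide:
# Scale the number of Pod replicas managed by an RC (hypothetical name)
kubectl scale rc mytomcat --replicas=4
# Roll the RC's Pods to a new image version (older kubectl only)
kubectl rolling-update mytomcat --image=tomcat:8-jre8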
2, Kubernetes cluster
Kubernetes is used to coordinate a highly available cluster of computers that are connected to work as a single unit.
Kubernetes automatically distributes and schedules container applications in a more efficient manner on a cluster.
The Kubernetes cluster consists of two types of resources:
Master is the scheduling node of the cluster
Nodes are the work nodes where the application actually runs
There are several ways to deploy a K8s cluster: kubeadm, minikube, and binary packages.
The first two are automated deployments that simplify the operation,
but binary package deployment is recommended, because the automated tools hide many details and leave you less aware of what happens underneath.
Deployment environment requirements
(1) One or more machines running CentOS 7.x 86_x64
(2) Hardware configuration: at least 2 GB of memory and 2 CPU cores;
(3) Each machine in the cluster can communicate with each other;
(4) Each machine in the cluster can access the external network and needs to pull the image;
(5) Disable the swap partition;
2. Binary package installation
2.1 Environment preparation and planning
-
Recommended configuration: 2-core 2G
Docker version 17.05.0-ce
Role     IP               Components
master   192.168.65.137   etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker
node01   192.168.65.138   kube-proxy, kubelet, docker
node02   192.168.65.139   kube-proxy, kubelet, docker
-
View the default firewall status (not running is displayed when it is turned off, and running is displayed when it is turned on)
firewall-cmd --state
-
Turn off firewall
systemctl stop firewalld
-
Disable firewalld startup
systemctl disable firewalld
-
Get Kubernetes binary package
kubernetes-server-linux-amd64.tar.gz
-
Upload binary packages to /opt/software
2.2 Master installation
2.2.1 Docker installation
(1) Modify configuration
vi /etc/sysconfig/selinux
SELINUX=disabled
Restart the machine:
reboot
(2) Install docker
yum install docker -y
docker -v
(3) Configure domestic image
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
(4) Install docker compose
Download (the package may already exist in the parent directory):
sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
2.2.2 etcd service
etcd is a core service of the Kubernetes cluster; install and start it before the other Kubernetes services.
-
Get binaries
etcd-v3.3.4-linux-amd64.tar.gz
-
Upload to the /opt/software directory of the master
-
Extract the archive
cd /opt/software
tar -zxvf etcd-v3.3.4-linux-amd64.tar.gz
-
Copy etcd and etcdctl files to /usr/bin
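A minimal sketch of this step, assuming the archive extracted into the standard etcd-v3.3.4-linux-amd64 directory under /opt/software:
cd /opt/software/etcd-v3.3.4-linux-amd64
cp etcd etcdctl /usr/bin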
-
Configure the systemd service file /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd
ExecStart=/usr/bin/etcd
Restart=on-failure

[Install]
WantedBy=multi-user.target
-
Start and test etcd service
mkdir -p /var/lib/etcd
systemctl daemon-reload
systemctl enable etcd.service
systemctl start etcd.service
etcdctl cluster-health
2.2.3 kube-apiserver service
Extract the archive
cd /opt/software
tar -zxvf kubernetes-server-linux-amd64.tar.gz
After extraction, copy the kube-apiserver, kube-controller-manager, kube-scheduler and kubectl binaries into the /usr/bin directory to complete their installation:
cd kubernetes/server/bin
cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin
Next, configure the kube-apiserver service.
Edit the systemd service file: vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify

[Install]
WantedBy=multi-user.target
configuration file
Create directory: mkdir /etc/kubernetes
vi /etc/kubernetes/apiserver
KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --insecure-bind-address=0.0.0.0 --insecure-port=8080 --service-cluster-ip-range=169.169.0.0/16 --service-node-port-range=1-65535 --admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.2.4 kube-controller-manager service
The kube-controller-manager service depends on the kube-apiserver service.
Configure the systemd service file: vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuration file: vi /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--master=192.168.65.137:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.2.5 kube-scheduler service
The kube-scheduler service also depends on the kube-apiserver service.
Configure the systemd service file: vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_scheduler_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Configuration file: vi /etc/kubernetes/scheduler
KUBE_scheduler_ARGS="--master=192.168.65.137:8080 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.2.6 startup
After completing the above configuration, start the services in sequence
systemctl daemon-reload
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
Check the health status of each service:
systemctl status kube-apiserver.service
systemctl status kube-controller-manager.service
systemctl status kube-scheduler.service
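Once the services are running, a quick sanity check is possible through the insecure port configured in 2.2.3; this is a hedged sketch that assumes the apiserver is reachable on 127.0.0.1:8080 as configured above:
curl http://127.0.0.1:8080/version                          # should return the Kubernetes version as JSON
kubectl -s http://127.0.0.1:8080 get componentstatuses      # summarizes scheduler/controller-manager/etcd health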
2.2.7 Node installation preparation
scp -r kubernetes 192.168.65.138:/opt/software/kubernetes
scp -r kubernetes 192.168.65.139:/opt/software/kubernetes
2.3 Node1 installation
On the Node1 node, copy the kubelet and kube-proxy binaries extracted from the archive into the /usr/bin directory in the same way (see step 2.2.7).
cd /opt/software/kubernetes/server/bin
cp kubelet kube-proxy /usr/bin
vi /etc/fstab
Comment out the swap entry:
#/dev/mapper/centos-swap swap swap defaults 0 0
Docker must be installed on Node1 in advance; refer to the Docker installation on the Master, and start Docker.
2.3.1 kubelet service
Configure the systemd service file: vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Create directory: mkdir -p /var/lib/kubelet
mkdir /etc/kubernetes
Configuration file: vi /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.65.138 --cgroup-driver=systemd --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false"
Configuration file for kubelet connection to Master Apiserver
vi /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://192.168.65.137:8080
  name: local
contexts:
- context:
    cluster: local
  name: mycontext
current-context: mycontext
2.3.2 Kube proxy service
The kube-proxy service depends on the network service, so you must ensure that the network service is working. If the network service fails to start, common fixes are:
1. Conflict with the NetworkManager service: stop it with service NetworkManager stop and disable it with chkconfig NetworkManager off, then restart the network service.
2. The MAC address in the interface profile does not match: check the MAC address with ip addr (or ifconfig) and change HWADDR in /etc/sysconfig/network-scripts/ifcfg-xxx to the actual MAC address.
3. Enable the NetworkManager-wait-online service at boot: systemctl enable NetworkManager-wait-online.service.
4. Check /etc/sysconfig/network-scripts and delete the configuration files of unrelated NICs, keeping only the single ifcfg- file that is needed, to avoid unnecessary interference.
Configure the systemd service file: vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.service
Requires=network.service

[Service]
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
Configuration file: vi /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--master=http://192.168.65.137:8080 --hostname-override=192.168.65.138 --logtostderr=true --log-dir=/var/log/kubernetes --v=2"
2.3.3 startup
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
2.4 Node2 installation
Refer to Node1 node installation and pay attention to modifying the IP
2.5 health check and example test
-
View cluster status
kubectl get nodes
NAME             STATUS   ROLES    AGE     VERSION
192.168.65.138   Ready    <none>   8m40s   v1.19.7
192.168.65.139   Ready    <none>   2m8s    v1.19.7
-
View master cluster component status
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok
-
nginx-rc.yaml
mkdir /usr/local/k8s
cd /usr/local/k8s
vi nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
kubectl create -f nginx-rc.yaml
-
nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33333
  selector:
    app: nginx
kubectl create -f nginx-svc.yaml
-
View Pod
kubectl get pods
No resources found in default namespace.
Problem solution: this is caused by Docker failing to pull the pause image. Solve it by building a private registry:
docker pull registry
docker run -di --name=registry -p 5000:5000 registry
Modify daemon.json:
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"],
  "insecure-registries": ["192.168.65.137:5000"]
}
docker pull docker.io/kubernetes/pause
docker tag docker.io/kubernetes/pause:latest 192.168.65.137:5000/google_containers/pause-amd64.3.0
docker push 192.168.65.137:5000/google_containers/pause-amd64.3.0
Modify the kubelet configuration on the node:
vi /etc/kubernetes/kubelet
KUBELET_ARGS="--pod_infra_container_image=192.168.65.137:5000/google_containers/pause-amd64.3.0 --kubeconfig=/etc/kubernetes/kubeconfig --hostname-override=192.168.65.138 --cgroup-driver=systemd --logtostderr=false --log-dir=/var/log/kubernetes --v=2 --fail-swap-on=false"
Restart the kubelet service:
systemctl restart kubelet
kubectl replace -f nginx-rc.yaml
kubectl delete svc --all
kubectl create -f nginx-svc.yaml
kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-5wqlx   1/1     Running   0          2m21s
nginx-c54lc   1/1     Running   0          2m21s
nginx-n9pbm   1/1     Running   0          2m21s
3. kubeadm installation
3.1 Environment preparation and planning
-
Recommended configuration: 2-core 2G
Docker version 17.05.0-ce
Role        IP               Components
k8smaster   192.168.65.133   etcd, kube-apiserver, kube-controller-manager, kube-scheduler, docker
k8snode1    192.168.65.136   kube-proxy, kubelet, docker
k8snode2    192.168.65.140   kube-proxy, kubelet, docker
-
View the default firewall status (not running is displayed when it is turned off, and running is displayed when it is turned on)
firewall-cmd --state
-
Turn off firewall
systemctl stop firewalld
systemctl disable firewalld
-
Close selinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
-
Turn off swap (k8s disable virtual memory to improve performance)
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
swapoff -a                            # temporary
-
Add hosts in the master
cat >> /etc/hosts << EOF
192.168.65.133 k8smaster
192.168.65.136 k8snode1
192.168.65.140 k8snode2
EOF
-
Set bridge parameters
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply the settings
-
time synchronization
yum install ntpdate -y
ntpdate time.windows.com
Install docker/kubeadm/kubelet/kubectl on all server nodes
The default container running environment of Kubernetes is Docker, so Docker needs to be installed first;
3.2 installing Docker
(1) Modify configuration
vi /etc/sysconfig/selinux
SELINUX=disabled
Restart the machine:
reboot
(2) Install docker
yum install docker -y
docker -v
(3) Configure domestic image
vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
(4) Install docker compose
Download (the package may already exist in the parent directory):
sudo curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version
(5) Start docker
systemctl start docker
systemctl enable docker
3.3 Installing kubeadm, kubelet and kubectl
(1) Add an alicloud YUM source for k8s, and then download the relevant components of k8s to find the download source;
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
(2) Install kubeadm, kubelet, and kubectl
kubelet: runs on all nodes of the cluster and is responsible for starting Pods and containers;
kubeadm: a tool for initializing the cluster;
kubectl: the Kubernetes command-line tool. With kubectl you can deploy and manage applications, view resources, and create, delete and update components;
yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y
systemctl enable kubelet.service
Check the installation:
yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl
View the installed version:
kubelet --version
(3) Restart the machine
(4) Deploy the Kubernetes Master master node (executed on the master node)
kubeadm init --apiserver-advertise-address=192.168.65.133 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
Output results:
0402 05:10:40.940464    2510 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: pids
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.65.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.65.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.65.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.503398 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: zp0v2c.iuh4850t3daaj1tn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.65.133:6443 --token zp0v2c.iuh4850t3daaj1tn \
    --discovery-token-ca-cert-hash sha256:c4d3486feabd40009801526da89b2d227e4a130f692ec034746648c0ceab626e
explain:
The service CIDR must not overlap or conflict with the Pod CIDR or the local network. In general, choose a private address range that is used by neither. For example, if the Pod CIDR is 10.244.0.0/16, the service CIDR can be 10.96.0.0/12; these ranges do not overlap or conflict.
(5) Execute on the Master node
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
(6) The Node node joins the Kubernetes cluster (executed on the Node node)
kubeadm join 192.168.65.133:6443 --token zp0v2c.iuh4850t3daaj1tn \
    --discovery-token-ca-cert-hash sha256:c4d3486feabd40009801526da89b2d227e4a130f692ec034746648c0ceab626e
Output results:
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: missing optional cgroups: pids
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
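If the original join token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master with the standard kubeadm subcommand below; this is generic kubeadm usage, not output captured from this cluster:
kubeadm token create --print-join-command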
(7) View Nodes
kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
k8smaster   NotReady   master   5m56s   v1.19.4
k8snode1    NotReady   <none>   56s     v1.19.4
k8snode2    NotReady   <none>   5s      v1.19.4
All nodes are in the NotReady state at this point.
3.4 deploying network plug-ins
(1) Download the kube-flannel.yml file
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
(The file is also available at pkg/kube-flannel.yml.)
(2) Apply the kube-flannel.yml file to create the runtime containers (on the master node)
kubectl apply -f kube-flannel.yml
Output results:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
(3) Restart (primary node)
(4) View Nodes
kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
k8smaster   Ready    master   39m   v1.19.4
k8snode1    Ready    <none>   34m   v1.19.4
k8snode2    Ready    <none>   33m   v1.19.4
(5) View Pod status
kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-snghb            1/1     Running   0          39m
coredns-6d56c8448f-tjgsd            1/1     Running   0          39m
etcd-k8smaster                      1/1     Running   1          33s
kube-apiserver-k8smaster            1/1     Running   1          58s
kube-controller-manager-k8smaster   1/1     Running   1          31s
kube-flannel-ds-czlv9               1/1     Running   0          26m
kube-flannel-ds-j4v7k               1/1     Running   0          26m
kube-flannel-ds-j9vhb               1/1     Running   0          26m
kube-proxy-cwn25                    1/1     Running   0          34m
kube-proxy-w47mf                    1/1     Running   0          34m
kube-proxy-x72bn                    1/1     Running   0          39m
kube-scheduler-k8smaster            1/1     Running   1          54s
(6) View which node the Pod is distributed on
kubectl get pods -n kube-system -o wide
3, Kubernetes common commands
Get Pods in the current namespace:
kubectl get pods
Get all resources:
kubectl get all
Create resources from a file:
kubectl create -f kubernate-pvc.yaml
kubectl apply -f kubernate-pvc.yaml
(The two commands behave the same on first creation, but apply can be run repeatedly to patch the resource.)
Create a Deployment:
kubectl create deployment [deploy_name] --image=image
Expose a Service:
kubectl expose deployment [deploy_name] --port=<container port> --type=NodePort
Delete a Pod:
kubectl delete pods/test-pd
or
kubectl delete -f rc-nginx.yaml
Show which node a Pod runs on:
kubectl get pod/test-pd -o wide
View container logs:
kubectl logs nginx-8586cf59-mwwtc
Open a terminal in a container:
kubectl exec -it nginx-8586cf59-mwwtc /bin/bash
If a Pod has multiple containers, use the --container or -c parameter. For example, if a Pod named my-pod has two containers named main-app and helper-app, the following command opens a shell in the main-app container:
kubectl exec -it my-pod --container main-app -- /bin/bash
Show Pod details:
kubectl describe pod/mysql-m8rbl
View Services:
kubectl get svc
Dry run:
kubectl create deployment [deploy_name] --image=image --dry-run -o yaml
kubectl create deployment [deploy_name] --image=image --dry-run -o json
(Prints the generated yaml or json.)
kubectl create deployment [deploy_name] --image=image --dry-run -o yaml > deploy.yaml
kubectl create deployment [deploy_name] --image=image --dry-run -o json > deploy.json
(Saves the printed yaml or json to a file.)
4, Kubernetes core components
-
apiserver
The API Server is the only entry point for all requests; it handles all operations and records the information in the etcd database. etcd has a built-in service-discovery mechanism and is typically deployed as a three-node cluster to keep three replicas of the data.
-
scheduler
The scheduler allocates resources: it checks the resource status of the worker nodes, decides on which node a Pod should be created, and hands the instruction to the API Server.
-
controller-manager
The controller-manager manages Pods. Pods can be stateful or stateless; it is best to put only one container in a Pod.
Summary:
The API Server dispatches tasks to the kubelet on the worker nodes for execution. Clients access Pods through kube-proxy. What runs inside a Pod is not necessarily Docker; other container runtimes are possible. A Pod usually contains only one container, with exceptions such as ELK, where an extra logstash container is placed in the Pod to collect logs.
1.Pod details
Pod is the most important concept of Kubernetes. Each pod has a special Pause container called "root container".
The image corresponding to the Pause container belongs to the Kubernetes platform. In addition to the Pause container, each Pod also contains one or more closely related business containers
-
Pod vs app
Each Pod is an instance of an application with a dedicated IP address
-
Pod vs container
A Pod can have multiple containers that share network and storage resources with each other. A Pause container in each Pod saves the status of all containers. By managing Pause containers, you can achieve the effect of managing all containers in the Pod
-
Pod vs node
Containers in the same Pod are always dispatched to the same Node. The communication between different nodes is realized based on virtual layer-2 network technology
-
Pod vs Pod
Ordinary Pod and static Pod
1.1. Definition of pod
The following is the complete content of Pod defined in yaml file
apiVersion: v1                                  # API version
kind: Pod                                       # Resource type: Pod
metadata:                                       # Metadata
  name: string                                  # Pod name
  namespace: string                             # Namespace the Pod belongs to
  labels:                                       # List of labels
  - name: string                                # Label name
  annotations:                                  # List of custom annotations
  - name: string                                # Annotation name
spec:                                           # Detailed definition of the containers in the Pod
  containers:                                   # List of containers in the Pod; there can be several
  - name: string                                # Container name
    image: string                               # Container image
    imagePullPolicy: [Always|Never|IfNotPresent]   # Image pull policy; the default Always pulls the image on every start
    command: [string]                           # Container startup command list (if not set, the image's own command is used)
    args: [string]                              # Startup argument list
    workingDir: string                          # Working directory of the container
    volumeMounts:                               # Storage volumes mounted inside the container
    - name: string
      mountPath: string                         # Absolute path where the volume is mounted inside the container
      readOnly: boolean                         # Defaults to read/write
    ports:                                      # List of ports the container needs to expose
    - name: string
      containerPort: int                        # Port the container exposes
      hostPort: int                             # Host port the exposed container port is mapped to (when hostPort is set, the same host cannot run a second replica of the container)
      protocol: string                          # TCP or UDP; the default is TCP
    env:                                        # Environment variables to set before the container runs
    - name: string
      value: string
    resources:
      limits:                                   # Resource limits: the maximum resources the container may use
        cpu: string
        memory: string
      requests:                                 # Resource requests: the initial resources available when the container starts
        cpu: string
        memory: string
    livenessProbe:                              # Health-check settings for the containers in the Pod
      exec:
        command: [string]                       # exec mode requires a command or script
      httpGet:                                  # Check health via an HTTP GET request
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:                                # Check health via a TCP socket
        port: number
      initialDelaySeconds: 0                    # Delay before the first check
      timeoutSeconds: 0                         # Check timeout
      periodSeconds: 0                          # Check interval
      successThreshold: 0
      failureThreshold: 0
    securityContext:                            # Security settings
      privileged: false
  restartPolicy: [Always|Never|OnFailure]       # Restart policy; the default is Always
  nodeSelector: object                          # Node selection: the Pod is scheduled onto nodes carrying these labels, given as key: value pairs
  imagePullSecrets:
  - name: string
  hostNetwork: false                            # Whether to use host network mode instead of the Docker bridge; the default is false
  volumes:                                      # List of shared storage volumes defined on this Pod
  - name: string
    emptyDir: {}                                # Temporary, initially empty directory whose lifetime matches the Pod
    hostPath:                                   # Mounts a directory of the Pod's host into the container
      path: string                              # Directory location on the host
    secret:                                     # Storage volume of type Secret
      secretName: string
      items:
      - key: string
        path: string
    configMap:                                  # Storage volume of type ConfigMap
      name: string
      items:
      - key: string
        path: string
1.2. Basic usage of pod
The requirements for running containers in Kubernetes are that the main program of the container should always run in the foreground, not in the background.
The application needs to be transformed into a foreground operation mode
If the entry command of the Docker image we build starts a background process, the kubelet will consider the Pod finished shortly after creating it and destroy the Pod immediately (see the sketch below).
If such a Pod is managed by an RC, creation and destruction will then repeat in an endless loop.
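As a hedged sketch of this point (the Pod name is illustrative), the Pod spec can override the container command so that the main process stays in the foreground; for the official nginx image, "daemon off;" keeps the server from daemonizing:
apiVersion: v1
kind: Pod
metadata:
  name: fg-nginx                # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    # keep nginx in the foreground so the kubelet sees a long-running main process
    command: ["nginx", "-g", "daemon off;"]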
Pod can be composed of one or more containers
-
Pod example consisting of one container
#Pod consisting of one container
apiVersion: v1
kind: Pod
metadata:
  name: mytomcat
  labels:
    name: mytomcat
spec:
  containers:
  - name: mytomcat
    image: tomcat
    ports:
    - containerPort: 8000
-
Pod example consisting of two tightly coupled containers
#Two tightly coupled containers
apiVersion: v1
kind: Pod
metadata:
  name: myweb
  labels:
    name: tomcat-redis
spec:
  containers:
  - name: tomcat
    image: tomcat
    ports:
    - containerPort: 8080
  - name: redis
    image: redis
    ports:
    - containerPort: 6379
-
Create
kubectl create -f xxx.yaml
-
View
kubectl get pod/po <Pod_name>
kubectl get pod/po <Pod_name> -o wide
kubectl describe pod/po <Pod_name>
-
Delete
kubectl delete -f xxx.yaml
kubectl delete pod [pod_name]
kubectl delete pod --all
1.3.Pod classification
There are two types of Pod
-
Ordinary Pod
Once an ordinary Pod is created, it is stored in etcd and then scheduled onto a specific Node by the Kubernetes Master and bound to it; the kubelet on that Node then instantiates the Pod as a group of related Docker containers and starts them.
By default, when a container in the Pod stops, Kubernetes will automatically detect the problem and restart all containers in the Pod. If the Node where the Pod is located goes down, all pods on the Node will be rescheduled to other nodes
-
Static Pod
Static pods are managed by kubelet and only exist on specific nodes. They cannot be managed through API Server, and cannot be associated with ReplicationController, Deployment, or DaemonSet. Kubelet also cannot perform health checks on them
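A brief sketch of how static Pods are commonly configured (the path is the conventional default used by kubeadm; confirm it against the kubelet configuration actually in use): the kubelet watches a manifest directory and runs any Pod YAML placed there itself, without going through the API Server.
# kubelet startup argument (older flag style, matching this guide's configuration files)
--pod-manifest-path=/etc/kubernetes/manifests
# any Pod manifest copied into that directory is started directly by the kubelet
cp static-web.yaml /etc/kubernetes/manifests/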
1.4.Pod lifecycle and restart policy
-
Status of Pod
Status      Explanation
Pending     The API Server has created the Pod, but the images of one or more containers in the Pod have not been created yet, including the image download process
Running     All containers in the Pod have been created, and at least one container is running, starting or restarting
Completed   All containers in the Pod have exited successfully and will not be restarted
Failed      All containers in the Pod have exited, and at least one container exited in a failed state
Unknown     The Pod status cannot be obtained for some reason, for example poor network communication
-
Pod restart policy
The restart strategy of Pod includes Always, OnFailure and Never. The default value is Always
Restart policy   Explanation
Always           When a container terminates, the kubelet automatically restarts it
OnFailure        When a container terminates with a non-zero exit code, the kubelet automatically restarts it
Never            The kubelet never restarts the container, regardless of its state
-
Common state transitions
Containers in Pod   Current status   Event                               Result (Always)   Result (OnFailure)   Result (Never)
One container       Running          Container exited successfully       Running           Succeeded            Succeeded
One container       Running          Container exited with failure       Running           Running              Failed
Two containers      Running          One container exited with failure   Running           Running              Running
Two containers      Running          Container killed by OOM             Running           Running              Failed
1.5.Pod resource configuration
Each Pod can set a quota for the computing resources on the server it can use. There are two types of computing resources in kubernetes that can set a quota: CPU and Memory. The resource unit of CPU is the number of CPUs, which is an absolute value rather than a relative value.
Memory configuration is also an absolute value. Its unit is memory bytes
In kubernetes, the following two parameters need to be set to limit the quota of a computing resource
Requests: the minimum amount of the resource requested; the system must provide it.
Limits: the maximum amount of the resource allowed; it cannot be exceeded. When a container tries to use more than this amount, it may be killed by Kubernetes and restarted.
spec:
  containers:
  - name: db
    image: mysql
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
The above manifest states that the MySQL container requests at least 0.25 CPU and 64 MiB of memory, and that the quota it may use at runtime is 0.5 CPU and 128 MiB of memory.
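To see how requests and limits add up on a node, the standard kubectl inspection commands can be used; the node name below is a placeholder, and kubectl top only works when a metrics add-on such as Heapster or metrics-server is installed:
kubectl describe node 192.168.65.138   # the "Allocated resources" section shows CPU/memory requests and limits
kubectl top pod                        # per-Pod usage, only available with a metrics add-on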
2.Label details
Label is another core concept in kubernetes system.
A Label is a key value pair of key value, where key and value are specified by the user
Labels can be attached to various resource objects, such as Node, Pod, Service and RC. A resource object can define any number of labels
The same Label can also be added to any number of resource objects. The Label is usually determined when the resource object is defined, and can also be dynamically added or deleted after the object is created
The most common use of a Label is to add labels to an object via the metadata.labels field and to select objects via the spec.selector field.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
----------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 33333
  selector:
    app: nginx
Label is attached to various resource objects in the kubernetes cluster for the purpose of grouping and managing these resource objects. The core of grouping management is the Label Selector
Both Label and Label Selector cannot be defined separately. They must be attached to the definition files of some resource objects. Generally, they are attached to the resource definition files of RC and Service
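A few hedged command-line examples of working with labels and label selectors (the resource names are placeholders):
kubectl label pod mytomcat-cqbfh env=dev        # add a label to an existing Pod
kubectl get pods -l app=nginx                   # list Pods whose app label equals nginx
kubectl get pods -l 'app in (nginx,mytomcat)'   # set-based selector
kubectl get pods --show-labels                  # display a LABELS column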
3.Replication Controller details
Replication Controller (RC) is one of the core concepts in kubernetes system
After we define an RC and submit it to the kubernetes cluster, the Controller Manager component on the Master node will be notified to regularly check the surviving pods in the system and ensure that the number of target Pod instances is just equal to the expected value of the RC. If there are too many or too few pods running, the system will stop or create some pods. In addition, we can also modify the number of RC replicas to achieve the dynamic scaling function of Pod
kubectl scale rc nginx --replicas=5
Since the Replication Controller has the same name as the module Replication Controller in the kubernetes code, it was upgraded to another new concept Replica Sets in kubernetes v1.2, officially interpreted as the next generation RC. The difference between Replica Sets and RC is that Replica Sets supports set based label selectors, while RC only supports equation based label selectors.
We seldom use Replica Sets alone. It is mainly used by Deployment, a higher-level resource object, to form a complete set of scheduling mechanisms for creating, deleting and updating pods
It is better not to create Pods directly instead of through an RC, because the Replication Controller manages Pod replicas via the RC, automatically creating, supplementing, replacing and deleting them. This improves the application's fault tolerance and reduces the losses caused by unexpected events such as node crashes.
Even if the application has only one Pod copy, it is strongly recommended to use RC to define Pod
4.Replica Set details
ReplicaSet is not fundamentally different from Replication Controller except for the name; ReplicaSet supports set-based selectors, while ReplicationController only supports equality-based selectors.
kubernetes officials strongly recommend avoiding using ReplicaSet directly and creating RS and Pod through Deployment
Since ReplicaSet is an alternative to ReplicationController, its usage is basically the same. The only difference is that ReplicaSet supports a set of selector s
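A minimal ReplicaSet sketch showing the set-based selector that RC lacks; the names and image are illustrative only:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend-rs
spec:
  replicas: 3
  selector:
    matchExpressions:                  # set-based selector, not supported by RC
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx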
5.Deployment details
Deployment is a new concept introduced in Kubernetes v1.2 to better solve the problem of orchestrating Pods. Internally, a Deployment uses a ReplicaSet for its implementation.
The definition of Deployment is very similar to that of ReplicaSet, except that the API declaration is different from the Kind type
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      labels:
        app: app-demo
        tier: frontend
    spec:
      containers:
      - name: tomcat-demo
        image: tomcat
        ports:
        - containerPort: 8080
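Typical commands for working with the Deployment above; these are standard kubectl sub-commands, and the image tag used for the update is a placeholder:
kubectl create -f frontend-deployment.yaml
kubectl get deployments
kubectl rollout status deployment/frontend                    # watch the rollout progress
kubectl set image deployment/frontend tomcat-demo=tomcat:9    # trigger a rolling update
kubectl rollout undo deployment/frontend                      # roll back to the previous revision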
6.Horizontal Pod Autoscaler
Like RC and Deployment, Horizontal Pod Autoscaler(Pod horizontal expansion referred to as HPA) is also a kubernetes resource object.
The principle of HPA is to track and analyze the load changes of all target pods controlled by RC to determine whether it is necessary to adjust the number of copies of the target Pod
Kubernetes provides manual and automatic modes for scaling Pods. In manual mode, the number of Pod replicas for a Deployment/RC is set with the kubectl scale command. In automatic mode, a range for the number of Pod replicas is configured based on a performance metric or a user-defined business metric, and the system automatically adjusts the replica count within this range as the metric changes.
-
Manual capacity expansion and reduction
kubectl scale deployment frontend --replicas 1
-
Automatic capacity expansion and reduction
The HPA controller, driven by the Master's kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period (default 30s), periodically checks the CPU utilization of the target Pods and, when the conditions are met, adjusts the number of Pod replicas in the RC or Deployment to match the user-defined target average Pod CPU utilization.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
--------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - port: 80
  selector:
    app: nginx
--------------------------------------
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: app/v1beta1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
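The same HPA can also be created imperatively; a sketch equivalent to the manifest above:
kubectl autoscale deployment nginx-deployment --min=1 --max=10 --cpu-percent=50
kubectl get hpa        # shows current/target CPU utilization and the replica count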
7.Volume details
Volume is a shared directory in a Pod that can be accessed by multiple containers
A Kubernetes Volume is defined on the Pod and is mounted to a specific directory by one or more containers in that Pod
The life cycle of Volume is the same as that of Pod, but it is not related to the life cycle of container. When the container is terminated or restarted, the data in Volume will not be lost.
To use a Volume, the Pod needs to specify the type and content of the Volume (the spec.volumes field) and the location where it is mapped into the container (the spec.containers[].volumeMounts field)
kubernetes supports many types of Volume, including: emptyDir, hostPath, gcePersistentDisk, awsElasticBlockStore, nfs, iscsi, flocker, glusterfs, rbd, cephfs, gitRepo, secret, persistentVolumeClaim (see the sketch after the nfs example below), downwardAPI, azureFileVolume, azureDisk, vsphereVolume, Quobyte, PortworxVolume, ScaleIO
-
emptyDir
Volumes of EmptyDir type are created when pods are scheduled to a host machine, and containers in the same pod can read and write the same file in EmptyDir. Once the pod leaves the host, the data in EmptyDir will be permanently deleted. So at present, EmptyDir volume is mainly used as temporary space, such as the temporary directory required by the Web server to write logs or tmp files.
yaml example is as follows
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: docker.io/nazarpc/webserver
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
-
hostPath
A hostPath volume gives the container access to a specified directory on the current host. For example, a container that needs to access the Docker system directory can mount /var/lib/docker as a hostPath volume, and a container that runs cAdvisor can mount /dev/cgroups as a hostPath volume
Unlike emptyDir, when the Pod leaves the host the data in the hostPath directory is not deleted, but the data does not migrate with the Pod to another host
Therefore, note that since the file system structure and content on each host are not necessarily the same, the same Pod's hostPath may behave differently on different hosts
yaml example is as follows
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: docker.io/nazarpc/webserver
    name: test-container
    #Specify the mount path in the container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  #Specify the provided storage volume
  volumes:
  - name: test-volume
    #Directory on host
    hostPath:
      # directory location on host
      path: /data
-
nfs
A volume of type nfs allows an existing NFS share (network storage) to be mounted by the containers in the same Pod
yaml example is as follows
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  selector:
    matchLabels:
      app: redis
  revisionHistoryLimit: 2
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      #Apply image
      - image: redis
        name: redis
        imagePullPolicy: IfNotPresent
        #Internal port of application
        ports:
        - containerPort: 6379
          name: redis6379
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        - name: REDIS_PASSWORD
          value: "redis"
        #Persistent mount location in docker
        volumeMounts:
        - name: redis-persistent-storage
          mountPath: /data   # container mount path (assumed; redis stores its data under /data)
      volumes:
      #Directory on host
      - name: redis-persistent-storage
        nfs:
          path: /k8s-nfs/redis/data
          server: 192.168.126.112
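The type list above also mentions persistentVolumeClaim. As a minimal sketch (the capacity, names and NFS server address are illustrative assumptions), an NFS-backed PersistentVolume plus a claim that a Pod can mount would look like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /k8s-nfs/data
    server: 192.168.126.112
-----------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

A Pod would then reference the claim with a volume of type persistentVolumeClaim (claimName: nfs-pvc) instead of embedding the NFS details directly in the Pod spec.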
8.Namespace details
Namespaces are used in many cases to implement resource isolation between multiple users. Resource objects in the cluster are allocated to different Namespaces to form logical groups, so that different groups can be managed separately while sharing the resources of the whole cluster
After the kubernetes cluster starts, a Namespace named "default" is created. If no Namespace is explicitly specified, the Pods, RCs and Services created by the user are placed in the default Namespace
-
Namespace creation
apiVersion: v1
kind: Namespace
metadata:
  name: development
-----------------
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: development
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
-
Namespace view
kubectl get pods --namespace=development
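For reference, the same Namespace can also be created imperatively, and it can be set as the default for the current kubectl context so that --namespace does not need to be repeated (a sketch; the context is whatever your kubeconfig currently uses):

# create the namespace without a yaml file
kubectl create namespace development
# make it the default namespace for the current context
kubectl config set-context --current --namespace=development
# now this lists pods in "development" without the --namespace flag
kubectl get pods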
9.Service details
Service is the core concept of kubernetes. By creating a service, you can provide a unified entry address for a group of container applications with the same functions, and distribute the request load to each container application at the back end
9.1. Definition of service
Service definition file in yaml format
apiVersion: v1
kind: Service
metadata:
  name: string
  namespace: string
  labels:
  - name: string
  annotations:
  - name: string
spec:
  selector: []
  type: string
  clusterIP: string
  sessionAffinity: string
  ports:
  - name: string
    protocol: string
    port: int
    targetPort: int
    nodePort: int
status:
  loadBalancer:
    ingress:
      ip: string
      hostname: string
Attribute name | Value type | Required | Value description |
---|---|---|---|
version | string | Required | v1 |
kind | string | Required | Service |
metadata | object | Required | metadata |
metadata.name | string | Required | Service name |
metadata.namespace | string | Required | Namespace; defaults to default |
metadata.labels[] | list | | Custom label attribute list |
metadata.annotation[] | list | | Custom annotation attribute list |
spec | object | Required | Detailed description |
spec.selector[] | list | Required | Label Selector configuration, which selects the Pod with the specified label label as the management range |
spec.type | string | Required | Type of Service, which specifies the access method of the Service. The default value is ClusterIP. The value range is as follows: ClusterIP, the virtual Service IP, used for Pod access within the k8s cluster; kube-proxy on the Node forwards traffic through the iptables rules it sets up. NodePort, which uses a port of the host; external clients that can reach the Node can access the Service through the Node's IP address and port. LoadBalancer, which uses an external load balancer to distribute the load to the Service; the spec.status.loadBalancer field specifies the IP address of the external load balancer, and both nodePort and clusterIP are also defined, for public cloud environments |
spec.clusterIP | string | | The IP address of the virtual service. When type=ClusterIP, if it is not specified the system assigns one automatically, or it can be specified manually. When type=LoadBalancer it must be specified |
spec.sessionAffinity | string | | Whether to support sessions. Can be set to ClientIP, which means client requests from the same source IP address are forwarded to the same backend Pod. Empty by default (see the example after this table) |
spec.ports[] | list | | List of ports exposed by the Service |
spec.ports[].name | string | | Port name |
spec.ports[].protocol | string | | Port protocol; TCP and UDP are supported, the default is TCP |
spec.ports[].port | int | | Port number the Service listens on |
spec.ports[].targetPort | int | | Port number on the backend Pod that requests are forwarded to |
spec.ports[].nodePort | int | | When spec.type=NodePort, the port number mapped on the physical machine |
status | object | | When spec.type=LoadBalancer, sets the address of the external load balancer, for public cloud environments |
status.loadBalancer | object | | External load balancer |
status.loadBalancer.ingress | object | | External load balancer |
status.loadBalancer.ingress.ip | string | | IP address of the external load balancer |
status.loadBalancer.ingress.hostname | string | | Hostname of the external load balancer |
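As referenced in the sessionAffinity row above, a minimal sketch of a Service that pins clients to a backend Pod by source IP might look like this (the app label and ports are assumptions for illustration):

apiVersion: v1
kind: Service
metadata:
  name: mywebapp-sticky
spec:
  selector:
    app: mywebapp
  # requests from the same client IP are always forwarded to the same backend Pod
  sessionAffinity: ClientIP
  ports:
  - port: 8080
    targetPort: 8080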
9.2. Basic usage of service
Generally speaking, an application that provides services to the outside needs some mechanism to expose them; for a container application the simplest way is TCP/IP, i.e. listening on an IP address and port number
Create a service with basic functions
apiVersion: v1
kind: ReplicationController
metadata:
  name: mywebapp
spec:
  replicas: 2
  template:
    metadata:
      name: mywebapp
      labels:
        app: mywebapp
    spec:
      containers:
      - name: mywebapp
        image: tomcat
        ports:
        - containerPort: 8080
We can use kubectl get pods -l app=mywebapp -o yaml | grep podIP to obtain a Pod's IP address and port and access the Tomcat service directly, but accessing an application through the Pod's IP address and port is unreliable: if the Node where the Pod runs fails, the Pod is rescheduled by kubernetes to another Node and its IP address changes.
We can define the Service through the configuration file and create it through kubectl create. In this way, we can access the backend Pod through the Service address
apiVersion: v1
kind: Service
metadata:
  name: mywebapp-service
spec:
  ports:
  - port: 8081
    targetPort: 8080
  selector:
    app: mywebapp
9.2.1. Multi port Service
Sometimes a container may provide services on multiple ports. The Service definition can map multiple ports to multiple application services (when more than one port is defined, each port must be given a name)
apiVersion: v1
kind: Service
metadata:
  name: mywebapp-service
spec:
  ports:
  - port: 8080
    targetPort: 8080
    name: web
  - port: 8005
    targetPort: 8005
    name: management
  selector:
    app: mywebapp
9.2.2. External Service
In some special environments, the application system needs to connect an external database as a back-end Service, or use a Service in another cluster or Namespace as a back-end Service. This can be achieved by creating a Service without a Label Selector
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
--------------------
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service
subsets:
- addresses:
  - ip: 10.254.74.13
  ports:
  - port: 8080
5, Kubernetes deploys containerized applications
Docker application --> deploy a Java (Spring Boot) program in Docker
5.1 deploy nginx
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-6799fc88d8-4jw88   1/1     Running   0          2m54s
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        53m
service/nginx        NodePort    10.100.253.65   <none>        80:30948/TCP   6s
http://192.168.65.133:30948/
5.2 deploying Tomcat
kubectl create deployment tomcat --image=tomcat kubectl expose deployment tomcat --port=8080 --type=NodePort
5.3 deploy microservices (springboot program)
1. Package the project (jar or war) -- tools such as git, maven and jenkins can be used
2. Write a Dockerfile and build the image
3. kubectl create deployment nginx --image=<your image>
4. The springboot application is now deployed, running as a docker container inside a pod
(1) Custom JDK image (on the Node nodes; it can be built on all three nodes, or built once and distributed with docker save / docker load)
vi /opt/Dockerfile
FROM centos:latest
MAINTAINER zhuyan
ADD jdk-8u171-linux-x64.tar.gz /usr/local/java
ENV JAVA_HOME /usr/local/java/jdk1.8.0_171
ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
ENV PATH $PATH:$JAVA_HOME/bin
CMD java -version
Build image: docker build -t jdk1.8.0_171 .
Run image: docker run -di image id (or image name)
(2) Build the springboot project image (on the Node nodes; it can be built on all three nodes, or built once and distributed with docker save / docker load)
mkdir -p /opt/springboot
vi /opt/springboot/Dockerfile
FROM jdk1.8.0_171
MAINTAINER zhuyan
ADD 38-springboot-k8s-1.0.0.jar /opt
RUN chmod +x /opt/38-springboot-k8s-1.0.0.jar
CMD java -jar /opt/38-springboot-k8s-1.0.0.jar
Build image: docker build -t 38-springboot-k8s-1.0.0-jar .
(3) Dry-run test (master node)
--Method 1: print yaml
kubectl create deployment springboot-k8s --image=38-springboot-k8s-1.0.0-jar --dry-run -o yaml
--Method 2: print json
kubectl create deployment springboot-k8s --image=38-springboot-k8s-1.0.0-jar --dry-run -o json
--Method 3: print yaml and save it to deploy.yaml
kubectl create deployment springboot-k8s --image=38-springboot-k8s-1.0.0-jar --dry-run -o yaml > deploy.yaml
--Method 4: print json and save it to deploy.json
kubectl create deployment springboot-k8s --image=38-springboot-k8s-1.0.0-jar --dry-run -o json > deploy.json
(4) Modify the deploy.yaml file (master node)
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: springboot-k8s
  name: springboot-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app: springboot-k8s
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: springboot-k8s
    spec:
      containers:
      - image: 38-springboot-k8s-1.0.0-jar
        imagePullPolicy: Never
        name: 38-springboot-k8s-1-0-0-jar-2678v
        resources: {}
status: {}

Add imagePullPolicy: Never to change the image pull policy to Never (use the local image).
(5) yaml file deployment (master node)
kubectl apply -f deploy.yaml
Equivalent to (omitting steps (3) and (4)):
kubectl create deployment springboot-k8s --image=38-springboot-k8s-1.0.0-jar
(6) View information
--View detailed pod information
kubectl describe pods springboot-k8s-699cbb7f7-58hkz   (pod name)
--Check whether the deployment task was created
kubectl get deployments
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
nginx            1/1     1            1           14h
springboot-k8s   1/1     1            1           8m14s
tomcat           1/1     1            1           14h
--View the pod log
kubectl logs springboot-k8s-699cbb7f7-58hkz   (pod name)
(7) Dry run of exposing the service
--Method 1: print yaml
kubectl expose deployment springboot-k8s --port=8080 --type=NodePort --dry-run -o yaml
--Method 2: print json
kubectl expose deployment springboot-k8s --port=8080 --type=NodePort --dry-run -o json
--Method 3: print yaml and save it to deploy-service.yaml
kubectl expose deployment springboot-k8s --port=8080 --type=NodePort --dry-run -o yaml > deploy-service.yaml
--Method 4: print json and save it to deploy-service.json
kubectl expose deployment springboot-k8s --port=8080 --type=NodePort --dry-run -o json > deploy-service.json
(8) View deploy-service.yaml
vi deploy-service.yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: springboot-k8s
  name: springboot-k8s
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: springboot-k8s
  type: NodePort
status:
  loadBalancer: {}
(9) Expose the service
kubectl create -f deploy-service.yaml
or
kubectl apply -f deploy-service.yaml
Equivalent to (omitting steps (7) and (8)):
kubectl expose deployment springboot-k8s --port=8080 --type=NodePort
(10) View information
--View services
kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          15h
nginx            NodePort    10.100.253.65    <none>        80:30948/TCP     14h
springboot-k8s   NodePort    10.103.30.92     <none>        8080:30673/TCP   5m46s
tomcat           NodePort    10.108.134.142   <none>        8080:30455/TCP   14h
(11) Access
http://192.168.65.136:30673/38-springboot-k8s/json
5.4 deploy Kubernetes Dashboard
Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as manage the cluster itself;
Github: https://github.com/kubernetes/dashboard
(1) Download the resource list of yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
See ./pkg/recommended.yaml
(2) Apply the yaml resource list
kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
(3) Check whether the pod is successful
Note that the pods are in the kubernetes-dashboard namespace
kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7b59f7d4df-vkf8z   1/1     Running   0          62s
kubernetes-dashboard-665f4c5ff-rm86x         1/1     Running   0          62s
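The next step accesses the Dashboard on node port 30001. The stock recommended.yaml exposes the kubernetes-dashboard Service as ClusterIP only, so this assumes the Service has been switched to NodePort with nodePort 30001 (for example with the edit below); if the recommended.yaml under ./pkg already contains that change, nothing further is needed:

# change spec.type from ClusterIP to NodePort and add nodePort: 30001 under the 443 port entry
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard
# confirm the Service now shows TYPE NodePort and PORT(S) 443:30001/TCP
kubectl -n kubernetes-dashboard get service kubernetes-dashboard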
(4) Access
https://192.168.65.136:30001/
You need to enter a token. The following three commands are used to generate a token
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
5.5 Ingress exposure application
5.5.1 NodePort problem
NodePort services are the most primitive way to expose services to external requests. NodePort opens the specified port on all nodes (virtual machines), and all requests sent to this port are forwarded directly to the pods behind the service
The yaml file format of NodePort service is as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  selector:
    app: my-app
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30008
    protocol: TCP
This method requires a nodePort. You can specify which port to open on the nodes; if none is specified, a random port is chosen. Most of the time you should let Kubernetes choose a random port
However, this method has great shortcomings:
1. One port can only provide one service
2. Only ports in the 30000-32767 range can be used
3. If the IP address of the node/virtual machine changes, it must be handled manually
Therefore, directly publishing services this way is not recommended in production. It can be used when the service does not need to be continuously available, for example for demonstrations or for temporarily running an application
5.5.2 service type
Type of Service. Specifies the access method of the Service. The default value is ClusterIP.
The value range is as follows:
-
ClusterIP, the virtual Service IP, used for Pod access within the k8s cluster; kube-proxy on the Node forwards traffic through the iptables rules it sets up;
-
NodePort, which uses the port of the host machine. External clients that can access nodes can access services through the IP address and port of the Node;
-
LoadBalancer, which uses an external load balancer to distribute the load to the service; the spec.status.loadBalancer field specifies the IP address of the external load balancer, and both nodePort and clusterIP are also defined, for public cloud environments
5.5.3 Description of the three ports
ports:
- name: http
  port: 80
  targetPort: 80
  nodePort: 30008
  protocol: TCP
-
nodePort
The port accessible from external machines (e.g. a browser on Windows). For example, if a web application needs to be accessed by other users, configure type=NodePort and nodePort=30001; other machines can then access the service through a browser at scheme://node-ip:30001;
-
targetPort
The container's port, the same as the port exposed when the image was built (EXPOSE in the Dockerfile); for example, the official nginx image on docker.io exposes port 80;
-
port
The port used for access between services inside the Kubernetes cluster. For example, even though a mysql container exposes port 3306, external machines cannot access the mysql service because it is not configured as a NodePort type; port 3306 is used by other containers in the cluster to access the service (see the annotated example below). Example command: kubectl expose deployment springboot-k8s --port=8080 --target-port=8080 --type=NodePort
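Putting the three ports together, a sketch of a NodePort Service for the springboot-k8s Deployment above might look like this (the nodePort value matches the randomly assigned one from the earlier kubectl get svc output; in your cluster it may differ, and it can be omitted to let Kubernetes pick one):

apiVersion: v1
kind: Service
metadata:
  name: springboot-k8s
spec:
  type: NodePort
  selector:
    app: springboot-k8s
  ports:
  - name: http
    port: 8080        # port other Pods/Services in the cluster use
    targetPort: 8080  # port the container actually listens on (EXPOSE in the Dockerfile)
    nodePort: 30673   # port opened on every node for external clients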
5.5.4 Ingress
The necessary entry for external requests to enter the k8s cluster
Although the pods and services deployed in the k8s cluster have their own IP addresses, these cannot be reached from outside the cluster. Services can be exposed by listening on a NodePort, but this method is not flexible and is not recommended for production environments
Ingress is an API resource object in the k8s cluster, equivalent to a network routing manager for the cluster. It lets you define routing rules to forward, manage and expose services (groups of pods). It is flexible and is the recommended approach in production environments
Ingress support is not built into k8s (after installing k8s, no ingress controller is installed)
An ingress controller needs to be installed separately; there are many implementations: Google Cloud Load Balancer, Nginx, Contour, Istio, etc
5.5.4.1 deploy Ingress Nginx
(1) Deploy an nginx container application (as groundwork for exposing the nginx service via Ingress later)
kubectl create deployment nginx --image=nginx
(2) Expose the service (this nginx service will be referenced by the Ingress rule later)
kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
(3) Deploy Ingress Nginx
https://github.com/kubernetes/ingress-nginx
Ingress NGINX is an ingress controller of Kubernetes that uses NGINX as a reverse proxy and load balancer;
https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
For deploy.yaml see ./pkg/deploy.yaml
Line 332 is changed to Alibaba cloud image:
Alibaba cloud image homepage: http://dev.aliyun.com/
Modify the image address to:
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.33.0
As shown below
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirst
  containers:
  - name: controller
    image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.33.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
deploy
kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
(4) View Ingress status
kubectl get service -n ingress-nginx kubectl get deploy -n ingress-nginx kubectl get pods -n ingress-nginx
5.5.4.2 configuring Ingress Nginx rules
For ingress-nginx-rule.yaml see ./pkg/ingress-nginx-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-ingress
spec:
  rules:
  - host: www.abc.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            #This corresponds to the service. To expose tomcat instead, change this to the tomcat service and adjust the port below accordingly
            name: nginx
            port:
              number: 80
kubectl apply -f ingress-nginx-rule.yaml
If an error is reported, resolve it as follows
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
Then execute again:
kubectl apply -f ingress-nginx-rule.yaml
Check
kubectl get ing   (short for kubectl get ingress)
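To verify the rule without touching DNS or hosts files, the Host header can be set explicitly when calling a node that runs the ingress controller (a sketch; replace the node IP with one of your own, e.g. the address shown by kubectl get ing):

# request the nginx backend through the ingress controller using the rule's host
curl -H "Host: www.abc.com" http://192.168.65.136/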
5.6 deploying SpringCloud microservices
1. Package the project itself as a jar or war;
2. Build the project image (write a Dockerfile);
3. Deploy the image with k8s (command mode or yaml mode);
4. Expose the service externally;
(1) Microservice package
See the ./pkg/microservice path
(2) Make Dockerfile (on Node)
vi Dockerfile-consumer
FROM jdk1.8.0_171
MAINTAINER zhuyan
ADD 0923-spring-cloud-alibaba-consumer-1.0.0.jar /opt
RUN chmod +x /opt/0923-spring-cloud-alibaba-consumer-1.0.0.jar
CMD java -jar /opt/0923-spring-cloud-alibaba-consumer-1.0.0.jar
vi Dockerfile-provider
FROM jdk1.8.0_171
MAINTAINER zhuyan
ADD 0923-spring-cloud-alibaba-provider-1.0.0.jar /opt
RUN chmod +x /opt/0923-spring-cloud-alibaba-provider-1.0.0.jar
CMD java -jar /opt/0923-spring-cloud-alibaba-provider-1.0.0.jar
vi Dockerfile-gateway
FROM jdk1.8.0_171
MAINTAINER zhuyan
ADD 0923-spring-cloud-alibaba-gateway-1.0.0.jar /opt
RUN chmod +x /opt/0923-spring-cloud-alibaba-gateway-1.0.0.jar
CMD java -jar /opt/0923-spring-cloud-alibaba-gateway-1.0.0.jar
(3) Making an image (on the Node)
docker build -t spring-cloud-alibaba-consumer -f Dockerfile-consumer .
docker build -t spring-cloud-alibaba-provider -f Dockerfile-provider .
docker build -t spring-cloud-alibaba-gateway -f Dockerfile-gateway .
(4) Deploy provider
kubectl create deployment spring-cloud-alibaba-provider --image=spring-cloud-alibaba-provider --dry-run -o yaml > provider.yaml

Modify the yaml file and change the image pull policy to Never (pull from the local image):
      containers:
      - image: spring-cloud-alibaba-provider
        name: 0923-spring-cloud-alibaba-provider-1.0.0.jar-8ntrx
        imagePullPolicy: Never   #Modify the image pull policy

kubectl apply -f provider.yaml
kubectl get pod
(5) Deploy consumer
kubectl create deployment spring-cloud-alibaba-consumer --image=spring-cloud-alibaba-consumer --dry-run -o yaml > consumer.yaml

Modify the yaml file and change the image pull policy to Never (pull from the local image):
      containers:
      - image: spring-cloud-alibaba-consumer
        name: 0923-spring-cloud-alibaba-consumer-8ntrx
        imagePullPolicy: Never   #Modify the image pull policy

kubectl apply -f consumer.yaml
kubectl expose deployment spring-cloud-alibaba-consumer --port=9090 --target-port=9090 --type=NodePort   (this step can be omitted because the gateway is used for exposure)
kubectl get pod
(6) Deploy gateway
kubectl create deployment spring-cloud-alibaba-gateway --image=spring-cloud-alibaba-gateway --dry-run -o yaml > gateway.yaml

Modify the yaml file and change the image pull policy to Never (pull from the local image):
      containers:
      - image: spring-cloud-alibaba-gateway
        name: 0923-spring-cloud-alibaba-gateway-8ntrx
        imagePullPolicy: Never   #Modify the image pull policy

kubectl apply -f gateway.yaml
kubectl expose deployment spring-cloud-alibaba-gateway --port=80 --target-port=80 --type=NodePort
kubectl get pod
kubectl get svc   (view the service port)
(7) Access
If the service port obtained above is 35610, then: http://192.168.65.133:35610/echo
(8) Ingress unified portal
View ingress
kubectl get pods -n ingress-nginx
vi ingress-nginx-gateway-rule.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-ingress-cloud
spec:
  rules:
  - host: www.cloud.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: spring-cloud-alibaba-gateway
            port:
              number: 80
Apply rules
kubectl apply -f ingress-nginx-gateway-rule.yaml
kubectl get ing   (get the IP the rule is bound to, e.g. 192.168.65.136)
(9) Access via the Ingress portal
Configuring hosts on an external machine
Map the rule-bound IP to www.cloud.com, as below
192.168.65.136 www.cloud.com
visit: http://www.cloud.com/echo