Introduction to Kind
Kind is short for Kubernetes in Docker. As the name suggests, it is a tool that uses Docker containers as nodes and deploys Kubernetes into them. The official documentation also recommends Kind as a tool for building local clusters. By default, Kind first downloads the kindest/node image, which contains the main Kubernetes components. Once the node image is ready, Kind uses kubeadm to create the cluster and runs the component containers internally with containerd. In the end, Kind is only meant for conveniently testing Kubernetes clusters; it cannot be used in production environments.
Deploy Kind
Kind is written in Go. Prebuilt binaries for a variety of operating systems are published on the repository's Releases page and can be downloaded and used directly.
# kind v0.8.1 supports the latest kubernetes v1.18.2 cluster
wget -O /usr/local/bin/kind https://github.com/kubernetes-sigs/kind/releases/download/v0.8.1/kind-linux-amd64
chmod +x /usr/local/bin/kind
Install docker
yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's#download.docker.com#mirrors.ustc.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo
yum install -y docker-ce
Deploy kubectl
wget -O /usr/local/bin/kubectl https://storage.googleapis.com/kubernetes-release/release/v1.18.5/bin/linux/amd64/kubectl
chmod +x /usr/local/bin/kubectl
Create kind single cluster
Creating with the kind command
Use kind create cluster to create a cluster; by default it creates a single-node cluster.
# kind create cluster --name test
Creating cluster "test" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Creating kubeadm config
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="test")"
kubectl cluster-info
In the Docker environment, a container based on this image is started:
CONTAINER ID   IMAGE                  COMMAND                    CREATED          STATUS          PORTS                                  NAMES
2e0a5e15a4a0   kindest/node:v1.18.2   "/usr/local/bin/entr..."   14 minutes ago   Up 14 minutes   45319/tcp, 127.0.0.1:45319->6443/tcp   test-control-plane
View cluster information
export KUBECONFIG="$(kind get kubeconfig-path --name="test")"
echo 'export KUBECONFIG="$(kind get kubeconfig-path --name=test)"' >> /root/.bashrc

kubectl cluster-info
Kubernetes master is running at https://localhost:45319
KubeDNS is running at https://localhost:45319/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

kubectl get node -o wide
NAME                 STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                  KERNEL-VERSION          CONTAINER-RUNTIME
test-control-plane   Ready    master   16m   v1.18.2   172.17.0.2    <none>        Ubuntu Disco Dingo (development branch)   3.10.0-693.el7.x86_64   containerd://1.2.6-0ubuntu1

kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-6r58d                      1/1     Running   0          17m
kube-system   coredns-fb8b8dccf-bntk8                      1/1     Running   0          17m
kube-system   etcd-test-control-plane                      1/1     Running   0          17m
kube-system   ip-masq-agent-qww8n                          1/1     Running   0          17m
kube-system   kindnet-vbz6w                                1/1     Running   0          17m
kube-system   kube-apiserver-test-control-plane            1/1     Running   0          16m
kube-system   kube-controller-manager-test-control-plane   1/1     Running   0          17m
kube-system   kube-proxy-wf7dq                             1/1     Running   0          17m
kube-system   kube-scheduler-test-control-plane            1/1     Running   0          16m
Start an nginx app
kubectl run nginx --image nginx:1.17.0-alpine --restart=Never --port 80 --labels="app=nginx-test"
kubectl port-forward --address 0.0.0.0 pod/nginx 8080:80
curl localhost:8080
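For reference, the imperative kubectl run above can also be written declaratively. A minimal sketch of an equivalent Pod manifest (the file name nginx-pod.yaml is illustrative):

```yaml
# nginx-pod.yaml — declarative equivalent of the kubectl run command above
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx:1.17.0-alpine
    ports:
    - containerPort: 80
```

Apply it with kubectl apply -f nginx-pod.yaml; the port-forward and curl steps work the same either way.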
Creating with a configuration file
# cat kube-config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: 10.0.0.0/16
  imageRepository: registry.aliyuncs.com/google_containers
  nodeRegistration:
    kubeletExtraArgs:
      pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: 10.0.0.0/16
  imageRepository: registry.aliyuncs.com/google_containers
nodes: # specify the node list; a single node by default
- role: control-plane

# kind create cluster --name test2 --config kube-config.yaml
Creating cluster "test2" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Creating kubeadm config
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="test2")"
kubectl cluster-info
Create a kind HA cluster
An HA cluster can only be declared through a configuration file:
# cat kind-ha-config.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: 10.0.0.0/16
  imageRepository: registry.aliyuncs.com/google_containers
  nodeRegistration:
    kubeletExtraArgs:
      pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: InitConfiguration
  metadata:
    name: config
  networking:
    serviceSubnet: 10.0.0.0/16
  imageRepository: registry.aliyuncs.com/google_containers
# mainly modify this section: add roles to declare the nodes; worker nodes must
# use the role worker, and master nodes the role control-plane
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

# kind create cluster --name test-ha --config kind-ha-config.yaml
Creating cluster "test-ha" ...
 ✓ Ensuring node image (kindest/node:v1.18.2)
 ✓ Preparing nodes
 ✓ Configuring the external load balancer ⚖️
 ✓ Creating kubeadm config
 ✓ Starting control-plane
 ✓ Installing CNI
 ✓ Installing StorageClass
 ✓ Joining more control-plane nodes
 ✓ Joining worker nodes
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="test3")"
kubectl cluster-info

# kubectl get nodes
NAME                   STATUS   ROLES    AGE     VERSION
test3-control-plane    Ready    master   7m44s   v1.18.2
test3-control-plane2   Ready    master   4m59s   v1.18.2
test3-control-plane3   Ready    master   2m18s   v1.15.0
test3-worker           Ready    <none>   110s    v1.15.0
test3-worker2          Ready    <none>   109s    v1.15.0
test3-worker3          Ready    <none>   105s    v1.15.0
Common operations
Kind is based on Docker. Let's look at the Docker resources on the host to verify that it really is.
# Running docker ps locally shows one container; the cluster created by kind is
# based on this container. If you delete the container directly, the k8s
# cluster created by kind breaks as well.
docker ps
CONTAINER ID   IMAGE                  COMMAND                    CREATED          STATUS          PORTS                                  NAMES
2e0a5e15a4a0   kindest/node:v1.18.2   "/usr/local/bin/entr..."   14 minutes ago   Up 14 minutes   45319/tcp, 127.0.0.1:45319->6443/tcp   test-control-plane
Let's take another look at the network. Running docker network ls, we can see that there is a network named kind.
docker network ls
NETWORK ID     NAME     DRIVER   SCOPE
94de31154cb7   bridge   bridge   local
0c31de104d44   host     host     local
a667b873436d   kind     bridge   local
6083dbc308a4   none     null     local
We can explore the kind control-plane (the Docker container above) further.
Use docker exec kind-control-plane crictl ps to list the containers running inside the node container. Containers inside the node are managed with crictl (see https://github.com/kubernetes-sigs/cri-tools). crictl is mainly used to manage containers, and its commands work much like docker's; run docker exec kind-control-plane crictl help to see the usage.
# master node
docker exec kind-ha-control-plane crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
4b62cd954c86a   ace0a8c17ba90   18 minutes ago   Running   kube-controller-manager   3         8f000bb1c20f3
90552a29c50d9   db10073a6f829   19 minutes ago   Running   local-path-provisioner    8         7e648bc7297b1
268f41443c426   a3099161e1375   19 minutes ago   Running   kube-scheduler            4         a64377f98d627
aa3fea2edc80d   67da37a9a360e   3 hours ago      Running   coredns                   0         719884414c5f4
04c58978f5395   67da37a9a360e   3 hours ago      Running   coredns                   0         da6e08629ac71
110429a5a873b   2186a1a396deb   3 hours ago      Running   kindnet-cni               0         5359903320ef9
1c125b02f6300   0d40868643c69   3 hours ago      Running   kube-proxy                0         9ba4d0a1fdd3d
0301cd4d26d9c   6ed75ad404bdd   3 hours ago      Running   kube-apiserver            0         4905e2b2a8a1a
435ee12a45bff   303ce5db0e90d   3 hours ago      Running   etcd                      0         96f4e9190bede

# worker node
docker exec kind-ha-worker crictl ps
CONTAINER       IMAGE           CREATED          STATE     NAME          ATTEMPT   POD ID
5e82bf4756b6f   bfba26ca350c1   21 minutes ago   Running   nginx         0         78d1324b8b1a1
ea02b20040341   0d40868643c69   3 hours ago      Running   kube-proxy    0         de6a57d0b7381
c2aa986df532f   2186a1a396deb   3 hours ago      Running   kindnet-cni   1         ebea7a329cfe0
List all resources in the k8s cluster
kubectl get all --all-namespaces
Get the kubeconfig file used from outside the cluster
kind get kubeconfig --name kind-ha
Get the kubeconfig file used inside the cluster (the external one is usually enough)
kind get kubeconfig --internal --name kind-ha
Delete a cluster (easy to clean up)
kind delete cluster --name test
Mount file
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /etc/docker/daemon.json
    hostPath: /etc/docker/daemon.json
    readOnly: true
Expose port method 1
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080
    hostPort: 30080
Sometimes we want to expose a Service port for external access. Because the Kubernetes node runs inside a Docker container, that container must also expose the Service port through the host machine.
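Putting the two pieces together: with the extraPortMappings above, a NodePort Service whose nodePort matches the mapped containerPort becomes reachable from outside the host. A sketch, assuming the nginx Pod started earlier (the Service name is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx-test      # matches the nginx Pod started earlier
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080      # must equal the containerPort in extraPortMappings
```

With this applied, curl http://<host-ip>:30080 on the host should reach nginx.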
Expose port method 2
# use port forwarding
kubectl port-forward --address 0.0.0.0 pod/nginx 8080:80
Pull images from a private registry
docker exec test-drone-control-plane bash -c "sed -i '56a\      [plugins.cri.registry.mirrors.\"192.168.77.134:5000\"]' /etc/containerd/config.toml"
docker exec test-drone-control-plane bash -c "sed -i '57a\        endpoint = [\"http://192.168.77.134:5000\"]' /etc/containerd/config.toml"
docker exec test-drone-control-plane bash -c "cat /etc/containerd/config.toml"
docker exec test-drone-control-plane bash -c 'kill -s SIGHUP $(pgrep containerd)'
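For reference, the sed commands above are meant to leave a mirror section like the following in /etc/containerd/config.toml (the line numbers 56/57 depend on the image version, so verify the result with the cat command):

```toml
[plugins.cri.registry.mirrors]
  [plugins.cri.registry.mirrors."192.168.77.134:5000"]
    endpoint = ["http://192.168.77.134:5000"]
```

The SIGHUP at the end makes containerd reload this configuration.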
You can also provide the containerd configuration file with the file-mount method described above.
Restart the cluster
docker stop test-drone-control-plane
docker start test-drone-control-plane
In short, Kind can easily create test clusters in a Docker environment without polluting the host machine, which makes testing much more convenient.
Example: deploying WordPress and MySQL using persistent volumes
This tutorial shows how to use Kind to deploy a WordPress website and a MySQL database. Both applications use PersistentVolumes and PersistentVolumeClaims to store data.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned manually by an administrator, or dynamically by Kubernetes using a StorageClass. A PersistentVolumeClaim (PVC) is a request for storage that can be satisfied by a PV. PersistentVolumes and PersistentVolumeClaims are independent of the Pod lifecycle and preserve data through Pod restarts, rescheduling, and even deletion.
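As a minimal illustration of the two objects, here is a sketch of a manually provisioned hostPath PV and a PVC that could bind to it (the names, path, and size are made up for the example; the tutorial below relies on dynamic provisioning instead):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv         # illustrative
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/example-pv  # hostPath is for development/testing only
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc        # illustrative
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```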
Warning: this deployment is not suitable for production use cases because it uses single-instance WordPress and MySQL Pods. Consider using the WordPress Helm chart to deploy WordPress in production.
Note: the files provided in this tutorial use the GA Deployment API and are specific to Kubernetes 1.9 and later. If you want to use this tutorial with an earlier version of Kubernetes, update the API version appropriately, or refer to an earlier version of this tutorial.
Objectives
- Create PersistentVolumeClaims and PersistentVolumes
- Create a kustomization.yaml containing:
  - a Secret generator for the password
  - the MySQL resource configuration
  - the WordPress resource configuration
- Apply the kustomization directory with kubectl apply -k ./
- Clean up
Before you start
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with it. If you do not already have a cluster, you can create one with Minikube or Kind, or use one of the online Kubernetes playgrounds.
To check the version, enter kubectl version. The examples shown on this page work with kubectl 1.14 and later. We created a Kind HA cluster above, so you can follow the steps below directly.
kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5", GitCommit:"e6503f8d8f769ace2f338794c914a96fc335df0f", GitTreeState:"clean", BuildDate:"2020-06-26T03:47:41Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:48:36Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Download the following configuration files:
Create PersistentVolumeClaims and PersistentVolumes
Both MySQL and WordPress require a PersistentVolume to store data. Their PersistentVolumeClaims will be created in the deployment step.
Many clustered environments have a default StorageClass installed. If no StorageClass is specified in the PersistentVolumeClaim, the default StorageClass for the cluster is used.
After the PersistentVolumeClaim is created, a PersistentVolume is dynamically provisioned according to the StorageClass configuration.
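To use a class other than the default, name it explicitly in the claim. A sketch, assuming a class called standard exists (use whatever kubectl get sc lists in your cluster; the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: explicit-class-claim  # illustrative
spec:
  storageClassName: standard  # replace with a class listed by kubectl get sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Omitting storageClassName, as the manifests below do, selects the cluster's default class.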
# View the StorageClass
kubectl get sc
kubectl describe sc $(kubectl get sc | grep default | awk '{print $1}')
Warning: in a local cluster, the default StorageClass uses the hostPath provisioner. hostPath volumes are for development and testing only. With a hostPath volume, your data lives under /tmp on the node the Pod is scheduled to and does not move between nodes. If the Pod dies and is scheduled to another node in the cluster, or the node is rebooted, the data is lost.
Note: if you want to start a cluster that needs to use the hostPath provisioner, the --enable-hostpath-provisioner flag must be set in the controller-manager component.
Note: if you have a Kubernetes cluster running on the Kubernetes Engine, follow This guide.
Create a kustomization.yaml with a Secret generator for the password
A Secret is an object that stores sensitive data such as a password or key. Since 1.14, kubectl supports managing Kubernetes objects using a kustomization file, so you can create the Secret through a kustomization.yaml.
Add a Secret generator to kustomization.yaml with the following command. Replace YOUR_PASSWORD with the password you want to use.
cat <<EOF >./kustomization.yaml
secretGenerator:
- name: mysql-pass
  literals:
  - password=YOUR_PASSWORD
EOF
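Note, in passing, that the generated Secret stores the value base64-encoded (and kubectl appends a content-hash suffix such as mysql-pass-c57bb4t7mf to the name). Base64 is an encoding, not encryption, as a quick shell check shows:

```shell
# base64-encode the literal the way the Secret object will store it
encoded=$(printf '%s' 'YOUR_PASSWORD' | base64)
echo "$encoded"                      # WU9VUl9QQVNTV09SRA==
# ...and decode it back to the original value
printf '%s' "$encoded" | base64 -d   # YOUR_PASSWORD
```

This is why Secret values should still be protected with RBAC and encryption at rest rather than treated as secret by encoding alone.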
Add resource configuration for MySQL and WordPress
The following manifest describes a single-instance MySQL Deployment. The MySQL container mounts the PersistentVolume at /var/lib/mysql. The MYSQL_ROOT_PASSWORD environment variable sets the database password from the Secret.
application/wordpress/mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
The following manifest describes a single-instance WordPress Deployment. The WordPress container mounts the PersistentVolume at /var/www/html for website data files. The WORDPRESS_DB_HOST environment variable sets the name of the MySQL Service defined above, and WordPress accesses the database through that Service. The WORDPRESS_DB_PASSWORD environment variable sets the database password from the Secret generated by kustomize.
application/wordpress/wordpress-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: frontend
  type: LoadBalancer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: wordpress:4.8-apache
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
Alternatively, download the configuration files directly with curl:
- Download the MySQL deployment configuration file:
curl -LO https://k8s.io/examples/application/wordpress/mysql-deployment.yaml
- Download the WordPress configuration file:
curl -LO https://k8s.io/examples/application/wordpress/wordpress-deployment.yaml
- Add them to the kustomization.yaml file:
cat <<EOF >>./kustomization.yaml
resources:
  - mysql-deployment.yaml
  - wordpress-deployment.yaml
EOF
Execute and verify
kustomization.yaml now contains all the resources needed to deploy the WordPress website and the MySQL database. You can apply the directory with:
kubectl apply -k ./
Now you can verify that all objects exist.
- Verify that the Secret exists by running the following command:
kubectl get secrets
The response should be similar to this:
NAME                    TYPE     DATA   AGE
mysql-pass-c57bb4t7mf   Opaque   1      9s
- Verify that the PersistentVolumes were dynamically provisioned:
kubectl get pvc
Note: provisioning and binding the PVs may take several minutes.
The response should be similar to this:
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim   Bound    pvc-8cbd7b2e-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s
wp-pv-claim      Bound    pvc-8cd0df54-4044-11e9-b2bb-42010a800002   20Gi       RWO            standard       77s
- Verify that the Pod is running by running the following command:
kubectl get pods
Note: it may take up to a few minutes for the Pod's status to reach Running.
The response should be similar to this:
NAME                               READY   STATUS    RESTARTS   AGE
wordpress-mysql-1894417608-x5dzt   1/1     Running   0          40s
- Verify that the Service is running by running the following command:
kubectl get services wordpress
The response should be similar to this:
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
wordpress   LoadBalancer   10.0.0.89    <pending>     80:32406/TCP   4m
Note: Minikube can only expose Services through NodePort, so the EXTERNAL-IP stays in the pending state.
Since we created the cluster with Kind here, we can access the WordPress page directly through port forwarding.
- Run the following command to forward the WordPress Service to a local port:
kubectl port-forward --address 0.0.0.0 svc/wordpress 8000:80
- Load the forwarded address in your browser to view your site.
You should see a WordPress setup page similar to the following screenshot.
Warning: do not leave your WordPress installation sitting on this page. If another user finds it, they can set up a website on your instance and use it to serve malicious content.
Either install WordPress by creating a username and password, or delete your instance.
Delete WordPress
- Run the following command to delete your Secret, Deployments, Services, and PersistentVolumeClaims:
kubectl delete -k ./