Updating certificates
Installing a Kubernetes cluster with kubeadm is very convenient, but it comes with an annoying catch: the default certificate validity is only one year, so certificate renewal has to be planned for. The cluster demonstrated in this article is v1.16.2; there is no guarantee that the operations below apply unchanged to other versions. Before doing anything, back up the certificate directory so you can roll back if something goes wrong. This article introduces two ways to renew cluster certificates.
Manually updating certificates
By default, the client certificates generated by kubeadm are valid for only one year. We can use the check-expiration command to see when they expire:
$ kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf                 Nov 07, 2020 11:59 UTC   73d             no
apiserver                  Nov 07, 2020 11:59 UTC   73d             no
apiserver-etcd-client      Nov 07, 2020 11:59 UTC   73d             no
apiserver-kubelet-client   Nov 07, 2020 11:59 UTC   73d             no
controller-manager.conf    Nov 07, 2020 11:59 UTC   73d             no
etcd-healthcheck-client    Nov 07, 2020 11:59 UTC   73d             no
etcd-peer                  Nov 07, 2020 11:59 UTC   73d             no
etcd-server                Nov 07, 2020 11:59 UTC   73d             no
front-proxy-client         Nov 07, 2020 11:59 UTC   73d             no
scheduler.conf             Nov 07, 2020 11:59 UTC   73d             no
This command shows the expiration and remaining time for the client certificates in the /etc/kubernetes/pki folder and for the client certificates embedded in the kubeconfig files used by kubeadm.
kubeadm cannot manage certificates signed by an external CA; if your certificates are externally managed, you need to handle their renewal yourself.
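For externally managed certificates you can still inspect the expiry dates directly; a minimal sketch using openssl, assuming the default /etc/kubernetes/pki layout:

# Print the expiry date of every certificate under the pki directory
$ for cert in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
    echo "$cert: $(openssl x509 -noout -enddate -in "$cert")"
  done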
Note also that kubelet.conf is not included in the list above, because kubeadm configures kubelet for automatic certificate renewal.
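Whether rotation is enabled can be confirmed in the kubelet configuration; a quick check (the path assumes a kubeadm-installed node, and on recent versions you should see rotateCertificates set to true):

$ grep rotate /var/lib/kubelet/config.yaml
rotateCertificates: true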
In addition, kubeadm automatically renews all certificates during a control plane upgrade. So the best practice for a kubeadm cluster is to upgrade it regularly, which keeps the cluster up to date and reasonably secure. In real production environments, however, we may not upgrade often, and then the certificates have to be renewed manually.
Renewing certificates manually is also very convenient: the kubeadm alpha certs renew command renews them using the CA (or front-proxy CA) certificate and key stored in /etc/kubernetes/pki.
If you are running a highly available cluster, this command needs to be executed on all control plane nodes.
Next, let's renew our cluster certificates. The following operations are all performed on the master node. First, back up the original certificates:
$ mkdir /etc/kubernetes.bak
$ cp -r /etc/kubernetes/pki/ /etc/kubernetes.bak
$ cp /etc/kubernetes/*.conf /etc/kubernetes.bak
Then back up the etcd data directory:
$ cp -r /var/lib/etcd /var/lib/etcd.bak
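If etcdctl is available, a consistent snapshot is an even safer backup than a plain file copy; a sketch, assuming the default kubeadm certificate paths and the local etcd endpoint:

$ ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd-snapshot.db \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key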
Next, execute the command to update the certificate:
$ kubeadm alpha certs renew all --config=kubeadm.yaml
certificate embedded in the kubeconfig file for the admin to use and for kubeadm itself renewed
certificate for serving the Kubernetes API renewed
certificate the apiserver uses to access etcd renewed
certificate for the API server to connect to kubelet renewed
certificate embedded in the kubeconfig file for the controller manager to use renewed
certificate for liveness probes to healthcheck etcd renewed
certificate for etcd nodes to communicate with each other renewed
certificate for serving etcd renewed
certificate for the front proxy client renewed
certificate embedded in the kubeconfig file for the scheduler manager to use renewed
The command above renews all the certificates in one shot. Checking the expiration again shows they are now valid for another year:
$ kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf                 Aug 26, 2021 03:47 UTC   364d            no
apiserver                  Aug 26, 2021 03:47 UTC   364d            no
apiserver-etcd-client      Aug 26, 2021 03:47 UTC   364d            no
apiserver-kubelet-client   Aug 26, 2021 03:47 UTC   364d            no
controller-manager.conf    Aug 26, 2021 03:47 UTC   364d            no
etcd-healthcheck-client    Aug 26, 2021 03:47 UTC   364d            no
etcd-peer                  Aug 26, 2021 03:47 UTC   364d            no
etcd-server                Aug 26, 2021 03:47 UTC   364d            no
front-proxy-client         Aug 26, 2021 03:47 UTC   364d            no
scheduler.conf             Aug 26, 2021 03:47 UTC   364d            no
Then remember to update the kubeconfig file:
$ kubeadm init phase kubeconfig all --config kubeadm.yaml
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/scheduler.conf"
Overwrite the original admin file with the newly generated admin configuration file:
$ mv $HOME/.kube/config $HOME/.kube/config.old
$ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ chown $(id -u):$(id -g) $HOME/.kube/config
After that, restart the kube-apiserver, kube-controller-manager, kube-scheduler, and etcd containers, plus kubelet. We can then check the validity of the apiserver's serving certificate to verify that the renewal succeeded:
$ docker restart `docker ps | grep etcd | awk '{ print $1 }'`
$ docker restart `docker ps | grep kube-apiserver | awk '{ print $1 }'`
$ docker restart `docker ps | grep kube-scheduler | awk '{ print $1 }'`
$ docker restart `docker ps | grep kube-controller | awk '{ print $1 }'`
$ systemctl restart kubelet
$ echo | openssl s_client -showcerts -connect 127.0.0.1:6443 -servername api 2>/dev/null | openssl x509 -noout -enddate
notAfter=Aug 26 03:47:23 2021 GMT
The validity period now extends a year out, which confirms the renewal succeeded.
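The client certificates embedded in the kubeconfig files can be verified the same way; a sketch that decodes the certificate from admin.conf and prints its expiry (it should report the same notAfter date as above):

$ grep client-certificate-data /etc/kubernetes/admin.conf | awk '{print $2}' | base64 -d | openssl x509 -noout -enddate
notAfter=Aug 26 03:47:23 2021 GMT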
Updating certificates with Kubernetes certificate API (not recommended)
Besides the one-command manual renewal described above, you can also renew certificates through the Kubernetes certificates API. In an online environment we may not want to take the risk of upgrading the cluster or renewing certificates frequently; after all, both carry risk, so we may prefer certificates with a long enough validity. Although this is not recommended from a security standpoint, a long validity is genuinely useful in some scenarios. Many administrators hard-code a ten-year validity into the kubeadm source and recompile it before creating a cluster. That works, but it is not recommended, especially since every cluster upgrade then requires a freshly patched build. In fact, Kubernetes provides an API that can help us issue certificates with a sufficiently long validity.
To use the built-in API for signing, we first need to set the --experimental-cluster-signing-duration flag of the kube-controller-manager component to 10 years. Since this cluster was installed with kubeadm, we can edit the static Pod manifest directly:
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml
......
spec:
  containers:
  - command:
    - kube-controller-manager
    # Set the certificate validity to 10 years
    - --experimental-cluster-signing-duration=87600h
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
......
After the modification, kube-controller-manager restarts automatically and the new flag takes effect. Next we create certificate signing requests against the Kubernetes certificates API with the command below. If you have an external signer such as cert-manager set up, the certificate signing requests (CSRs) are approved automatically; otherwise you must approve each certificate manually with the kubectl certificate command. The kubeadm command below prints the name of each certificate awaiting approval and then waits for the approvals to happen:
$ kubeadm alpha certs renew all --use-api --config kubeadm.yaml &
The output is similar to the following:
[1] 2890
[certs] Certificate request "kubeadm-cert-kubernetes-admin-pn99f" created
Then we need to manually approve the certificate:
$ kubectl get csr
NAME                                  AGE   REQUESTOR          CONDITION
kubeadm-cert-kubernetes-admin-pn99f   64s   kubernetes-admin   Pending
# Manually approve the certificate
$ kubectl certificate approve kubeadm-cert-kubernetes-admin-pn99f
certificatesigningrequest.certificates.k8s.io/kubeadm-cert-kubernetes-admin-pn99f approved
Repeat the approval for each Pending CSR until all of them are approved. In the end the CSR list looks like this:
$ kubectl get csr
NAME                                                AGE     REQUESTOR          CONDITION
kubeadm-cert-front-proxy-client-llhrj               30s     kubernetes-admin   Approved,Issued
kubeadm-cert-kube-apiserver-2s6kf                   2m43s   kubernetes-admin   Approved,Issued
kubeadm-cert-kube-apiserver-etcd-client-t9pkx       2m7s    kubernetes-admin   Approved,Issued
kubeadm-cert-kube-apiserver-kubelet-client-pjbjm    108s    kubernetes-admin   Approved,Issued
kubeadm-cert-kube-etcd-healthcheck-client-8dcn8     64s     kubernetes-admin   Approved,Issued
kubeadm-cert-kubernetes-admin-pn99f                 4m29s   kubernetes-admin   Approved,Issued
kubeadm-cert-system:kube-controller-manager-mr86h   79s     kubernetes-admin   Approved,Issued
kubeadm-cert-system:kube-scheduler-t8lnw            17s     kubernetes-admin   Approved,Issued
kubeadm-cert-ydzs-master-cqh4s                      52s     kubernetes-admin   Approved,Issued
kubeadm-cert-ydzs-master-lvbr5                      41s     kubernetes-admin   Approved,Issued
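When several CSRs are pending at once, a small pipeline can approve them in one pass; a convenience sketch, to be used only if you trust every pending request:

$ kubectl get csr | awk '/Pending/ {print $1}' | xargs -r kubectl certificate approve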
Check the certificate validity after approval:
$ kubeadm alpha certs check-expiration
CERTIFICATE                EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
admin.conf                 Nov 05, 2029 11:53 UTC   9y              no
apiserver                  Nov 05, 2029 11:54 UTC   9y              no
apiserver-etcd-client      Nov 05, 2029 11:53 UTC   9y              no
apiserver-kubelet-client   Nov 05, 2029 11:54 UTC   9y              no
controller-manager.conf    Nov 05, 2029 11:54 UTC   9y              no
etcd-healthcheck-client    Nov 05, 2029 11:53 UTC   9y              no
etcd-peer                  Nov 05, 2029 11:53 UTC   9y              no
etcd-server                Nov 05, 2029 11:54 UTC   9y              no
front-proxy-client         Nov 05, 2029 11:54 UTC   9y              no
scheduler.conf             Nov 05, 2029 11:53 UTC   9y              no
We can see the validity has been extended by ten years, which is the most the CA allows, since the CA certificate itself is only valid for ten years.
However, we cannot simply restart the control plane components yet. The etcd in a kubeadm-installed cluster uses /etc/kubernetes/pki/etcd/ca.crt as its CA by default, while the certificates approved with the kubectl certificate approve command are issued by the default CA /etc/kubernetes/pki/ca.crt. We therefore need to replace the CA certificate that etcd trusts:
# Back up the static Pod manifests first
$ cp -r /etc/kubernetes/manifests/ /etc/kubernetes/manifests.bak
$ cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/etcd/ca.crt
$ cp /etc/kubernetes/pki/ca.key /etc/kubernetes/pki/etcd/ca.key
In addition, the requestheader-client-ca-file must be replaced. It defaults to /etc/kubernetes/pki/front-proxy-ca.crt and now also needs to be replaced with the default CA file; otherwise calls through the aggregation API will fail, for example kubectl top will report an error after metrics-server is installed:
$ cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/front-proxy-ca.crt
$ cp /etc/kubernetes/pki/ca.key /etc/kubernetes/pki/front-proxy-ca.key
Because these are static Pods, the components restart automatically once the files change. Since our kubelet version has automatic certificate rotation enabled by default, kubelet's certificates no longer need manual management. With that, the certificates are valid for ten years. As stressed before, back up the certificate directory before these operations so you can roll back if anything goes wrong.
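As a quick sanity check that the replacement worked, the three CA files should now be byte-identical; a sketch using md5sum:

$ md5sum /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/etcd/ca.crt /etc/kubernetes/pki/front-proxy-ca.crt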
Cluster upgrade
The latest Kubernetes release is v1.19.0 and our environment here is v1.16.2. The version gap is too large to jump straight from v1.16.x to v1.19.x: kubeadm does not support skipping minor versions, so we upgrade one minor version at a time. The procedure is essentially the same for every hop, so later upgrades are easy. Let's first update the cluster to v1.16.14.
First, view the current cluster version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:09:08Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
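Before choosing a target version, you can list the patch versions available in your yum repositories (a quick sketch; the output depends on which repos you have configured):

$ yum list --showduplicates kubeadm | grep 1.16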
Then export and save the kubeadm configuration:
$ kubeadm config view > kubeadm-config.yaml
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio  # Changed to the Alibaba Cloud image source
kind: ClusterConfiguration
kubernetesVersion: v1.16.14
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Change the imageRepository value above to registry.aliyuncs.com/k8sxio and save the result to the file kubeadm-config.yaml (of course, if your cluster can pull images from k8s.gcr.io, you don't need to change it).
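If you prefer not to edit the file by hand, a one-line sed patch works too (a sketch; adjust the repository to your own mirror):

$ sed -i 's#^imageRepository: .*#imageRepository: registry.aliyuncs.com/k8sxio#' kubeadm-config.yaml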
Then update kubeadm:
$ yum makecache fast && yum install -y kubeadm-1.16.14-0 kubectl-1.16.14-0
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.14", GitCommit:"d2a081c8e14e21e28fe5bdfa38a817ef9c0bb8e3", GitTreeState:"clean", BuildDate:"2020-08-13T12:31:14Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
kubeadm fetches upgrade version information from dl.k8s.io, an address that is not reachable from every network, so we upgrade kubeadm to the target version first and can then view the details of the target upgrade.
Execute the upgrade plan command to see if the upgrade is possible:
$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.16.2
[upgrade/versions] kubeadm version: v1.16.14
I0827 15:46:54.805052   11355 version.go:251] remote version is much newer: v1.19.0; falling back to: stable-1.16
[upgrade/versions] Latest stable version: v1.16.14
[upgrade/versions] Latest version in the v1.16 series: v1.16.14

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     7 x v1.16.2   v1.16.14

Upgrade to the latest version in the v1.16 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.16.2   v1.16.14
Controller Manager   v1.16.2   v1.16.14
Scheduler            v1.16.2   v1.16.14
Kube Proxy           v1.16.2   v1.16.14
CoreDNS              1.6.2     1.6.2
Etcd                 3.3.15    3.3.15-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.16.14

_____________________________________________________________________
We can first do a dry run to preview the upgrade:
$ kubeadm upgrade apply v1.16.14 --config kubeadm-config.yaml --dry-run
Be careful to pass `--config` pointing at the configuration file saved above, which contains the cluster information from the previous version as well as the modified image address. After reviewing the upgrade information and confirming it is correct, you can proceed with the upgrade. We can pull the required images in advance:
$ kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.16.14
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.16.14
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.16.14
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.16.14
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.3.15-0
[config/images] Pulled registry.aliyuncs.com/k8sxio/coredns:1.6.2
Then you can execute the real upgrade command:
$ kubeadm upgrade apply v1.16.14 --config kubeadm-config.yaml
[upgrade/config] Making sure the configuration is correct:
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.16.14"
[upgrade/versions] Cluster version: v1.16.2
[upgrade/versions] kubeadm version: v1.16.14
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]:y
......
After a period of time, you can see the following information to prove that the cluster upgrade is successful:
......
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.14". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Since we already upgraded kubectl above, we can now use it to check the version information:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.14", GitCommit:"d2a081c8e14e21e28fe5bdfa38a817ef9c0bb8e3", GitTreeState:"clean", BuildDate:"2020-08-13T12:33:34Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.14", GitCommit:"d2a081c8e14e21e28fe5bdfa38a817ef9c0bb8e3", GitTreeState:"clean", BuildDate:"2020-08-13T12:24:51Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
ydzs-master   Ready    master   292d   v1.16.2
ydzs-node1    Ready    <none>   292d   v1.16.2
ydzs-node2    Ready    <none>   292d   v1.16.2
ydzs-node3    Ready    <none>   290d   v1.16.2
ydzs-node4    Ready    <none>   290d   v1.16.2
ydzs-node5    Ready    <none>   218d   v1.16.2
ydzs-node6    Ready    <none>   218d   v1.16.2
You can see that the node versions have not changed, because kubelet has not been upgraded yet. Check the kubelet version:
$ kubelet --version
Kubernetes v1.16.2
Now let's upgrade kubelet on the master node manually:
$ yum install -y kubelet-1.16.14-0
Check the version after installation:
$ kubelet --version
Kubernetes v1.16.14
Then restart the kubelet service:
$ systemctl daemon-reload
$ systemctl restart kubelet
$ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
ydzs-master   Ready    master   292d   v1.16.14
ydzs-node1    Ready    <none>   292d   v1.16.2
ydzs-node2    Ready    <none>   292d   v1.16.2
ydzs-node3    Ready    <none>   290d   v1.16.2
ydzs-node4    Ready    <none>   290d   v1.16.2
ydzs-node5    Ready    <none>   218d   v1.16.2
ydzs-node6    Ready    <none>   218d   v1.16.2
We can see that the master node has been upgraded to v1.16.14. Now the worker nodes can be upgraded. It is best to drain each node first and upgrade them one by one:
$ kubectl drain ydzs-node1 --ignore-daemonsets
node/ydzs-node1 cordoned
error: unable to drain node "ydzs-node1", aborting command...
There are pending nodes to be drained:
ydzs-node1
error: cannot delete Pods with local storage (use --delete-local-data to override): rook-ceph/csi-cephfsplugin-provisioner-56c8b7ddf4-n96kk, rook-ceph/csi-rbdplugin-provisioner-6ff4dd4b94-2bl82
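The drain aborts because the two rook-ceph provisioner Pods use local storage, although the node has already been cordoned. If that local data is disposable (note that emptyDir contents are deleted), the drain can be retried with the override flag the error message names:

$ kubectl drain ydzs-node1 --ignore-daemonsets --delete-local-data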
$ kubectl get nodes
NAME          STATUS                     ROLES    AGE    VERSION
ydzs-master   Ready                      master   292d   v1.16.14
ydzs-node1    Ready,SchedulingDisabled   <none>   292d   v1.16.2
ydzs-node2    Ready                      <none>   292d   v1.16.2
ydzs-node3    Ready                      <none>   290d   v1.16.2
ydzs-node4    Ready                      <none>   290d   v1.16.2
ydzs-node5    Ready                      <none>   218d   v1.16.2
ydzs-node6    Ready                      <none>   218d   v1.16.2
Then execute the upgrade command on the ydzs-node1 node:
$ kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
Then update the packages:
$ yum install -y kubeadm-1.16.14-0 kubectl-1.16.14-0 kubelet-1.16.14-0
Restart kubelet after installation:
$ systemctl daemon-reload
$ systemctl restart kubelet
After the update is completed, confirm that the node upgrade is successful:
$ kubectl get nodes
NAME          STATUS                     ROLES    AGE    VERSION
ydzs-master   Ready                      master   292d   v1.16.14
ydzs-node1    Ready,SchedulingDisabled   <none>   292d   v1.16.14
ydzs-node2    Ready                      <none>   292d   v1.16.2
ydzs-node3    Ready                      <none>   290d   v1.16.2
ydzs-node4    Ready                      <none>   290d   v1.16.2
ydzs-node5    Ready                      <none>   218d   v1.16.2
ydzs-node6    Ready                      <none>   218d   v1.16.2
Then re-enable scheduling on the node:
$ kubectl uncordon ydzs-node1
node/ydzs-node1 uncordoned
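The same drain / upgrade / uncordon cycle applies to every remaining worker. A rough automation sketch, assuming passwordless SSH to each node and the same yum repositories everywhere:

$ for node in ydzs-node2 ydzs-node3 ydzs-node4 ydzs-node5 ydzs-node6; do
    kubectl drain "$node" --ignore-daemonsets --delete-local-data
    ssh "$node" "kubeadm upgrade node && \
      yum install -y kubeadm-1.16.14-0 kubectl-1.16.14-0 kubelet-1.16.14-0 && \
      systemctl daemon-reload && systemctl restart kubelet"
    kubectl uncordon "$node"
  done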
After upgrading the other nodes in the same way, all nodes report the new version:
$ kubectl get nodes
NAME          STATUS   ROLES    AGE    VERSION
ydzs-master   Ready    master   292d   v1.16.14
ydzs-node1    Ready    <none>   292d   v1.16.14
ydzs-node2    Ready    <none>   292d   v1.16.14
ydzs-node3    Ready    <none>   290d   v1.16.14
ydzs-node4    Ready    <none>   290d   v1.16.14
ydzs-node5    Ready    <none>   218d   v1.16.14
ydzs-node6    Ready    <none>   218d   v1.16.14