K8s tutorial 01 (k8s environment configuration and private registry setup)

Infrastructure as a Service (IaaS): Alibaba Cloud
Platform as a Service (PaaS): Sina Cloud
Software as a Service (SaaS): Office 365

Apache Mesos: resource manager
Docker Swarm

In a highly available cluster, the number of data replicas (e.g. etcd members) should preferably be an odd number >= 3

Reference materials:
https://gitee.com/WeiboGe2012/kubernetes-kubeadm-install
https://gitee.com/llqhz/ingress-nginx (pick the files matching your version)

k8s architecture


APISERVER: the unified entry point for all services
ControllerManager: maintains the desired number of replicas
Scheduler: accepts tasks and selects a suitable node to run them
ETCD: key-value database that stores all the important information of the K8S cluster (persistence)
Kubelet: interacts directly with the container engine to manage the container lifecycle
Kube-proxy: writes rules to IPTABLES or IPVS to implement service mapping and access

Description of other plug-ins
COREDNS: provides one-to-one domain-name-to-IP resolution for the Services (SVCs) in the cluster
DASHBOARD: provides a B/S (browser/server) access interface for the K8S cluster
INGRESS CONTROLLER: Kubernetes officially only implements layer-4 proxying; Ingress adds layer-7 proxying
FEDERATION: provides unified management of multiple K8S clusters across data centers
PROMETHEUS: provides monitoring capability for the K8S cluster
ELK: provides a unified log collection and analysis platform for the K8S cluster

Pod concept

pod controller type

ReplicationController & ReplicaSet & Deployment

A ReplicationController ensures that the number of Pod replicas for an application always matches the user-defined replica count: if a container exits abnormally, a new Pod is automatically created to replace it, and if there are too many Pods, the extra ones are automatically reclaimed. In newer versions of Kubernetes, ReplicaSet is recommended instead of ReplicationController

ReplicaSet is not fundamentally different from ReplicationController except for the name; in addition, ReplicaSet supports set-based selectors

Although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage ReplicaSets automatically, so that there is no need to worry about incompatibility with other mechanisms (for example, ReplicaSet does not support rolling updates but Deployment does)
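
For illustration, a minimal Deployment sketch (the names are illustrative; the image is the one used later in this tutorial). The Deployment creates and manages a ReplicaSet that keeps three replicas running:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy               # illustrative name
spec:
  replicas: 3                      # the managed ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v1   # image also used later in this tutorial
        ports:
        - containerPort: 80
EOF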

HPA (HorizontalPodAutoscaler)

Horizontal Pod Autoscaling applies only to Deployments and ReplicaSets. In the v1 API it only supports scaling based on the CPU utilization of Pods; in the v1alpha API it also supports scaling based on memory and user-defined metrics
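
A hedged sketch, assuming the Deployment above and a working metrics source (e.g. metrics-server): create an HPA with kubectl autoscale:

kubectl autoscale deployment myapp-deploy --cpu-percent=50 --min=1 --max=5   # scale between 1 and 5 replicas at 50% CPU
kubectl get hpa                                                              # inspect the resulting HorizontalPodAutoscaler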

StatefulSet

StatefulSet is designed to solve the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its application scenarios include:

*Stable persistent storage: a Pod can still access the same persistent data after being rescheduled, implemented with PVCs
*Stable network identity: PodName and HostName stay the same after the Pod is rescheduled, implemented with a Headless Service (a Service without a Cluster IP)
*Ordered deployment and ordered scaling: Pods are created in a defined order (from 0 to N-1; the next Pod starts only after all previous Pods are Running and Ready), implemented with init containers
*Ordered scale-down and ordered deletion (from N-1 to 0)
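
A minimal sketch of a StatefulSet with its headless Service (names, image, and storage size are illustrative; a default StorageClass is assumed for the PVC template):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None                  # headless Service: no Cluster IP
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless        # binds the StatefulSet to the headless Service
  replicas: 3                      # Pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: myapp
        image: wangyanglinux/myapp:v1   # illustrative image
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:            # stable storage: one PVC per Pod, kept across rescheduling
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
EOF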

DaemonSet

The DaemonSet ensures that a copy of a Pod runs on all (or some) nodes. When a node joins the cluster, a Pod is added on it; when a node is removed from the cluster, its Pod is reclaimed. Deleting a DaemonSet deletes all the Pods it created
Some typical uses of a DaemonSet:

*Running a cluster storage daemon on every Node, for example glusterd or ceph
*Running a log-collection daemon on every Node, for example fluentd or logstash
*Running a monitoring daemon on every Node, for example Prometheus Node Exporter
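
A minimal sketch of a DaemonSet that runs a log-collection agent on every node (the image and names are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      tolerations:                         # also run on the master despite its NoSchedule taint
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluentd:v1.14               # illustrative image/tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: varlog
        hostPath:
          path: /var/log                   # collect logs from the node itself
EOF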

Job,Cronjob

A Job is responsible for batch tasks, i.e. tasks that are executed only once; it guarantees that one or more Pods of the batch task complete successfully

CronJob manages time-based Jobs, namely:

*Run only once at a given point in time
*Run periodically at a given point in time
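
A minimal sketch of a CronJob that runs a short one-off Pod every minute (names are illustrative; on the k8s 1.15 used in this tutorial the API group is batch/v1beta1):

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello-cron
spec:
  schedule: "*/1 * * * *"           # run every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args: ["/bin/sh", "-c", "date; echo Hello from the CronJob"]
          restartPolicy: OnFailure  # Job Pods must not use restartPolicy Always
EOF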

Service discovery

Network communication mode

Kubernetes' network model assumes that all Pods are in a flat network space where they can reach each other directly. This network is available out of the box on GCE (Google Compute Engine), so Kubernetes simply assumes it already exists. When building a Kubernetes cluster in a private cloud, this network cannot be assumed: it has to be set up first, enabling mutual access between Docker containers on different nodes, and only then can Kubernetes run on top of it

Between multiple containers in the same Pod: lo
Communication between pods: Overlay Network
Communication between Pod and Service: Iptables rules of each node

Network solutions kubernetes + flannel

Flannel is a network planning service designed by the CoreOS team for Kubernetes. In short, its job is to give Docker containers created on different node hosts a virtual IP address that is unique across the whole cluster. It also builds an overlay network between these addresses, through which packets are delivered to the target container unchanged

What ETCD provides for Flannel:

Storage and management of the IP address segment resources that Flannel can allocate
Monitoring the actual address of each Pod in ETCD, and building and maintaining the Pod-to-node routing table in memory
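
Note that with the kube-flannel manifest used later in this tutorial, the network configuration does not live in a separate etcd but in a ConfigMap; a hedged way to inspect it (the ConfigMap name and namespace assume the standard kube-flannel manifest):

kubectl -n kube-system get configmap kube-flannel-cfg -o yaml | grep -A 6 net-conf.json
# Typical content (matches the podSubnet configured during kubeadm init later):
# net-conf.json: |
#   {
#     "Network": "10.244.0.0/16",
#     "Backend": { "Type": "vxlan" }
#   }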

Network communication mode under different conditions

1. Communication inside the same Pod: the containers of a Pod share the same network namespace and the same Linux network stack
2. Pod1 to Pod2

  1. Pod1 and Pod2 on different hosts: the Pod address is in the same segment as docker0, but the docker0 segment and the host NIC are two completely different IP segments, so communication between nodes goes through the host's physical NIC. The Pod's IP is associated with the IP of the Node it runs on, and through this association Pods can reach each other
  2. Pod1 and Pod2 on the same host: the docker0 bridge forwards the request directly to Pod2, without going through Flannel

3. Pod-to-Service network: currently, for performance reasons, maintained and forwarded entirely by iptables
4. Pod to the external network: the Pod sends the request outward, the routing table is looked up, and the packet is forwarded to the host's NIC; after the host NIC completes routing, iptables performs MASQUERADE, replacing the source IP with the host NIC's IP, and the request then goes out to the external server
5. External network access to a Pod: via a Service
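
Once the cluster is up, these paths can be inspected on any node; a quick sketch (output varies by node):

ipvsadm -Ln                                    # Pod-to-Service: Service virtual IPs appear as IPVS virtual servers (ipvs mode)
iptables -t nat -S POSTROUTING | grep -i masq  # Pod-to-external: the MASQUERADE (SNAT) rule for the pod network
ip route | grep flannel                        # routes toward the flannel overlay on other nodes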

Component communication diagram

k8s cluster installation

preparation in advance


1. The nodes on which k8s is installed must have more than 1 CPU core
2. Network information of the installation nodes: 192.168.192.0/24  master: 131  node1: 130  node2: 129

Four CentOS 7 machines: one master server, two node machines, and one harbor host, all in host-only mode

One Windows 10 machine with koolshare installed (used as a soft router)
KoolCenter firmware download server: http://fw.koolcenter.com/



Download IMG disk writing tool


Create virtual machine

Let's check which network card is the host-only one


After setting it up, access the koolshare router in a browser at 192.168.1.1 (password: koolshare)

Change it to the same network segment as the k8s cluster

Log in again using the new routing ip: 192.168.192.1

Diagnostics: access to domestic websites works normally

To access foreign websites, open [Cool Soft]

Download koolss

Search for an SSR node directly, or fill in the SSR server manually

k8s cluster installation

Set the system hostname and hosts file

hostnamectl set-hostname k8s-master01

hostnamectl set-hostname k8s-node01

hostnamectl set-hostname k8s-node02

Set ip address
vi /etc/sysconfig/network-scripts/ifcfg-ens33
master host

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.131
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114

node01 host

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.130
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114

node02 host

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.129
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114

Three hosts restart the network

service network restart

Set the hosts file of the master host and add the following host names
vi /etc/hosts

192.168.192.131 k8s-master01
192.168.192.130 k8s-node01
192.168.192.129 k8s-node02

Copy the hosts file of the master host to node01 and node02 hosts

[root@localhost ~]# scp /etc/hosts root@k8s-node01:/etc/hosts
The authenticity of host 'k8s-node01 (192.168.192.130)' can't be established.
ECDSA key fingerprint is SHA256:M5BalHyNXU5W49c5/9iZgC4Hl370O0Wr/c5S/FYFIvw.
ECDSA key fingerprint is MD5:28:23:b8:eb:af:d1:bd:bb:8c:77:e0:01:3c:62:7a:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node01,192.168.192.130' (ECDSA) to the list of known hosts.
root@k8s-node01's password: 
hosts                                         100%  241   217.8KB/s   00:00    
[root@localhost ~]# scp /etc/hosts root@k8s-node02:/etc/hosts
The authenticity of host 'k8s-node02 (192.168.192.129)' can't be established.
ECDSA key fingerprint is SHA256:M5BalHyNXU5W49c5/9iZgC4Hl370O0Wr/c5S/FYFIvw.
ECDSA key fingerprint is MD5:28:23:b8:eb:af:d1:bd:bb:8c:77:e0:01:3c:62:7a:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node02,192.168.192.129' (ECDSA) to the list of known hosts.
root@k8s-node02's password: 
hosts                                         100%  241   143.1KB/s   00:00  

Install the dependency packages on all three hosts

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git

Three hosts: set the firewall to iptables and set empty rules

systemctl stop firewalld && systemctl disable firewalld

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Three hosts: turn off swap and SELinux

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config 

Three hosts: adjust kernel parameters for K8S

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # The use of swap space is prohibited. It is allowed only when the system is OOM
vm.overcommit_memory=1 # Do not check whether the physical memory is sufficient
vm.panic_on_oom=0 # Do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf

Three hosts: adjust the system time zone

# Set the system time zone to China / Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart system time dependent services
systemctl restart rsyslog
systemctl restart crond

Three hosts: shut down services not required by the system

systemctl stop postfix && systemctl disable postfix

Three hosts: set rsyslogd and SYSTEMd Journal

mkdir /var/log/journal  # Directory where logs are persisted

mkdir /etc/systemd/journald.conf.d

cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persistent save to disk
Storage=persistent

# Compress history log
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Maximum occupied space 10G
SystemMaxUse=10G

# Maximum size of a single log file: 200M
SystemMaxFileSize=200M

# Log storage time: 2 weeks
MaxRetentionSec=2week

# Do not forward logs to syslog
ForwardToSyslog=no
EOF

systemctl restart systemd-journald

Three hosts: upgrade the system kernel to 5.4

The 3.10.x kernel shipped with CentOS 7.x has some bugs that make Docker and Kubernetes unstable, so upgrade the kernel via the ELRepo repository.

Use uname -r to check the current kernel version

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Check whether the corresponding kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 entry. If not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt

# After checking the installed kernel version, set the default boot entry to the new kernel
# (the string must match the version actually installed), e.g.:
grub2-set-default "CentOS Linux (4.4.182-1.el7.elrepo.x86_64) 7 (Core)"
grub2-set-default "CentOS Linux (5.4.195-1.el7.elrepo.x86_64) 7 (Core)"

Three hosts: prerequisites for Kube proxy to enable ipvs

# ============================================================== Kernel version 4.4
modprobe br_netfilter

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


# ==============================================================Kernel version 5.4
# 1. install ipset and ipvsadm
yum install ipset ipvsadm -y

# 2. add the module to be loaded and write the script file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

# 3. add execution permission for script
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. execute script file
cd /etc/sysconfig/modules/
./ipvs.modules
# 5. check whether the corresponding module is loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack


Three hosts: install Docker software

Refer to installing docker: https://blog.csdn.net/DDJ_TEST/article/details/114983746

# Set up repository
# Install the yum utils package (provides the yum config manager utility) and set up a stable repository.
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install the Docker Engine and containerd of the specified version
$ sudo yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

Start docker

$ sudo systemctl start docker

Verify that the Docker Engine is properly installed by running the Hello world image.

$ sudo docker run hello-world

Set docker to start on boot

$ sudo systemctl enable docker

Stop the docker service

$ sudo systemctl stop docker

Restart docker service

$ sudo systemctl restart  docker

verification

[root@k8s-master01 ~]# docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
# Create /etc/docker directory
mkdir /etc/docker

# Configuring the daemon
cat > /etc/docker/daemon.json <<EOF
{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
	"log-opts": {
		"max-size": "100m"
	},
	"insecure-registries": ["https://hub.atguigu.com"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

Three hosts: install kubeadm (master-slave configuration)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service

Download kubeadm-basic.images.tar.gz and harbor-offline-installer-v2.3.2.tgz
and upload them to the master node

tar -zxvf kubeadm-basic.images.tar.gz 

vim load-images.sh: a script that loads the image files into docker

#!/bin/bash

# List the unpacked image archives
ls /root/kubeadm-basic.images > /tmp/image-list.txt

cd /root/kubeadm-basic.images

# Load every image archive into the local docker image store
for i in $( cat /tmp/image-list.txt )
do
    docker load -i $i
done

rm -rf /tmp/image-list.txt
chmod a+x load-images.sh

./load-images.sh 

Copy the extracted files to node01 and node02

scp -r kubeadm-basic.images load-images.sh root@k8s-node01:/root

scp -r kubeadm-basic.images load-images.sh root@k8s-node02:/root

Execute the load-images.sh file on node01 and node02

./load-images.sh 

Initialize master node

# Generate template file
kubeadm config print init-defaults > kubeadm-config.yaml

vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.192.131      # Change to the ip of the master host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1		# Change to installed version number
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"	# Add default podSubnet segment
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
    SupportIPVSProxyMode: true
mode: ipvs

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

If an error is reported, refer to problems 3 and 4

View the log file

[root@k8s-master01 ~]# vim kubeadm-init.log

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.192.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.192.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.192.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.004030 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ecaa903ab475ec8d361a7a844feb3973b437a6e36981be7d949dccda63c15d00
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.192.131:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3236bf910c84de4e1f5ad24b1b627771602d5bad03e7819aad18805c440fd8aa

Execute the commands above

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

View the k8s node

[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   NotReady   master   19m   v1.15.1

Join the other worker nodes to the master node

Execute the join command in the installation log

Deploy the network

The following single command is not needed here (the steps below are used instead):

kubectl apply -f https://github.com/WeiboGe2012/kube-flannel.yml/blob/master/kube-flannel.yml

Execute the following command

[root@k8s-master01 ~]# mkdir -p install-k8s/core
[root@k8s-master01 ~]# mv kubeadm-init.log kubeadm-config.yaml install-k8s/core
[root@k8s-master01 ~]# cd install-k8s/
[root@k8s-master01 install-k8s]# mkdir -p plugin/flannel
[root@k8s-master01 install-k8s]# cd plugin/flannel
[root@k8s-master01 flannel]# wget https://github.com/WeiboGe2012/kube-flannel.yml/blob/master/kube-flannel.yml

[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml

[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4kj2t               1/1     Running   0          92m
coredns-5c98db65d4-7zsr7               1/1     Running   0          92m
etcd-k8s-master01                      1/1     Running   0          91m
kube-apiserver-k8s-master01            1/1     Running   0          91m
kube-controller-manager-k8s-master01   1/1     Running   0          91m
kube-flannel-ds-amd64-g4gh9            1/1     Running   0          18m
kube-proxy-t7v46                       1/1     Running   0          92m
kube-scheduler-k8s-master01            1/1     Running   0          91m

[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   93m   v1.15.1

[root@k8s-master01 flannel]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.1  netmask 255.255.255.0  broadcast 10.244.0.255
        ether c6:13:60:e7:e8:21  txqueuelen 1000  (Ethernet)
        RX packets 4809  bytes 329578 (321.8 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4854  bytes 1485513 (1.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
        ether 02:42:71:d8:f1:e2  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.192.131  netmask 255.255.255.0  broadcast 192.168.192.255
        ether 00:0c:29:8c:51:ba  txqueuelen 1000  (Ethernet)
        RX packets 536379  bytes 581462942 (554.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1359677  bytes 1764989232 (1.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.0.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether 16:0c:14:08:a6:51  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 625548  bytes 102038881 (97.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 625548  bytes 102038881 (97.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth350261c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether f2:95:97:91:06:00  txqueuelen 0  (Ethernet)
        RX packets 2400  bytes 198077 (193.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2424  bytes 741548 (724.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethd9ac2bc1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        ether 16:5c:e0:81:25:ba  txqueuelen 0  (Ethernet)
        RX packets 2409  bytes 198827 (194.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2435  bytes 744163 (726.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

node01 and node02 hosts join the cluster as worker nodes

Copy the kubeadm join command from the end of the kubeadm-init.log file to node01 and node02 and execute it

kubeadm join 192.168.192.131:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3236bf910c84de4e1f5ad24b1b627771602d5bad03e7819aad18805c440fd8aa

Back on the master, check the nodes: the two new nodes are not Ready yet

[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS     ROLES    AGE   VERSION
k8s-master01   Ready      master   99m   v1.15.1
k8s-node01     NotReady   <none>   23s   v1.15.1
k8s-node02     NotReady   <none>   20s   v1.15.1

# Wait a while; check again after node01 and node02 have finished starting their pods
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4kj2t               1/1     Running   0          100m
coredns-5c98db65d4-7zsr7               1/1     Running   0          100m
etcd-k8s-master01                      1/1     Running   0          100m
kube-apiserver-k8s-master01            1/1     Running   0          100m
kube-controller-manager-k8s-master01   1/1     Running   0          100m
kube-flannel-ds-amd64-5chsx            1/1     Running   0          109s
kube-flannel-ds-amd64-8bxpj            1/1     Running   0          112s
kube-flannel-ds-amd64-g4gh9            1/1     Running   0          26m
kube-proxy-cznqr                       1/1     Running   0          112s
kube-proxy-mcsdl                       1/1     Running   0          109s
kube-proxy-t7v46                       1/1     Running   0          100m
kube-scheduler-k8s-master01            1/1     Running   0          100m
[root@k8s-master01 flannel]# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-4kj2t               1/1     Running   0          101m    10.244.0.3        k8s-master01   <none>           <none>
coredns-5c98db65d4-7zsr7               1/1     Running   0          101m    10.244.0.2        k8s-master01   <none>           <none>
etcd-k8s-master01                      1/1     Running   0          100m    192.168.192.131   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running   0          100m    192.168.192.131   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running   0          100m    192.168.192.131   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-5chsx            1/1     Running   0          2m20s   192.168.192.129   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-8bxpj            1/1     Running   0          2m23s   192.168.192.130   k8s-node01     <none>           <none>
kube-flannel-ds-amd64-g4gh9            1/1     Running   0          26m     192.168.192.131   k8s-master01   <none>           <none>
kube-proxy-cznqr                       1/1     Running   0          2m23s   192.168.192.130   k8s-node01     <none>           <none>
kube-proxy-mcsdl                       1/1     Running   0          2m20s   192.168.192.129   k8s-node02     <none>           <none>
kube-proxy-t7v46                       1/1     Running   0          101m    192.168.192.131   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running   0          100m    192.168.192.131   k8s-master01   <none>           <none>

[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   104m    v1.15.1
k8s-node01     Ready    <none>   5m34s   v1.15.1
k8s-node02     Ready    <none>   5m31s   v1.15.1

This k8s 1.15.1 cluster has now been deployed successfully; it is not yet highly available
Finally, save the important files under /usr/local/ to prevent them from being deleted accidentally

mv install-k8s/ /usr/local/

harbor host configuration

Reference documents: https://github.com/WeiboGe2012/Data/tree/master/Linux/k8s/colony
Set ip address
vi /etc/sysconfig/network-scripts/ifcfg-ens33
harbor host

TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.128
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114

Install docker

  1. Uninstall old version
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
# Set up repository
# Install the yum utils package (provides the yum config manager utility) and set up a stable repository.
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# Install the Docker Engine and containerd of the specified version
$ sudo yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io

Start docker

$ sudo systemctl start docker

Verify that the Docker Engine is properly installed by running the Hello world image.

$ sudo docker run hello-world

Set docker to start on boot

$ sudo systemctl enable docker

Stop the docker service

$ sudo systemctl stop docker

Restart docker service

$ sudo systemctl restart  docker

verification

[root@k8s-master01 ~]# docker version
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false
# Create /etc/docker directory
mkdir /etc/docker

# Configuring the daemon
cat > /etc/docker/daemon.json <<EOF
{
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
	"log-opts": {
		"max-size": "100m"
	},
	"insecure-registries": ["https://hub.atguigu.com"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker

Install docker compose

download

sudo curl -L "https://get.daocloud.io/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

Set permissions

sudo chmod +x /usr/local/bin/docker-compose

verification

docker-compose --version

Download harbor https://github.com/goharbor/harbor/releases

wget https://github.91chi.fun//https://github.com//goharbor/harbor/releases/download/v1.10.11/harbor-offline-installer-v1.10.11.tgz

tar -zxvf harbor-offline-installer-v1.10.11.tgz

mv harbor /usr/local/

cd /usr/local/harbor/

vi harbor.yml

harbor.yml

# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: hub.atguigu.com  # Modification point 1: the access address (domain name)

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/server.crt    # Modify and create /data/cert path
  private_key: /data/cert/server.key    # Modify and create /data/cert path

# Uncomment external_url if you want to enable external proxy
# And when it enabled the hostname will no longer used
# external_url: https://reg.mydomain.com:8433

# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: Harbor12345     # harbor login admin account password

# Harbor DB configuration
database:
  # The password for the root user of Harbor DB. Change this before any production use.
  password: root123
  # The maximum number of connections in the idle connection pool. If it <=0, no idle connections are retained.
  max_idle_conns: 50
  # The maximum number of open connections to the database. If it <= 0, then there is no limit on the number of open connections.
  # Note: the default number of connections is 100 for postgres.
  max_open_conns: 100

# The default data volume
data_volume: /data      # Data storage address, no directory needs to be created

# Harbor Storage settings by default is using /data dir on local filesystem
# Uncomment storage_service setting If you want to using external storage
# storage_service:
#   # ca_bundle is the path to the custom root ca certificate, which will be injected into the truststore
#   # of registry's and chart repository's containers.  This is usually needed when the user hosts a internal storage with self signed certificate.
#   ca_bundle:

#   # storage backend, default is filesystem, options include filesystem, azure, gcs, s3, swift and oss
#   # for more info about this configuration please refer https://docs.docker.com/registry/configuration/
#   filesystem:
#     maxthreads: 100
#   # set disable to true when you want to disable registry redirect
#   redirect:
#     disabled: false

# Clair configuration
clair:
  # The interval of clair updaters, the unit is hour, set to 0 to disable the updaters.
  updaters_interval: 12

jobservice:
  # Maximum number of job workers in job service
  max_job_workers: 10

notification:
  # Maximum retry count for webhook job
  webhook_job_max_retry: 10

chart:
  # Change the value of absolute_url to enabled can enable absolute url in chart
  absolute_url: disabled

# Log configurations
log:
  # options are debug, info, warning, error, fatal
  level: info
  # configs for logs in local storage
  local:
    # Log files are rotated log_rotate_count times before being removed. If count is 0, old versions are removed rather than rotated.
    rotate_count: 50
    # Log files are rotated only if they grow bigger than log_rotate_size bytes. If size is followed by k, the size is assumed to be in kilobytes.
    # If the M is used, the size is in megabytes, and if G is used, the size is in gigabytes. So size 100, size 100k, size 100M and size 100G
    # are all valid.
    rotate_size: 200M
    # The directory on your host that store log
    location: /var/log/harbor

  # Uncomment following lines to enable external syslog endpoint.
  # external_endpoint:
  #   # protocol used to transmit log to external endpoint, options is tcp or udp
  #   protocol: tcp
  #   # The host of external endpoint
  #   host: localhost
  #   # Port of external endpoint
  #   port: 5140

#This attribute is for migrator to detect the version of the .cfg file, DO NOT MODIFY!
_version: 1.10.0

# Uncomment external_database if using external database.
# external_database:
#   harbor:
#     host: harbor_db_host
#     port: harbor_db_port
#     db_name: harbor_db_name
#     username: harbor_db_username
#     password: harbor_db_password
#     ssl_mode: disable
#     max_idle_conns: 2
#     max_open_conns: 0
#   clair:
#     host: clair_db_host
#     port: clair_db_port
#     db_name: clair_db_name
#     username: clair_db_username
#     password: clair_db_password
#     ssl_mode: disable
#   notary_signer:
#     host: notary_signer_db_host
#     port: notary_signer_db_port
#     db_name: notary_signer_db_name
#     username: notary_signer_db_username
#     password: notary_signer_db_password
#     ssl_mode: disable
#   notary_server:
#     host: notary_server_db_host
#     port: notary_server_db_port
#     db_name: notary_server_db_name
#     username: notary_server_db_username
#     password: notary_server_db_password
#     ssl_mode: disable

# Uncomment external_redis if using external Redis server
# external_redis:
#   host: redis
#   port: 6379
#   password:
#   # db_index 0 is for core, it's unchangeable
#   registry_db_index: 1
#   jobservice_db_index: 2
#   chartmuseum_db_index: 3
#   clair_db_index: 4

# Uncomment uaa for trusting the certificate of uaa instance that is hosted via self-signed cert.
# uaa:
#   ca_file: /path/to/ca

# Global proxy
# Config http proxy for components, e.g. http://my.proxy.com:3128
# Components doesn't need to connect to each others via http proxy.
# Remove component from `components` array if want disable proxy
# for it. If you want use proxy for replication, MUST enable proxy
# for core and jobservice, and set `http_proxy` and `https_proxy`.
# Add domain to the `no_proxy` field, when you want disable proxy
# for some special registry.
proxy:
  http_proxy:
  https_proxy:
  # no_proxy endpoints will appended to 127.0.0.1,localhost,.local,.internal,log,db,redis,nginx,core,portal,postgresql,jobservice,registry,registryctl,clair,chartmuseum,notary-server
  no_proxy:
  components:
    - core
    - jobservice
    - clair

Create the directory mentioned above

[root@localhost harbor]# mkdir -p /data/cert
[root@localhost harbor]# cd !$
cd /data/cert
[root@localhost cert]# 

Create certificate

# Generate private key
[root@localhost cert]# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
....................................................................................+++
...........+++
e is 65537 (0x10001)
Enter pass phrase for server.key:					# Enter the same password twice
Verifying - Enter pass phrase for server.key:

[root@localhost cert]# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CN										# country
State or Province Name (full name) []:BJ									# province
Locality Name (eg, city) [Default City]:BJ									# city
Organization Name (eg, company) [Default Company Ltd]:atguigu				# organization
Organizational Unit Name (eg, section) []:atguigu							# organizational unit
Common Name (eg, your name or your server's hostname) []:hub.atguigu.com	# domain name
Email Address []:wuyongtao8@126.com											# administrator email address

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:													# Challenge password: press Enter to leave it blank
An optional company name []:												# Optional company name: press Enter to leave it blank


# Backup private key
[root@localhost cert]# cp server.key server.key.org

# Remove the passphrase from the private key (enter the passphrase when prompted)
[root@localhost cert]# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:
writing RSA key

# Sign the certificate request
[root@localhost cert]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=CN/ST=BJ/L=BJ/O=atguigu/OU=atguigu/CN=hub.atguigu.com/emailAddress=wuyongtao8@126.com
Getting Private key

# Adjust permissions on the certificate files
[root@localhost cert]# chmod a+x *
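
Optionally, the generated certificate can be sanity-checked before running the installer (a quick sketch):

# Confirm the subject (CN must be hub.atguigu.com) and the validity period
openssl x509 -in server.crt -noout -subject -dates
# Confirm the certificate matches the private key (the two digests must be identical)
openssl x509 -noout -modulus -in server.crt | openssl md5
openssl rsa  -noout -modulus -in server.key | openssl md5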

# Run script installation
[root@localhost cert]# cd /usr/local/harbor/
[root@localhost harbor]# ls
common.sh  harbor.v1.10.11.tar.gz  harbor.yml  install.sh  LICENSE  prepare
# The following step ran into problem 5
[root@localhost harbor]# ./install.sh 

Add the harbor host domain name to the master, node01 and node02 hosts

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.192.131 k8s-master01
192.168.192.130 k8s-node01
192.168.192.129 k8s-node02
[root@k8s-master01 ~]# echo "192.168.192.128 hub.atguigu.com" >> /etc/hosts
[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.192.131 k8s-master01
192.168.192.130 k8s-node01
192.168.192.129 k8s-node02
192.168.192.128 hub.atguigu.com

Add the domain name on the harbor host as well

[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.192.131 k8s-master01
192.168.192.130 k8s-node01
192.168.192.129 k8s-node02
192.168.192.128 hub.atguigu.com

Modify the Windows local hosts file and add the domain name
C:\Windows\System32\drivers\etc\hosts

192.168.192.128 hub.atguigu.com

On the harbor host, view the running containers

[root@localhost harbor]# docker ps -a
CONTAINER ID        IMAGE                                  COMMAND                  CREATED             STATUS                      PORTS                                         NAMES
cf5d0df2935f        goharbor/nginx-photon:v1.10.11         "nginx -g 'daemon of..."   48 seconds ago      Up 46 seconds (healthy)     0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp   nginx
5f373c689525        goharbor/harbor-jobservice:v1.10.11    "/harbor/harbor_jobs..."   48 seconds ago      Up 46 seconds (healthy)                                                   harbor-jobservice
242b4f35f322        goharbor/harbor-core:v1.10.11          "/harbor/harbor_core"    48 seconds ago      Up 47 seconds (healthy)                                                   harbor-core
6fc46205eccb        goharbor/harbor-registryctl:v1.10.11   "/home/harbor/start...."   50 seconds ago      Up 48 seconds (healthy)                                                   registryctl
8ca6e340e8b5        goharbor/harbor-db:v1.10.11            "/docker-entrypoint...."   50 seconds ago      Up 48 seconds (healthy)     5432/tcp                                      harbor-db
bed1ed36df00        goharbor/redis-photon:v1.10.11         "redis-server /etc/r..."   50 seconds ago      Up 48 seconds (healthy)     6379/tcp                                      redis
42f03bcc4fb8        goharbor/harbor-portal:v1.10.11        "nginx -g 'daemon of..."   51 seconds ago      Up 48 seconds (healthy)     8080/tcp                                      harbor-portal
0647d52988cf        goharbor/registry-photon:v1.10.11      "/home/harbor/entryp..."   51 seconds ago      Up 48 seconds (healthy)     5000/tcp                                      registry
229aa32bbc70        goharbor/harbor-log:v1.10.11           "/bin/sh -c /usr/loc..."   51 seconds ago      Up 50 seconds (healthy)     127.0.0.1:1514->10514/tcp                     harbor-log
f349984bf935        91d0ab894aff                           "stf local --public-..."   6 months ago        Exited (1) 6 months ago                                                   stf
3b7be288d1ff        7123ee61b746                           "/sbin/tini -- adb -..."   6 months ago        Exited (143) 6 months ago                                                 adbd
a4bfb45049e4        2a54dcb95502                           "rethinkdb --bind al..."   6 months ago        Exited (0) 6 months ago                                                   rethinkdb

Open browser to access: hub.atguigu.com

Input account: admin Password: Harbor12345

Verify that node01 can log in to harbor

[root@k8s-node01 ~]# docker login https://hub.atguigu.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

[root@k8s-node01 ~]# docker pull wangyanglinux/myapp:v1
v1: Pulling from wangyanglinux/myapp
550fe1bea624: Pull complete 
af3988949040: Pull complete 
d6642feac728: Pull complete 
c20f0a205eaa: Pull complete 
fe78b5db7c4e: Pull complete 
6565e38e67fe: Pull complete 
Digest: sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
Status: Downloaded newer image for wangyanglinux/myapp:v1



Tag format: docker tag SOURCE_IMAGE[:TAG] hub.atguigu.com/library/IMAGE[:TAG]
Retag the image and push it

[root@k8s-node01 ~]# docker tag wangyanglinux/myapp:v1 hub.atguigu.com/library/myapp:v1
[root@k8s-node01 ~]# docker push hub.atguigu.com/library/myapp:v1
The push refers to repository [hub.atguigu.com/library/myapp]
a0d2c4392b06: Pushed 
05a9e65e2d53: Pushed 
68695a6cfd7d: Pushed 
c1dc81a64903: Pushed 
8460a579ab63: Pushed 
d39d92664027: Pushed 
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569

The push succeeded (it can be seen in the harbor web UI)

Delete the local images

[root@k8s-node01 ~]# docker rmi -f hub.atguigu.com/library/myapp:v1       
Untagged: hub.atguigu.com/library/myapp:v1
Untagged: hub.atguigu.com/library/myapp@sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e
[root@k8s-node01 ~]# docker rmi -f wangyanglinux/myapp:v1                 
Untagged: wangyanglinux/myapp:v1
Untagged: wangyanglinux/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
Deleted: sha256:d4a5e0eaa84f28550cb9dd1bde4bfe63a93e3cf88886aa5dad52c9a75dd0e6a9
Deleted: sha256:bf5594a16c1ff32ffe64a68a92ebade1080641f608d299170a2ae403f08764e7
Deleted: sha256:b74f3c20dd90bf6ead520265073c4946461baaa168176424ea7aea1bc7f08c1f
Deleted: sha256:8943f94f7db615e453fa88694440f76d65927fa18c6bf69f32ebc9419bfcc04a
Deleted: sha256:2020231862738f8ad677bb75020d1dfa34159ad95eef10e790839174bb908908
Deleted: sha256:49757da6049113b08246e77f770f49b1d50bb97c93f19d2eeae62b485b46e489
Deleted: sha256:d39d92664027be502c35cf1bf464c726d15b8ead0e3084be6e252a161730bc82



The image can now be pulled from the private registry:

docker pull hub.atguigu.com/library/myapp:v1

On the master host

[root@k8s-master01 ~]# kubectl run nginx-deployment --image=hub.atguigu.com/library/myapp:v1 --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx-deployment created

[root@k8s-master01 ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           3m32s
[root@k8s-master01 ~]# kubectl get rs
NAME                          DESIRED   CURRENT   READY   AGE
nginx-deployment-78b46578cd   1         1         1       4m39s
[root@k8s-master01 ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78b46578cd-ttnkc   1/1     Running   0          4m48s

# It can be seen that the pod runs on node01
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-78b46578cd-ttnkc   1/1     Running   0          5m19s   10.244.1.2   k8s-node01   <none>           <none>

# Direct access is available
[root@k8s-master01 ~]# curl 10.244.1.2
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 ~]# curl 10.244.1.2/hostname.html 
nginx-deployment-78b46578cd-ttnkc

On node01, check that the hub.atguigu.com/library/myapp container is running

[root@k8s-node01 ~]# docker ps
CONTAINER ID        IMAGE                           COMMAND                  CREATED             STATUS              PORTS               NAMES
ead8b7b658d0        hub.atguigu.com/library/myapp   "nginx -g 'daemon of..."   6 minutes ago       Up 6 minutes                            k8s_nginx-deployment_nginx-deployment-78b46578cd-ttnkc_default_aba645b2-b0bc-40df-bd3f-74e872c5eb3e_0
2e9604fe6814        k8s.gcr.io/pause:3.1            "/pause"                 6 minutes ago       Up 6 minutes                            k8s_POD_nginx-deployment-78b46578cd-ttnkc_default_aba645b2-b0bc-40df-bd3f-74e872c5eb3e_0
14144597ee6c        4e9f801d2217                    "/opt/bin/flanneld -..."   6 hours ago         Up 6 hours                              k8s_kube-flannel_kube-flannel-ds-amd64-8bxpj_kube-system_cf50e169-2798-496b-ac94-901ae02fc836_3
da5aa7976f63        89a062da739d                    "/usr/local/bin/kube..."   6 hours ago         Up 6 hours                              k8s_kube-proxy_kube-proxy-cznqr_kube-system_4146aaa5-e985-45bb-9f42-871c7671eea2_2
db9c540c9d69        k8s.gcr.io/pause:3.1            "/pause"                 6 hours ago         Up 6 hours                              k8s_POD_kube-proxy-cznqr_kube-system_4146aaa5-e985-45bb-9f42-871c7671eea2_3
b0c135210d4f        k8s.gcr.io/pause:3.1            "/pause"                 6 hours ago         Up 6 hours                              k8s_POD_kube-flannel-ds-amd64-8bxpj_kube-system_cf50e169-2798-496b-ac94-901ae02fc836_2


After a pod is deleted, a new pod is created automatically to replace it

[root@k8s-master01 ~]# kubectl delete pod nginx-deployment-78b46578cd-ttnkc
pod "nginx-deployment-78b46578cd-ttnkc" deleted
[root@k8s-master01 ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78b46578cd-vvkd6   1/1     Running   0          29s

Capacity expansion

[root@k8s-master01 ~]# kubectl scale --replicas=3 deployment/nginx-deployment
deployment.extensions/nginx-deployment scaled
[root@k8s-master01 ~]# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-78b46578cd-45ndq   1/1     Running   0          11s
nginx-deployment-78b46578cd-r627l   1/1     Running   0          11s
nginx-deployment-78b46578cd-vvkd6   1/1     Running   0          3m17s
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-78b46578cd-45ndq   1/1     Running   0          50s     10.244.2.2   k8s-node02   <none>           <none>
nginx-deployment-78b46578cd-r627l   1/1     Running   0          50s     10.244.1.4   k8s-node01   <none>           <none>
nginx-deployment-78b46578cd-vvkd6   1/1     Running   0          3m56s   10.244.1.3   k8s-node01   <none>           <none>

Delete one pod and check again: there are still three

[root@k8s-master01 ~]# kubectl delete pod nginx-deployment-78b46578cd-45ndq
pod "nginx-deployment-78b46578cd-45ndq" deleted
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-78b46578cd-4g4cb   1/1     Running   0          34s     10.244.2.3   k8s-node02   <none>           <none>
nginx-deployment-78b46578cd-r627l   1/1     Running   0          2m30s   10.244.1.4   k8s-node01   <none>           <none>
nginx-deployment-78b46578cd-vvkd6   1/1     Running   0          5m36s   10.244.1.3   k8s-node01   <none>           <none>
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   23h
[root@k8s-master01 ~]# kubectl get deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           22m
[root@k8s-master01 ~]# kubectl expose deployment nginx-deployment --port=30000 --target-port=80
service/nginx-deployment exposed
[root@k8s-master01 ~]# kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP     23h
nginx-deployment   ClusterIP   10.97.63.227   <none>        30000/TCP   39s
[root@k8s-master01 ~]# curl 10.97.63.227:30000
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
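
For reference, the Service created by kubectl expose above is roughly equivalent to the following manifest (a sketch; the selector run: nginx-deployment matches the label shown later by kubectl edit svc):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  labels:
    run: nginx-deployment
spec:
  type: ClusterIP
  selector:
    run: nginx-deployment
  ports:
  - port: 30000        # Service (cluster) port
    targetPort: 80     # container port
    protocol: TCP
EOF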

# Round-robin access across the three Pods
[root@k8s-master01 ~]# curl 10.97.63.227:30000/hostname.html
nginx-deployment-78b46578cd-r627l
[root@k8s-master01 ~]# curl 10.97.63.227:30000/hostname.html
nginx-deployment-78b46578cd-vvkd6
[root@k8s-master01 ~]# curl 10.97.63.227:30000/hostname.html
nginx-deployment-78b46578cd-4g4cb
[root@k8s-master01 ~]# curl 10.97.63.227:30000/hostname.html
nginx-deployment-78b46578cd-r627l
[root@k8s-master01 ~]# ipvsadm -Ln | grep 10.97.63.227
TCP  10.97.63.227:30000 rr
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0         
  -> 10.244.0.7:53                Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.6:9153              Masq    1      0          0         
  -> 10.244.0.7:9153              Masq    1      0          0         
TCP  10.97.63.227:30000 rr
  -> 10.244.1.3:80                Masq    1      0          2         
  -> 10.244.1.4:80                Masq    1      0          2         
  -> 10.244.2.3:80                Masq    1      0          2         
UDP  10.96.0.10:53 rr
  -> 10.244.0.6:53                Masq    1      0          0         
  -> 10.244.0.7:53                Masq    1      0          0        
 
# The backend IPs above match the Pod IPs
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-78b46578cd-4g4cb   1/1     Running   0          13m   10.244.2.3   k8s-node02   <none>           <none>
nginx-deployment-78b46578cd-r627l   1/1     Running   0          15m   10.244.1.4   k8s-node01   <none>           <none>
nginx-deployment-78b46578cd-vvkd6   1/1     Running   0          18m   10.244.1.3   k8s-node01   <none>           <none>
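
The real servers listed by ipvsadm for 10.97.63.227:30000 should always match the Service's Endpoints object, which kube-proxy keeps in sync with the ready Pods. A quick cross-check (not part of the original transcript):

kubectl get endpoints nginx-deployment     # Pod IP:port pairs behind the Service
ipvsadm -Ln -t 10.97.63.227:30000          # the IPVS virtual service and its real servers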

A ClusterIP Service cannot be accessed from the external network, so change the Service type to NodePort

[root@k8s-master01 ~]# kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP     24h
nginx-deployment   ClusterIP   10.97.63.227   <none>        30000/TCP   10m
[root@k8s-master01 ~]# kubectl edit svc nginx-deployment
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2022-05-26T11:48:32Z"
  labels:
    run: nginx-deployment
  name: nginx-deployment
  namespace: default
  resourceVersion: "72884"
  selfLink: /api/v1/namespaces/default/services/nginx-deployment
  uid: 22af8fb4-9580-4918-9040-ac1de01e39d3
spec:
  clusterIP: 10.97.63.227
  ports:
  - port: 30000
    protocol: TCP
    targetPort: 80
  selector:
    run: nginx-deployment
  sessionAffinity: None
  type: NodePort     # Change to NodePort
status:
  loadBalancer: {}
[root@k8s-master01 ~]# kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP           24h
nginx-deployment   NodePort    10.97.63.227   <none>        30000:30607/TCP   14m
[root@k8s-master01 ~]# netstat -anpt | grep :30607
tcp6       0      0 :::30607                :::*                    LISTEN      112386/kube-proxy   
[root@k8s-master01 ~]# netstat -anpt | grep :30607
tcp6       0      0 :::30607                :::*                    LISTEN      112386/kube-proxy   
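
The interactive kubectl edit above is one way to make the change; the same switch to NodePort can also be done non-interactively with kubectl patch (an alternative sketch, not how it was done above):

kubectl patch svc nginx-deployment -p '{"spec":{"type":"NodePort"}}'
kubectl get svc nginx-deployment      # confirm the allocated nodePort (30000:3xxxx/TCP)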

From the external network, access the master host's address plus the NodePort (30607 here)

The NodePort is also reachable via node01's address

The NodePort is also reachable via node02's address
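
Concretely, kube-proxy opens the allocated NodePort on every node, so any node address works. For example, assuming 192.168.192.131 is the master's address as shown in the ipvsadm output, and substituting node01/node02's real addresses for the other two tests:

curl http://192.168.192.131:30607/hostname.html    # via the master
curl http://<node01-ip>:30607/hostname.html        # via node01 (replace with its real address)
curl http://<node02-ip>:30607/hostname.html        # via node02 (replace with its real address)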

With this, the k8s functional test and the connection to the private registry (Harbor) are complete
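
A simple end-to-end check of the private registry connection is to tag an image, push it, and pull it back. This is only a sketch: hub.example.com/library is a hypothetical address and project, so replace them with the Harbor address and project configured earlier, and log in with your Harbor account:

docker login hub.example.com                                 # hypothetical Harbor address
docker tag nginx:latest hub.example.com/library/nginx:v1     # retag a local image for the registry
docker push hub.example.com/library/nginx:v1
docker pull hub.example.com/library/nginx:v1                 # pull it back to confirm access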

Problem 1: the browser cannot access 192.168.1.1

Solution: shut down all virtual machines and restore the network configuration

Problem 2:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Solution: one of the directories Docker needs had not been created; go back through the environment deployment steps carefully
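
A sketch of how to diagnose this (assuming systemd manages Docker): check whether the daemon is running and enabled, read its logs, and recreate any missing directory it complains about before restarting:

systemctl status docker                          # is the daemon running?
journalctl -u docker --no-pager | tail -n 50     # why did it fail to start?
mkdir -p /etc/docker                             # example: recreate a missing config directory
systemctl enable --now docker                    # enable and start the service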

Problem 3:

[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.16. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

Solution: uninstall Docker and install the specified (validated) version instead.
Reference 1 (install a specific Docker version): https://blog.csdn.net/mayi_xiaochaun/article/details/123421532
Reference 2 (uninstall Docker): https://blog.csdn.net/wujian_csdn_csdn/article/details/122421103
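
On CentOS with yum, pinning Docker to a validated release (18.09 is the latest validated version according to the preflight warning) looks roughly like this sketch:

yum remove -y docker-ce docker-ce-cli containerd.io     # remove the too-new version first
yum list docker-ce --showduplicates | sort -r           # list the versions available in the repo
yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
systemctl enable --now docker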

Problem 4:

[root@k8s-master01 ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Solution: add CPUs to the virtual machine; at least two CPUs are required
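
You can verify the CPU count inside the VM before retrying; kubeadm's own output also mentions --ignore-preflight-errors, which can skip the check in a pinch (not recommended for anything beyond experiments):

nproc                      # number of CPUs visible to the VM (must be >= 2)
lscpu | grep '^CPU(s):'
# last resort only:
# kubeadm init --config=kubeadm-config.yaml --upload-certs --ignore-preflight-errors=NumCPU | tee kubeadm-init.log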

Problem 5:

[root@localhost harbor]# ./install.sh 
...
...
...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... 
Creating registryctl   ... 
Creating registry      ... error
Creating harbor-portal ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-db     ... done

ERROR: for registry  Cannot create container for service registry: Conflict. The container name "/registry" is already in use by container "3f42e1c7dd80b96b59848ac10698ef3f5537afeedb718a424dd91d13bb55440b". You have to remove (or rename) that container to be able to reuse that name.
ERROR: Encountered errors while bringing up the project.

Solution: delete the registry container and image

[root@localhost harbor]# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
goharbor/chartmuseum-photon     v1.10.11            d00df92a5e3e        3 weeks ago         164MB
goharbor/redis-photon           v1.10.11            aa57c8e9fa46        3 weeks ago         151MB
goharbor/clair-adapter-photon   v1.10.11            e87900ea4eb9        3 weeks ago         66.1MB
goharbor/clair-photon           v1.10.11            03cd37f2ca5d        3 weeks ago         178MB
goharbor/notary-server-photon   v1.10.11            801719b38205        3 weeks ago         105MB
goharbor/notary-signer-photon   v1.10.11            005e711802d6        3 weeks ago         102MB
goharbor/harbor-registryctl     v1.10.11            fd34fcc88f68        3 weeks ago         93.4MB
goharbor/registry-photon        v1.10.11            c7076a9bc40b        3 weeks ago         78.6MB
goharbor/nginx-photon           v1.10.11            68e6d0e1c018        3 weeks ago         45MB
goharbor/harbor-log             v1.10.11            06df11c5e8f3        3 weeks ago         108MB
goharbor/harbor-jobservice      v1.10.11            f7d878b39e41        3 weeks ago         84.7MB
goharbor/harbor-core            v1.10.11            69d4874721a3        3 weeks ago         79.6MB
goharbor/harbor-portal          v1.10.11            83b24472c7c8        3 weeks ago         53.1MB
goharbor/harbor-db              v1.10.11            11278dbcadf4        3 weeks ago         188MB
goharbor/prepare                v1.10.11            66d60732b8ff        3 weeks ago         206MB
nginx                           latest              04661cdce581        6 months ago        141MB
rethinkdb                       latest              2a54dcb95502        7 months ago        131MB
192.168.111.129:5000/demo       latest              40fc65df2cf9        14 months ago       660MB
demo                            1.0-SNAPSHOT        40fc65df2cf9        14 months ago       660MB
registry                        latest              678dfa38fcfa        17 months ago       26.2MB
openstf/ambassador              latest              938a816f078a        21 months ago       8.63MB
openstf/stf                     latest              91d0ab894aff        21 months ago       958MB
sorccu/adb                      latest              7123ee61b746        4 years ago         30.5MB
java                            8                   d23bdf5b1b1b        5 years ago         643MB
[root@localhost harbor]# docker rmi -f registry:latest
or
[root@localhost harbor]# docker rmi 678dfa38fcfa
Error response from daemon: conflict: unable to delete 678dfa38fcfa (must be forced) - image is being used by stopped container 3f42e1c7dd80
[root@localhost harbor]# docker rm 3f42e1c7dd80
3f42e1c7dd80
[root@localhost harbor]# docker rmi 678dfa38fcfa
Error: No such image: 678dfa38fcfa
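
A shorter path to the same result (assuming the old standalone registry container is no longer needed) is to remove it by name, bring down any half-started Harbor stack, and rerun the installer:

docker rm -f registry       # remove the conflicting container named "registry"
docker-compose down         # run inside the harbor directory; cleans up partially created services
./install.sh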

[root@localhost harbor]# cd /var/lib/docker/image/overlay2/imagedb/content/sha256/
[root@localhost sha256]# ll
total 188
-rw-------. 1 root root 4265 May 26 17:48 005e711802d665903b9216e8c46b46676ad9c2e43ef48911a33f0bf9dbd30a06
-rw-------. 1 root root 4791 May 26 17:48 03cd37f2ca5d16745270fef771c60cd502ec9559d332d6601f5ab5e9f41e841a
-rw-------. 1 root root 7738 Nov 11 2021  04661cdce5812210bac48a8af672915d0719e745414b4c322719ff48c7da5b83
-rw-------. 1 root root 4789 May 26 17:48 06df11c5e8f3cf1963c236327434cbfe2f5f3d9a9798c487d4e6c8ba1742e5fe
-rw-------. 1 root root 5725 May 26 17:48 11278dbcadf4651433c7432427bd4877b6840c5d21454c3bf266c12c1d1dd559
-rw-------. 1 root root 5640 Jun 18 2021  27fdf005f8f0ddc7837fcb7c1dd61cffdb37f14d964fb4edccfa4e6b60f6e339
-rw-------. 1 root root 3364 Nov 11 2021  2a54dcb95502386ca01cdec479bea40b3eacfefe7019e6bb4883eff04d35b883
-rw-------. 1 root root 5771 Jun 18 2021  40fc65df2cf909b24416714b4640fa783869481a4eebf37f7da8cbf1f299b2ab
-rw-------. 1 root root 5062 Apr 18 2021  e1d920c1cd61aeaf44f7e17b2dfc3dcb24414770a3955bbed61e53c39b90232
-rw-------. 1 root root 3526 May 26 17:48 66d60732b8ff10d2f2d237a68044f11fc87497606cf8c5feae2adf6433f3d946
-rw-------. 1 root root 3114 Jun 18 2021  678dfa38fcfa349ccbdb1b6d52ac113ace67d5746794b36dfbad9dd96a9d1c43
-rw-------. 1 root root 3501 May 26 17:48 68e6d0e1c018d66a4cab78d6c2c46c23794e8a06b37e195e8c8bcded2ecc82d2
-rw-------. 1 root root 4058 May 26 17:48 69d4874721a38f06228c4a2e407e4c82a8847b61b51c2d50a6e3fc645635c233
-rw-------. 1 root root 4284 Nov 11 2021  7123ee61b7468b74c8de67a6dfb40c61840f5b910fee4105e38737271741582f
-rw-------. 1 root root 4263 May 26 17:48 801719b38205038d5a145380a1954481ed4e8a340e914ef7819f0f7bed395df6
-rw-------. 1 root root 4814 May 26 17:48 83b24472c7c8cc71effc362949d4fa04e3e0a6e4a8f2a248f5706513cbcdfa0a
-rw-------. 1 root root 5316 Aug 18 2021  f36ad1be230ab68cc8337931358734ffedd968a076a9522cd88359fbf4af98d
-rw-------. 1 root root 6501 Nov 11 2021  91d0ab894affa1430efb6bd130edb360d9600132bb4187a6cabe403d8ef98bdd
-rw-------. 1 root root 2070 Nov 11 2021  938a816f078a0b27753a0f690518754bfbb47e0bb57e61275600757845d7e4b1
-rw-------. 1 root root 3751 May 26 17:48 aa57c8e9fa464f8124a753facda552af0a79f5aa169901f357a9003c8a65d9c5
-rw-------. 1 root root 5204 Aug 18 2021  bb56dc5d8cbd1f41b9e9bc7de1df15354a204cf17527790e13ac7d0147916dd6
-rw-------. 1 root root 4347 May 26 17:48 c7076a9bc40b43db80afb4699e124c0dd1c777825bd36d504a2233028421a178
-rw-------. 1 root root 4858 Aug 18 2021  c70a4b1ffafa08a73e469ed0caa6693111aad55d34df8df7eea6bae0fb542547
-rw-------. 1 root root 4592 May 26 17:48 d00df92a5e3e330c2231d9f54a0a0c320c420946e04afcfe02ad596d778a8370
-rw-------. 1 root root 4733 Aug 18 2021  d23bdf5b1b1b1afce5f1d0fd33e7ed8afbc084b594b9ccf742a5b27080d8a4a8
-rw-------. 1 root root 3715 May 26 17:48 e87900ea4eb9beafd8264e874f71fcb72ce1df827e999d6bb7b2e47ebb0ca5e4
-rw-------. 1 root root 3677 May 26 17:48 f7d878b39e411b345d4eb263205cde950ad59ba3102d4ac1079238ef1b3de903
-rw-------. 1 root root 4632 May 26 17:48 fd34fcc88f68fe44a06c34d3a754f70fd36ee6174e64e65cb1072f0d4a9826d0
[root@localhost sha256]# rm -rf 678dfa38fcfa349ccbdb1b6d52ac113ace67d5746794b36dfbad9dd96a9d1c43
[root@localhost sha256]# docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
goharbor/chartmuseum-photon     v1.10.11            d00df92a5e3e        3 weeks ago         164MB
goharbor/redis-photon           v1.10.11            aa57c8e9fa46        3 weeks ago         151MB
goharbor/clair-adapter-photon   v1.10.11            e87900ea4eb9        3 weeks ago         66.1MB
goharbor/clair-photon           v1.10.11            03cd37f2ca5d        3 weeks ago         178MB
goharbor/notary-server-photon   v1.10.11            801719b38205        3 weeks ago         105MB
goharbor/notary-signer-photon   v1.10.11            005e711802d6        3 weeks ago         102MB
goharbor/harbor-registryctl     v1.10.11            fd34fcc88f68        3 weeks ago         93.4MB
goharbor/registry-photon        v1.10.11            c7076a9bc40b        3 weeks ago         78.6MB
goharbor/nginx-photon           v1.10.11            68e6d0e1c018        3 weeks ago         45MB
goharbor/harbor-log             v1.10.11            06df11c5e8f3        3 weeks ago         108MB
goharbor/harbor-jobservice      v1.10.11            f7d878b39e41        3 weeks ago         84.7MB
goharbor/harbor-core            v1.10.11            69d4874721a3        3 weeks ago         79.6MB
goharbor/harbor-portal          v1.10.11            83b24472c7c8        3 weeks ago         53.1MB
goharbor/harbor-db              v1.10.11            11278dbcadf4        3 weeks ago         188MB
goharbor/prepare                v1.10.11            66d60732b8ff        3 weeks ago         206MB
nginx                           latest              04661cdce581        6 months ago        141MB
rethinkdb                       latest              2a54dcb95502        7 months ago        131MB
192.168.111.129:5000/demo       latest              40fc65df2cf9        14 months ago       660MB
demo                            1.0-SNAPSHOT        40fc65df2cf9        14 months ago       660MB
openstf/ambassador              latest              938a816f078a        21 months ago       8.63MB
openstf/stf                     latest              91d0ab894aff        21 months ago       958MB
sorccu/adb                      latest              7123ee61b746        4 years ago         30.5MB
java                            8                   d23bdf5b1b1b        5 years ago         643MB
