kubeadm 1.20.0 + cilium + hubble environment setup

1, Overview

Cilium is an open source networking solution for containers. Unlike other networking solutions, Cilium emphasizes network security: it can transparently secure the network connections between application services on container management platforms such as Kubernetes.

Cilium's design and implementation are based on eBPF, a Linux kernel technology that can dynamically insert powerful security, visibility, and network control logic into the kernel. The corresponding security policies can be applied and updated without modifying application code or container configuration.

Cilium's product positioning on its official website is "API-aware networking and security", so its features mainly cover three aspects:

(1) Provide basic network interconnection capabilities in Kubernetes, implementing fundamental connectivity for Pods, Services, and other objects in the container cluster;

(2) Relying on eBPF, implement network observability, basic network isolation, troubleshooting, and other security capabilities in Kubernetes;

(3) Relying on eBPF, break through the limitation of traditional host firewalls, which only support L3/L4 micro-segmentation, and support API-aware network security filtering. Cilium provides a simple and effective way to define and enforce identity-based network-layer and application-layer (e.g. HTTP/gRPC/Kafka) security policies for containers/Pods; a sketch of such a policy follows this list.
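
For illustration, here is a minimal sketch of an L7 CiliumNetworkPolicy (the app labels, port, and path are hypothetical): it allows pods labeled app=myclient to issue only GET /public requests to pods labeled app=myservice on port 80, while other HTTP calls are rejected.

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-public          # hypothetical policy name
spec:
  endpointSelector:
    matchLabels:
      app: myservice              # hypothetical server label
  ingress:
  - fromEndpoints:
    - matchLabels:
        app: myclient             # hypothetical client label
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: "GET"
          path: "/public"         # only this HTTP call is allowed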

2, Architecture

Cilium officially provides the following reference architecture [3]. Cilium sits between the container orchestration system and the Linux kernel. Upward, it configures networking and the corresponding security for containers through the orchestration platform; downward, it controls container network forwarding and enforces security policies by attaching eBPF programs inside the Linux kernel.

[Figure: Cilium reference architecture]

[Figure: simple relationship between the orchestration system, Cilium, and the Linux kernel]

3, Environment preparation

Two prerequisites need to be noted:

Kubernetes >= 1.9
Linux kernel >= 4.9
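
Both can be verified quickly on each node (kubectl version --short was still available on Kubernetes 1.20):

# uname -r                    # kernel version, must be >= 4.9
# kubectl version --short     # client/server versions, must be >= 1.9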

 

For kernel upgrade, please refer to the link: https://www.cnblogs.com/xiao987334176/p/16273902.html

For kubernetes installation, please refer to the link: https://www.cnblogs.com/xiao987334176/p/16274066.html

 

The server information is as follows:

Operating system: ubuntu-18.04.6-server-amd64

Configuration: 2 CPU cores, 3 GB RAM

IP address: 192.168.1.12

Hostname: k8smaster

 

Operating system: ubuntu-18.04.6-server-amd64

Configuration: 2 CPU cores, 4 GB RAM

IP address: 192.168.1.13

Hostname: k8snode1
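
If the two hostnames cannot already resolve each other, a common step is to add entries like the following to /etc/hosts on both machines (a sketch based on the addresses above):

192.168.1.12  k8smaster
192.168.1.13  k8snode1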

 

4, Installing Cilium

The version selected here is: 1.7.0

 

Note: before installing Cilium, make sure that no other CNI plugin, such as flannel, is installed.

I found that if flannel already exists and Cilium is then installed, the installation fails because the two plugins conflict.

 

So delete flannel first:

kubectl delete -f kube-flannel.yml
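
Depending on the environment, flannel may also leave virtual interfaces behind on each node. If they exist, they can be removed as well (a sketch; only run these if the interfaces are actually present):

# ip link delete cni0
# ip link delete flannel.1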

 

Open the Chrome browser and download the yaml file:

https://raw.githubusercontent.com/cilium/cilium/v1.7/install/kubernetes/quick-install.yaml

The file is downloaded as a txt file by default and needs to be manually renamed to a yaml file.
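
Alternatively, if the server itself has Internet access, downloading directly with wget avoids the renaming step:

# wget https://raw.githubusercontent.com/cilium/cilium/v1.7/install/kubernetes/quick-install.yaml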

 

Upload the yaml file to the server and apply it on the master:

kubectl apply -f quick-install.yaml

 

Wait a few minutes, then check the pod status:

# kubectl get pods -A|grep cilium
kube-system   cilium-8bkqp                        1/1     Running   0          161m
kube-system   cilium-kfqnk                        1/1     Running   0          162m
kube-system   cilium-operator-746766746f-xtsr4    1/1     Running   0          162m
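
For a more detailed health check, the cilium CLI inside any agent pod can be used (the pod name comes from the output above; substitute your own):

# kubectl -n kube-system exec cilium-8bkqp -- cilium status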

 

Check the IP addresses. Note that several new network interfaces have appeared:

# ifconfig 
cilium_host: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 10.222.0.209  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::c4b6:27ff:fe03:c42d  prefixlen 64  scopeid 0x20<link>
        ether c6:b6:27:03:c4:2d  txqueuelen 1000  (Ethernet)
        RX packets 1065  bytes 80847 (80.8 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 366  bytes 24032 (24.0 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cilium_net: flags=4291<UP,BROADCAST,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet6 fe80::5cd7:4bff:fec8:e267  prefixlen 64  scopeid 0x20<link>
        ether 5e:d7:4b:c8:e2:67  txqueuelen 1000  (Ethernet)
        RX packets 366  bytes 24032 (24.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1065  bytes 80847 (80.8 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

cilium_vxlan: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::74d7:7bff:fe3a:1d63  prefixlen 64  scopeid 0x20<link>
        ether 76:d7:7b:3a:1d:63  txqueuelen 1000  (Ethernet)
        RX packets 7132  bytes 4542061 (4.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5733  bytes 1282422 (1.2 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:af:83:a0:88  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.12  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 2409:8a1e:af4e:ac10:a00:27ff:fec8:200c  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::a00:27ff:fec8:200c  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:c8:20:0c  txqueuelen 1000  (Ethernet)
        RX packets 121149  bytes 40742391 (40.7 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 104963  bytes 47334122 (47.3 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2054621  bytes 421532535 (421.5 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2054621  bytes 421532535 (421.5 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxc00259a3b8fde: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::904c:e8ff:fe8c:425c  prefixlen 64  scopeid 0x20<link>
        ether 92:4c:e8:8c:42:5c  txqueuelen 1000  (Ethernet)
        RX packets 1405  bytes 632795 (632.7 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1319  bytes 146621 (146.6 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lxc_health: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::607a:2ff:fec8:cc10  prefixlen 64  scopeid 0x20<link>
        ether 62:7a:02:c8:cc:10  txqueuelen 1000  (Ethernet)
        RX packets 2108  bytes 170024 (170.0 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2602  bytes 216113 (216.1 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The new interfaces are cilium_host, cilium_net, cilium_vxlan, lxc_health, and lxc00259a3b8fde (the lxc suffix is random).

Their presence indicates that the Cilium installation is complete.
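
As an optional sanity check (a sketch; the pod names and image are arbitrary), start two test pods and verify pod-to-pod connectivity over the new network:

# kubectl run test-a --image=busybox --restart=Never -- sleep 3600
# kubectl run test-b --image=busybox --restart=Never -- sleep 3600
# kubectl get pod test-b -o wide      # note the pod IP of test-b
# kubectl exec test-a -- ping -c 3 <pod IP of test-b>
# kubectl delete pod test-a test-b    # clean up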

 

5, Installing Hubble

Hubble is built specifically for network visibility. Using the eBPF data path provided by Cilium, it gains deep visibility into the network traffic of Kubernetes applications and services. This traffic information feeds the Hubble CLI and UI tools, which, for example, make it possible to diagnose DNS-related problems quickly and interactively. Besides Hubble's own monitoring tools, it can also integrate with the mainstream cloud-native monitoring systems Prometheus and Grafana to implement scalable monitoring.

Installation documentation: https://github.com/cilium/hubble/blob/v0.5/Documentation/installation.md

 

Download the yaml file using a browser:

https://raw.githubusercontent.com/cilium/hubble/v0.5/tutorials/deploy-hubble-servicemap/hubble-all-minikube.yaml

The file is downloaded as a txt file by default and needs to be manually renamed to a yaml file.

 

Upload the yaml file to the server. Since the Service of the Hubble UI is ClusterIP by default, which is inconvenient to access from a browser, manually change it to NodePort.

Edit the file:

vi hubble-all-minikube.yaml

Change the ClusterIP on line 132 to NodePort.
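
After the change, the hubble-ui Service spec should look roughly like this (a sketch; surrounding fields in the actual file may differ slightly):

spec:
  type: NodePort      # changed from ClusterIP
  ports:
    - port: 12000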

 

Then apply it on the master:

kubectl apply -f hubble-all-minikube.yaml

 

Wait a few minutes, then check the pod status:

# kubectl get pods -A|grep hubble
kube-system   hubble-5q7zd                        1/1     Running   0          174m
kube-system   hubble-q5447                        1/1     Running   0          174m
kube-system   hubble-ui-649d76c898-swqrq          1/1     Running   0          174m

 

View the Service of the Hubble UI:

# kubectl get svc -A|grep hubble-ui
kube-system   hubble-ui     NodePort    10.1.54.175   <none>        12000:32286/TCP          178m

Here you can see that the mapped NodePort is 32286. Note: this port is assigned randomly, so use whatever port appears in your environment.
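
The assigned port can also be read directly, which is handy in scripts:

# kubectl -n kube-system get svc hubble-ui -o jsonpath='{.spec.ports[0].nodePort}'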

 

Access the Hubble UI at http://<master IP>:32286, which in this environment is http://192.168.1.12:32286.

The effect is as follows:

[Screenshot: Hubble UI service map]

 

Here I already have an application deployed, flaskapp. First visit the flaskapp page, then view the Hubble UI again.
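
Traffic can also be generated from a shell instead of the browser (a sketch; the flaskapp Service address and port are placeholders for this particular environment):

# for i in $(seq 1 10); do curl -s http://<flaskapp service IP>:<port>/ > /dev/null; done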

A link will then appear in the Hubble service map.

 

From the Hubble interface above, we can already see some of its functions and data. For example, it visually displays the communication relationships between the network and services, shows a variety of detailed metrics for flows, displays the corresponding security policies, and lets us filter the observations by namespace.
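
The same flow data can also be inspected from the command line. Assuming the hubble CLI is bundled in the hubble DaemonSet image, as in the v0.5 tutorials (the pod name comes from the output above; further filter flags may be available depending on the CLI version):

# kubectl -n kube-system exec hubble-5q7zd -- hubble observe --follow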

 

Reference links for this article:

https://blog.csdn.net/M2l0ZgSsVc7r69eFdTj/article/details/107969613

https://blog.csdn.net/saynaihe/article/details/115187298

 
