PostgreSQL container deployment tutorial for beginners

Author: wangzhibin

Editor: zhonghualong

This article is contributed by wangzhibin, a community partner. Written from a beginner's perspective, it walks you step by step through deploying a RadonDB PostgreSQL cluster on Kubernetes. The article is divided into two parts; this first part demonstrates how to build the Kubernetes environment, including detailed configuration and optimization.

What is the RadonDB PostgreSQL Operator?

RadonDB PostgreSQL is a database containerization project built on PostgreSQL and implemented as a Kubernetes Operator. Its main characteristics:

  • Widely used in geospatial and mobile applications
  • High availability, stability and data integrity
  • Supports online horizontal scaling
  • Supports automatic failover and provides high availability (HA)
  • Exposes common PostgreSQL parameters to simplify tuning
  • Ships the PostGIS plug-in, with the ability to store, query and modify spatial data
  • Provides real-time monitoring, health checks, automatic log cleaning and other features

The RadonDB PostgreSQL Operator can be delivered on Kubernetes-based container platforms such as KubeSphere, OpenShift and Rancher, and automates the tasks involved in running RadonDB PostgreSQL clusters.

The RadonDB PostgreSQL Operator is built on the https://github.com/CrunchyData/postgres-operator project, with improvements and optimizations, and changes will continue to be contributed back to the community.

Repository: https://github.com/radondb/radondb-postgresql-operator

Deployment target

Objective: prepare a Kubernetes cluster to serve as the environment for the subsequent database deployment.

Host name   IP              Role               Remarks
master      192.168.137.2   Kubernetes master  Taint removed
node1       192.168.137.3   Kubernetes node
node2       192.168.137.4   Kubernetes node

Configuring the operating system

Operating system: CentOS 7

1. Set the host name on each machine and update the hosts file

#192.168.137.2
vi /etc/hostname
master
cat >> /etc/hosts << EOF
192.168.137.2 master
192.168.137.3 node1
192.168.137.4 node2
EOF

#192.168.137.3
vi /etc/hostname
node1
cat >> /etc/hosts << EOF
192.168.137.2 master
192.168.137.3 node1
192.168.137.4 node2
EOF

#192.168.137.4
vi /etc/hostname
node2
cat >> /etc/hosts << EOF
192.168.137.2 master
192.168.137.3 node1
192.168.137.4 node2
EOF
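Alternatively, on CentOS 7 you can set the host name with hostnamectl, which takes effect immediately and persists across reboots. A minimal example for the master node (run the equivalent command with node1/node2 on the other machines):

# Run on 192.168.137.2; use node1 or node2 on the other hosts
hostnamectl set-hostname master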

2. Other configurations

Turn off the firewall, SELinux and swap, and set up time synchronization.

systemctl stop firewalld && systemctl disable firewalld

sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0

swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab

yum install ntpdate -y && ntpdate time.windows.com

3. Configuring kernel parameters

Pass bridged IPv4 traffic to the iptables chains.

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
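On a fresh CentOS 7 install these bridge sysctls only exist once the br_netfilter kernel module is loaded, so if sysctl --system reports "No such file or directory" for them, load the module first. This is a small extra step not shown in the original commands:

# Load the bridge netfilter module and make it persistent across reboots
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
sysctl --system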

Prepare Docker

1. Install some necessary system tools

yum install -y yum-utils device-mapper-persistent-data lvm2

2. Add software source information

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Update the package cache and install Docker CE

yum makecache fast
yum -y install docker-ce

4. Start Docker service

systemctl start docker && systemctl enable docker
# Optionally edit the daemon configuration file /etc/docker/daemon.json to configure a registry mirror (accelerator)

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
 "registry-mirrors": ["https://s2q9fn53.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
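To confirm that Docker is running and that the mirror was picked up, a quick check such as the following should be enough (the exact output format depends on your Docker version):

docker --version
# "Registry Mirrors" should list the accelerator configured above
docker info | grep -A1 "Registry Mirrors"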

Kubernetes preparation

1. Initialize

Add the Kubernetes YUM repository from the Alibaba Cloud mirrors.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl-1.20.0 kubelet-1.20.0 kubeadm-1.20.0 \
  && systemctl enable kubelet && systemctl start kubelet
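Before initializing the cluster, you can verify that the expected 1.20.0 packages were installed, for example:

kubeadm version
kubectl version --client
# kubelet will restart repeatedly until kubeadm init/join has run; that is expected
systemctl status kubelet --no-pager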

Initialize the cluster.

#master
kubeadm init --kubernetes-version=1.20.0  \
--apiserver-advertise-address=192.168.137.2  \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Other nodes join the cluster.

kubeadm join 192.168.137.2:6443 --token scw8xm.x5y7fck1via4mwc2 \
  --discovery-token-ca-cert-hash sha256:8944421887121b6a2ac32987d9d1c7786fe64316cebdf7a63b6048fba183cc67
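The token and hash above are specific to this environment. If you did not record them, or the token has expired (the default lifetime is 24 hours), you can regenerate a complete join command on the master:

# Run on the master; prints a ready-to-use kubeadm join command
kubeadm token create --print-join-command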

2. Deploy CNI network plug-in

Install the Calico network plug-in.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
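Wait for the Calico Pods to reach the Running state before continuing; something like the following can be used to watch them:

# Watch the CNI and system Pods come up (Ctrl+C to stop watching)
kubectl get pods -n kube-system -w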

Copy admin.conf to all nodes so that the cluster can be managed from any node.

scp /etc/kubernetes/admin.conf root@node1:/etc/kubernetes/admin.conf
scp /etc/kubernetes/admin.conf root@node2:/etc/kubernetes/admin.conf

Add environment variables to the node.

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

After this, you can run kubectl from any node. Check whether the nodes are healthy.

kubectl get nodes

The node status is normal.
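For reference, a healthy cluster at this point looks roughly like the following; ages, exact role labels and versions will differ in your environment:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   10m   v1.20.0
node1    Ready    <none>                 5m    v1.20.0
node2    Ready    <none>                 5m    v1.20.0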

3. Label the worker nodes

[root@node2 images]# kubectl label node node1 node-role.kubernetes.io/worker=worker
node/node1 labeled
[root@node2 images]# kubectl label node node2 node-role.kubernetes.io/worker=worker
node/node2 labeled

4. Create default storageclass

vi sc.yml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

[root@master ~]# kubectl apply -f sc.yml
storageclass.storage.k8s.io/fast-disks created

Set as default.

kubectl patch storageclass fast-disks -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
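You can confirm the annotation took effect by listing the storage classes; the default class is marked with (default):

kubectl get storageclass
# NAME                   PROVISIONER                    ...
# fast-disks (default)   kubernetes.io/no-provisioner   ...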

5. Remove taints from the master

Add a PreferNoSchedule taint, which avoids scheduling Pods onto the master where possible.

kubectl taint nodes master node-role.kubernetes.io/master:PreferNoSchedule

Remove the NoSchedule taint; the trailing "-" means deletion.

kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
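To check which taints are currently set on the master, you can inspect the node description:

# Lists the current taints on the master node
kubectl describe node master | grep -i taint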

About the images

Installing the RadonDB PostgreSQL Operator requires several images, but you do not need to build them yourself: the community has published all of the images RadonDB PostgreSQL depends on to Docker Hub, and they can be used directly.
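If you want to shorten the first deployment, the images can optionally be pulled onto each node in advance. The exact image names are listed in the operator repository; purely as an illustration (not an authoritative list), you can browse the community images published on Docker Hub:

# Search Docker Hub for images published by the RadonDB community
docker search radondb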

So far, we have prepared the Kubernetes environment. In the next part, we will walk you through deploying the RadonDB PostgreSQL Operator.

About the author:

Wangzhibin holds a master's degree from Beijing University of Aeronautics and Astronautics, has earned the PMP and NPDP professional certifications, is a certified Jushan Database lecturer and an official PostgreSQL certified lecturer, and holds PGCM, PCTP, SCDD, KCP, OBCA and other related certifications. He has received the PostgreSQL ACE award from the China Open Source Software Promotion Alliance under the Ministry of Industry and Information Technology, as well as the PostgreSQL open source community promotion contribution award, and participated in the writing of PostgreSQL High Availability Practice.

Acknowledgment
The RadonDB open source community thanks the contributors to this article! We also look forward to contributions from more community partners~

Tags: Database PostgreSQL
