Installing a RabbitMQ cluster with Docker

Contents

1. Introduction

1.1. Cluster modes

1.2. Node types

2. Hands-on

2.1. Building a normal cluster

2.2. Mirror cluster

2.3. Load balancing

 

1. Introduction

RabbitMQ is written in Erlang, and Erlang natively supports distribution; to form a cluster you only need to synchronize the Erlang cookie across nodes.

However, RabbitMQ itself does not provide load balancing.

 

1.1. Cluster modes

There are two cluster modes: normal cluster mode and mirror cluster mode.

 

① Normal cluster mode

Only metadata is synchronized between nodes:

  • Queue metadata: queue names and attributes;
  • Exchange metadata: exchange name, type, and attributes;
  • Binding metadata: a simple table showing how to route messages to queues;
  • vhost metadata: provides namespaces and security attributes for the queues, exchanges, and bindings inside the vhost;

Therefore, querying queues, users, exchanges, vhosts, and other metadata returns the same result no matter which node you connect to.

 

Note that message bodies are stored only on the node where the queue was created, not on every node. When a node other than the owning node is asked for the data, it uses the metadata to route the request to the node that actually holds the queue.

 

Advantages: message storage is spread across all nodes, which improves the cluster's capacity to absorb message backlogs; messages do not need to be synchronously replicated to every node, which is efficient.

Disadvantages: high availability of queues is not guaranteed. When a node goes down, its queues become unavailable until the node is restarted.

 

 

② Mirror cluster mode

Queues are mirrored to other nodes in the cluster to make them reliable: when a node fails, a queue can automatically switch to one of its mirrors. A mirrored queue consists of one master and several slaves, with the following characteristics:

  • Messages are read and written on the master; reads and writes are not separated;
  • After receiving a command, the master multicasts it to the slaves, and the slaves execute the commands in order;
  • If the master fails, the slave that joined earliest is promoted to master, based on node join time;
  • Mirroring is per queue, not per node: different nodes in the cluster mirror each other's queues, so the masters of different queues can live on different nodes.

 

The main difference from a normal cluster is the mirrored queue, which stores the message bodies on every node that mirrors it.

 

Advantages: high availability.

Disadvantages: compared with a normal cluster, the cluster's capacity to absorb message backlogs is reduced, and each message takes more hops between nodes, so performance is lower.

 

The choice of cluster mode should be based on the business scenario.

 

 

1.2. Node types

There are two types: disk nodes and memory nodes.

A disk node stores metadata on disk; after a restart it can rebuild its state from disk, so the metadata is not lost.

A memory (RAM) node stores metadata in memory, which gives higher performance, but the metadata is lost on restart.

 

Although a memory node loses its metadata on restart, it can resynchronize the metadata from the disk nodes in the cluster (the only thing a memory node persists to disk is the addresses of the disk nodes).

Therefore, a cluster needs at least one disk node to stay usable. If the only disk node of the cluster goes down, nothing can be changed: you cannot create queues or exchanges, add or remove nodes, and so on; however, you can continue to route messages (because the metadata on the other nodes still exists).

In a cluster, it is best to set up two or more disk nodes.
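
As a side note, a node's type can also be changed after it has joined the cluster. A minimal sketch with rabbitmqctl (run on the node whose type you want to change; the application must be stopped first, and ram can be used instead of disc):

rabbitmqctl stop_app
rabbitmqctl change_cluster_node_type disc
rabbitmqctl start_app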

 

 

 

2. Hands-on

RabbitMQ version: rabbitmq:3-management

docker version: 19.03.8

Three CentOS 7 virtual machines with IPs 192.168.45.129, 192.168.45.130, and 192.168.45.131.

We will build a cluster with two disk nodes (130 and 131) and one memory node (129).

 

2.1. Building a normal cluster

All three servers perform the following two steps:

1. Pull the image

docker pull rabbitmq:3-management

2. Create the mount directory

mkdir -p /home/soft/rabbitmq/data

 

Server 129 executes

docker run -d --restart=always \
--hostname myrabbit01 --name rabbit01 \
-e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password \
-e RABBITMQ_ERLANG_COOKIE='rabbitmqCookie' \
-p 15672:15672 -p 5672:5672 -p 5671:5671 -p 4369:4369 -p 25672:25672 \
-v /home/soft/rabbitmq/data:/var/lib/rabbitmq \
--add-host myrabbit02:192.168.45.130 \
--add-host myrabbit03:192.168.45.131 \
rabbitmq:3-management

Server 130 executes

docker run -d --restart=always \
--hostname myrabbit02 --name rabbit02 \
-e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password \
-e RABBITMQ_ERLANG_COOKIE='rabbitmqCookie' \
-p 15672:15672 -p 5672:5672 -p 5671:5671 -p 4369:4369 -p 25672:25672 \
-v /home/soft/rabbitmq/data:/var/lib/rabbitmq \
--add-host myrabbit01:192.168.45.129 \
--add-host myrabbit03:192.168.45.131 \
rabbitmq:3-management

Server 131 executes

docker run -d --restart=always \
--hostname myrabbit03 --name rabbit03 \
-e RABBITMQ_DEFAULT_USER=user -e RABBITMQ_DEFAULT_PASS=password \
-e RABBITMQ_ERLANG_COOKIE='rabbitmqCookie' \
-p 15672:15672 -p 5672:5672 -p 5671:5671 -p 4369:4369 -p 25672:25672 \
-v /home/soft/rabbitmq/data:/var/lib/rabbitmq \
--add-host myrabbit01:192.168.45.129 \
--add-host myrabbit02:192.168.45.130 \
rabbitmq:3-management

 

Explanation of the command:

-d run in the background;  --restart=always restart the container automatically
--hostname myrabbit03  host name;  --name rabbit03  container name
-e RABBITMQ_DEFAULT_USER=user  management interface account;  -e RABBITMQ_DEFAULT_PASS=password  management interface password
-e RABBITMQ_ERLANG_COOKIE='rabbitmqCookie'  cluster cookie; nodes in the same cluster must use the same cookie
-p 15672:15672 management interface port;  -p 5672:5672 -p 5671:5671 client connection ports
-p 4369:4369 -p 25672:25672 ports required for clustering
-v /home/soft/rabbitmq/data:/var/lib/rabbitmq  mount the data directory
--add-host myrabbit01:192.168.45.129  add host mappings for the other cluster nodes
--add-host myrabbit02:192.168.45.130
rabbitmq:3-management  the image to run

 

After running these commands, the management interface on 129 shows no information about the other nodes yet.

 

 

Next we operate on node 129 to join it to node 131's cluster:

① Enter container

docker exec -it rabbit01 bash

② Stop the application

rabbitmqctl stop_app

③ Reset the node (this clears the node's data, so use it as needed; for a freshly created node this step can be skipped)

rabbitmqctl reset

④ Join node 131; --ram sets rabbit01 as a memory node. If the flag is omitted, the node joins as a disk node by default.

rabbitmqctl join_cluster --ram rabbit@myrabbit03

⑤ Start the application

rabbitmqctl start_app

⑥ Exit container

exit

 

Now let's go back to the management interface.

You can see that the rabbit03 node is listed, and rabbit01 shows as a RAM node, i.e. a memory node.

 

We then do the same on node 130: join it to node 131, this time as a disk node. A sketch of the commands follows below.
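
Assuming the container on 130 is named rabbit02 as in the docker run command above, the steps mirror the ones we just ran on 129, only without --ram:

docker exec -it rabbit02 bash
rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster rabbit@myrabbit03
rabbitmqctl start_app
exit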

 

This completes the normal cluster setup.
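
To double-check the result, you can query the cluster status from any node; the output lists which nodes are disc nodes and which are ram nodes:

docker exec rabbit01 rabbitmqctl cluster_status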

 

To test, you can create a queue and add messages in the management interface on 130, and then try to fetch the messages from the management interface on 129.

After creating a queue with non-persistent messages on 130, stop the application with stop_app; in the management interface the queue changes to the "down" state. After starting it again with start_app, you will find that the messages are lost.

The test steps are not demonstrated here.
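
If you prefer the command line, the same test can be sketched with the management HTTP API via curl (the queue name test is just an example; the credentials are the ones set in the docker run commands):

# declare a non-durable queue on node 130
curl -u user:password -H "content-type:application/json" -X PUT http://192.168.45.130:15672/api/queues/%2F/test -d '{"durable":false}'
# publish a non-persistent message to it through the default exchange
curl -u user:password -H "content-type:application/json" -X POST http://192.168.45.130:15672/api/exchanges/%2F/amq.default/publish -d '{"properties":{},"routing_key":"test","payload":"hello","payload_encoding":"string"}'
# read the message back through node 129
curl -u user:password -H "content-type:application/json" -X POST http://192.168.45.129:15672/api/queues/%2F/test/get -d '{"count":1,"ackmode":"ack_requeue_true","encoding":"auto"}'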

 

 

2.2. Mirror cluster

Building on the normal cluster above, operate on node 129.

① Enter container

docker exec -it rabbit01 bash

② Set a policy to turn on queue mirroring. ha-all is a user-defined policy name; "^" is a regular expression matched against queue names, and here it matches all queues; ha-mode "all" means mirroring to all nodes. A policy set on one node of the cluster is automatically synchronized to every node. You can also set policies through the management interface, which is not covered in detail here.

For a detailed explanation of mirrored queues, see the reference article on the topic.

rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
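
You can verify that the policy has propagated from any node in the cluster:

rabbitmqctl list_policies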

 

That's all there is to it.

After creating a queue with non-persistent messages on 130, stop the application with stop_app; in the management interface the queue does not change to the "down" state, it keeps working normally, and the messages are still there.

 

 

 

2.3. Load balancing

We mentioned earlier that RabbitMQ naturally supports distribution, but does not support load balancing.

We can add load balancing with nginx (see: installing nginx with docker).

 

We install nginx on server 129 and run the container as follows. Port 15676 is mapped for the MQ management interface and port 5676 as the MQ TCP port (requests to ports 15676 and 5676 on the host are mapped to ports 15676 and 5676 of the nginx container, and nginx proxies these two ports to our MQ cluster).

 

docker run -p 8080:80 -p 5676:5676 -p 15676:15676 --restart always --name mynginx -v /home/soft/nginx/:/etc/nginx/ -d nginx

 

Modify the nginx.conf configuration file

vi /home/soft/nginx/nginx.conf

The contents are as follows:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;
 
    #mq management interface load balancing configuration
    upstream mqweb{
      server 192.168.45.129:15672;
      server 192.168.45.130:15672;
      server 192.168.45.131:15672;
    }
    server{
      listen 15676;
      server_name 192.168.45.129;
      location / {
        proxy_pass http://mqweb;
      }
    }

    include /etc/nginx/conf.d/*.conf;
}

#mq tcp connection load balancing configuration.
#Note: the tcp configuration must live in the stream module, at the same level as the http module (not inside it), and it does not need server_name or location blocks.
stream{
  upstream mqtcp{
    server 192.168.45.129:5672 max_fails=2 fail_timeout=5s;
    server 192.168.45.130:5672 max_fails=2 fail_timeout=5s;
    server 192.168.45.131:5672 max_fails=2 fail_timeout=5s;
  }
  server{
    listen 5676;
    proxy_pass mqtcp;
  }
}
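
After saving the configuration, restart the container (or reload nginx inside it) so that the changes take effect:

docker restart mynginx
# or, without restarting the container:
docker exec mynginx nginx -s reload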

 

Test by visiting http://192.168.45.129:15676/. If the MQ management interface loads, the proxy is working.
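
The same check can be done from the command line, going through the proxy to the management HTTP API (using the credentials set earlier):

curl -u user:password http://192.168.45.129:15676/api/overview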

To test the TCP side, you can create a simple Spring Boot project and check whether messages can be sent when connecting through port 5676; that is not demonstrated here.

 

 
