MySQL MMM highly available cluster (with MySQL environment)

1. Overview of MySQL MMM cluster technology

Overview: MMM (Master-Master replication Manager for MySQL) is a set of scripts that supports dual-master failover and day-to-day dual-master management. MMM is written in Perl and is mainly used to monitor and manage MySQL master-master replication; it can fairly be called the MySQL master-master replication manager.

Although it is called dual-master replication, only one master accepts writes at any given time; the other, standby master serves part of the read traffic, which keeps it warmed up for the moment of master switchover. MMM therefore implements failover on the one hand, and on the other hand its bundled tool scripts can also provide read load balancing across multiple slaves.

MMM provides both automatic and manual ways to remove the virtual IP from servers with high replication delay within a group. It can also back up data and keep the two nodes synchronized. Because MMM cannot fully guarantee data consistency, it is suited to scenarios that do not demand strict data consistency but want to maximize business availability.

 

2. Advantages and disadvantages of MySQL MMM

3. MySQL MMM internal working architecture

Process types:

mmm_mond: the monitoring daemon. It performs all monitoring work and decides and handles all node role changes. It runs on the monitor host;

mmm_agentd: the agent daemon running on each mysql server (master and slave). It answers the monitor's probes and performs simple remote service changes. It runs on the monitored machines;

mmm_control: a command-line script for managing the mmm_mond process (subcommands such as show, checks, set_online/set_offline and move_role);

MMM construction steps:

  • Deploy the ntp service and domain name resolution of the master1 node;
  • Configure the master2, slave1, slave2, and monitor nodes to synchronize ntp time and domain name resolution (only the master2 single-host configuration is listed here);
  • Install mysql services on the master1, master2, slave1, and slave2 nodes respectively (only the master1 single-host configuration is listed here);
  • Configure master1 to realize dual-master replication;
  • Configure master2 to realize dual-master replication;
  • Configure slave1 and slave2 to realize master-slave replication (only the slave1 single-host configuration is listed here);
  • Install the MySQL MMM software program on the four nodes master1, master2, slave1 and slave2, and configure each agent with the node name of the local machine (only the master1 single-host configuration is listed here);
  • Authorize the monitor node on the master1 node to connect to the database cluster;
  • Install the MySQL MMM software program on the monitor node;
  • Configure the mysql-mmm configuration file on the monitor node and copy it to each mysql node;
  • Start the MySQL MMM service on the master1, master2, slave1 and slave2 nodes (only the master1 single-host configuration is listed here);
  • Start the MySQL MMM service on the monitor node and view the cluster status;
  • Read and write data from a client as a verification test;
  • Stop the mysql service on the master1 node and test the cluster status;
  • Turn off the write function of the slave1 and slave2 nodes to make them read-only;
  • Test from a client whether the slave nodes are writable;
  • Deploy the ntp service and domain name resolution on the dual-master and dual-slave nodes;

[root@master1 ~]# cat <<END >>/etc/hosts
192.168.100.208 master1
192.168.100.209 master2
192.168.100.210 slave1
192.168.100.211 slave2
192.168.100.212 monitor
END
## Configure the hosts file on all five servers: four run mysql, and the remaining one runs the mmm monitor

Configure the hostname of each server:

Write master1, master2, slave1, slave2, or monitor into each host's /etc/hostname file.
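The five hosts entries can also be generated from a single list, so an identical snippet can be appended on every node. A minimal sketch (the `hosts_snippet` function name is just for illustration; hostnames and IPs are taken from this article):

```shell
# Emit the /etc/hosts lines for all five nodes of this article.
# Append them on a host with:  hosts_snippet >> /etc/hosts
hosts_snippet() {
  local n
  for n in master1:192.168.100.208 master2:192.168.100.209 \
           slave1:192.168.100.210  slave2:192.168.100.211 \
           monitor:192.168.100.212; do
    printf '%s %s\n' "${n#*:}" "${n%%:*}"
  done
}
hosts_snippet
```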

Configure time synchronization:

[root@master2 ~]# yum -y install ntpdate
[root@master2 ~]# /usr/sbin/ntpdate 192.168.100.101
 9 Aug 18:04:38 ntpdate[1106]: adjust time server 192.168.100.101 offset 0.299673 sec
[root@master2 ~]# echo "/usr/sbin/ntpdate 192.168.100.101">>/etc/rc.local 
[root@master2 ~]# chmod +x /etc/rc.local

  • Configure master1 to realize dual master replication;

First enable the binary log and relay log:

[root@master1 ~]# cat <<END >>/etc/my.cnf
server-id=1
log-bin=mysql-bin
log-slave-updates                             ##write replicated updates to this server's own binary log
sync_binlog=1
auto_increment_increment=2
auto_increment_offset=1
relay-log=relay1-log-bin
relay-log-index=slave-relay1-bin.index        ##enable the relay log
END

Notes:

sync_binlog=1                ##flush the binary log to disk at every transaction commit, so committed transactions are not lost even if the server crashes;

auto_increment_increment=2   ##together with auto_increment_offset, this staggers auto-increment values between the two masters to prevent key conflicts
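To see why these two settings prevent key conflicts: with auto_increment_increment=2 on both masters, master1 (offset=1) hands out odd ids and master2 (offset=2) even ids. A quick arithmetic illustration (no mysql needed; `seq` merely simulates the sequences):

```shell
# Simulate the AUTO_INCREMENT sequences the two masters would generate:
# both use auto_increment_increment=2; master1 offset=1, master2 offset=2.
master1_ids=$(echo $(seq 1 2 9))     # odd ids:  1 3 5 7 9
master2_ids=$(echo $(seq 2 2 10))    # even ids: 2 4 6 8 10
echo "master1: $master1_ids"
echo "master2: $master2_ids"
```

The two sequences never intersect, so inserts replicated between the masters cannot collide on an auto-increment primary key.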

Restart mysql, create the replication account, check the master's log coordinates, and then connect the masters to each other:

[root@master1 ~]# systemctl restart mysqld
[root@master1 ~]# mysql -uroot -p123123
mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      120 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+



mysql> grant replication slave on *.* to 'master'@'192.168.100.%' identified by '123123';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> change master to master_host='<peer IP>',master_user='master',master_password='123123',master_log_file='mysql-bin.000001',master_log_pos=<Position from the peer's show master status>;
Query OK, 0 rows affected, 2 warnings (0.01 sec)
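The master_log_file and master_log_pos values must match the peer's `show master status` output. A small parsing sketch: the printf below simulates the tab-separated query result (on a real host, feed it `mysql -uroot -p123123 -e 'show master status'` instead; the variable names are just for illustration):

```shell
# Extract File and Position from (simulated) `show master status` output
# so they can be dropped into the CHANGE MASTER TO statement.
status=$(printf 'File\tPosition\nmysql-bin.000001\t120\n')
read -r file pos <<< "$(awk 'NR==2 {print $1, $2}' <<< "$status")"
echo "master_log_file='$file', master_log_pos=$pos"
```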

Apply the matching configuration on master2 (enable the binary log and relay log with its own file names, and use server-id=2 and auto_increment_offset=2), then check its log coordinates, create the replication authorization, and run the change master statement pointing at master1.

Next, configure the dual slave server:

  • Configure slave1 and slave2 to realize master-slave replication (only slave1 single host configuration is listed here);

1. Enable the relay log on the two slave servers:

[root@slave1 ~]# cat <<END >>/etc/my.cnf
server-id=4
relay-log=relay4-log-bin
relay-log-index=slave-relay4-bin.index
END

Start the mysql service, log in, point the slave at the first master, and then check the slave status to confirm replication succeeded:

[root@slave1 ~]# systemctl restart mysqld
[root@slave1 ~]# mysql -uroot -p123123
mysql> change master to master_host='192.168.100.101',master_user='master',master_password='123123',master_log_file='mysql-bin.000001',master_log_pos=120;
Query OK, 0 rows affected, 2 warnings (0.02 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
mysql> show slave status\G

*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.100.101
                  Master_User: master
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000002
          Read_Master_Log_Pos: 421
               Relay_Log_File: relay3-log-bin.000003
                Relay_Log_Pos: 584
        Relay_Master_Log_File: mysql-bin.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes



[root@master1 ~]# mysql -uroot -p123123
## Create a database on master1 to check whether the other nodes synchronize it
mysql> create database linuxfan;
Query OK, 1 row affected (0.00 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| linuxfan           |
| mysql              |
| performance_schema |
| test               |
+--------------------+
mysql> exit



[root@master2 ~]# mysql -uroot -p123123
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| linuxfan           |
| mysql              |
| performance_schema |
| test               |
+--------------------+
mysql> exit



[root@slave1 ~]# mysql -uroot -p123123
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| linuxfan           |
| mysql              |
| performance_schema |
| test               |
+--------------------+
mysql> exit
  • Install the MySQL MMM software program on the four nodes master1, master2, slave1 and slave2, and configure each agent with the node name of the local machine (only the master1 single-host configuration is listed here; the monitor node is installed in a later step);

Configure the local yum source and upload the Mysql_mmm_rpm folder to /root of master1, then copy it to the other nodes:

scp -r Mysql_mmm_rpm root@192.168.100.209:/root/
scp -r Mysql_mmm_rpm root@192.168.100.210:/root/
scp -r Mysql_mmm_rpm root@192.168.100.211:/root/
scp -r Mysql_mmm_rpm root@192.168.100.212:/root/

Configure the local yum source:

cat /etc/yum.repos.d/dgf.repo
[local]
name=local
baseurl=file:///mnt
enabled=1
gpgcheck=0
[Mysql_mmm_rpm]
name=Mysql_mmm_rpm
baseurl=file:///root/Mysql_mmm_rpm
enabled=1
gpgcheck=0

Transfer the yum repo file to the other four servers:

[root@master1 ~]# cd /etc/yum.repos.d/
scp dgf.repo root@192.168.100.209:/etc/yum.repos.d/
scp dgf.repo root@192.168.100.210:/etc/yum.repos.d/
scp dgf.repo root@192.168.100.211:/etc/yum.repos.d/
scp dgf.repo root@192.168.100.212:/etc/yum.repos.d/
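The scp invocations for the rpm folder and the repo file can be collapsed into one loop. A dry-run sketch (it only echoes the commands it would run; remove the `echo` to actually copy; IPs follow the hosts table at the top of this article, with slave2 at 192.168.100.211):

```shell
# Print the scp commands that would distribute the rpm folder and the
# repo file to the other four nodes (dry run: each command is echoed).
push_files() {
  local ip
  for ip in 192.168.100.209 192.168.100.210 192.168.100.211 192.168.100.212; do
    echo scp -r Mysql_mmm_rpm "root@$ip:/root/"
    echo scp /etc/yum.repos.d/dgf.repo "root@$ip:/etc/yum.repos.d/"
  done
}
push_files
```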

Install the mmm packages using yum:

[root@master1 ~]# yum -y install mysql-mmm mysql-mmm-agent mysql-mmm-tools

Configure each server's node name (db number) in the agent configuration file:

[root@master1 ~]# vi /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf

# The 'this' variable refers to this server.  Proper operation requires
# that 'this' server (db1 by default), as well as all other servers, have the
# proper IP addresses set in mmm_common.conf.
this db1
:wq



[root@master2 ~]# egrep -v "^#|^$" /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db2

[root@slave1 ~]# egrep -v "^#|^$" /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db3

[root@slave2 ~]# egrep -v "^#|^$" /etc/mysql-mmm/mmm_agent.conf
include mmm_common.conf
this db4
  • Authorize the monitor node on the master1 node to connect to the database cluster;

Authorize two users: mmm_monitor, which the monitor uses to check the servers, and mmm_agent, which the agents use to manage the servers.

[root@master1 ~]# mysql -uroot -p123123
mysql> grant replication client on *.* to 'mmm_monitor'@'192.168.100.%' identified by 'monitor';
Query OK, 0 rows affected (0.00 sec)
mysql> grant super,replication client,process on *.* to 'mmm_agent'@'192.168.100.%' identified by 'agent';
Query OK, 0 rows affected (0.00 sec)
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
mysql> exit

Permission notes:

replication client: allows commands such as show master status, used to check the replication state;

replication slave: allows the account to connect to the master and read its binary log for replication;

super: allows killing mysql connections, setting global variables, and resetting the master-slave configuration;

process: allows viewing the currently running sql (show processlist) and examining execution plans;

  • Install the MySQL MMM software program on the monitor node;
[root@monitor ~]# yum -y install mysql-mmm mysql-mmm-tools mysql-mmm-monitor

Modify the mmm management profile: configure mysql-mmm on the monitor node and copy the common configuration file to each mysql node:
[root@monitor ~]# vi /etc/mysql-mmm/mmm_common.conf
active_master_role      writer                  ##the active role is the write role
<host default>
    cluster_interface       eth0                ##NIC that carries the VIPs; change to your own interface
    pid_path                /var/run/mysql-mmm/mmm_agentd.pid
    bin_path                /usr/libexec/mysql-mmm/     ##executable path
    replication_user        master              ##mysql master-slave replication account
    replication_password    123123              ##change to your own replication password
    agent_user              mmm_agent           ##account the agents use to manage the local mysql server
    agent_password          agent
</host>
<host db1>
    ip      192.168.100.101                     ##change to your own IP
    mode    master
    peer    db2                                 ##the paired (peer) master
</host>
<host db2>
    ip      192.168.100.102                     ##change to your own IP
    mode    master
    peer    db1                                 ##the paired (peer) master
</host>
<host db3>
    ip      192.168.100.103                     ##change to your own IP
    mode    slave
</host>
<host db4>
    ip      192.168.100.104                     ##change to your own IP
    mode    slave
</host>
<role writer>
    hosts   db1, db2
    ips     192.168.100.250
    mode    exclusive                           ##only one host holds the writer role at a time
</role>
<role reader>
    hosts   db3, db4
    ips     192.168.100.251, 192.168.100.252
    mode    balanced                            ##reader VIPs balanced across the slaves
</role>

Use the for loop to send the file to the other four servers:

for i in 101 102 103 104;do scp /etc/mysql-mmm/mmm_common.conf root@192.168.100.$i:/etc/mysql-mmm/; done		##Copy the configuration file to the mysql node

Modify the configuration file /etc/mysql-mmm/mmm_mon.conf:

[root@monitor ~]# vi /etc/mysql-mmm/mmm_mon.conf
include mmm_common.conf
<monitor>
    ip                  127.0.0.1
    pid_path            /var/run/mysql-mmm/mmm_mond.pid
    bin_path            /usr/libexec/mysql-mmm
    status_path         /var/lib/mysql-mmm/mmm_mond.status
    ping_ips            192.168.100.101, 192.168.100.102, 192.168.100.103, 192.168.100.104    ##IP addresses of all monitored nodes, comma-separated
    auto_set_online     60          ##a recovered node is set back online automatically after it has been reachable for 60 seconds
</monitor>
<host default>
    monitor_user        mmm_monitor             ##working user of the monitor
    monitor_password    monitor
</host>
debug 0

Reload systemd, start the agent service (on all four mysql servers), and check whether the agent port is listening:

[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl start mysql-mmm-agent
[root@master1 ~]# netstat -utpln |grep mmm
tcp        0      0 192.168.100.101:9989    0.0.0.0:*               LISTEN      

If there are no problems, start the monitoring daemon on the monitor node (the mysql-mmm-monitor package installs it as the mysql-mmm-monitor service) and begin testing:

1. Check the cluster status first:

[root@monitor ~]# mmm_control show
# Warning: agent on host db4 is not reachable
  db1(192.168.100.101) master/ONLINE. Roles: writer(192.168.100.250)
  db2(192.168.100.102) master/ONLINE. Roles: 
  db3(192.168.100.103) slave/ONLINE. Roles: reader(192.168.100.251) 
  db4(192.168.100.104) slave/ONLINE. Roles: reader(192.168.100.252)

2. Stop the mysql service on master1 and check the cluster status again:

[root@monitor ~]# mmm_control show
# Warning: agent on host db4 is not reachable
  db1(192.168.100.101) master/HARD_OFFLINE. Roles: 
  db2(192.168.100.102) master/ONLINE. Roles: writer(192.168.100.250)
  db3(192.168.100.103) slave/ONLINE. Roles: reader(192.168.100.251)
  db4(192.168.100.104) slave/ONLINE. Roles:

The writer VIP automatically floats to the second master (db2). After master1 is repaired, it can be brought back online with mmm_control set_online db1, and the writer role can be moved back with mmm_control move_role writer db1 if desired.

Tags: Database MySQL

Posted by yumico23 on Fri, 03 Jun 2022 01:56:01 +0530