Generally speaking, when a database is read and written on the same server, the performance of the business system suffers. To improve performance and optimize the user experience, master-slave replication (read-write separation) can be used to reduce the load on the master database. In addition, if the master goes down, the business system can quickly switch over to a slave, minimizing downtime. Source code: https://github.com/limingios/netFuture
Why master-slave synchronization
- Real-time disaster recovery for failover
- Read/write load balancing
- A dedicated slave for scheduled tasks, so batch jobs do not burden the master
- A dedicated slave for developers to run ad-hoc queries against
Understand the principle
The following figure shows the principle of master-slave synchronization.
- The webapp (that is, the Java application) writes to the master mysqld.
1.1 The write lands on the MySQL data files.
1.2 The statements are also written to the binlog (the binlog must be enabled for master-slave synchronization); every statement except SELECT queries is recorded.
1.3 The master mysqld has a filter that specifies which databases generate binlog entries, that is, which are synchronized in real time and which are not.
1.4 If binlogs are generated continuously, won't the disk fill up? You can set an expiry period for binlogs (for example `expire_logs_days`), so that anything older is deleted automatically.
- The slave starts two threads: an I/O thread and an SQL thread.
2.1 The I/O thread requests the binlog from the master and writes it to the local relay log file.
2.2 The SQL thread reads the relay log, replays the statements, and produces the slave's data (and its own binlog, if enabled).
2.3 The master starts a binlog dump thread to stream the binlog to the slave's I/O thread.
2.4 The binlog generated on the slave (which can be enabled or disabled) can in turn be read by another slave, forming a chain.
- Synchronization is position-based: the binlog has a position, and when master and slave are inconsistent, the slave fetches the data from its last recorded position onward.
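The binlog coordinates mentioned above (file name and position) are exactly what a slave needs later for `CHANGE MASTER TO`. A minimal sketch of pulling them out of the batch-mode output of `mysql -e 'SHOW MASTER STATUS'`; the sample output embedded here is hypothetical, and on a real master you would pipe in the command's output instead:

```shell
# Hypothetical two-line batch output of `mysql -e 'SHOW MASTER STATUS'`
# (header row, then the current binlog file and position).
sample=$(printf 'File\tPosition\tBinlog_Do_DB\tBinlog_Ignore_DB\nmysql-bin.000003\t889\t\t\n')

# The second row carries the coordinates the slave needs.
master_file=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')
master_pos=$(printf '%s\n' "$sample" | awk 'NR==2 {print $2}')

echo "master_log_file=$master_file master_log_pos=$master_pos"
```

These two values are exactly what gets plugged into the slave's `CHANGE MASTER TO ... master_log_file=..., master_log_pos=...` statement later in this article.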
Problems and Solutions
Problems with MySQL master-slave replication:
- Data may be lost after the master goes down
- The slave has only one SQL thread; when the master is under heavy write pressure, replication is likely to lag
- Semi-synchronous replication solves the data-loss problem: when the master commits a statement, its binlog must already have reached the relay log of at least one slave
- Integrated into MySQL since 5.5; it ships as a plugin and must be installed separately
- Guarantees that the binlog has been transferred to at least one slave after the transaction commits
- Does not guarantee that the slave has actually applied the transaction's binlog
- Reduces performance to some extent; response times get longer
- If the network fails or the slave goes down, the master blocks until the timeout fires or the slave recovers
- Problem: while the slave is down, the master keeps waiting to communicate with it, which can stall the whole system
- The master's timeout for waiting for the slave's acknowledgment is 10 seconds by default; if no acknowledgment arrives within 10 seconds, the master falls back to committing locally and carries on
- However, many companies set it to 1 second, because 99% of transactions are fine; how to handle the remaining 1% depends on the business, and on the architect's level and experience
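As an illustration (not part of the setup later in this article), enabling semi-synchronous replication in MySQL 5.5+ looks roughly like this; the plugin names are the ones shipped with MySQL, and the timeout is the 10-second default discussed above, expressed in milliseconds:

```sql
-- On the master:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 10000;  -- ms; fall back to async after 10 s

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```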
Consistency and synchronization time are a double-edged sword; there is no universal solution, and the optimal trade-off can only be chosen by weighing business requirements against performance.
For master-slave consistency, an analogy: the master is the emperor, and the slaves are many princes, but one of the slaves must be the crown prince. When the master dies, that prince becomes the emperor.
Mainstream master-slave schemes on the Internet
- One master, one slave: used by small companies, with no read-write separation; the slave is mainly a hot standby so that recovery is quick if something goes wrong. I remember a colleague accidentally deleting rows from a very important table, perhaps twenty-odd of them; restoring data consistency took half a month of finding them one by one by hand.
- One master, multiple slaves: small companies often run one master with three or four slaves. One of the slaves acts as the crown prince, which helps ensure data consistency.
- Dual master: unusual and generally not advisable, although MySQL allows it. It may be used when write throughput must be very high, for example Beijing writing to one master and Shanghai to the other. To be honest, it is not recommended.
- Cascaded synchronization: the master replicates to one slave, and that slave replicates to the remaining slaves. Without cascading, the master must handle reads, writes, and synchronization to every slave, putting it under great pressure; cascading hands the fan-out synchronization to one slave, reducing the load on the master while still achieving master-slave synchronization. There is still a problem: if that crown-prince slave dies, the slaves behind it lose their source and become standalone machines.
- Dual master with slaves: two masters replicate each other, and the slaves replicate from one of them. If that master goes down, the slaves behind it face the same problem as in cascaded synchronization.
- Circular multi-master: extreme and definitely not recommended. It only makes sense when write concurrency is extremely high, say 31 masters for 31 provinces across the country, one per province. If any master in the ring dies, the whole ring stalls. Slaves can also hang off each master, presumably very many of them, but in practice this topology is not used.
The mainstream scheme: one master, multiple slaves
The master handles writes and the slaves handle reads, which requires a load-balancing middle layer; here that is Atlas. Scenario: the mall generates an order, and the order must be readable immediately. Under the plain scheme the read goes to a slave, which may not have finished synchronizing yet. Awkward. Atlas solves this: when orders just written to the master need to be read immediately, adding an annotation to the SQL statement routes the read directly to the master.
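Concretely, per the Atlas README, forcing a read to the master is done by prefixing the statement with a `/*master*/` comment; ordinary reads go to the slaves. The table and columns below are hypothetical, and note that some clients strip comments (the mysql CLI needs `-c` to keep them):

```sql
-- Routed to the master (192.168.66.102): must see the order just inserted.
SELECT /*master*/ * FROM orders WHERE order_id = 1001;

-- Ordinary read: Atlas routes it to a read-only slave (192.168.66.103).
SELECT * FROM orders WHERE user_id = 42;
```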
Atlas is a MySQL-protocol data middle layer developed and maintained by the infrastructure team of the Web Platform Department at Qihoo 360. It is based on the officially released mysql-proxy 0.8.2, with a large number of bugs fixed and many features added. The project is widely used inside 360: many MySQL services are connected to the Atlas platform, carrying billions of read and write requests every day. More than 50 other companies have deployed Atlas in production, and more than 800 people have joined the developer exchange group, and these numbers keep growing.
Official Atlas link: https://github.com/Qihoo360/Atlas/blob/master/README_ZH.md
Atlas download link: https://github.com/Qihoo360/Atlas/releases
Generating the virtual machines with Vagrant
Generate three virtual machines from the source code as preparation. The Vagrant boxes already have Docker installed; installing MySQL through Docker avoids fiddly steps such as environment variables and user privileges. For installing Vagrant on different systems, see:
Mac, install Vagrant: https://idig8.com/2018/07/29/docker-zhongji-07/
Windows, install Vagrant: https://idig8.com/2018/07/29/docker-zhongji-08/
| System type | IP address | Node role | CPU | Memory | Hostname |
| --- | --- | --- | --- | --- | --- |
| | 192.168.66.101 | Atlas proxy | | | |
| | 192.168.66.102 | MySQL master | | | |
| | 192.168.66.103 | MySQL slave | | | |
- On all three machines, enable remote root login (from the Windows/Mac host)

```bash
su -                       # password: vagrant
vi /etc/ssh/sshd_config    # set PasswordAuthentication yes
sudo systemctl restart sshd
```
- Stop and disable the firewall (allow all machines to establish connections on any port)
In the actual production environment, the specified ports are opened according to the actual conditions
```bash
systemctl stop firewalld
systemctl disable firewalld
# view status
systemctl status firewalld
```
102 (master), 103 (slave) configuration
```bash
docker pull mysql:5.7
```
102 (master) configuration
- configure directory
```bash
mkdir -p /usr/local/mysqlData/master/cnf
mkdir -p /usr/local/mysqlData/master/data
cd /usr/local/mysqlData/master/cnf
vi mysql.cnf
```
- Configure mysql.cnf
```ini
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
symbolic-links = 0
character-set-server = utf8
#skip-networking
innodb_print_all_deadlocks = 1
max_connections = 2000
max_connect_errors = 6000
open_files_limit = 65535
table_open_cache = 128
max_allowed_packet = 4M
binlog_cache_size = 1M
max_heap_table_size = 8M
tmp_table_size = 16M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
sort_buffer_size = 8M
join_buffer_size = 28M
key_buffer_size = 4M
thread_cache_size = 8
query_cache_type = 1
query_cache_size = 8M
query_cache_limit = 2M
ft_min_word_len = 4
log-bin = mysql-bin
server-id = 1
binlog_format = mixed
performance_schema = 0
explicit_defaults_for_timestamp
#lower_case_table_names = 1
interactive_timeout = 28800
wait_timeout = 28800

# Recommended in standard MySQL setup
sql_mode = NO_ENGINE_SUBSTITUTION,NO_AUTO_CREATE_USER,STRICT_TRANS_TABLES

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer_size = 8M
sort_buffer_size = 8M
read_buffer = 4M
write_buffer = 4M
```
- Start container
```bash
docker images
docker run -itd -p 3306:3306 --name master \
    -v /usr/local/mysqlData/master/cnf:/etc/mysql/conf.d \
    -v /usr/local/mysqlData/master/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=masterpwd \
    ae6b78bedf88
docker container ls
```
- Enter the container and enable remote access

```bash
docker container ls
docker exec -it e2ca4d3cd633 /bin/bash
mysql -uroot -pmasterpwd
```

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'masterpwd' WITH GRANT OPTION;
GRANT REPLICATION SLAVE ON *.* TO 'reader'@'%' IDENTIFIED BY 'readerpwd';
FLUSH PRIVILEGES;
SHOW MASTER STATUS;
```
103 (slave) configuration
- configure directory
```bash
mkdir -p /usr/local/mysqlData/slave/cnf
mkdir -p /usr/local/mysqlData/slave/data
cd /usr/local/mysqlData/slave/cnf
vi mysql.cnf
```
- Configure mysql.cnf (same as the master, except server-id = 2)
```ini
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
symbolic-links = 0
character-set-server = utf8
#skip-networking
innodb_print_all_deadlocks = 1
max_connections = 2000
max_connect_errors = 6000
open_files_limit = 65535
table_open_cache = 128
max_allowed_packet = 4M
binlog_cache_size = 1M
max_heap_table_size = 8M
tmp_table_size = 16M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
sort_buffer_size = 8M
join_buffer_size = 28M
key_buffer_size = 4M
thread_cache_size = 8
query_cache_type = 1
query_cache_size = 8M
query_cache_limit = 2M
ft_min_word_len = 4
log-bin = mysql-bin
server-id = 2
binlog_format = mixed
performance_schema = 0
explicit_defaults_for_timestamp
#lower_case_table_names = 1
interactive_timeout = 28800
wait_timeout = 28800

# Recommended in standard MySQL setup
sql_mode = NO_ENGINE_SUBSTITUTION,NO_AUTO_CREATE_USER,STRICT_TRANS_TABLES

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer_size = 8M
sort_buffer_size = 8M
read_buffer = 4M
write_buffer = 4M
```
- Start container
```bash
docker images
docker run -itd -p 3307:3306 --name slave1 \
    -v /usr/local/mysqlData/slave/cnf:/etc/mysql/conf.d \
    -v /usr/local/mysqlData/slave/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=slavepwd \
    ae6b78bedf88
docker container ls
```
- Enter the container and enable remote access

Create a remote connection user and grant it the SHOW DATABASES and SELECT privileges; this user can be used for read-write separation.

```bash
docker container ls
docker exec -it c380d2b5f6cf /bin/bash
mysql -uroot -pslavepwd
```

```sql
GRANT SHOW DATABASES, SELECT ON *.* TO 'slave'@'%' IDENTIFIED BY 'slavepwd';
FLUSH PRIVILEGES;
```
Enter the MySQL container on 102 (master) and view the master status:

```sql
SHOW MASTER STATUS;
```
Enter the MySQL container on 103 (slave)
Configure the master-slave relationship. The master_log_pos here must match the Position value (889) shown by 102's status, and master_log_file must match the master's current binlog file name.

```sql
CHANGE MASTER TO
    master_host='192.168.66.102',
    master_user='reader',
    master_password='readerpwd',
    master_log_file='mysql-bin.000003',
    master_log_pos=889;

-- start replication
START SLAVE;
SHOW SLAVE STATUS\G
```
Create a database on the master and it is automatically synchronized to the slave; replication is complete.
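A quick way to check this, as a sketch (the database name here is hypothetical):

```sql
-- On the master (102):
CREATE DATABASE sync_test;

-- On the slave (103), the new database should appear almost immediately:
SHOW DATABASES LIKE 'sync_test';
```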
Atlas proxy configuration
To use the Atlas proxy, the MySQL root password on 102 must be kept identical to the one on 103. The reason: 360 has not maintained Atlas for a long time, and users who configured multiple entries in pwds have reported garbled characters. So, reconfigure 103 (slave); the directory layout and mysql.cnf stay unchanged.
- Delete the MySQL container on the 103 slave and create a new one
```bash
cd /usr/local/mysqlData/slave/data
rm -rf *
docker ps
docker rm -f c380d2b5f6cf
# the password of slave 103 is now the same as that of master 102
docker run -itd -p 3307:3306 --name slave1 \
    -v /usr/local/mysqlData/slave/cnf:/etc/mysql/conf.d \
    -v /usr/local/mysqlData/slave/data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=masterpwd \
    ae6b78bedf88
docker ps
```
- Modify database permissions
```bash
docker ps
docker exec -it e698f50d7ece /bin/bash
mysql -uroot -pmasterpwd
```
- On 102, enter MySQL and view the position

```sql
-- note the current File and Position; if the slave's settings don't match,
-- it cannot synchronize
SHOW MASTER STATUS;
```
- On 103, enter the container and configure synchronization

```bash
docker ps
docker exec -it 6djasdad /bin/bash
```

```sql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'masterpwd' WITH GRANT OPTION;
FLUSH PRIVILEGES;

CHANGE MASTER TO
    master_host='192.168.66.102',
    master_user='root',
    master_password='masterpwd',
    master_log_file='mysql-bin.000003',
    master_log_pos=1047;

-- start replication
START SLAVE;
SHOW SLAVE STATUS\G
```
On the slave, the configuration is successful only when both Slave_IO_Running and Slave_SQL_Running show Yes.
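A small sketch of checking those two flags programmatically; the status text embedded below is a hypothetical excerpt of `SHOW SLAVE STATUS\G` output, and on a real slave you would capture the command's output instead:

```shell
# Hypothetical excerpt of `mysql -e 'SHOW SLAVE STATUS\G'` output.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 0'

# Pull out the two replication-thread flags.
io=$(printf '%s\n' "$status" | awk -F': ' '/Slave_IO_Running/ {print $2}')
sql=$(printf '%s\n' "$status" | awk -F': ' '/Slave_SQL_Running/ {print $2}')

if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    echo "replication healthy"
else
    echo "replication broken: IO=$io SQL=$sql"
fi
```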
- Enter host 101 and start configuring the proxy
```bash
wget https://github.com/Qihoo360/Atlas/releases/download/2.2.1/Atlas-2.2.1.el6.x86_64.rpm
rpm -ivh Atlas-2.2.1.el6.x86_64.rpm
```
- Encrypt the password (needed by the configuration file)

```bash
cd /usr/local/mysql-proxy/bin
./encrypt masterpwd
# the encrypted password that test.cnf needs: XYx4FmVBYrXmTh762ogNww==
```
- Configure the Atlas file

The password in pwds is the one generated in the previous step. The log level is set to debug for testing, and sql-log = REALTIME makes it possible to watch where each SQL statement is routed.
```ini
[mysql-proxy]

# Items prefixed with # are optional configuration items

# username of the management interface
admin-username = user
# password of the management interface
admin-password = pwd

# IP and port of the backend MySQL master connected to Atlas; multiple entries
# may be set, separated by commas
proxy-backend-addresses = 192.168.66.102:3306

# IP and port of the backend MySQL slaves; the number after @ is the weight for
# load balancing (default 1 if omitted); multiple entries, separated by commas
proxy-read-only-backend-addresses = 192.168.66.103:3307@1

# username and the corresponding encrypted MySQL password, produced by the
# encrypt program under the PREFIX/bin directory; replace with your own MySQL
# username and encrypted password
pwds = root:XYx4FmVBYrXmTh762ogNww==

# running mode of Atlas: true = daemon mode, false = foreground mode; usually
# false for development and debugging, true when running online (no trailing
# spaces after true)
daemon = true

# when true, Atlas starts two processes, a monitor and a worker; the monitor
# automatically restarts the worker if it exits unexpectedly; when false there
# is only the worker and no monitor; usually false for development and
# debugging, true when running online (no trailing spaces after true)
keepalive = true

# number of worker threads; has a great impact on Atlas performance, set
# appropriately for your situation
event-threads = 8

# log level: message, warning, critical, error, or debug
log-level = debug
# log storage path
log-path = /usr/local/mysql-proxy/log

# SQL log switch: OFF (no SQL log), ON (record the SQL log), or REALTIME
# (record the SQL log and flush to disk in real time); default OFF
sql-log = REALTIME

# slow-log threshold: if set, only statements slower than this value (ms) are
# logged; if unset, all statements are logged
#sql-log-slow = 10

# instance name, used to distinguish multiple Atlas instances on one machine
#instance = test

# IP and port of the working (proxy) interface Atlas listens on
proxy-address = 0.0.0.0:1234
# IP and port of the management interface Atlas listens on
admin-address = 0.0.0.0:2345

# table-splitting settings: person is the database, mt the table, id the
# splitting column, 3 the number of sub-tables; multiple entries separated by
# commas; leave unset if not splitting tables
#tables = person.mt.id.3

# default character set; when set, clients no longer need to run SET NAMES
#charset = utf8

# client IPs allowed to connect to Atlas, exact IPs or IP segments separated by
# commas; if unset, all IPs may connect, otherwise only IPs in the list
#client-ips = 127.0.0.1, 192.168.1

# IP of the physical NIC of the LVS in front of Atlas (not the virtual IP);
# required if there is an LVS and client-ips is set, otherwise must not be set
#lvs-ips = 192.168.1.1
```
- Start atlas
```bash
cd /usr/local/mysql-proxy/bin/
# "test" is the configuration file name (test.cnf); start to start, stop to stop
./mysql-proxyd test start
```
The listening port is 1234 by default; I haven't modified it. If you need to change it, edit test.cnf directly.
- View the log to prepare for testing

```bash
cd /usr/local/mysql-proxy/log
tail -f sql_test.log
```
Finally got this picture
In addition, when the project uses the MyBatis framework to connect to the database through Atlas, both reads and writes go directly to the master once the transaction annotation @Transactional is added.
The solution is to annotate read-only methods with `@Transactional(propagation = Propagation.NOT_SUPPORTED)`.
PS: in fact, many companies manage their master-slave databases through a proxy like this, which optionally controls which database each statement goes to. It felt very good: inserts went to 102 and selects went to 103.