MySQL transactions and isolation levels in detail (from concept to implementation)

1. Introduction to MySQL transactions

The four properties of a transaction (ACID): atomicity, consistency, isolation, durability.

Atomicity

  • A transaction is the smallest unit of execution and cannot be split; the operations it contains either all succeed, or on any failure all of them are rolled back;

Consistency

  • Before and after a transaction executes, the data must remain in a consistent state. For example, if A transfers money to B, it must not happen that A's account is debited while B never receives the money.
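A minimal sketch of such a transfer as one transaction (the account table and its columns are assumptions for illustration):

    START TRANSACTION;
    UPDATE account SET balance = balance - 100 WHERE name = 'A';
    UPDATE account SET balance = balance + 100 WHERE name = 'B';
    COMMIT; -- on any error, ROLLBACK instead, and neither update takes effect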

Isolation

  • Isolation means that when multiple users access the database concurrently, each user's transaction must not be interfered with by other transactions; concurrent transactions need to be isolated from one another.

Only one transaction at a time may operate on the same data, and different transactions do not interfere with each other. For example, while A is withdrawing money from a bank card, B cannot transfer money to A until the withdrawal finishes. (The effect on the database is the same as if the transactions were executed serially.)

Durability

  • Durability means that once a transaction is committed, its changes to the data in the database are permanent; a committed transaction must not be lost even if the database system fails.

2. Introduction to MySQL isolation levels

2.1 Concurrency problems with transactions

1. Dirty read: one transaction updates a row, another transaction reads it, and then the first transaction rolls back; the data read by the second transaction is dirty data that never existed in a committed state.

2. Non-repeatable read: within one transaction, querying the same data several times returns different results, because between two of the queries another transaction updated (and committed) the original data.

3. Phantom read: transaction A reads the rows that match some search condition. Transaction B then changes A's result set by inserting or deleting rows and commits. (Transaction A's repeated query does not see the modification, yet the change is really there.)

Summary: non-repeatable reads and phantom reads are easy to confuse. A non-repeatable read is about modifications of existing rows; a phantom read is about insertions or deletions. To prevent non-repeatable reads it is enough to lock the rows that match the condition; to prevent phantom reads the table (or, in InnoDB, the affected index range) must be locked.

2.2 Transaction isolation levels

| Transaction isolation level | Dirty read | Non-repeatable read | Phantom read |
| --- | --- | --- | --- |
| Read uncommitted (lowest isolation level): allows reading uncommitted data changes. | yes | yes | yes |
| Read committed: allows reading only data already committed by concurrent transactions. | no | yes | yes |
| Repeatable read: repeated reads of the same field return the same result, unless the data is modified by the transaction itself. | no | no | yes |
| Serializable (highest isolation level): fully satisfies the ACID notion of isolation; all transactions execute one after another with no interference. | no | no | no |
  • Read uncommitted: another transaction has modified data but not committed; a SELECT in this transaction will read the uncommitted data (a dirty read)
  • Read committed: transaction A reads the same data several times, while transaction B updates and commits that data in between, so A's repeated reads of the same data return inconsistent results (a non-repeatable read)
  • Repeatable read: within the same transaction, SELECT returns the state as of the point the transaction (strictly, its first read) started, so the same SELECT always returns the same result. Phantom reads can still occur
  • Serializable: the highest isolation level, under which no anomalies occur; concurrent transactions behave as if executed one after another in order.

MySQL's default transaction isolation level is REPEATABLE READ.
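The level can be checked and changed per session; a quick sketch (the variable is named transaction_isolation in MySQL 5.7.20+/8.0, tx_isolation in older versions):

    SELECT @@transaction_isolation; -- show the current level
    SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED; -- change it for this session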

2.3 Simulating the transaction concurrency problems:

    1. Dirty read
    Open two windows with autocommit disabled (set autocommit = 0) and switch the isolation level to read uncommitted.
    In one window, modify a row of a table without committing the transaction; the other window's query already shows the change. After calling rollback, the query result returns to normal.
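A minimal two-session sketch of this dirty-read test (the account table and its values are assumptions):

    -- Session 1:
    SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    START TRANSACTION;

    -- Session 2:
    START TRANSACTION;
    UPDATE account SET balance = balance - 100 WHERE id = 1; -- not committed yet

    -- Session 1: already sees the uncommitted change (dirty read)
    SELECT balance FROM account WHERE id = 1;

    -- Session 2:
    ROLLBACK;

    -- Session 1: the value is back to normal
    SELECT balance FROM account WHERE id = 1;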
    2. Non-repeatable read
    //Switch the isolation level to read committed;
    //one window modifies the data without committing it, while the other opens a transaction and runs a select * query;
    //the first window then commits the transaction, and the other window runs select * again;
    //the two queries return different results.
set session transaction isolation level read committed; //must be set before the transaction starts
start transaction; //open a transaction
	3. Phantom read
    Switch the isolation level to RR (repeatable read);
    window 1 opens a transaction and runs a select * query; window 2 then opens a transaction and inserts a row;
    window 2 commits, and window 1 queries again: the new record is still not visible. But if window 1 now performs the same insert as window 2, it gets an error that the id already exists (the "phantom"). If transaction 1 then does a rollback and window 1 queries once more, the extra record appears.
        //That is, transaction 1's queries are not affected by transaction 2, which is precisely what makes the phantom read so confusing.
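A minimal sketch of the phantom-read test (table t with primary key id is an assumption; both sessions run at repeatable read, the default):

    -- Session 1:
    START TRANSACTION;
    SELECT * FROM t; -- the snapshot is established here

    -- Session 2:
    START TRANSACTION;
    INSERT INTO t (id, val) VALUES (5, 50);
    COMMIT;

    -- Session 1: the snapshot still hides id = 5 ...
    SELECT * FROM t;
    -- ... but the same insert collides with the committed row:
    INSERT INTO t (id, val) VALUES (5, 50); -- ERROR 1062: Duplicate entry '5'
    ROLLBACK;
    SELECT * FROM t; -- a fresh query now sees the phantom row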

3. Implementing Transaction Features

The underlying principles are expanded at the end of the article; this section gives only a brief overview.
1. Atomicity

The atomicity of MySQL database transactions is implemented by the undo log. (The undo log also underpins MVCC.)

How the undo log works:
	Before a transaction modifies data, a copy of the old data is written to the undo log, and only then is the data changed. If an error occurs, or the user executes a rollback statement, the system can use the backup in the undo log to
 restore the data to the state before the transaction started;
    it stores the unique key value (for insert_undo records) or the unique key value plus the old columns (for update_undo records). (It is a logical log, effectively a backup log of the data.)

2. Consistency

Consistency is achieved through atomicity + isolation + durability;
3. Isolation (focus)

How isolation is implemented: locks + MVCC;

3.1 Implementing each transaction isolation level

1. Read uncommitted: no locks at all;

2. Read committed: reads take no locks; writes, updates, and deletes take exclusive locks; (this prevents dirty reads)

Consider a dirty-read scenario: transaction T2 reads data that T1 has not yet committed.
    
//Solution under read-committed: reads take no locks; writes, updates, and deletes take row locks. This overcomes dirty reads but cannot prevent non-repeatable reads.
 With this locking policy, when T1 writes data x it first acquires the lock on x, so T2's read has to wait. After T1's data is rolled back, the lock is released, and T2 can then read the original data, so reading dirty data becomes impossible. (Until the write lock is released, the read cannot proceed.)


3. Repeatable read

MVCC (prevents non-repeatable reads)

Consider a non-repeatable-read scenario: because of T1's update, the data T2 reads twice is inconsistent. Row locks alone cannot solve this. T2 reads the value of x first; T1 then goes through its lock, update x, unlock steps and
commits the transaction. If T2 reads again, it reads T1's updated value, so the results of its two reads differ.


4. Serializable

Reads take shared locks, so other transactions can read concurrently but cannot write. Writes take exclusive locks, so other transactions can neither write nor read concurrently. (This prevents phantom reads.)

//Locking every row read can cause a large number of lock-wait timeouts. Range queries also need more than row locks: e.g. with id as the primary key and a range query such as id <= 8, a new row (say id = 9, at the edge of the scanned range) could otherwise still be inserted. This is what next-key locks are for.

4. Durability

The durability of MySQL database transactions is implemented by the redo log.

	For every modification the transaction makes (insert, delete, update), the database writes a redo record to the redo log. Unlike the undo log, which records SQL statements, the redo log records which data page of the database was
 changed and how; it is a physical log (of fixed size).
    //Supplement: the binlog, by contrast, is an append-only file, whose stored contents can be opened and inspected.

	Redo log application scenario: the database system crashes and needs to be recovered. Databases generally use a point-in-time backup strategy: restore the database to the state of the most recent backup, then read the redo log
 records written after that point and re-execute them to complete the recovery.

4. The principles mentioned above, explained

4.1 redo and undo logs

Main logs: redo, undo, binlog;

In replication, the master writes changes to the binlog (binary log); the slave monitors the master's binlog, writes what it receives into its relay log, and then applies it on the slave side.

1. undo (logical log)

//Principle:
	Before manipulating any data, the old data is first backed up somewhere (this is where the undo log comes in), and only then is the data changed. If an error occurs, or the user executes rollback, the backup in the undo log can be used to restore the data to the state it was in
 before the transaction started;
//Example:
	the undo log stores the values from before a data change. Suppose a change modifies Name = 'B' to Name = 'B2' in the row with id = 2 of table t; then the undo log will store the record Name = 'B'. If anything goes wrong with this modification, the undo log can be used to roll it back,
 ensuring transaction consistency.
//1. insert undo log (transaction rollback)
	The undo log generated when a transaction inserts a new record; it is needed only when the transaction rolls back, and can be discarded immediately after the transaction commits.
//2. update undo log (transaction rollback and snapshot reads)
	The undo log generated when a transaction performs an update or delete; it is needed not only for transaction rollback but also for snapshot reads. It therefore cannot be deleted at will: only when the log is no longer involved in any snapshot read or transaction rollback is it cleaned up, uniformly, by the purge thread.
//Understanding it as a logical log:
    For every modification the transaction makes (insert, delete, update), the reverse operation is written to the undo log. For example, if a transaction executes an insert statement, the undo log records a corresponding delete statement. So the undo log is a logical file that records the corresponding SQL statements; once
 the transaction fails and cannot commit, the system executes the undo operations recorded in the undo log to achieve the rollback.
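A conceptual sketch of the "reverse operation" idea (undo records are internal structures, not SQL; the reverse statements in the comments and table t are only an illustration):

    START TRANSACTION;
    INSERT INTO t (id, name) VALUES (2, 'B'); -- undo: DELETE FROM t WHERE id = 2
    UPDATE t SET name = 'B2' WHERE id = 2;    -- undo: UPDATE t SET name = 'B' WHERE id = 2
    ROLLBACK; -- the undo records are applied in reverse order, restoring the old state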

2. redo (physical log)

//Principle:
	Opposite to the undo log, the redo log records the new data. Before a transaction commits, it is enough to persist the redo log; the data pages themselves do not need to be persisted yet.
        
//Write pattern:
	The redo log files are written sequentially; when they are full, writing wraps back to the first file and overwrites it.
        
//crash-safe
    With the redo log in place, when the database restarts abnormally, recovery can be performed from the redo log, which is what makes the database crash-safe.
        
        
//Why not write the data directly into MySQL?
	Before committing, a transaction only has to write its changes into the redo log on disk.

	1. This is because writing the data into MySQL's data files requires finding the corresponding page on disk, which involves random disk I/O, a very time-consuming process. Writing the redo log first (sequential, circular writes) and flushing the data pages to disk at a suitable
 later moment greatly improves efficiency. (The transaction does not have to wait for the data pages while writing.)

	2. In addition, if the MySQL process restarts abnormally, the system automatically checks the redo log and restores into MySQL any data that had not yet been written into MySQL.

Here is a simplified example of how a transaction that uses both undo and redo logs commits:

Undo + Redo transaction, simplified
  Assume there are two values, A = 1 and B = 2. A transaction starts and changes A to 3 and B to 4. The actual (simplified) sequence of records is:
  A. Transaction starts.
  B. Record A=1 to the undo log.
  C. Modify A=3.
  D. Record A=3 to the redo log.
  E. Record B=2 to the undo log.
  F. Modify B=4.
  G. Record B=4 to the redo log.
  H. Write the redo log to disk. //persisting the redo log is what allows the transaction to commit
  I. Transaction commits.

4.1.1 Supplement: Log file refresh policy:

The undo log and redo log are not written directly to disk; they are first written to the log buffer, synchronized to the OS buffer at an appropriate time, and the operating system then decides when to flush them to disk.

(log buffer in user space, os buffer in kernel space)

MySQL has three main log-flush strategies, controlled by innodb_flush_log_at_trx_commit; the first is the default. In this order, the three trade decreasing safety for increasing efficiency
(parameter values 1, 2, and 0 respectively):
1: on every transaction commit, write to the OS buffer and call fsync to flush to disk
2: on every commit, write to the OS buffer, then call fsync once per second to flush to disk
0: once per second, write to the OS buffer and call fsync to flush to disk
    
    //Refresh modified data from memory to disk on a regular basis
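The flush policy can be inspected and changed at runtime; a quick sketch:

    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit'; -- current policy
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;        -- value 1 is the safe default: fsync on every commit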

4.1.2 Supplement: binlog

The binlog belongs to the MySQL Server level (it is also called the archive log). It is a logical log recorded in binary form: it stores the original logic of each statement. Relying on the binlog alone is not crash-safe. (Its main application, master-slave replication, is described in 4.1.4 below.)


4.1.3 Supplement: crash-safe (implemented mainly through the redo log)

- The data of all committed transactions still exists.
- The data of all uncommitted transactions is automatically rolled back. //clearly, the binlog alone does not provide crash-safe capability
    
The existence of the redo log is what gives the database its crash-safe ability: if the MySQL process restarts abnormally, the system automatically checks the redo log and restores into MySQL any data that had not yet been written into MySQL.
    
When the database restarts abnormally, the system automatically locates the last checkpoint. Each data page also carries an LSN; when the LSN in the redo log is larger than the LSN in a data page, the changes recorded in the redo log before the restart were not fully written into that data page, so recovery starts from the data page's LSN and restores data from the redo log.
For example, if the redo log's LSN is 13000 and a database page's LSN is 10000, then some data had not been fully flushed into the on-disk data page before the restart, and the system will re-apply the redo log records with LSNs from 10000 to 13000 to the data page.
   

4.1.4 Supplement: the main binlog application (master-slave replication)

The slave spawns two threads: an I/O thread and an SQL thread;

the I/O thread requests the master's binlog and writes the binlog it receives into the relay log file;
the master spawns a log dump thread, which feeds the binlog to the slave's I/O thread;

the SQL thread reads the log from the relay log file and parses it into concrete operations, achieving master-slave consistency;
    
    //There can be only one master, while slaves can be many;
    //the biggest problem is replication delay;
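A minimal slave-side sketch of wiring this up (MySQL 5.7-style commands; the host, account, and binlog coordinates are placeholders):

    -- On the slave: point the I/O thread at the master's binlog and start replicating.
    CHANGE MASTER TO
      MASTER_HOST = '10.0.0.1',           -- placeholder master host
      MASTER_USER = 'repl',               -- placeholder replication account
      MASTER_PASSWORD = '***',
      MASTER_LOG_FILE = 'binlog.000001',  -- placeholder binlog coordinates
      MASTER_LOG_POS = 4;
    START SLAVE;
    SHOW SLAVE STATUS\G -- Slave_IO_Running and Slave_SQL_Running should both be Yes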

4.1.5 Supplement: differences between the redo log and the binlog

1. The redo log belongs to the InnoDB engine level, while the binlog belongs to the MySQL Server level, so the binlog remains consistent even when the database uses a different storage engine.
2. The binlog is a logical log, while the redo log is a physical log of fixed size;
3. the redo log is written circularly, in a log space of fixed size; the binlog is append-only: once a file reaches a certain size the log rolls over to the next file, and nothing is overwritten.
4. The binlog can be used to recover data and to build master-slave replication; the redo log recovers data after an abnormal shutdown or media failure.
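A few commands for inspecting the binlog on a running server:

    SHOW VARIABLES LIKE 'log_bin'; -- is binary logging enabled?
    SHOW BINARY LOGS;              -- list the append-only binlog files
    SHOW MASTER STATUS;            -- the current binlog file and write position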

Execution process (the update flow with two-phase commit):


1) The executor calls the storage engine interface; after the storage engine applies the modification in memory, it writes the modification to the redo log, where the redo log record is in the prepare state.
2) The storage engine tells the executor that execution is complete; the executor starts writing the operation to the binlog, writes the binlog to disk, and commits the transaction once the write finishes.
3) After the transaction commits, the storage engine sets the redo log record's status to commit.

4.1.6 Supplement: the two-phase commit mechanism

The binlog lives in the MySQL Server layer and is mainly used for data recovery: when data is deleted by mistake, it can be restored to a specified point in time from the most recent full backup plus the binlog written since that backup.
The redo log lives in the InnoDB engine (InnoDB was introduced into MySQL as a plugin); it was introduced mainly to give MySQL its crash-safe ability.

Suppose the redo log and binlog were committed separately: the data recovered from one log could then be inconsistent with the original data.

1) Suppose the redo log is written first and the binlog afterwards; that is, the redo log has no prepare stage and is written directly in the commit state, and only then is the binlog written. If MySQL goes down right after the redo log is finished, the system will automatically recover from the redo log after the restart, and the recovered data will contain more changes than the binlog recorded. The data pages on disk and the binlog are then inconsistent, and the next time the binlog is needed to restore mistakenly deleted data, the restored data will not match the original.
2) Suppose the binlog is written first and the redo log afterwards. If MySQL goes down right after the binlog is finished, the binlog on disk will record more changes than the data pages contain. The next time the binlog is used to restore data, the restored data will again not match the original.

Hence the two-phase commit of the redo log and binlog is essential.

4.2 locks (including previously raised shared and exclusive locks)

Classification of locks:

- By granularity: row locks, table locks, page locks
- By lock level: shared locks, exclusive locks
- By usage pattern: pessimistic locking, optimistic locking
- By interval: record locks, gap locks, next-key locks (good to understand)

By granularity: MySQL has three levels of locks: page, table, and row.

  • Table-level locks: low overhead, fast to lock; no deadlocks; large lock granularity, the highest probability of lock conflicts, and the lowest concurrency.
//Locks the whole table:
select * from student where name = 'tom' for update; //the name column is not the primary key (and not indexed);
  • Row-level locks: high overhead, slow to lock; deadlocks can occur; the smallest lock granularity, the lowest probability of lock conflicts, and the highest concurrency.

    The InnoDB engine mostly uses row-level locks (it locks the index; if no usable index exists, it locks all the data);

    MyISAM supports only table locks;

//Row lock; locks the index;
select * from student where id > 10 for update; //the id column is the primary key;
  • Page-level locks: overhead and locking time sit between table and row locks; deadlocks can occur; lock granularity sits between table and row locks, and concurrency is moderate.

By lock level: shared lock (read lock), exclusive lock (write lock).

A shared lock, also called a read lock, means several transactions can share the lock on the same data and all access it, but they can only read it, not modify it.

An exclusive lock, also called a write lock, cannot coexist with other locks: once a transaction acquires the exclusive lock on a row, no other transaction can acquire any lock on that row, shared or exclusive. The transaction holding the exclusive lock can both read and modify the data. (Note: if MySQL has to scan the whole table, for example because the WHERE condition does not use an indexed column, it locks the entire table.)

Note: for a plain SELECT statement, InnoDB takes no locks at all, so any number of concurrent SELECTs run without lock conflicts. For insert, update, and delete operations, InnoDB automatically places exclusive locks on the data involved; only SELECTs require locks to be requested manually.
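A short sketch of requesting each lock mode manually, reusing the student table from the examples above:

    select * from student where id = 1 lock in share mode; -- shared (read) lock: others may also read-lock, but not write
    select * from student where id = 1 for update;         -- exclusive (write) lock: blocks all other lock requests on the row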

By usage pattern: pessimistic locking, optimistic locking.

Pessimistic Lock

Pessimistic locking means acquiring the lock before performing the business operation: it "pessimistically" assumes that acquiring the lock may well fail, so it makes sure the lock is held before doing anything. Pessimistic locking on a database generally requires support from the database itself, typically via select ... for update. When the database executes select ... for update, it acquires row locks on the rows in the select, so any other concurrent select ... for update that touches the same rows has to wait for the row lock to be released, which gives the effect of a lock. The row locks acquired by select ... for update are released automatically at the end of the current transaction, so the statement must be used inside a transaction.

One thing to note is that different databases implement and support select ... for update differently. For example, Oracle supports select for update no wait, which returns an error immediately if the lock cannot be acquired; MySQL has no such no wait option and simply waits. (MySQL 8.0 later added NOWAIT and SKIP LOCKED.) Another MySQL issue is that a select ... for update statement locks all the rows it scans, which easily causes problems, so if you use pessimistic locking in MySQL, make sure the query uses an index rather than a full table scan.

Optimistic Lock

		Optimistic locking, also called optimistic concurrency control, assumes that concurrent transactions from multiple users will usually not interfere with each other. Before committing a data update, each transaction first checks whether another transaction has modified the data since it was read. If another transaction has updated it, the currently committing transaction rolls back.

	Optimistic locking is characterized by performing the business operation first and not taking any lock unless absolutely necessary: it optimistically assumes that taking the lock will most likely succeed, so it is enough to take it at the very last step, when the data is actually updated.

Implementation steps:

	The optimistic lock implementation on a database is entirely application logic and needs no special database support. A common practice is to add a version number or a timestamp column to the data that needs to be "locked", and then:
	read the current version number of the data before the business operation proceeds, and when the data is actually updated, confirm that the version number is still the one read earlier. If it is the same, no concurrent modification happened in between; if it differs, the lock acquisition is considered failed and the whole business operation must be rolled back.
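A minimal version-column sketch (the account table and its version column are assumptions):

    SELECT balance, version FROM account WHERE id = 1; -- suppose this returns version = 5
    -- ... business logic runs without holding any lock ...
    UPDATE account
    SET balance = 900, version = version + 1
    WHERE id = 1 AND version = 5; -- only succeeds if nobody changed the row meanwhile
    -- 0 affected rows means a concurrent update won: roll back or retry.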

By interval: record locks, gap locks, next-key locks.

//Row locks in InnoDB are index-based, so once a locking operation does not use an index, the lock degenerates into a table lock.

1. Record locks (Record Locks)
A record lock is a lock on a single row of records; it locks the index record of that row:

-- the id column is a primary key column or a unique index column
SELECT * FROM table WHERE id = 1 FOR UPDATE; //row lock;
UPDATE table SET age = 50 WHERE id = 1;      //or written like this
The record row with id = 1 is locked.
Note: the id column must be a unique index column or a primary key column, otherwise the locks placed by the statements above become next-key locks.
The query must also be an exact match (=); with >, <, like, and so on, the lock likewise degenerates into a next-key lock.
    
2. Gap locks (Gap Locks)
Gap locks are based on non-unique indexes and lock a range of index records. Gap locks are also the basis of the Next-Key Locking mentioned below. The important point to keep in mind is that a gap lock locks an interval, not just each individual row inside that interval.

SELECT * FROM table WHERE id BETWEEN 1 AND 10 FOR UPDATE;
This locks all record rows in the (1, 10) interval: inserts of rows with id 2, 3, 4, 5, 6, 7, 8, or 9 will all be blocked, while the rows 1 and 10 themselves are not locked by the gap lock. (This assumes rows with id 1 and 10 already exist.)
    
3. Next-key locks (Next-Key Locks): next-key locks only concern non-unique index columns; they do not exist on unique index columns, including primary key columns.
A next-key lock exists on the non-unique index entry of each data row; when a transaction holds the next-key lock on a data row, it locks a left-open, right-closed interval of data.

Take the following table as an example (the original table figure is lost; assume columns (id, age, name) with a non-unique index on age, containing rows whose ages are 10, 24, 32, and 45):

The potential next-key locks on the age column of this table are:
(-∞, 10], 
(10, 24], //in fact the gap lock (10, 24) plus the record lock on 24;
(24, 32],
(32, 45],
(45, +∞],
 
-- UPDATE a record via the non-unique index column
UPDATE table SET name = 'Vladimir' WHERE age = 24;
-- or lock a record via the non-unique index column
SELECT * FROM table WHERE age = 24 FOR UPDATE;
Whichever of the two SQL statements above transaction A executes, if transaction B then runs the following command, it will be blocked:

INSERT INTO table VALUES(100, 26, 'Ezreal');
Clearly, while transaction A runs the UPDATE on the age = 24 row, it also acquires the next-key lock on the interval (24, 32].

Ultimately: when record rows are operated on through a non-unique index with UPDATE / FOR UPDATE / LOCK IN SHARE MODE, InnoDB acquires the next-key lock for the record row plus the gap lock for the next interval after the record row.
That is, after transaction A executes the SQL above, the locked record interval is (10, 32], which is really (10, 24] + (24, 32];
                                           
                                                      
//Summary:
The implementation of row locks in InnoDB depends on indexes: once a locking operation does not use an index, the lock degenerates into a table lock.
Record locks exist on unique indexes, including primary key indexes, and lock a single index record. (Exact match)
Gap locks exist on non-unique indexes and lock an open interval; they are the interval part of next-key locks. (Unmatched index value: the next-key lock degenerates into a gap lock whose size is determined by the two nearest index entries.)
//"Matched" and "unmatched" refer to whether the queried value actually exists in the index.
Next-key locks exist on non-unique indexes, on the index entry of each record of that kind; they are special gap locks that lock a left-open, right-closed index interval. (Matching)

4.3 MVCC

4.3.1 Snapshot Read and Current Read

Snapshot read:
    Improves concurrency in read-write and write-read scenarios: only write operations are locked, and reads return the version visible to the current transaction (no locks). Implemented through MVCC: each transaction reads the version (view) visible to it.
    
//The difference between the snapshots under RC and RR, in detail:
    at the RC isolation level, every snapshot read generates and uses the latest Read View, while at the RR isolation level only the first snapshot read in a transaction creates the Read View, and later snapshot reads reuse that same Read View (so every query sees the same
 version of the data). This is why non-repeatable reads occur at the RC isolation level.

Current read: (a current read is in fact a locking operation, an implementation of pessimistic locking)
    Both reads and writes take locks, and they read the latest version of the record, implemented via next-key locks (row record lock + gap lock). When we open a transaction and, inside it, run select ... lock in share mode or for update, or insert, update,
delete operations, next-key locks are used to lock the rows in the queried range; the lock is exclusive (shared in the case of lock in share mode), and other transactions queue behind it.
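A quick contrast of the two kinds of read (table t is an assumption):

    SELECT * FROM t WHERE id = 1;                    -- snapshot read: no locks, reads the transaction-visible version
    SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE; -- current read with a shared lock
    SELECT * FROM t WHERE id = 1 FOR UPDATE;         -- current read with an exclusive lock
    UPDATE t SET val = val + 1 WHERE id = 1;         -- also a current read: latest version, exclusive lock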

4.3.2 MVCC

MVCC operates only at the read committed and repeatable read isolation levels;

"reads" in MVCC refer to snapshot reads;

the implementation of MVCC rests mainly on three parts: the three hidden fields in each record; the undo log; and the Read View.

The three hidden fields:

DB_TRX_ID   //6 bytes: the ID of the transaction that last modified this row
DB_ROLL_PTR //7 bytes: rollback pointer to the previous version of this record, used together with the undo log (an address pointing at the previous version)
DB_ROW_ID   //6 bytes: hidden primary key; InnoDB auto-generates a 6-byte row_id if the table has no primary key

Read View

		A Read View is the read view produced when a transaction performs a snapshot read: at the moment the transaction's snapshot read executes, a snapshot of the current database system is generated, recording and maintaining the IDs of the transactions currently active in the system. (Each transaction is assigned an ID when it
 is opened, and this ID is incremental, so the newer the transaction, the larger its ID.)

		So the Read View is mainly used for visibility judgment: when a transaction performs a snapshot read on a record, a Read View of that record is created and used as a criterion for judging which version of the data the current transaction may see. That may be the most
 recent data, or it may be some version of the row recorded in its undo log.

		The Read View follows a visibility algorithm: it takes the DB_TRX_ID from the newest version of the record about to be read and compares it against the IDs of the other transactions currently active in the system (maintained by the Read View). If DB_TRX_ID does not satisfy the
 visibility rules of the Read View, the DB_ROLL_PTR rollback pointer is followed to fetch the previous version from the undo log, and that version's DB_TRX_ID is compared again; the version chain is traversed in this way (from the newest modification backwards) until a DB_TRX_ID that satisfies the
 conditions is found. That version of the record is then the newest version visible to the current transaction.

Visibility rules of the Read View:

The Read View holds three key properties:
//trx_list
    a list of transaction IDs:
    the IDs of the transactions that were active in the system when the Read View was generated
//up_limit_id
    the low-water mark:
    the smallest transaction ID among those in trx_list
//low_limit_id
    the high-water mark:
    the next transaction ID not yet assigned by the system when the Read View was built, i.e. the largest transaction ID seen so far + 1.
    Why is it called low_limit? Because it is also the smallest transaction ID the system can still assign at that moment.

//Comparison rules:
    First compare DB_TRX_ID < up_limit_id: if it is smaller, the current transaction can see the record written by DB_TRX_ID; if it is greater or equal, move to the next check.
	Next check DB_TRX_ID >= low_limit_id: if it is greater or equal, the version written by DB_TRX_ID appeared only after the Read View was generated, so it is definitely not visible to the current transaction; if it is smaller, move to the next check.
	Finally check whether DB_TRX_ID is in the active transaction list: trx_list.contains(DB_TRX_ID). If it is, then when the Read View was built that transaction was still active and had not yet committed, so the data it modified is invisible to my current transaction. If it is not
 in the list, that transaction had already committed before the Read View was built, so the result of its modification is visible to my current transaction.
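The active-transaction list that a Read View snapshots can be observed on a live server:

    SELECT trx_id, trx_state, trx_started
    FROM information_schema.INNODB_TRX; -- one row per transaction currently active in InnoDB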

This article is reposted content; the original source link has been lost. If you know it, please leave a reminder below.

Tags: Database MySQL

Posted by Joe on Tue, 21 Sep 2021 22:15:58 +0530