Redis & Pika standalone and master-slave data migration schemes

Foreword: if the data volumes are inconsistent during synchronization, do not disconnect the master and slave right away. First analyze how much data is written into redis each day and what expiration times the keys carry.

Redis performs the following operation in the background 10 times per second:
It randomly selects 100 keys and checks whether they have expired. If more than 25 of them turn out to be expired, it immediately selects another 100 keys at random. In other words, if there are not many expired keys, redis can reclaim at most roughly 200 keys per second.
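
If master and slave key counts diverge during synchronization, one way to check whether active key expiration (rather than lost writes) explains the gap is to sample the expired_keys counter from INFO stats. A minimal sketch, assuming redis-cli is available; host, port and password are placeholders:

    # Sample the expiration counter twice, 10 seconds apart, to estimate
    # how many keys per second the master is actively expiring.
    HOST=ip1; PORT=port1; PASS=old-pwd      # placeholders, replace with real values
    before=$(redis-cli -a "$PASS" -h "$HOST" -p "$PORT" info stats | grep -w expired_keys | cut -d: -f2 | tr -d '\r')
    sleep 10
    after=$(redis-cli -a "$PASS" -h "$HOST" -p "$PORT" info stats | grep -w expired_keys | cut -d: -f2 | tr -d '\r')
    echo "expired keys per second: $(( (after - before) / 10 ))"

A high rate here suggests a dbsize gap comes from keys expiring between the two checks rather than from missing data.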

1, Redis-to-Redis data migration scheme

1. Create a "master-slave" migration setup (Note: perform all operations on the new redis. To avoid reversing the master-slave relationship, try not to operate on the original redis.)

Prerequisites:

    original redis  ip1  port1   masterauth=old-pwd
    new redis       ip2  port2   masterauth=new-pwd

A common problem when establishing the master-slave relationship is that the passwords do not match; make the passwords consistent before setting up master-slave synchronization. A combined redis-cli sketch of the whole sequence follows the steps below.

1. On the new redis, execute: config set masterauth "old-pwd"
2. On the new redis, execute: slaveof ip1 port1   (establish the master-slave relationship with the original redis)
                 info replication   (check whether master_link_status is up, confirming the master-slave relationship is established)
3. On both the new redis and the original redis, execute: dbsize   (confirm that data synchronization is complete)
4. On the new redis, execute: slaveof no one   (remove the master-slave relationship)
                 info replication   (confirm that the new node has become a master)
                 config set masterauth "new-pwd"   (set the password back)
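
Putting the four steps together, a minimal redis-cli sketch (assuming each instance's requirepass matches the masterauth values listed in the prerequisites; ip1/port1/old-pwd and ip2/port2/new-pwd are placeholders):

    # All commands run against the NEW redis (ip2:port2); do not touch the original one.
    redis-cli -a new-pwd -h ip2 -p port2 config set masterauth "old-pwd"              # 1. adopt the original's password
    redis-cli -a new-pwd -h ip2 -p port2 slaveof ip1 port1                            # 2. attach to the original redis
    redis-cli -a new-pwd -h ip2 -p port2 info replication | grep master_link_status   #    wait until it reports "up"
    redis-cli -a old-pwd -h ip1 -p port1 dbsize                                       # 3. compare key counts...
    redis-cli -a new-pwd -h ip2 -p port2 dbsize                                       #    ...until both sides match
    redis-cli -a new-pwd -h ip2 -p port2 slaveof no one                               # 4. detach again
    redis-cli -a new-pwd -h ip2 -p port2 info replication | grep role                 #    confirm role:master
    redis-cli -a new-pwd -h ip2 -p port2 config set masterauth "new-pwd"              #    restore the new redis's own password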

2. redis-port data migration

Step 1: tool preparation

A pre-compiled redis-port binary is available here: https://pan.baidu.com/s/1qragd66oa0vaweeiu85meq  extraction code: xeoj
To compile it yourself, download the source from the official repository: https://github.com/CodisLabs/redis-port

Step 2: redis-port parameter description

Options:
        -n N, --ncpu=N                    Set runtime.GOMAXPROCS to N.
        -p M, --parallel=M                Set the number of parallel routines to M.
        -i INPUT, --input=INPUT           Set input file, default is stdin ('/dev/stdin').
        -o OUTPUT, --output=OUTPUT        Set output file, default is stdout ('/dev/stdout').
        -f MASTER, --from=MASTER          Set host:port of master redis.
        -t TARGET, --target=TARGET        Set host:port of slave redis.
        -P PASSWORD, --password=PASSWORD  Set redis auth password.
        -A AUTH, --auth=AUTH              Set auth password for target.
        --fromencodepassword=PASSWORD     Set encode password for from.
        --targetencodepassword=PASSWORD   Set encode password for target.
        --fromauthtype=FROMAUTHTYPE       Set auth type for from.
        --targetauthtype=TARGETAUTHTYPE   Set auth type for target.
        --fromversion=FROMVERSION         Set From-RDB version no, default to 6 (6 for Redis Version <= 3.0.7, 7 for >=3.2.0)
        --toversion=TOVERSION             Set To-RDB version no, default to 6 (6 for Redis Version <= 3.0.7, 7 for >=3.2.0)
        --faketime=FAKETIME               Set current system time to adjust key's expire time.
        --sockfile=FILE                   Use FILE as socket buffer, default is disabled.
        --filesize=SIZE                   Set FILE size, default value is 1gb.
        --pidfile=REDISPORT.PID           Pid file path.
        --logfile=REDISPORT.LOG           Log file path.
        -e, --extra                       Set true to send/receive following redis commands, default is false.
        --rewrite                         Force rewrite when destination restore has the key
        --filterdb=DB                     Filter db = DB, default is *.
        --targetdb=DB                     Target db = DB, default is *.
        --filterkey="a|b|c"               Filter key with prefix string, multiple string is separated by '|'
        --httpport=HTTPPORT               Http port.
        --bigkeysize=BIGKEYSIZE           Big Key Size.
        --psync                           Use PSYNC command.
        --replacehashtag                  Replace key hash tag
        --filterslots="1|2|3"             Filter slots = slots, default is *.
        --pacluster                       set pacluster = true, default is false

Executable commands:
redis-port decode   [--ncpu=N]  [--parallel=M]  [--input=INPUT]  [--output=OUTPUT] [--fromversion=RDBVERSION] [--toversion=RDBVERSION] [--pidfile=REDISPORT.PID] [--logfile=REDISPORT.LOG] [--httpport=HTTPPORT] [--bigkeysize=BIGKEYSIZE]
redis-port restore  [--ncpu=N]  [--parallel=M]  [--input=INPUT]   --target=TARGET   [--auth=AUTH] [--extra] [--faketime=FAKETIME]  [--filterdb=DB] [--filterkey="str1|str2|str3"] [--fromversion=RDBVERSION] [--toversion=RDBVERSION] [--rewrite] [--pidfile=.PID] [--logfile=REDISPORT.LOG] [--httpport=HTTPPORT] [--bigkeysize=BIGKEYSIZE] [--targetauthtype=TARGETAUTHTYPE] [--targetencodepassword=PASSWORD] [--targetdb=DB]
redis-port dump     [--ncpu=N]  [--parallel=M]   --from=MASTER   [--password=PASSWORD]  [--output=OUTPUT]  [--extra] [--fromversion=RDBVERSION] [--toversion=RDBVERSION] [--pidfile=.PID] [--logfile=REDISPORT.LOG]  [--httpport=HTTPPORT] [--bigkeysize=BIGKEYSIZE]
redis-port sync     [--ncpu=N]  [--parallel=M]   --from=MASTER  [--fromencodepassword=PASSWORD] [--fromauthtype=FROMAUTHTYPE] [--targetauthtype=TARGETAUTHTYPE] [--password=PASSWORD]   --target=TARGET [--targetencodepassword=PASSWORD]  [--auth=AUTH]  [--sockfile=FILE [--filesize=SIZE]] [--filterdb=DB]  [--targetdb=DB] [--filterkey="str1|str2|str3"] [--psync] [--fromversion=RDBVERSION] [--toversion=RDBVERSION] [--rewrite]  [--pidfile=.PID] [--logfile=REDISPORT.LOG] [--httpport=HTTPPORT] [--bigkeysize=BIGKEYSIZE] [--replacehashtag] [--filterslots="1|2|3"] [--pacluster]

Step 3: specific usage

Prerequisites:
    original redis1: ip1  port1  pwd1
    new redis2:      ip2  port2  pwd2

Specific operation:

1. Connect to the redis1 client with redis-cli -a pwd1 -h ip1 -p port1 and execute bgsave
2. Upload redis-port to the original redis1 server, into the directory containing the corresponding dump.rdb
3. Grant execute permission: chmod +x redis-port
4. There are two ways:
    The first transfers the dump.rdb file:
      ./redis-port restore -i dump.rdb -t ip2:port2 -A 123456 -n 4
    The second transfers directly between the two redis instances:
      ./redis-port sync -f 10.19.185.17:7006 -t 10.19.xx.17:8522 --fromversion=9 --toversion=9 -A 123456 -P 123456 -n 4 -p 32 --sockfile=test.tmp --filesize=32GB   (for redis 5.0 and above, configure --fromversion=9 --toversion=9)
5. After execution, check whether the data volume on both sides is consistent (see also the keyspace comparison sketch below):
    redis-cli -a pwd1 -h ip1 -p port1 dbsize
    redis-cli -a pwd2 -h ip2 -p port2 dbsize
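
Note that dbsize only reports the currently selected logical database (db0 by default). If several databases are in use, a quick way to compare the full keyspace of both instances is to diff their INFO keyspace output; a minimal sketch with placeholder connection values:

    # Empty diff output means both sides report the same per-database key counts
    # (small transient differences can still come from keys expiring in between).
    diff <(redis-cli -a pwd1 -h ip1 -p port1 info keyspace | tr -d '\r') \
         <(redis-cli -a pwd2 -h ip2 -p port2 info keyspace | tr -d '\r')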

2, Redis-to-Pika migration scheme (using the aof_to_pika tool)

Note: before migrating data, confirm whether there are duplicate keys and whether overwriting those key values is acceptable (a key-overlap check sketch follows this list):

1. redis has databases 0-15. If more than one of them contains data, then when the data is saved and replayed, identical keys from different databases will overwrite each other; for example, a key in db1 will overwrite the data of the same key in db0.
2. If pika already contains data, then keys coming from redis will overwrite the data of the same keys in pika.
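
A rough way to spot such collisions before migrating is to compare key sets with redis-cli --scan; a minimal sketch with placeholder connection values (on very large keyspaces you would sample rather than scan everything, and this assumes the pika build answers SCAN the way redis does):

    # Keys that exist in both db0 and db1 of the source redis
    redis-cli -a pwd -h ip -p port -n 0 --scan | sort > /tmp/db0.keys
    redis-cli -a pwd -h ip -p port -n 1 --scan | sort > /tmp/db1.keys
    comm -12 /tmp/db0.keys /tmp/db1.keys

    # Keys that exist both in the source redis (db0) and in the target pika
    redis-cli -a pika-pwd -h pika-ip -p pika-port --scan | sort > /tmp/pika.keys
    comm -12 /tmp/db0.keys /tmp/pika.keys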

1. Tool preparation: aof_to_pika.zip

Link: https://pan.baidu.com/s/1kkkp4kxcbdcpkogmwelkrq  extraction code: b0s6

2. specific operation steps

1. Stop the application from writing to redis, then execute dbsize in redis to record the data volume
2. Check whether the original redis has AOF persistence enabled; if not, enter redis and execute config set appendonly yes, then copy appendonly.aof
3. The machine needs a reasonably recent gcc (gcc 4.8.x is proven to work)
4. Install the aof_to_pika tool
   On the prepared machine, create the pika/bin/ directory and execute in it:
        unzip aof_to_pika.zip    // unpack the toolkit
        cd aof_to_pika && make   // enter the toolkit and compile
5. Upload the appendonly.aof file to the pika/bin/output/bin/ directory
6. In the pika/bin/output/bin/ directory, execute "./aof_to_pika -i ./appendonly.aof -h [pika-ip] -p [pika-port] -a [pwd] -v" to migrate
7. After the migration, execute "redis-cli -a [pwd] -h [pika-ip] -p [pika-port] info keyspace 1" to check the migrated data volume (check both the master and the slave to make sure they are in sync); a consolidated sketch of the whole flow follows
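
For reference, the whole flow on one machine looks roughly like the sketch below; the paths, [pika-ip], [pika-port] and [pwd] are placeholders, and the aof_to_pika flags are the ones from step 6:

    # Check whether AOF is already enabled on the source redis; enable it if not
    redis-cli -a pwd1 -h ip1 -p port1 config get appendonly
    redis-cli -a pwd1 -h ip1 -p port1 config set appendonly yes

    # Build the tool and replay the AOF file into pika
    unzip aof_to_pika.zip && cd aof_to_pika && make
    cp /path/to/appendonly.aof output/bin/          # placeholder path to the copied AOF file
    cd output/bin && ./aof_to_pika -i ./appendonly.aof -h [pika-ip] -p [pika-port] -a [pwd] -v

    # Verify the key count that landed in pika (run against both master and slave)
    redis-cli -a [pwd] -h [pika-ip] -p [pika-port] info keyspace 1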

3, Pika-to-Pika data migration scheme

Migrate the data by establishing a master-slave relationship; a combined sketch follows the steps below.

Prerequisites:
    original pika              ip1  port1  pwd1
    new pika cluster (master)  ip2  port2  pwd2
                     (slave)   ip3  port3  pwd3

1. Enter the slave node (ip3 port3) and disconnect it from its master:
       redis-cli -a [pwd3] -h [ip3] -p [port3] slaveof no one
2. Enter the new master (ip2 port2) and establish a master-slave relationship with the original pika:
      redis-cli -a [pwd2] -h [ip2] -p [port2] slaveof [ip1] [port1]
3. Check whether the data on both sides is consistent:
      redis-cli -a [pwd1] -h [ip1] -p [port1] info keyspace 1
      redis-cli -a [pwd2] -h [ip2] -p [port2] info keyspace 1
4. Once the data is consistent, enter the slave node and re-establish the master-slave relationship with the new master:
      redis-cli -a [pwd3] -h [ip3] -p [port3] slaveof pika1 9221
5. Check whether the master-slave data is consistent:
      redis-cli -a [pwd1] -h [ip1] -p [port1] info keyspace 1
      redis-cli -a [pwd2] -h [ip2] -p [port2] info keyspace 1
      redis-cli -a [pwd3] -h [ip3] -p [port3] info keyspace 1
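
Put together, and assuming (as in the Redis procedure of section 1) that the new master is also detached from the original pika once the copy is verified, the sequence looks roughly like this:

    redis-cli -a [pwd3] -h [ip3] -p [port3] slaveof no one            # 1. detach the new cluster's slave
    redis-cli -a [pwd2] -h [ip2] -p [port2] slaveof [ip1] [port1]     # 2. new master pulls data from the original pika
    redis-cli -a [pwd1] -h [ip1] -p [port1] info keyspace 1           # 3. compare key counts...
    redis-cli -a [pwd2] -h [ip2] -p [port2] info keyspace 1           #    ...until both sides agree
    redis-cli -a [pwd2] -h [ip2] -p [port2] slaveof no one            # (assumption, not in the steps above) detach the new master
    redis-cli -a [pwd3] -h [ip3] -p [port3] slaveof [ip2] [port2]     # 4. re-attach the slave to the new master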

4, Migrating Redis to a Redis instance that already contains data

1. Establishing a master-slave relationship is not feasible here, because redis flushes the old data on the replica before setting up the master-slave link.
2. redis-port migration is recommended instead; the method is the same as above.

./redis-port sync -f [source ip]:[source port] -t [target ip]:[target port] --fromversion=9 --toversion=9 -A 123456 -P 123456 -n 4 -p 32 --sockfile=test.tmp --filesize=32GB

Note: during the migration, since both redis instances contain data, you need to decide whether losing duplicate keys is acceptable. By default the migration keeps the values of duplicate keys already present in the target redis; the same keys in the source redis are not migrated.
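
If the source's values should win instead, redis-port provides the --rewrite flag ("Force rewrite when destination restore has the key" in the options listed above); a variant of the same sync command:

    # Same sync as above, plus --rewrite so keys already present in the target
    # are overwritten with the source's values instead of being kept
    ./redis-port sync -f [source ip]:[source port] -t [target ip]:[target port] --fromversion=9 --toversion=9 -A 123456 -P 123456 -n 4 -p 32 --sockfile=test.tmp --filesize=32GB --rewrite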
