Redis learning notes
1. NoSQL overview
- When user information, data, and logs grow in sudden bursts, a NoSQL database is needed
- NoSQL means "Not Only SQL": it supports many data types, scales horizontally, and needs no fixed schema or redundant operations
2. NoSQL features:
- Easy to scale (there are no relationships between the data, so it scales out well)
- Large data volumes with high performance (Redis can write ~80,000 and read ~110,000 times per second; NoSQL caches at record granularity, a fine-grained cache with high performance)
- Diverse data types (no need to design the schema in advance; use it as needed)
- The differences between traditional databases and NoSQL

Traditional database:
- Structured data organization
- SQL
- Relationships stored in separate tables
- Data manipulation and data definition languages
- Strict consistency
- Basic transactions

NoSQL:
- Not only SQL
- No fixed query language
- Key-value, column, document, and graph storage
- Eventual consistency
- CAP theorem and BASE theory
3V and the three highs
The 3Vs of the big-data era mainly describe the data itself:
- Volume (massive data)
- Variety (diverse data)
- Velocity (real-time data)
Three highs in the era of big data:
- High concurrency
- High availability
- High performance
In real enterprise practice, relational and non-relational databases are used together
3. Classification of Nosql
KV key value pair:
- redis
Document databases:
- MongoDB
- MongoDB is a distributed-storage database written in C++, mainly used to store large numbers of documents
- MongoDB is a product between relational and non-relational databases; it is the most feature-rich and most relational-like of the non-relational databases
- CouchDB
Column-store databases:
- HBase
- Distributed file storage
Graph databases:
- These store relationships, not graphics — e.g. Neo4j, InfoGrid
4. Redis
Redis (Remote Dictionary Server) is an open-source, networked, in-memory key-value database written in ANSI C, with log-based persistence, and it provides APIs in many languages. Redis periodically writes updated data to disk, or appends each operation command to a log file, and on top of that implements master-slave replication. It is free, open source, and currently the most popular key-value technology
- In-memory storage with persistence (memory is lost as soon as power is cut, so persistence is essential)
- High efficiency; used as a cache
- Publish/subscribe system
- Map/geographic information analysis
- Timers, counters

Features:
- Diverse data types
- Persistence
- Clustering
- Transactions
Redis has 16 databases, and database 0 is used by default! You can switch databases with select: select 3
Empty the current database: flushdb; empty all databases: flushall
Redis is single-threaded

- Redis is fast because it operates in memory; CPU is not its bottleneck — the machine's memory and the network bandwidth are
- Redis is written in C. High-performance servers are not necessarily multi-threaded, and multi-threading is not necessarily faster than a single thread, because the CPU must perform context switches
- Redis keeps all data in memory, so single-threaded operation is the most efficient: for an in-memory system, avoiding context switches (which cost time) gives the highest efficiency, and all reads and writes happen on one CPU
Common commands:
5. Five data types of Redis
1. String

```bash
keys *              # view all keys in the current database
exists key          # check whether the key exists
move key 1          # move the key to another database
expire key 10       # set the key to expire after 10 seconds
ttl key             # see the key's remaining time to live
set key value       # set a key
get key             # get the value of the key
type key            # view the type of the current key
append key "hello"  # append to the key's value; if the key does not exist, this acts like set
incr key            # increment by 1
decr key            # decrement by 1
incrby key 10       # increment with a step of 10
decrby key 10       # decrement with a step of 10
getrange key 0 3    # get a substring of the value
```
2. List collection
In Redis, the list can be used as a stack, a queue, or a blocking queue. Redis commands are not case sensitive, and list commands start with l
```bash
lpush list one          # insert one or more values at the head of the list
rpush list right        # insert one or more values at the tail of the list
lrange list 0 1         # get the values in the given range
lpop list               # remove the first element (left)
rpop list               # remove the last element (right)
lindex list 0           # get the value at the given index
lrem list 1 value       # remove occurrences of the given value
ltrim list 1 2          # trim the list to the given index range
rpoplpush source dest   # pop the last element of one list and push it onto another
exists list             # check whether the list exists
lset list 0 value       # replace the value at the given index (an update; the key must exist)
linsert list before "sa" "asdd"   # insert "asdd" before "sa"
```
- A list is really a linked list. If the key does not exist, a new list is created; if it exists, new content is appended; if all elements are removed, the empty list no longer exists. Inserting or updating at either end is most efficient, while operations on middle elements are less efficient. A list can be used as a message queue, although RabbitMQ seems to do that job better
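A minimal plain-Java sketch (class and method names are my own, not from the notes) of how the list commands above map onto a double-ended queue — lpush/lpop give stack behaviour, lpush/rpop give queue behaviour:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Plain-Java analogy for Redis list commands: lpush/rpush add at the
// head/tail, lpop/rpop remove from the head/tail.
public class ListAnalogy {
    private final Deque<String> list = new ArrayDeque<>();

    public void lpush(String v) { list.addFirst(v); }
    public void rpush(String v) { list.addLast(v); }
    public String lpop() { return list.pollFirst(); }
    public String rpop() { return list.pollLast(); }

    public static void main(String[] args) {
        ListAnalogy q = new ListAnalogy();
        q.lpush("one");
        q.lpush("two");               // head: two, one :tail
        System.out.println(q.rpop()); // queue behaviour (oldest first) -> one
        System.out.println(q.lpop()); // stack behaviour (newest first) -> two
    }
}
```

This is only an analogy for the semantics; Redis itself implements the list as a linked structure on the server side.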
3. Hash (hash)
A map collection: key → map! The value is itself a map of field-value pairs, so in essence it is not much different from String — it is still simple key-value storage
```bash
hset myhash field value   # store a field-value pair in the hash
hget myhash field         # get the value of a field
hgetall myhash            # get all fields and values in the hash
hdel myhash field         # delete a field
hlen myhash               # get the number of fields in the hash
hexists myhash field      # check whether the given field exists
hkeys myhash              # get all fields only
hvals myhash              # get all values only
```
You can use hash to store user information
4. Zset (ordered set)
On top of set, a score is added for sorting; it can be used as a leaderboard
```bash
zadd myset 1 one                 # add a member with a score used for sorting
zrangebyscore salary -inf +inf   # query all members in score order
zrange salary 0 -1               # query all members
zrem salary member               # remove a member from the set
zcard salary                     # get the number of members in the sorted set
```
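The leaderboard idea can be sketched in plain Java (names here are my own): each member has a score, and range queries return members ordered by score. Redis keeps this ordering internally (via a skip list); this sketch just sorts on demand:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Plain-Java sketch of the zset leaderboard idea: zadd assigns a score,
// zrange returns members in ascending score order.
public class Leaderboard {
    private final Map<String, Double> scores = new HashMap<>();

    public void zadd(double score, String member) {
        scores.put(member, score);
    }

    // Members ordered by ascending score, like ZRANGE key 0 -1.
    public List<String> zrange() {
        return scores.entrySet().stream()
                .sorted(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Leaderboard lb = new Leaderboard();
        lb.zadd(300, "alice");
        lb.zadd(100, "bob");
        lb.zadd(200, "carol");
        System.out.println(lb.zrange()); // [bob, carol, alice]
    }
}
```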
5. Set
(Note: the commands below are actually additional String commands; the Set commands themselves — sadd, smembers, srem, sismember, etc. — appear in the Jedis examples further down.)

```bash
setrange key 1 xx    # overwrite part of the string starting at the given offset
setex key 30 value   # (set with expire) set a value with an expiration time
setnx key value      # (set if not exists) set only if the key does not already exist
mset k1 v1 k2 v2     # set multiple key-value pairs in one command
mget k1 k2 k3        # get multiple values at once
msetnx k1 v1 k2 v2   # atomic batch set: either all succeed or all fail
set user:1 '{name:kangkang,age:1}'   # object-style storage
getset key value     # get the old value first, then set the new one
```
6. Three special types in Redis

geospatial (geographic location)

Locating friends, finding people nearby, calculating taxi distances — geospatial can accurately compute geographic location information and the distance between two places. Generally you download city data and import it via Java

key: member (longitude, latitude)

```bash
geoadd key longitude latitude member   # add a geographic location
geopos key member                      # get the current coordinates of a member
geodist key member1 member2            # return the distance between two given members
```
Hyperloglog data structure

- Redis Hyperloglog: a cardinality-estimation algorithm. A person can visit multiple times but is still counted as one person
- The traditional method stores user ids in a set and uses the number of elements in the set as the count. This saves a large number of user ids, which is very wasteful

```bash
pfadd key a b c        # create/add a group of elements
pfcount key            # count the (approximate) number of distinct elements
pfmerge dest k1 k2     # merge two groups of elements, taking the union
```
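The "traditional method" mentioned above can be sketched in plain Java (class name is my own): a set gives an exact count, but its memory grows with the number of distinct ids — which is exactly what HyperLogLog's fixed-size estimate (roughly 12 KB with about 0.81% error) avoids:

```java
import java.util.HashSet;
import java.util.Set;

// Exact unique-visitor counting with a set, as described in the notes.
// Each user id is stored; the set size is the unique count.
public class UniqueVisitors {
    private final Set<String> ids = new HashSet<>();

    public void visit(String userId) {
        ids.add(userId); // repeat visits are deduplicated
    }

    public int uniqueCount() {
        return ids.size();
    }

    public static void main(String[] args) {
        UniqueVisitors uv = new UniqueVisitors();
        uv.visit("u1");
        uv.visit("u2");
        uv.visit("u1"); // repeat visit, still one person
        System.out.println(uv.uniqueCount()); // 2
    }
}
```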
Bitmaps bit storage
Used for two-state statistics about users, such as active vs. inactive, recorded as 0/1 bits. Bitmaps only work when there are exactly two states; they can also record, for example, clock-ins from Monday to Sunday
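The weekly clock-in example can be sketched with Java's BitSet (names and the Monday=0 convention are my own assumptions), mirroring the SETBIT/GETBIT/BITCOUNT idea:

```java
import java.util.BitSet;

// Sketch of the bitmap idea: one bit per day, Monday = 0 .. Sunday = 6.
public class WeeklySignIn {
    private final BitSet days = new BitSet(7);

    public void setbit(int day, boolean signedIn) { days.set(day, signedIn); }
    public boolean getbit(int day) { return days.get(day); }
    public int bitcount() { return days.cardinality(); } // days signed in

    public static void main(String[] args) {
        WeeklySignIn w = new WeeklySignIn();
        w.setbit(0, true); // clocked in on Monday
        w.setbit(3, true); // clocked in on Thursday
        System.out.println(w.getbit(3));  // true
        System.out.println(w.bitcount()); // 2
    }
}
```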
7. Transactions in Redis
The essence of a transaction in Redis: a group of commands executed together. The group of commands is serialized and, during execution, runs in order — one-off, sequential, and exclusive; the whole series of commands is executed
Redis transactions have no concept of isolation levels. Commands in a transaction are not executed immediately — they only run when the exec command is issued
A single Redis command is atomic, but transactions are not atomic
redis transactions:
- Open transaction (multi)
- Order to join the team
- Execute transaction (exec)
- Discard the transaction (discard) — the alternative to executing it — in which case none of the queued commands will be executed
After a transaction is executed (or discarded), a new transaction must be opened for the next group of commands
Compile-time exceptions (code problems, wrong commands): none of the commands in the transaction will be executed
Runtime exceptions (e.g., incrementing a non-numeric value): when exec runs, the other commands in the queue execute normally and only the faulty command throws an error
```java
package com.hema;

import com.alibaba.fastjson.JSONObject;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TestTs {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);

        JSONObject jsonObject = new JSONObject();
        jsonObject.put("aa", "asdd");
        jsonObject.put("bb", "adassdd");
        String s = jsonObject.toJSONString();

        jedis.watch("json"); // watch takes a key name, and must come before multi
        // Open the transaction
        Transaction multi = jedis.multi();
        try {
            multi.set("json", s);
            multi.set("json2", s);
            // Execute the commands
            multi.exec(); // if successful, execute the transaction
        } catch (Exception e) {
            multi.discard(); // abandon the transaction
            e.printStackTrace();
        } finally {
            System.out.println(jedis.get("json"));
            System.out.println(jedis.get("json2"));
            jedis.close(); // close the connection
        }
    }
}
```
Watch (monitoring)!
Pessimistic lock:
- Very pessimistic: assumes something will go wrong at any time, so it locks on every operation
Optimistic lock:
- Very optimistic: assumes nothing will go wrong, so it never locks; when updating the data, it checks whether anyone modified the data in the meantime
- Get the version
- Compare the version when updating

```bash
watch key   # monitor (lock)
unwatch     # stop monitoring (unlock)
```

If a transaction fails to execute, unwatch first, then watch again to obtain the latest value; the monitored value is compared at exec time — if it has not changed, the transaction succeeds; if it has changed, it fails
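The optimistic-lock pattern behind WATCH/MULTI/EXEC can be sketched in plain Java with a version number (the class and its fields are my own illustration, not a Redis API): read the value and its version, compute the update, then apply it only if the version is unchanged — otherwise re-read and retry, like re-WATCHing after a failed EXEC:

```java
import java.util.concurrent.atomic.AtomicLong;

// Optimistic locking with a version number: updates succeed only if no
// one else has changed the data since it was read.
public class OptimisticCounter {
    private volatile long value = 100;
    private final AtomicLong version = new AtomicLong();

    public synchronized boolean tryUpdate(long expectedVersion, long newValue) {
        if (version.get() != expectedVersion) {
            return false; // someone modified the data in the meantime -> retry
        }
        value = newValue;
        version.incrementAndGet();
        return true;
    }

    public long getValue() { return value; }
    public long getVersion() { return version.get(); }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        long v = c.getVersion();
        boolean ok = c.tryUpdate(v, c.getValue() - 20); // like DECRBY under WATCH
        System.out.println(ok + " " + c.getValue());    // true 80
        boolean stale = c.tryUpdate(v, 999);            // stale version: caller must retry
        System.out.println(stale);                      // false
    }
}
```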
8. Jedis
Jedis is the Java client officially recommended by Redis — middleware for operating Redis from Java. If you use Java to operate Redis, you should be very familiar with it
test
- Import dependency:
```xml
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>3.7.0</version>
</dependency>
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.78</version>
</dependency>
```
- Test connection
```java
package com.hema;

import redis.clients.jedis.Jedis;

public class Testping {
    public static void main(String[] args) {
        // 1. Create a Jedis object
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // 2. Every Jedis method corresponds to a Redis command
        String ping = jedis.ping();
        System.out.println(ping);
    }
}
```
Common Jedis APIs
Operation on key:
```java
package com.hema;

import redis.clients.jedis.Jedis;
import java.util.Set;

public class TestKey {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println("Clear the current database: " + jedis.flushDB());
        System.out.println("Clear all databases: " + jedis.flushAll());
        System.out.println("Does the key 'name' exist: " + jedis.exists("name"));
        System.out.println("Add <'username','kuang'>: " + jedis.set("username", "kuang"));
        System.out.println("Add <'password','kuang'>: " + jedis.set("password", "kuang"));
        System.out.println("All keys in the current database:");
        Set<String> keys = jedis.keys("*");
        System.out.println(keys);
        System.out.println("Delete username: " + jedis.del("username"));
        System.out.println("Does username still exist: " + jedis.exists("username"));
        System.out.println("Type of the value stored under password: " + jedis.type("password"));
        System.out.println("Return a random key: " + jedis.randomKey());
        System.out.println("Rename the key: " + jedis.rename("password", "pass"));
        System.out.println("Get the renamed key: " + jedis.get("pass"));
        System.out.println("Select a database by index: " + jedis.select(0));
        System.out.println("Number of keys in the current database: " + jedis.dbSize());
    }
}
```
- String
```java
package com.hema;

import redis.clients.jedis.Jedis;
import java.util.concurrent.TimeUnit;

public class TestString {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.flushDB()); // clear the database
        // Add data
        System.out.println(jedis.set("key1", "value1"));
        System.out.println(jedis.set("key2", "value2"));
        System.out.println(jedis.set("key3", "value3"));
        System.out.println("Delete key2: " + jedis.del("key2"));
        System.out.println("Get key2: " + jedis.get("key2"));
        System.out.println("Modify key1: " + jedis.set("key1", "kangkang"));
        System.out.println("Get the modified value: " + jedis.get("key1"));
        System.out.println("Append to key3: " + jedis.append("key3", "end"));
        System.out.println("Get the modified key3: " + jedis.get("key3"));
        System.out.println("Add multiple pairs: " + jedis.mset("key01", "value01", "key02", "value02"));
        System.out.println("Get multiple values: " + jedis.mget("key01", "key02"));
        System.out.println("Delete multiple pairs: " + jedis.del("key01", "key02"));
        // Distributed lock
        System.out.println("Set only if absent, so the original key is not overwritten: " + jedis.setnx("kk", "sad"));
        System.out.println("Get the new key: " + jedis.get("kk"));
        System.out.println("=============== Add a key with an expiration time ======================");
        System.out.println(jedis.setex("key3", 2, "value23"));
        System.out.println("Get the expiring value: " + jedis.get("key3"));
        try {
            // Sleep three seconds so the key expires
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("Get it again: " + jedis.get("key3"));
        System.out.println("Get the old value and set a new one: " + jedis.getSet("key2", "dasasasasasasas"));
        System.out.println(jedis.get("key2"));
        System.out.println("Substring of key2: " + jedis.getrange("key2", 2, 4));
    }
}
```
- list
```java
package com.hema;

import redis.clients.jedis.Jedis;

public class TestList {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.flushDB()); // clear the database
        jedis.lpush("collections", "ArrayList", "vector", "stack", "hash");
        jedis.lpush("collections", "hashset");
        jedis.lpush("collections", "TreeMap");
        jedis.lpush("collections", "TreeSet");
        // -1 is the last element, -2 the second to last
        System.out.println("Contents of collections: " + jedis.lrange("collections", 0, -1));
        System.out.println("Elements 0-3 of collections: " + jedis.lrange("collections", 0, 3));
        // Delete the specified value; the second parameter is the number of
        // occurrences to remove (in case of duplicates); later additions go first
        System.out.println("Remove occurrences: " + jedis.lrem("collections", 2, "hash"));
        System.out.println("Contents of the list: " + jedis.lrange("collections", 0, -1));
        System.out.println("Trim away elements outside indexes 0-3: " + jedis.ltrim("collections", 0, 3));
        System.out.println("All elements now: " + jedis.lrange("collections", 0, -1));
        System.out.println("Pop from the left: " + jedis.lpop("collections"));
        System.out.println("All elements: " + jedis.lrange("collections", 0, -1));
        System.out.println("Push on the left end: " + jedis.lpush("collections", "ds"));
        System.out.println("Push on the right end: " + jedis.rpush("collections", "ds"));
        System.out.println("All elements: " + jedis.lrange("collections", 0, -1));
        System.out.println("Pop from the right: " + jedis.rpop("collections"));
        System.out.println("All contents: " + jedis.lrange("collections", 0, -1));
        System.out.println("Set the element at index 1: " + jedis.lset("collections", 1, "qwqwqwq"));
        System.out.println("All contents: " + jedis.lrange("collections", 0, -1));
        System.out.println("Length of collections: " + jedis.llen("collections"));
        System.out.println("Element at index 2: " + jedis.lindex("collections", 2));
        jedis.lpush("sortedList", "1", "10", "4", "9", "3");
        System.out.println("Before sorting: " + jedis.lrange("sortedList", 0, -1));
        System.out.println("Sort: " + jedis.sort("sortedList"));
        System.out.println("After sorting: " + jedis.lrange("sortedList", 0, -1));
    }
}
```
- hash
```java
package com.hema;

import redis.clients.jedis.Jedis;
import java.util.HashMap;
import java.util.Map;

public class TestHash {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.flushDB()); // clear the database
        Map<String, String> map = new HashMap<>();
        map.put("key1", "value1");
        map.put("key2", "value2");
        map.put("key3", "value3");
        map.put("key4", "value4");
        // Add the map to Redis as a hash
        jedis.hmset("hash", map);
        // Add a single field key5 with value5
        jedis.hset("hash", "key5", "value5");
        System.out.println("All field-value pairs: " + jedis.hgetAll("hash"));
        System.out.println("All fields: " + jedis.hkeys("hash"));
        System.out.println("All values: " + jedis.hvals("hash"));
        System.out.println("Add key6 only if it does not exist: " + jedis.hsetnx("hash", "key6", "value6"));
        System.out.println("All field-value pairs: " + jedis.hgetAll("hash"));
        System.out.println("Delete one or more fields: " + jedis.hdel("hash", "key2"));
        System.out.println("All field-value pairs: " + jedis.hgetAll("hash"));
        System.out.println("Number of fields: " + jedis.hlen("hash"));
        System.out.println("Does key2 exist: " + jedis.hexists("hash", "key2"));
        System.out.println("Get values from the hash: " + jedis.hmget("hash", "key3"));
    }
}
```
- set
```java
package com.hema;

import redis.clients.jedis.Jedis;

public class TestSet {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.flushDB()); // clear the database
        System.out.println("======= Add elements to the set ======");
        System.out.println(jedis.sadd("eleset", "q1", "q2", "q3", "q4", "q5", "q6"));
        System.out.println(jedis.sadd("eleset", "q8"));
        System.out.println("All elements of eleset: " + jedis.smembers("eleset"));
        System.out.println("Delete element q0: " + jedis.srem("eleset", "q0"));
        System.out.println("Delete multiple elements: " + jedis.srem("eleset", "q5", "q4"));
        System.out.println("All elements of eleset: " + jedis.smembers("eleset"));
        System.out.println("Randomly remove an element: " + jedis.spop("eleset"));
        System.out.println("Randomly remove an element: " + jedis.spop("eleset"));
        System.out.println("All elements: " + jedis.smembers("eleset"));
        System.out.println("Number of elements in eleset: " + jedis.scard("eleset"));
        System.out.println("Does q2 exist in eleset: " + jedis.sismember("eleset", "q2"));
        System.out.println(jedis.sadd("eleset1", "q1", "q2", "q3", "q4", "q5", "q6"));
        System.out.println(jedis.sadd("eleset2", "q1", "q2", "q3"));
        System.out.println("Move q1 from eleset1 into eleset3: " + jedis.smove("eleset1", "eleset3", "q1"));
        System.out.println("Elements in eleset3: " + jedis.smembers("eleset3"));
        System.out.println("======= Set operations =========");
        System.out.println("Intersection of eleset1 and eleset2: " + jedis.sinter("eleset1", "eleset2"));
        System.out.println("Union of eleset1 and eleset2: " + jedis.sunion("eleset1", "eleset2"));
        System.out.println("Difference of eleset1 and eleset2: " + jedis.sdiff("eleset1", "eleset2"));
        jedis.sinterstore("eleset4", "eleset1", "eleset2"); // store the intersection in eleset4
        System.out.println("Elements in eleset4: " + jedis.smembers("eleset4"));
    }
}
```
- zset
9. Spring boot integrates redis
Spring Boot operates on data through Spring Data. Since Spring Boot 2.x, the original Jedis has been replaced by Lettuce
- jedis:
  Uses direct connections; operating from multiple threads is unsafe. To avoid that, use a JedisPool connection pool
- lettuce:
  Based on Netty; instances can be shared across multiple threads, so there are no thread-safety problems and the number of threads can be reduced — more like the NIO pattern
```java
package com.hema.redis02springboot;

import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.RedisTemplate;

@SpringBootTest
class Redis02SpringbootApplicationTests {

    @Autowired
    RedisTemplate redisTemplate;

    @Test
    void contextLoads() {
        // opsForValue: operate on strings
        // opsForList: operate on lists
        // opsForSet: operate on sets
        // opsForHash: operate on hashes
        // opsForZSet: operate on sorted sets
        // Besides these, common methods (transactions, basic CRUD) can be
        // called on the RedisTemplate directly
        // RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
        // connection.flushAll(); // delete the contents of all databases
        // connection.flushDb();  // delete the contents of the current database
        redisTemplate.opsForValue().set("key", "Yang Xiaohua");
        System.out.println(redisTemplate.opsForValue().get("key"));
    }
}
```
In real development we usually pass values as JSON. To store an object directly, it must be serializable (implement the Serializable interface). In practice all our POJOs are serialized, and companies generally encapsulate their own RedisTemplate
Custom serialization:
```java
package org.magic.redis.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectMapper.DefaultTyping;
import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    @Bean
    @SuppressWarnings("all")
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(connectionFactory);
        // Custom Jackson serialization configuration
        Jackson2JsonRedisSerializer jsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        objectMapper.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, DefaultTyping.NON_FINAL);
        jsonRedisSerializer.setObjectMapper(objectMapper);
        // Keys are serialized as plain strings
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        template.setKeySerializer(stringRedisSerializer);
        // Hash keys are also serialized as strings
        template.setHashKeySerializer(stringRedisSerializer);
        // Values are serialized with Jackson
        template.setValueSerializer(jsonRedisSerializer);
        // Hash values are also serialized with Jackson
        template.setHashValueSerializer(jsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}
```
10. redis.conf details
- On startup, Redis starts through the configuration file
- Units are case-insensitive
- Includes: other configuration files can be included
network
```bash
bind 127.0.0.1   # bound IP address
port 6379        # port number
```
Logging
```bash
# Specify the server verbosity level.
# This can be one of:
# debug   (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice  (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
loglevel notice   # log level

# Specify the log file name. Also 'stdout' can be used to force
# Redis to log on the standard output.
logfile stdout    # log output file name

databases 16      # 16 databases by default
```
snapshot
Persistence: if the configured number of write operations occurs within the given time, the data is persisted to a file. Redis is an in-memory database; without persistence, the data is lost as soon as power is cut
```bash
# Save the DB on disk:
#
#   save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# Note: you can disable saving at all by commenting out all the "save"
# lines, or by adding a save directive with a single empty string
# argument, like: save ""

save 900 1      # persist if at least 1 key changed within 900 s (15 min)
save 300 10     # persist if at least 10 keys changed within 300 s (5 min)
save 60 10000   # persist if at least 10000 keys changed within 60 s

stop-writes-on-bgsave-error yes  # whether to keep accepting writes after a failed save
rdbcompression yes   # compress rdb files (costs some CPU)
rdbchecksum yes      # checksum rdb files on save; repair on error
dir ./               # directory where rdb files are saved
```
REPLICATION — master-slave replication configuration
SECURITY security
You can set a Redis password here; there is no password by default

```bash
requirepass foobared
```
LIMITS limits
```bash
maxclients 10000    # maximum number of clients connected to Redis
maxmemory <bytes>   # maximum memory capacity of Redis
maxmemory-policy volatile-lru   # strategy when the memory limit is reached,
                                # e.g. evict expiring keys by LRU, or report an error
```
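The LRU idea behind `maxmemory-policy volatile-lru`/`allkeys-lru` can be sketched in plain Java with an access-ordered `LinkedHashMap` (the class is my own illustration; Redis actually uses an approximated, sampling-based LRU, not a strict one): when the store is full, the least-recently-used entry is evicted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Strict LRU store: evicts the least-recently-used entry once the
// capacity is exceeded.
public class LruStore<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruStore(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true -> iteration in LRU order
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries; // evict when over capacity
    }

    public static void main(String[] args) {
        LruStore<String, String> store = new LruStore<>(2);
        store.put("a", "1");
        store.put("b", "2");
        store.get("a");      // touch "a", so "b" becomes least recently used
        store.put("c", "3"); // over capacity: "b" is evicted
        System.out.println(store.keySet()); // [a, c]
    }
}
```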
APPEND ONLY MODE aof configuration
```bash
appendonly no                    # off by default; Redis persists with rdb by default
appendfilename "appendonly.aof"  # aof persistence file name

# appendfsync always   # sync on every modification
appendfsync everysec   # sync once per second
# appendfsync no       # never sync; the operating system flushes the data itself (fastest)
```
11. RDB operation
Within a given time interval, the data in memory is written to disk as a snapshot; on recovery, the snapshot file is read straight back into memory. Redis forks a separate child process for persistence, and the main process performs no I/O, which keeps performance high. If large-scale data recovery is required, RDB is more efficient than AOF. RDB's disadvantage is that the data since the last snapshot may be lost. Redis uses RDB by default
The RDB save file is dump.rdb
AOF saves appendonly.aof
Trigger mechanism
- When a save rule is satisfied, the rdb rules trigger automatically (an rdb file is generated)
- Executing the flushall command also triggers the rdb rules
- Exiting Redis also generates an rdb file
After the backup is completed, a dump.rdb file will be generated
How to restore an rdb file
- Just put the rdb file in Redis's startup directory; on startup, Redis automatically checks for dump.rdb and restores the data in it!
Advantages:
- Suitable for large-scale data recovery
- Suitable when data-integrity requirements are not strict
Disadvantages:
- The snapshot process runs at intervals; if Redis crashes unexpectedly, the data modified since the last snapshot is lost
- Forking the child process takes up some extra memory
12. AOF operation
Record all our write commands, and re-execute the whole file on recovery
Every write operation is recorded as a log: all write instructions executed by Redis are recorded, in a file that may only be appended to, never rewritten in place. When Redis restarts, it reads the file and rebuilds the data by executing the commands from front to back, completing the data recovery
AOF saves the appendonly.aof file, which is not enabled by default. We need to change the configuration information in the configuration file. After modifying the file, we need to restart it to take effect
If the AOF file is corrupted, it needs to be repaired. Redis provides a tool, redis-check-aof --fix, which either discards everything after the corruption or just the broken commands
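The AOF recovery idea described above — append every write command to a log, then replay the log from front to back to rebuild the state — can be sketched in plain Java (class and method names are my own, and the "log" is an in-memory list rather than a file):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the append-only-file idea: writes go to the log first,
// and a restart rebuilds the data by replaying the log in order.
public class AofSketch {
    private final List<String[]> log = new ArrayList<>(); // append-only log
    private final Map<String, String> data = new HashMap<>();

    public void set(String key, String value) {
        log.add(new String[]{"SET", key, value}); // record the command
        data.put(key, value);
    }

    public void del(String key) {
        log.add(new String[]{"DEL", key});
        data.remove(key);
    }

    // Simulated restart: rebuild the state by replaying the log front to back.
    public Map<String, String> replay() {
        Map<String, String> rebuilt = new HashMap<>();
        for (String[] cmd : log) {
            if (cmd[0].equals("SET")) {
                rebuilt.put(cmd[1], cmd[2]);
            } else if (cmd[0].equals("DEL")) {
                rebuilt.remove(cmd[1]);
            }
        }
        return rebuilt;
    }

    public static void main(String[] args) {
        AofSketch aof = new AofSketch();
        aof.set("k1", "v1");
        aof.set("k1", "v2"); // the later write wins on replay
        aof.del("k2");
        System.out.println(aof.replay()); // {k1=v2}
    }
}
```

This also hints at why AOF rewriting exists: the log keeps every historical command, so it grows much faster than the state it encodes.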
Advantages:
- Every modification can be synced, giving better file integrity
- With everysec, data is synced once per second, so at most one second of data may be lost
- Redis tracks the file size; once it exceeds 64 MB, the rewrite mechanism triggers and a child process is forked to rewrite the file
Disadvantages:
- As data files go, aof is much larger than rdb, and repair/recovery is also slower than rdb
- aof also runs slower than rdb, so Redis's default configuration is rdb persistence
- Cache only: if you only need the data while the server is running, you can also skip persistence entirely
- Enabling both persistence methods at the same time:
  - In this case, when Redis restarts, it loads the AOF file first to restore the original data, because normally the data set saved by AOF is more complete than the one in the RDB file
  - RDB data is not real-time. When both are enabled, the server only looks for the AOF file on restart. Still, RDB is better suited for backing up the database and starts faster, while AOF changes constantly and is hard to back up
- Performance recommendations:
- Because RDB files are only used for backup, it is recommended to persist RDB only on slave nodes, and backing up once every 15 minutes is enough
- If AOF is enabled, the benefit is that in the worst case no more than two seconds of data is lost, and the startup script simply loads the AOF file. The cost is, first, continuous I/O, and second, that at the end of an AOF rewrite the new data produced during the rewrite must be written to the new file, where blocking is almost inevitable. As long as hard-disk space permits, minimize the frequency of AOF rewriting by raising the base size that triggers a rewrite
- AOF can also be turned off entirely, relying on Master-Slave replication alone for high availability, which saves a large amount of I/O. The price is that if the Master and Slave go down at the same time, a lot of data will be lost
13. Redis publish and subscribe
Redis publish subscribe is a message communication mode. The sender sends messages and the subscriber receives messages
Redis client can subscribe to any number of channels
The following figure describes the relationship between a channel and its message subscribers: a channel is essentially a dictionary, mapping the channel name to the list of clients subscribed to it
When a new message is sent to channel1 through the PUBLISH command, the message is pushed one by one to the subscribers of that channel
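A minimal Python sketch of this channel-as-dictionary idea (illustrative only; real Redis keeps a linked list of client connections per channel and pushes messages over the network):

```python
# Toy pub/sub: a dict maps each channel name to its list of subscribers.
class PubSub:
    def __init__(self):
        self.channels = {}  # channel name -> list of subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        # Push the message to every subscriber of the channel, one by one
        subscribers = self.channels.get(channel, [])
        for cb in subscribers:
            cb(message)
        return len(subscribers)  # like Redis PUBLISH, return the receiver count

inbox = []
bus = PubSub()
bus.subscribe("channel1", inbox.append)
count = bus.publish("channel1", "hello")
print(count, inbox)  # 1 ['hello']
```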
Commands: SUBSCRIBE, UNSUBSCRIBE, PUBLISH, PSUBSCRIBE, PUNSUBSCRIBE
Use cases:
- Implement message system!
- Real-time chat room (a channel can serve as the room, and messages are echoed back to everyone in the channel)
- Subscribe to concerns
- Slightly more complex scenarios are handled with message-oriented middleware instead, because those tools are purpose-built for it
14. Redis master-slave replication
Master-Slave replication means copying data from one Redis server (called the master/leader) to other Redis servers (called the Slave/follower). Replication is one-way: data flows only from the master node to Slave nodes. The master mainly handles writes, the Slaves mainly handle reads. By default, every Redis server is a master node; a master can have any number of Slave nodes (including none), but a Slave node can have exactly one master. The main functions of master-Slave replication include:
- Data redundancy: master-slave replication realizes the hot backup of data, which is a way of data redundancy other than persistence
- Fault recovery: when the master node has a problem, a slave node can take over the service; rapid fault recovery is really a form of service redundancy
- Load balancing: on top of master-slave replication, combined with read-write separation, the master node provides write service and the slave nodes provide read service (that is, connect to the master to write Redis data and to a slave to read it), sharing the load across servers. Especially in read-heavy, write-light scenarios, spreading reads across multiple slave nodes greatly improves the concurrency of the Redis server
- In addition to the functions above, master-slave replication is the basis on which sentinels and clusters are built; it is therefore the foundation of Redis high availability
Generally speaking, using only a single Redis in a project is not viable:
- Structurally, a single Redis server is a single point of failure, and one server handling the entire request load is under too much pressure
- In terms of capacity, a single Redis server's memory is limited. Even with 256G of RAM, not all of it can serve as Redis storage; as a rule of thumb, a single Redis instance should not use more than 20G. Typically, items on an e-commerce site are uploaded once and browsed countless times, i.e. read-heavy and write-light
Master-slave replication with read-write separation sends the roughly 80% of operations that are reads to the slaves, reducing the pressure on each server; it is used in production architectures all the time.
Environment configuration
```
127.0.0.1:6379> info replication
# Replication
role:master                        # role
connected_slaves:0                 # number of connected slaves
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
```
Copy the configuration file three times, one per instance, then modify the corresponding settings in each copy (on Windows; Redis cluster only exists since redis 3.0):
- port: the port number the process listens on
- pidfile: records the process id (not used on Windows); the file acts as a lock that prevents the program from being started twice
- logfile: the log file name, specifying where the log is written
- dbfilename: the name and location of the dump.rdb persistence file
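For example, the copy for a second instance might change these four settings (file names here are assumed, following the common redis6380.conf naming convention):

```conf
# redis6380.conf -- settings changed for the second instance
port 6380
pidfile /var/run/redis_6380.pid
logfile "6380.log"
dbfilename dump6380.rdb
```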
Configuring with the command line `SLAVEOF <host> <port>` is temporary; to make it permanent, put it in the configuration file. The master can write; a slave cannot write, only read, and automatically saves all information and data from the master. If the master disconnects, the slaves remain connected to it but there are no write operations; if the master then comes back, the slaves can still read what it writes directly. If the cluster was configured on the command line, a node becomes a master again after recovering from a crash; as soon as it becomes a slave once more, it immediately fetches the values from the master
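The two forms side by side (host and port values are assumed, for a master on the default port):

```conf
# Permanent: in the slave's redis.conf
slaveof 127.0.0.1 6379
# (Redis 5+ also accepts the synonym directive "replicaof")

# Temporary: from redis-cli on the slave
#   SLAVEOF 127.0.0.1 6379    # follow this master until restart
#   SLAVEOF no one            # promote this slave back to a master
```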
Replication principle
After the slave starts and connects to the master successfully, it sends a sync command
On receiving the command, the master starts a background save process and, at the same time, collects all incoming commands that modify the data set. After the background process finishes, the master transfers the whole data file to the slave, completing one full synchronization
Full copy: when the Slave service receives the database file, it saves it and loads it into memory
Incremental replication: the Master continues to pass all new collected modification commands to the slave in turn to complete synchronization
Whenever a slave reconnects to the master, a full synchronization (full replication) is performed automatically; incremental replication then covers, as the name suggests, the data added after the connection
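The two phases can be sketched in a few lines of Python (a toy model of the idea, not the real replication protocol):

```python
# Toy master/slave: full copy on connect, then incremental forwarding of writes.
class Master:
    def __init__(self):
        self.data = {}
        self.slaves = []

    def attach(self, slave):
        # Full copy: on sync, ship a snapshot of the whole data set
        slave.data = dict(self.data)
        self.slaves.append(slave)

    def set(self, key, value):
        self.data[key] = value
        # Incremental replication: forward each new write to every slave
        for slave in self.slaves:
            slave.data[key] = value

class Slave:
    def __init__(self):
        self.data = {}

m, s = Master(), Slave()
m.set("k1", "v1")   # written before the slave connects
m.attach(s)         # full copy happens on first connection
m.set("k2", "v2")   # propagated incrementally afterwards
print(s.data)       # {'k1': 'v1', 'k2': 'v2'}
```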
Manual failover
If the master disconnects, we can run SLAVEOF no one on a slave to make it the new master, and point the remaining slaves at it manually. If the original master is later repaired, it has to be reconfigured
Sentinel mode
The older approach to master-slave switching: when the master server goes down, a slave must be manually switched to master. This requires human intervention, is time-consuming and laborious, and leaves the service unavailable for a while. The sentinel solves this problem: it monitors in the background whether the master has failed and, if it has, automatically promotes a slave to master based on the number of votes.
Sentinel mode is a special mode. Redis provides the sentinel command, and the sentinel is an independent process that runs on its own. The principle: the sentinel sends commands and waits for the Redis servers' responses, thereby monitoring multiple running Redis instances
The role of sentinels:
- Send commands and have the Redis servers return their running status, monitoring both the master server and the slave servers
- When the sentinel detects that the master is down, it automatically promotes one of the slaves to master, then notifies the other servers via publish/subscribe to modify their configuration files and switch to the new master
A single sentinel monitoring Redis can itself fail, so in practice multiple sentinels are used, and the sentinels also monitor each other, forming a multi-sentinel mode
Suppose the master server goes down and sentinel 1 detects it first. The system does not immediately start the election and failover: at this point only sentinel 1 subjectively believes the master is unavailable, a state called subjective offline. When the other sentinels also detect that the master is unavailable and their number reaches a configured threshold, the master is considered objectively offline, and the sentinels hold a vote; the failover is initiated by the sentinel chosen by the vote. After the switch succeeds, the result is broadcast via publish/subscribe so that every sentinel switches the slaves it monitors over to the new master
The sentinel configuration file sentinel.conf
```conf
sentinel monitor myredis 127.0.0.1 6379 1
# sentinel monitor <monitored-name> <host> <port> <quorum>
```
The trailing number 1 is the quorum: after the master hangs, the slaves vote on who takes over as the new master, and the one with the most votes becomes master. A quorum of 1 means that as soon as a single sentinel believes the master is down, the election begins; put bluntly, one sentinel thinking the master is down is enough for it to be treated as down
- If the host comes back, it can only be merged into the new master as a slave
Advantages:
- Sentinel cluster is based on master-slave replication. It has all the advantages of master-slave replication
- The master-slave can be switched, and the fault can be transferred, so the system availability will be better
- Sentinel mode is the upgrade of master-slave mode, which is more robust
Disadvantages:
- redis is not easy to expand online. Once the cluster reaches the upper limit, online expansion will be very troublesome
15. Redis cache penetration and avalanche
The use of Redis cache greatly improves the performance and efficiency of the application, especially the convenience of data query. At the same time, it also brings some problems
Cache penetration
Cache penetration: a user queries for some data and finds nothing in the Redis in-memory database (a cache miss), so the query falls through to the persistence-layer database, where it also fails. When many users miss the cache like this, they all query the persistence layer, putting great pressure on the database. That is cache penetration
Solutions
1. Bloom filter:
A Bloom filter is a data structure that stores hashes of all possible query parameters. Requests are checked in the control layer first and discarded if they cannot possibly match, avoiding query pressure on the underlying storage system
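A minimal Bloom filter sketch in Python (the bit-array size, hash count, and key names are illustrative, not tuned for any real workload):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size  # the bit array

    def _positions(self, item):
        # Derive k bit positions by salting one hash function k ways
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False => definitely absent (safe to reject before hitting the DB);
        # True  => possibly present (false positives can occur)
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True
bf.might_contain("user:999")         # rarely a false positive, usually False
```

Because "definitely absent" answers are exact, the filter can reject bogus keys at the control layer and never touch the database for them.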
2. Cache empty objects:
- If null values can be cached, the cache needs more space to store the extra keys
- Even with an expiration time set on the null value, the cache layer and the storage layer will still be inconsistent for a while
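A sketch of caching empty objects (the in-process `cache` and `db` dicts stand in for Redis and the persistence layer; names and TTLs are made up):

```python
import time

NULL_MARKER = object()        # sentinel meaning "the DB had nothing"
cache = {}                    # key -> (value, expires_at)
db = {"user:1": "alice"}      # stand-in for the persistence layer

def get(key, null_ttl=60, ttl=3600):
    entry = cache.get(key)
    if entry and entry[1] > time.time():          # cache hit, not expired
        value = entry[0]
        return None if value is NULL_MARKER else value
    value = db.get(key)                           # fall through to the DB
    if value is None:
        # Cache the miss itself, with a short expiry to limit staleness
        cache[key] = (NULL_MARKER, time.time() + null_ttl)
        return None
    cache[key] = (value, time.time() + ttl)
    return value

print(get("user:1"))        # alice
print(get("user:99"))       # None -- and the miss is now cached
print("user:99" in cache)   # True: repeat misses no longer reach the DB
```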
Cache breakdown
Note the difference between cache penetration and cache breakdown. Cache breakdown is about a single very hot key carrying heavy, concentrated concurrency; the instant that key expires, the concurrent requests break through the cache and hit the database directly. The data involved is typically hot data: because the cache expired, everyone queries the database for the latest value at the same time and writes it back to the cache, causing an instantaneous spike in database pressure
Solutions
1. Set the hotspot data to never expire
From the cache level, this problem will not occur without setting the expiration time
2. Add mutex lock
Distributed lock: use a distributed lock to guarantee that, for each key, only one thread at a time may query the back-end service. The other threads, lacking the lock, simply wait; the pressure shifts from the database to the distributed lock
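The idea, sketched with an in-process lock (a real deployment would use a distributed lock, e.g. Redis `SET key value NX PX`, but the rebuild logic is the same):

```python
import threading
import time

cache = {}
lock = threading.Lock()
db_queries = 0

def load_from_db(key):
    global db_queries
    db_queries += 1
    time.sleep(0.01)            # simulate a slow database query
    return f"value-of-{key}"

def get(key):
    if key in cache:
        return cache[key]
    with lock:                  # only one thread may rebuild the key
        if key in cache:        # double-check: another thread rebuilt it
            return cache[key]
        cache[key] = load_from_db(key)
        return cache[key]

# 20 concurrent requests for the same expired hot key
threads = [threading.Thread(target=get, args=("hot",)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_queries)  # 1 -- the database was queried only once
```

The double-check inside the lock is what prevents the waiting threads from each re-querying the database after they acquire the lock in turn.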
Cache avalanche
Cache avalanche: at some moment, a large portion of the cache expires all at once, or Redis itself goes down
One cause of an avalanche is a large number of keys expiring concurrently at the same time, so the requests all go straight to the database
Solution
1. High availability of redis:
Set up more redis and build clusters
2. Rate limiting and degradation
After the cache fails, you can control the number of threads reading and writing to the database cache by locking or queuing. For example, for a key, only one thread is allowed to query data and write to the cache, while other threads wait
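One way to sketch this limiting idea is with a semaphore capping concurrent database readers (the cap of 2 and all names here are assumed for illustration):

```python
import threading
import time

db_semaphore = threading.Semaphore(2)  # at most 2 threads may query the DB
active = 0
peak = 0
counter_lock = threading.Lock()

def query_db(key, results):
    global active, peak
    with db_semaphore:                 # wait here if 2 readers are active
        with counter_lock:
            active += 1
            peak = max(peak, active)   # record observed concurrency
        time.sleep(0.01)               # simulate the query
        results.append(f"row-for-{key}")
        with counter_lock:
            active -= 1

results = []
threads = [threading.Thread(target=query_db, args=(i, results)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), peak)  # all 8 served, but never more than 2 at once
```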
3. Data preheating
Data preheating means accessing the data ahead of the formal deployment, loading it into the Redis cache in advance, and setting different expiration times on the keys so that cache expiry is spread as evenly as possible
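The "different expiration times" part is usually done by adding random jitter to a base TTL, as in this sketch (the base TTL, jitter window, and key names are assumptions):

```python
import random

BASE_TTL = 3600  # one hour, illustrative

def preheat(keys):
    """Plan a TTL per key, spreading expiry so keys don't all die at once."""
    plan = {}
    for key in keys:
        # add up to 10 minutes of random jitter to each key's TTL
        plan[key] = BASE_TTL + random.randint(0, 600)
    return plan

plan = preheat(["item:1", "item:2", "item:3"])
print(plan)  # e.g. {'item:1': 3723, 'item:2': 3984, 'item:3': 3651}
```

When loading into Redis, each key would then be set with its own planned TTL (e.g. `SETEX key ttl value`) instead of one shared expiry.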