LRU (Least Recently Used) is a cache-eviction algorithm: when the cache is full, the least recently used entries are evicted first. Android provides two caches built on this idea, LruCache for the in-memory cache and DiskLruCache for the disk cache; both share the same core LRU algorithm.
The core of LruCache is its effective use of LinkedHashMap: it holds a LinkedHashMap member variable internally, and four methods are worth studying: the constructor, get(), put(), and trimToSize().
Principle
The core idea of LruCache is easy to understand: it maintains a list of cached objects ordered by access. The least recently accessed objects sit at the head of the queue and are evicted first; the most recently accessed objects sit at the tail and are evicted last (entries are added or re-inserted at the tail, and eviction removes them from the head).
Under the hood, LruCache relies on LinkedHashMap, whose entries form a doubly linked list. The constructor records maxSize and creates the LinkedHashMap with accessOrder set to true, so every get() moves the accessed entry to the tail of the queue; every put() then calls trimToSize() to keep the total size of the map no greater than maxSize.
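To make the access-order behavior that LruCache relies on concrete, here is a minimal sketch using a plain LinkedHashMap, independent of LruCache itself (the keys and values are arbitrary): a map created with accessOrder set to true moves every entry that is read to the tail, leaving the least recently used entry at the head.

import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration order follows access order, not insertion order
        LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");

        map.get("a"); // reading "a" moves it to the tail of the internal linked list

        // prints [b, c, a]: "b" is now the eldest (head) entry and would be evicted first
        System.out.println(map.keySet());
    }
}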
Core methods
Constructor
From the constructor of LruCache we can see that the LinkedHashMap is created in access order:
/**
 * @param maxSize for caches that do not override {@link #sizeOf}, this is
 *     the maximum number of entries in the cache. For all other caches,
 *     this is the maximum sum of the sizes of the entries in this cache.
 */
public LruCache(int maxSize) {
    if (maxSize <= 0) {
        throw new IllegalArgumentException("maxSize <= 0");
    }
    this.maxSize = maxSize;
    this.map = new LinkedHashMap<K, V>(0, 0.75f, true); // accessOrder is set to true
}
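As the Javadoc above points out, maxSize has two meanings: when sizeOf() is not overridden, every entry counts as 1 and maxSize is simply the maximum number of entries. A minimal sketch of such a count-based cache (the keys and values are arbitrary placeholders):

// a cache that holds at most 2 entries, because sizeOf() is not overridden
LruCache<String, String> cache = new LruCache<>(2);
cache.put("a", "1");
cache.put("b", "2");
cache.put("c", "3"); // exceeds maxSize, so the eldest entry "a" is evicted

System.out.println(cache.get("a"));            // null
System.out.println(cache.snapshot().keySet()); // [b, c]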
put() method
As can be seen, the key point of the put() method is that after the new object has been inserted into the map, trimToSize() is called to make sure the cached data does not exceed maxSize.
/**
 * Caches {@code value} for {@code key}. The value is moved to the head of
 * the queue.
 *
 * @return the previous value mapped by {@code key}.
 */
public final V put(K key, V value) {
    if (key == null || value == null) { // neither key nor value may be null
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        putCount++;                        // number of put() calls
        size += safeSizeOf(key, value);    // add the size of the new entry to the cache size
        previous = map.put(key, value);    // insert the entry into the map
        if (previous != null) {            // an entry already existed, so undo the size increase
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) { // entryRemoved() is an empty hook that subclasses may override
        entryRemoved(false, key, previous, value);
    }

    trimToSize(maxSize); // adjust the cache size (the key step)
    return previous;
}
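As noted in the comments above, entryRemoved() is an empty hook, so a subclass can override it to react to removals, for example to log evictions or release resources tied to the evicted value. A minimal sketch, assuming a bitmap cache like the one in the usage example at the end of this article (the log tag is arbitrary):

LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(maxSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
                                Bitmap oldValue, Bitmap newValue) {
        // evicted == true: trimToSize() removed the entry to make room
        // evicted == false: the entry was replaced by put() or removed explicitly
        Log.d("BitmapCache", "removed " + key + " (evicted=" + evicted + ")");
    }
};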
trimToSize() method
The trimToSize() method keeps removing the element at the head of the LinkedHashMap, i.e. the least recently accessed one, until the cache size is no larger than the maximum.
/**
 * Remove the eldest entries until the total of remaining entries is at or
 * below the requested size.
 *
 * @param maxSize the maximum size of the cache before returning. May be -1
 *     to evict even 0-sized elements.
 */
public void trimToSize(int maxSize) {
    while (true) { // keep evicting until the size constraint is satisfied
        K key;
        V value;
        synchronized (this) {
            // sizeOf() must report consistent sizes, otherwise the bookkeeping breaks
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            // if the cache size is already at or below the maximum, stop evicting
            if (size <= maxSize) {
                break;
            }

            // fetch the eldest (least recently accessed) entry in the map
            Map.Entry<K, V> toEvict = map.eldest();
            if (toEvict == null) {
                break;
            }

            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}
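Because trimToSize() is public, callers can also invoke it (together with evictAll()) to shrink the cache proactively, for example when the system reports memory pressure. A hedged sketch, assuming it lives in an Activity or Application that holds a reference to a bitmap cache named cache:

@Override
public void onTrimMemory(int level) {
    super.onTrimMemory(level);
    if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
        cache.evictAll(); // memory is very tight: drop everything
    } else if (level >= ComponentCallbacks2.TRIM_MEMORY_BACKGROUND) {
        // shrink to half of the configured maximum; the eldest entries go first
        cache.trimToSize(cache.maxSize() / 2);
    }
}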
get() method
When LruCache's get() method is called to fetch a cached object, that counts as an access of the element, so the queue has to be updated to keep the entries sorted by access order. This reordering is actually performed inside LinkedHashMap's get(), because the map was created with accessOrder set to true.
/**
 * Returns the value for {@code key} if it exists in the cache or can be
 * created by {@code #create}. If a value was returned, it is moved to the
 * head of the queue. This returns null if a value is not cached and cannot
 * be created.
 */
public final V get(K key) {
    if (key == null) { // key must not be null
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        // LinkedHashMap#get moves the accessed entry to the tail (accessOrder is true)
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    // cache miss: try to create the value (create() returns null by default)
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // there was a conflict, so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}
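On a miss, get() falls back to create(), which returns null by default. A subclass can override create() to build values on demand so that get() never returns null for computable keys. A minimal sketch (computeExpensively() is a hypothetical helper):

LruCache<String, String> lazyCache = new LruCache<String, String>(100) {
    @Override
    protected String create(String key) {
        // called by get() when the key is missing; the result is cached automatically
        return computeExpensively(key); // hypothetical helper
    }
};

String value = lazyCache.get("some-key"); // miss -> create() -> result is cached and returned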
Example of use
// 1. Initialization
// ① Set the size of the LruCache, typically 1/8 of the maximum memory available to the current process.
// ② Override sizeOf() to compute the size of each bitmap to be cached.
int maxSize = (int) (Runtime.getRuntime().maxMemory() / 8); // 1/8 of the maximum heap size
LruCache<String, Bitmap> cache = new LruCache<String, Bitmap>(maxSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount();
    }
};

// 2. Put an entry into the cache (url and bitmap are placeholders for the caller's key/value)
cache.put(url, bitmap);

// 3. Read an entry from the cache
Bitmap cached = cache.get(url);
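Putting it together, a typical image-loading flow checks the memory cache first and only decodes (or downloads) the bitmap on a miss. A hedged sketch, assuming the cache above and a hypothetical decodeBitmap() helper:

Bitmap loadBitmap(String url) {
    Bitmap bitmap = cache.get(url);      // 1. try the memory cache first
    if (bitmap == null) {
        bitmap = decodeBitmap(url);      // 2. hypothetical: decode from disk or network
        if (bitmap != null) {
            cache.put(url, bitmap);      // 3. cache it for the next access
        }
    }
    return bitmap;
}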