Data structures and algorithms

LRU Cache

LRU Cache: Design a data structure that follows the constraints of a Least Recently Used (LRU) cache.

Implement the LRUCache class:

  • LRUCache(int capacity) Initialize the LRU cache with positive size capacity.
  • int get(int key) Return the value of the key if the key exists, otherwise return -1.
  • void put(int key, int value) Update the value of the key if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the capacity from this operation, evict the least recently used key.

The functions get and put must each run in O(1) average time complexity.

Example 1:
Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // return 1
lRUCache.put(3, 3); // LRU key was 2, evicts key 2, cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // LRU key was 1, evicts key 1, cache is {4=4, 3=3}
lRUCache.get(1);    // return -1 (not found)
lRUCache.get(3);    // return 3
lRUCache.get(4);    // return 4

Constraints:
  • 1 <= capacity <= 3000
  • 0 <= key <= 10^4
  • 0 <= value <= 10^5
  • At most 2 * 10^5 calls will be made to get and put.

Try this Problem on your own or check similar problems:

  1. LFU Cache

Solution:
import java.util.HashMap;
import java.util.Map;

class LRUCache {
  // Node of the doubly linked list; it stores the key as well as the value,
  // so an evicted node can also be removed from the hashmap.
  private class ListNode {
      int val;
      int key;
      ListNode next;
      ListNode prev;
      ListNode(int x) {
          val = x;
          next = null;
          prev = null;
      }
  }

  int capacity;
  int numberOfNodes;
  Map<Integer, ListNode> map;
  ListNode head, tail;

  // Unlink a node from its current position in the list.
  private void removeNode(ListNode node){
      node.prev.next = node.next;
      node.next.prev = node.prev;
  }

  // Insert a node right after the head sentinel (most recently used position).
  private void addToHead(ListNode node){
      node.next = head.next;
      head.next.prev = node;
      head.next = node;
      node.prev = head;
  }

  // The list is bounded by two sentinel (dummy) nodes, head and tail.
  public LRUCache(int capacity) {
      this.capacity = capacity;
      numberOfNodes = 0;
      map = new HashMap<>();
      head = new ListNode(-1);
      tail = new ListNode(-1);
      head.next = tail;
      tail.prev = head;
  }

  // Return the value mapped to key, or -1 if absent; a hit moves the node to the head.
  public int get(int key) {
      if(!map.containsKey(key)) return -1;
      ListNode node = map.get(key);
      removeNode(node);
      addToHead(node);
      return node.val;
  }

  // Insert or update a key, then evict the least recently used entry (tail.prev) if over capacity.
  public void put(int key, int value) {
      ListNode node = map.getOrDefault(key, null);
      if(node != null){
          removeNode(node);
          node.val = value;
          addToHead(node);
      }else{
          node = new ListNode(value);
          node.key = key;
          addToHead(node);
          ++numberOfNodes;
          map.put(key, node);
      }

      if(numberOfNodes > capacity){
          map.remove(tail.prev.key);
          removeNode(tail.prev);
          --numberOfNodes;
      }
  }
}

/**
 * Your LRUCache object will be instantiated and called as such:
 * LRUCache obj = new LRUCache(capacity);
 * int param_1 = obj.get(key);
 * obj.put(key,value);
 */
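
A short driver can replay Example 1 from above to sanity-check the class (the wrapping Main class and the println calls are only illustrative; LRUCache is the class from the solution):

public class Main {
    public static void main(String[] args) {
        LRUCache lRUCache = new LRUCache(2);
        lRUCache.put(1, 1);                  // cache is {1=1}
        lRUCache.put(2, 2);                  // cache is {1=1, 2=2}
        System.out.println(lRUCache.get(1)); // 1
        lRUCache.put(3, 3);                  // evicts key 2
        System.out.println(lRUCache.get(2)); // -1 (not found)
        lRUCache.put(4, 4);                  // evicts key 1
        System.out.println(lRUCache.get(1)); // -1 (not found)
        System.out.println(lRUCache.get(3)); // 3
        System.out.println(lRUCache.get(4)); // 4
    }
}
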
Time/Space Complexity:
  • Time Complexity: O(1) per get and put operation
  • Space Complexity: O(c), where c is the cache capacity
Explanation:

We get constant time per query because lookups go through the hashmap, which also needs O(c) space (where c is the maximum capacity). A cache needs fast lookup, so the hashmap should be your first hint. We also need a way to evict the least recently used keys, which means fast insertion at the front of a container (marking a key as recently used) and fast deletion from the back (where the least recently used keys sit). A doubly linked list gives us exactly that: fast retrieval of the last element and fast deletion and insertion, since no shifting is required as there would be in an array or another contiguous data store. We use two sentinel (dummy) nodes, head and tail, so we can easily attach new nodes at the head of the list and delete elements from its end.

On get we try to find the node matching the key. If we don't find it, we return -1; otherwise we return the value stored at the node mapped to the requested key. Since that key is now recently used, we also move it to the front of the list by first removing it from its current position and then re-inserting it right after the head.

On put we check whether a node with the key is already in the map. If it is, we update its value and again move it to the front of the list. If there is no node matching the key, we create a new node, remember its key inside the node, and put the (key, node) pair into the hashmap. Finally, we check whether we are over capacity; if so, we evict the last node in the list (tail.prev) by removing it from the list and using its stored key to remove it from the map, so the next get for that specific key returns -1.
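
For comparison, the Java standard library can express the same idea more compactly: a LinkedHashMap constructed in access order, with removeEldestEntry overridden, drops its least recently used entry automatically after each insertion. The sketch below is an alternative to the hand-rolled list above, not part of the original solution, and the class name LinkedHashLRUCache is just an illustrative choice:

import java.util.LinkedHashMap;
import java.util.Map;

class LinkedHashLRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    LinkedHashLRUCache(int capacity) {
        // accessOrder = true keeps entries ordered from least to most recently accessed.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    public int get(int key) {
        // getOrDefault counts as an access, so a hit is moved to the "recently used" end.
        return super.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        // Called by put after each insertion; returning true evicts the eldest entry.
        return size() > capacity;
    }
}

In an interview the doubly linked list version is usually what's expected, since it shows you can build the mechanism yourself, but the library shortcut is a useful cross-check.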
