LRU and LFU Cache

Cache replacement policies determine which items to evict when the cache is full. LRU (Least Recently Used) and LFU (Least Frequently Used) are two popular strategies.

LRU Cache (Least Recently Used)

Evicts the least recently accessed item when the cache is full.

Operations

  • get(key): Return the value if the key exists (otherwise -1) and mark it as most recently used
  • put(key, value): Insert or update the key-value pair, mark it as most recently used, and evict the least recently used entry if capacity is exceeded

Time Complexity Requirements

Both operations should be $O(1)$.

LRU: Optimal Implementation

Data Structures

Hash Map + Doubly Linked List

Hash Map: key → node for $O(1)$ lookup

Doubly Linked List: Maintains access order between two dummy sentinel nodes

  • Nodes near the head: most recently used
  • Nodes near the tail: least recently used

Structure Visualization
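The list keeps entries ordered by recency between two dummy sentinel nodes, while the hash map points directly at each node:

    head (dummy) <-> [most recent] <-> ... <-> [least recent] <-> tail (dummy)
    cache: key → *Node (direct pointer into the list)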

Complete Go Implementation

package main

import "fmt"

// Node represents a doubly linked list node
type Node struct {
    key, value int
    prev, next *Node
}

// LRUCache implements LRU cache with O(1) operations
type LRUCache struct {
    capacity int
    cache    map[int]*Node
    head     *Node // Dummy head (most recent)
    tail     *Node // Dummy tail (least recent)
}

// Constructor creates a new LRU cache
func Constructor(capacity int) LRUCache {
    lru := LRUCache{
        capacity: capacity,
        cache:    make(map[int]*Node),
        head:     &Node{},
        tail:     &Node{},
    }
    lru.head.next = lru.tail
    lru.tail.prev = lru.head
    return lru
}

// removeNode removes a node from the list
func (lru *LRUCache) removeNode(node *Node) {
    node.prev.next = node.next
    node.next.prev = node.prev
}

// addToHead adds a node right after head (most recent)
func (lru *LRUCache) addToHead(node *Node) {
    node.next = lru.head.next
    node.prev = lru.head
    lru.head.next.prev = node
    lru.head.next = node
}

// moveToHead moves an existing node to head
func (lru *LRUCache) moveToHead(node *Node) {
    lru.removeNode(node)
    lru.addToHead(node)
}

// removeTail removes and returns the tail node (LRU)
func (lru *LRUCache) removeTail() *Node {
    node := lru.tail.prev
    lru.removeNode(node)
    return node
}

// Get returns the value of the key if it exists, otherwise -1
func (lru *LRUCache) Get(key int) int {
    if node, exists := lru.cache[key]; exists {
        lru.moveToHead(node) // Mark as recently used
        return node.value
    }
    return -1
}

// Put inserts or updates the key-value pair
func (lru *LRUCache) Put(key int, value int) {
    if node, exists := lru.cache[key]; exists {
        // Update existing key
        node.value = value
        lru.moveToHead(node)
    } else {
        // Insert new key
        newNode := &Node{key: key, value: value}
        lru.cache[key] = newNode
        lru.addToHead(newNode)

        // Check capacity
        if len(lru.cache) > lru.capacity {
            // Remove LRU
            tail := lru.removeTail()
            delete(lru.cache, tail.key)
        }
    }
}

// Example usage
func main() {
    lru := Constructor(2)

    lru.Put(1, 1)
    lru.Put(2, 2)
    fmt.Println(lru.Get(1)) // returns 1
    lru.Put(3, 3)           // evicts key 2
    fmt.Println(lru.Get(2)) // returns -1 (not found)
    lru.Put(4, 4)           // evicts key 1
    fmt.Println(lru.Get(1)) // returns -1 (not found)
    fmt.Println(lru.Get(3)) // returns 3
    fmt.Println(lru.Get(4)) // returns 4
}

Complexity

Time: $O(1)$ for get and put
Space: $O(\text{capacity})$

Why Doubly Linked List?

Moving a node to the head and evicting both require removing an arbitrary node from the middle of the list.

Singly linked list: $O(n)$, because you must walk from the head to find the node's predecessor before unlinking it

Doubly linked list: $O(1)$, because each node stores its predecessor, so a direct reference (provided by the hash map) is enough to unlink it

LFU Cache (Least Frequently Used)

Evicts the least frequently accessed item when the cache is full. On a tie, it evicts the least recently used item among those with the lowest frequency.

Operations

  • get(key): Return the value if the key exists (otherwise -1) and increment its frequency
  • put(key, value): Insert or update the key-value pair (new keys start at frequency 1, existing keys are incremented), and evict the least frequently used entry if capacity is exceeded

Time Complexity Requirements

Both operations should be $O(1)$.

LFU: Optimal Implementation

Data Structures

Two Hash Maps + Doubly Linked Lists + a Minimum-Frequency Counter

  1. cache: key → node
  2. freqMap: frequency → doubly linked list of nodes with that frequency (most recent at the head)
  3. minFreq: integer tracking the smallest frequency currently in the cache

Structure Visualization
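Each frequency has its own recency-ordered list, so ties among the least frequently used keys are broken by recency:

    freqMap: 1 → head <-> [freq-1 nodes, MRU ... LRU] <-> tail
             2 → head <-> [freq-2 nodes, MRU ... LRU] <-> tail
             ...
    cache:   key → *LFUNode (direct pointer into its list)
    minFreq: smallest frequency currently present (eviction starts here)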

Complete Go Implementation

package main

import "fmt"

// LFUNode represents a node in LFU cache
type LFUNode struct {
    key, value, freq int
    prev, next       *LFUNode
}

// DoublyLinkedList for LFU
type DLL struct {
    head, tail *LFUNode
}

func NewDLL() *DLL {
    dll := &DLL{
        head: &LFUNode{},
        tail: &LFUNode{},
    }
    dll.head.next = dll.tail
    dll.tail.prev = dll.head
    return dll
}

func (dll *DLL) addToHead(node *LFUNode) {
    node.next = dll.head.next
    node.prev = dll.head
    dll.head.next.prev = node
    dll.head.next = node
}

func (dll *DLL) remove(node *LFUNode) {
    node.prev.next = node.next
    node.next.prev = node.prev
}

func (dll *DLL) removeTail() *LFUNode {
    if dll.isEmpty() {
        return nil
    }
    node := dll.tail.prev
    dll.remove(node)
    return node
}

func (dll *DLL) isEmpty() bool {
    return dll.head.next == dll.tail
}

// LFUCache implements LFU cache with O(1) operations
type LFUCache struct {
    capacity int
    minFreq  int
    cache    map[int]*LFUNode
    freqMap  map[int]*DLL
}

func ConstructorLFU(capacity int) LFUCache {
    return LFUCache{
        capacity: capacity,
        minFreq:  0,
        cache:    make(map[int]*LFUNode),
        freqMap:  make(map[int]*DLL),
    }
}

func (lfu *LFUCache) updateFreq(node *LFUNode) {
    oldFreq := node.freq

    // Remove from old frequency list
    lfu.freqMap[oldFreq].remove(node)

    // If this was minFreq and list is now empty, increment minFreq
    if oldFreq == lfu.minFreq && lfu.freqMap[oldFreq].isEmpty() {
        lfu.minFreq++
    }

    // Increment frequency
    node.freq++

    // Add to new frequency list
    if lfu.freqMap[node.freq] == nil {
        lfu.freqMap[node.freq] = NewDLL()
    }
    lfu.freqMap[node.freq].addToHead(node)
}

func (lfu *LFUCache) Get(key int) int {
    if node, exists := lfu.cache[key]; exists {
        lfu.updateFreq(node)
        return node.value
    }
    return -1
}

func (lfu *LFUCache) Put(key int, value int) {
    if lfu.capacity == 0 {
        return
    }

    if node, exists := lfu.cache[key]; exists {
        // Update existing key
        node.value = value
        lfu.updateFreq(node)
    } else {
        // Insert new key
        if len(lfu.cache) >= lfu.capacity {
            // Evict LFU (and LRU among LFU)
            minFreqList := lfu.freqMap[lfu.minFreq]
            evicted := minFreqList.removeTail()
            delete(lfu.cache, evicted.key)
        }

        // Create new node with freq = 1
        newNode := &LFUNode{key: key, value: value, freq: 1}
        lfu.cache[key] = newNode

        if lfu.freqMap[1] == nil {
            lfu.freqMap[1] = NewDLL()
        }
        lfu.freqMap[1].addToHead(newNode)
        lfu.minFreq = 1
    }
}

// Example usage
func main() {
    lfu := ConstructorLFU(2)

    lfu.Put(1, 1)
    lfu.Put(2, 2)
    fmt.Println(lfu.Get(1)) // returns 1, freq(1) = 2
    lfu.Put(3, 3)           // evicts key 2 (freq=1)
    fmt.Println(lfu.Get(2)) // returns -1 (not found)
    fmt.Println(lfu.Get(3)) // returns 3, freq(3) = 2
    lfu.Put(4, 4)           // evicts key 1 (both have freq=2, 1 is LRU)
    fmt.Println(lfu.Get(1)) // returns -1 (not found)
    fmt.Println(lfu.Get(3)) // returns 3
    fmt.Println(lfu.Get(4)) // returns 4
}

Complexity

Time: $O(1)$ for get and put
Space: $O(\text{capacity})$

LRU vs. LFU Comparison

| Aspect         | LRU                  | LFU                           |
|----------------|----------------------|-------------------------------|
| Eviction       | Least recently used  | Least frequently used         |
| Recency        | Considers            | Doesn't consider (unless tie) |
| Frequency      | Doesn't consider     | Considers                     |
| Implementation | Simpler (1 list)     | More complex (multiple lists) |
| Space          | $O(n)$               | $O(n)$                        |
| Time           | $O(1)$               | $O(1)$                        |
| Use case       | Temporal locality    | Stable popularity patterns    |

When to Use LRU

Temporal locality: recently accessed items are likely to be accessed again soon

  • Web page caching
  • Database query results
  • Memory paging

When to Use LFU

Frequency matters: Popular items should stay

  • CDN caching
  • DNS caching
  • Popular content caching

Problem with LFU: items that were popular in the past can remain in the cache indefinitely even if they are no longer accessed.
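A small usage sketch, reusing the LFUCache implementation above, shows the effect (the access counts are illustrative):

lfu := ConstructorLFU(2)
lfu.Put(1, 1)
for i := 0; i < 100; i++ {
    lfu.Get(1) // key 1 becomes very "popular"
}
lfu.Put(2, 2) // freq(2) = 1
lfu.Put(3, 3) // evicts key 2 (lowest frequency), even though it is newer
lfu.Put(4, 4) // evicts key 3; key 1 survives despite never being read again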


Practical Considerations

Thread Safety

For concurrent access:

  • Use locks (mutex/semaphore)
  • Or lock-free data structures
  • Trade-off: Performance vs. complexity
import "sync"

// ThreadSafeLRU wraps the LRUCache above with a mutex.
// Note that Get also mutates the recency list, so even reads need
// exclusive access; a plain sync.Mutex is sufficient.
type ThreadSafeLRU struct {
    lru *LRUCache
    mu  sync.Mutex
}

func (tslru *ThreadSafeLRU) Get(key int) int {
    tslru.mu.Lock()
    defer tslru.mu.Unlock()
    return tslru.lru.Get(key)
}

func (tslru *ThreadSafeLRU) Put(key, value int) {
    tslru.mu.Lock()
    defer tslru.mu.Unlock()
    tslru.lru.Put(key, value)
}

Expiration (TTL)

Add a time-to-live (TTL) to each entry:

import "time"

// NodeWithTTL additionally records when the entry expires.
type NodeWithTTL struct {
    key, value int
    expireAt   time.Time
    prev, next *NodeWithTTL
}

// Get checks expiration lazily on access. LRUCacheWithTTL is assumed to
// mirror the LRUCache above, but stores *NodeWithTTL and sets expireAt in Put.
func (lru *LRUCacheWithTTL) Get(key int) int {
    if node, exists := lru.cache[key]; exists {
        if time.Now().After(node.expireAt) {
            // Expired: drop the entry and report a miss
            lru.remove(node)
            delete(lru.cache, key)
            return -1
        }
        lru.moveToHead(node)
        return node.value
    }
    return -1
}

Common Interview Problems

1. Implement LRU Cache

Problem: Implement get and put in $O(1)$.

Solution: Hash map + doubly linked list.

2. Implement LFU Cache

Problem: Implement get and put in $O(1)$.

Solution: Hash map + frequency map + doubly linked lists.

3. LRU Cache with Expiration

Problem: Add TTL to LRU cache.

Solution: Store timestamp in node, check on access.

4. Design Browser History

Problem: Implement back, forward, visit.

Solution: Doubly linked list with current pointer.
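For reference, a minimal sketch of that design in Go; the type and method names (BrowserHistory, Visit, Back, Forward) follow the common interview formulation and are not from any specific library:

// PageNode is one visited page in the history list.
type PageNode struct {
    url        string
    prev, next *PageNode
}

// BrowserHistory tracks the current page in a doubly linked list.
type BrowserHistory struct {
    current *PageNode
}

func NewBrowserHistory(homepage string) *BrowserHistory {
    return &BrowserHistory{current: &PageNode{url: homepage}}
}

// Visit opens url from the current page and discards forward history.
func (h *BrowserHistory) Visit(url string) {
    node := &PageNode{url: url, prev: h.current}
    h.current.next = node // old forward chain becomes unreachable
    h.current = node
}

// Back moves up to steps pages backward and returns the current URL.
func (h *BrowserHistory) Back(steps int) string {
    for steps > 0 && h.current.prev != nil {
        h.current = h.current.prev
        steps--
    }
    return h.current.url
}

// Forward moves up to steps pages forward and returns the current URL.
func (h *BrowserHistory) Forward(steps int) string {
    for steps > 0 && h.current.next != nil {
        h.current = h.current.next
        steps--
    }
    return h.current.url
}

Visit overwrites current.next, which is exactly how navigating after going back discards the forward history in a real browser.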

Applications

Operating Systems

  • Page replacement: LRU for virtual memory
  • Disk cache: Buffer cache management

Databases

  • Query result cache: LRU for recent queries
  • Buffer pool: LRU for disk pages

Web Servers

  • HTTP cache: LRU for web pages
  • Session cache: LRU for user sessions

CDN

  • Content cache: LFU for popular content
  • Edge caching: Hybrid strategies

Browsers

  • Browser cache: LRU for web resources
  • DNS cache: LFU for popular domains

When to Use

Use LRU when:

  • Temporal locality is strong
  • Recent items likely to be accessed again
  • Simple implementation preferred

Use LFU when:

  • Frequency matters more than recency
  • Popular items should be retained
  • Access patterns are predictable

Don't use caching when:

  • Data changes frequently (high invalidation cost)
  • Memory is severely limited
  • Access patterns are completely random
