Implement LRU Cache

Prince Pereira
3 min read · Mar 6, 2022

LRU, or Least Recently Used, is the most prominent cache eviction policy used in caching systems.

What is the need for a cache?

Microservice- and monolith-based architectures use databases to store information, and databases keep that data on disk. Disk reads and writes are resource-intensive and time-consuming. So services prefer to keep frequently accessed data in temporary storage in primary memory (RAM), which we call caching. Reads and writes in primary memory are much faster than in secondary memory, so a cache can improve a service's performance.
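To make this concrete, a service typically follows the cache-aside pattern: check the in-memory store first and fall back to the database only on a miss. Here is a minimal sketch; the `slowDBLookup` function and the `db` map are stand-ins invented for illustration, not a real database client:

```go
package main

import "fmt"

// db simulates persistent storage on disk.
var db = map[string]string{"user:1": "Alice", "user:2": "Bob"}

// slowDBLookup is a stand-in for an expensive database query.
func slowDBLookup(key string) (string, bool) {
	v, ok := db[key] // imagine a disk read happening here
	return v, ok
}

// cache is the fast in-memory (RAM) store fronting the database.
var cache = map[string]string{}

func Get(key string) (string, bool) {
	if v, ok := cache[key]; ok {
		return v, true // cache hit: served from RAM
	}
	v, ok := slowDBLookup(key) // cache miss: go to the database
	if ok {
		cache[key] = v // populate the cache for next time
	}
	return v, ok
}

func main() {
	v, _ := Get("user:1") // first read misses and hits the DB
	fmt.Println(v)
	v, _ = Get("user:1") // second read is served from the cache
	fmt.Println(v)
}
```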

Why do we need a cache eviction policy?

A cache is temporary storage, and we are keeping everything in primary memory (RAM), which is limited in size. If we keep adding data with no limit, memory pressure grows and can end up in thrashing (continuous paging), and performance degrades. So it is better to keep a size threshold on the cache and evict data using a well-chosen algorithm.

What are the different cache eviction strategies?

  • LRU (Least Recently Used)
  • LFU (Least Frequently Used)
  • FIFO (First In, First Out)
  • And many more…
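To see how these policies differ, consider a cache of capacity 2 receiving the accesses A, B, A, C. When C arrives, FIFO evicts A (first inserted), while LRU evicts B (A was touched again, so B is the coldest). A small sketch of just the victim-selection step, with hypothetical helper names chosen for this example:

```go
package main

import "fmt"

// victimFIFO picks the eviction victim under FIFO: the oldest insertion.
func victimFIFO(insertOrder []string) string {
	return insertOrder[0]
}

// victimLRU picks the eviction victim under LRU: the least recently accessed.
// recency is ordered from least to most recently used.
func victimLRU(recency []string) string {
	return recency[0]
}

func main() {
	// Accesses A, B, A, C with capacity 2; C arrives while A and B are cached.
	insertOrder := []string{"A", "B"} // FIFO tracks insertion order only
	recency := []string{"B", "A"}     // LRU: A was re-read, so B is coldest
	fmt.Println("FIFO evicts:", victimFIFO(insertOrder))
	fmt.Println("LRU evicts:", victimLRU(recency))
}
```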

More about LRU Cache

The least recently used data is evicted first. The cache uses a HashMap and a doubly linked list together: the HashMap provides constant O(1) access to retrieve an element, and the doubly linked list maintains the order from most to least recently used.

Code in Golang

package main

import (
	"fmt"
)

type Node struct {
	Key   int
	Value int
	Prev  *Node
	Next  *Node
}

type LRUCache struct {
	Capacity int
	Count    int
	HashMap  map[int]*Node
	Head     *Node
	Tail     *Node
}

func CreateLRUcache(cap int) *LRUCache {
	cache := new(LRUCache)
	cache.HashMap = make(map[int]*Node)
	// Head and Tail are sentinel nodes; real entries live between them
	cache.Head = new(Node)
	cache.Tail = new(Node)
	cache.Head.Next = cache.Tail
	cache.Tail.Prev = cache.Head
	cache.Capacity = cap
	return cache
}

func (cache *LRUCache) delNodeFromList(key int) {
	node := cache.Head
	// Iterate until node.Next is the required node (or we reach the tail)
	for node.Next != cache.Tail && node.Next.Key != key {
		node = node.Next
	}
	if node.Next == cache.Tail {
		// Key not present in the list
		return
	}
	// Reset pointers to unlink node.Next
	nextNode := node.Next.Next
	node.Next = nextNode
	nextNode.Prev = node
	cache.Count--
}

func (cache *LRUCache) pushNodeToFrontOfList(node *Node) {
	// Works for the empty list too, since Head.Next is then Tail
	node.Next = cache.Head.Next
	node.Prev = cache.Head
	cache.Head.Next = node
	node.Next.Prev = node
	cache.Count++
}

func (cache *LRUCache) delLastNode() {
	if cache.Tail.Prev == cache.Head {
		// List is empty
		return
	}
	lastNode := cache.Tail.Prev
	secLastNode := lastNode.Prev
	secLastNode.Next = cache.Tail
	cache.Tail.Prev = secLastNode
	// Decrement the counter on deleting the last node
	cache.Count--
}

func (cache *LRUCache) isCacheFull() bool {
	return cache.Count == cache.Capacity
}

func (cache *LRUCache) Get(key int) int {
	node, ok := cache.HashMap[key]
	if !ok {
		return -1
	}
	// Move the accessed node to the front to mark it most recently used
	cache.delNodeFromList(key)
	cache.pushNodeToFrontOfList(node)
	return node.Value
}

func (cache *LRUCache) Put(key, val int) {
	fmt.Println("Pushing key : ", key, " Val : ", val)
	if node, ok := cache.HashMap[key]; ok {
		// Key already cached: update the value and move it to the front;
		// the map entry still points at the same node, so it stays valid
		node.Value = val
		cache.delNodeFromList(key)
		cache.pushNodeToFrontOfList(node)
		return
	}
	if cache.isCacheFull() {
		fmt.Println("Cache full ")
		// Evict the least recently used entry from both structures
		delete(cache.HashMap, cache.Tail.Prev.Key)
		cache.delLastNode()
	}
	node := &Node{Key: key, Value: val}
	cache.HashMap[key] = node
	cache.pushNodeToFrontOfList(node)
}

func (cache *LRUCache) Iter() {
	fmt.Println("\nPrinting map : ", cache.HashMap)
	fmt.Print("Printing Double Linked List : ")
	node := cache.Head.Next
	for node != cache.Tail {
		fmt.Print(node.Key, ":", node.Value, " ")
		node = node.Next
	}
	fmt.Println()
}

func main() {
	cache := CreateLRUcache(3)
	fmt.Println("Get 1 : ", cache.Get(1))
	cache.Iter()
	cache.Put(2, 2)
	cache.Iter()
	cache.Put(3, 3)
	cache.Iter()
	cache.Put(4, 4)
	cache.Iter()
	fmt.Println("Get 2 : ", cache.Get(2))
	cache.Put(5, 5)
	cache.Iter()
	fmt.Println("Get 2 : ", cache.Get(2))
}

Output

Get 1 :  -1

Printing map : map[]
Printing Double Linked List :
Pushing key : 2 Val : 2

Printing map : map[2:0xc0000b8060]
Printing Double Linked List : 2:2
Pushing key : 3 Val : 3

Printing map : map[2:0xc0000b8060 3:0xc0000b80a0]
Printing Double Linked List : 3:3 2:2
Pushing key : 4 Val : 4

Printing map : map[2:0xc0000b8060 3:0xc0000b80a0 4:0xc0000b80e0]
Printing Double Linked List : 4:4 3:3 2:2
Get 2 : 2
Pushing key : 5 Val : 5
Cache full

Printing map : map[2:0xc0000b8060 4:0xc0000b80e0 5:0xc0000b8120]
Printing Double Linked List : 5:5 2:2 4:4
Get 2 : 2

