Wrong use of LinkedHashMap causes memory leak

Last week, I examined a severe memory leak in our company's product and found that incorrect use of LinkedHashMap, a class from the Java Collections Framework, was the root cause. In this midnight post, we are going to see how JVM memory leaks when a LinkedHashMap is used as a cache in a multi-threaded context.

LinkedHashMap offers a method named removeEldestEntry that allows developers to define their own policy for removing the least-recently-used entry. The Javadoc of that method contains a code snippet illustrating how LinkedHashMap can be used as a size-bounded cache:

     private static final int MAX_ENTRIES = 100;

     protected boolean removeEldestEntry(Map.Entry eldest) {
        return size() > MAX_ENTRIES;
     }
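For completeness, here is a minimal sketch of how that snippet typically fits into a full, generic LRU cache. The class name LruCache is illustrative, not part of the JDK; the three-argument LinkedHashMap constructor with accessOrder = true is what makes the map order its entries by access rather than insertion, so the eldest entry really is the least-recently-used one.

import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch built on the Javadoc pattern.
// The name LruCache is illustrative; it is not a JDK class.
public class LruCache<K, V> extends LinkedHashMap<K, V>
{
   private final int maxEntries;

   public LruCache(int maxEntries)
   {
      // accessOrder = true orders entries by access, so the eldest
      // entry handed to removeEldestEntry is the least-recently-used one.
      super(16, 0.75f, true);
      this.maxEntries = maxEntries;
   }

   @Override
   protected boolean removeEldestEntry(Map.Entry<K, V> eldest)
   {
      return size() > maxEntries;
   }
}

Note that this sketch, like the Javadoc snippet, is only correct for single-threaded use.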

Many developers, with neither deep knowledge of the Java collections nor experience in concurrency, feel confident reusing the code above without any concern for multi-threaded contexts.

In the following example, two threads insert new entries concurrently into a LinkedHashMap cache whose theoretical maximum size is 100.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * @author <a href="hoang281283@gmail.com">Minh Hoang TO</a>
 * @date 11/18/12
 */
public class LinkedHashMapTest
{
   public static void main(String[] args) throws Exception
   {
      // Expect the map's size never to exceed 100
      final LinkedHashMap<Integer, String> map = new LinkedHashMap<Integer, String>()
      {
         @Override
         protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest)
         {
            return size() > 100;
         }
      };

      final AtomicInteger counter = new AtomicInteger(0);

      final String blah = "BlahBlah";

      Thread t1 = new Thread()
      {
         @Override
         public void run()
         {
            for(int i = counter.incrementAndGet(); i < 1000; i = counter.incrementAndGet())
            {
               map.put(i, blah);
            }
         }
      };

      Thread t2 = new Thread()
      {
         @Override
         public void run()
         {
            for(int i = counter.incrementAndGet(); i < 1000; i = counter.incrementAndGet())
            {
               map.put(i, blah);
            }
         }
      };

      t1.start();
      t2.start();
      t1.join();
      t2.join();

      System.out.println("Current size of linked hash map is: " + map.size());
   }
}

The final map size varies from run to run but is often much larger than 100, which indicates a severe memory leak. Yet this is the predictable behavior of the collection, given its actual eviction code:

    void addEntry(int hash, K key, V value, int bucketIndex) {
        createEntry(hash, key, value, bucketIndex);

        // Remove eldest entry if instructed, else grow capacity if appropriate
        Entry<K,V> eldest = header.after;
        if (removeEldestEntry(eldest)) {
            removeEntryForKey(eldest.key);
        } else {
            if (size >= threshold)
                resize(2 * table.length);
        }
    }

As a new entry is added to the LinkedHashMap, the collection removes its head entry if the size exceeds the configured maximum. Because addEntry and removeEntryForKey can interleave in a multi-threaded context, it is possible for many threads to add new entries while all attempting to remove the same head entry. That explains why the size can grow arbitrarily large.
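LinkedHashMap is simply not thread-safe, and its Javadoc warns about exactly this. A minimal fix, sketched below under the same assumptions as the example above, is to wrap the map with Collections.synchronizedMap so that every put, including the eviction check inside it, runs under a single lock. (For heavily contended caches, a dedicated concurrent cache library is usually a better choice.)

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedCacheSketch
{
   public static void main(String[] args)
   {
      // Wrapping the LinkedHashMap makes each put atomic, so the
      // eviction in removeEldestEntry can no longer race with
      // concurrent insertions; the size stays bounded by 100.
      final Map<Integer, String> map = Collections.synchronizedMap(
         new LinkedHashMap<Integer, String>()
         {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest)
            {
               return size() > 100;
            }
         });

      map.put(1, "BlahBlah");
      System.out.println("Size: " + map.size());
   }
}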
