
VM to JVM Memory Optimizations

By: Josiah Huckins - 8/25/2019


When allocating memory for a cloud environment, you have to consider multiple layers.

Host memory, each virtual machine's OS memory and, most importantly, application memory all need to be taken into account. Understanding the purpose of each layer will allow you to maximize the potential of your investment. Other resource allocations matter too, but today's focus is on memory. In this post, we'll take apart the use of memory to support Java applications.

Why the focus on Java, you ask? Well, Java-based apps continue to be very popular, nay industry standard. Many enterprise development frameworks and some of the most popular application frameworks are written in Java. Android apps are Java code. Most Java apps are delivered through a PaaS or SaaS model, meaning they're hosted somewhere: in JVMs, running on VMs, running on hosts. It's critical to get the memory allocation right when planning for and implementing such Java apps.

Let's break it down with a holistic example.

Assume you are running dev, qa and production VMs, each hosting an instance of a Java framework. The host/hypervisor has 64GB of physical memory available.

Breaking down this 64GB, you need at minimum between 2GB and 4GB for the host's OS, and the same amount for each VM's OS. If you average that to 3GB and add it up across the host and the dev, qa, and production VMs, you've consumed 12GB of memory. That leaves the JVMs with 52GB to work with, still a lot at this point, but let's continue.
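The budget math above can be sketched in a few lines. This is just the example's figures (a 3GB OS reserve, four OS instances), not universal values:

```java
// Sketch of the allocation math above, using this example's numbers only.
public class MemoryBudget {
    public static void main(String[] args) {
        int totalGb = 64;        // physical memory on the host
        int osReserveGb = 3;     // midpoint of the 2GB-4GB per-OS reserve
        int osInstances = 4;     // host OS + dev, qa, and production guest OSes

        int osTotalGb = osReserveGb * osInstances; // 12GB of OS overhead
        int jvmPoolGb = totalGb - osTotalGb;       // 52GB left for the JVMs

        System.out.println("OS overhead: " + osTotalGb + "GB, JVM pool: " + jvmPoolGb + "GB");
    }
}
```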

In this example, each VM is running one JVM. If we divide the remaining amount evenly among the dev, qa, and production JVMs, each has roughly 17GB to work with. Each JVM is going to need memory allocated for the heap, thread stacks, the metaspace (class and method metadata, held in the permanent generation prior to Java 8), and any in-memory cache. In the case of certain implementations, the app will try to put as much of its data as possible in RAM (putting it in what the JVM refers to as "direct" memory).
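You can see the heap and direct regions are distinct by inspecting them from inside a JVM. A minimal sketch: the heap ceiling is set with -Xmx, while direct buffers are allocated off-heap from a separate pool (capped by -XX:MaxDirectMemorySize, which on HotSpot defaults to roughly the max heap size):

```java
import java.nio.ByteBuffer;

// Inspect heap usage, then allocate a buffer outside the heap entirely.
public class MemoryRegions {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("Max heap:  " + rt.maxMemory() / mb + " MB");
        System.out.println("Used heap: " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");

        // Direct (off-heap) memory does not count against -Xmx:
        ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024); // 64MB off-heap
        System.out.println("Direct buffer capacity: " + direct.capacity() / mb + " MB");
    }
}
```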

A general practice I follow is to allocate half of the available JVM memory to the heap and leave the rest for everything else. This is preferred for memory-hungry frameworks like AEM or apps with in-memory databases. It promotes optimized performance, since it leaves room for application and database data to be stored in RAM as opposed to a much slower mass storage device. If we follow this technique, each JVM gets a heap of 8.5GB, with another 8.5GB for stacks, metaspace, and application data. Not bad!
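Applying the half-heap rule to the example's 17GB-per-VM figure looks like this. The flag values in the comment are illustrative, computed from this example, not a recommendation for every workload:

```java
// Sketch of the half-heap rule applied to this example's per-VM budget.
public class HeapSplit {
    public static void main(String[] args) {
        double perJvmGb = 17.0;               // example budget per JVM
        double heapGb = perJvmGb / 2;         // half to the heap (8.5GB)
        double restGb = perJvmGb - heapGb;    // half for stacks, metaspace, direct memory

        // 8.5GB in megabytes is 8704, so the heap half could be set with
        // launch flags such as: java -Xms8704m -Xmx8704m ...
        System.out.printf("Heap: %.1fGB, everything else: %.1fGB%n", heapGb, restGb);
    }
}
```

Setting -Xms equal to -Xmx, as sketched above, avoids heap-resize pauses at the cost of committing the full heap up front.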

Benefits for all VMs

With this detailed view of memory allocation, we now know how much memory each guest needs and is likely to consume at any given time. We can also realize the benefit of memory ballooning: each VM has memory to spare, which the hypervisor can reclaim when it runs low or needs to allocate more memory to another VM. This cuts back on paging and improves performance for the entire set of VMs.

Closing Thoughts

As you can see, allocating memory for VMs in a cloud environment is more involved than simply dividing up the total among your guests. To truly optimize the environment you need to know the specific needs and use cases of each layer in your architecture. We examined a Java-based environment, but .NET/Windows setups should undergo the same type of review.

Thank you for reading!

