Elasticsearch is written in Java and relies on the JVM heap for most of its operations. Amazon Elasticsearch Service configures Elasticsearch to use 50% of a node's memory for the heap. JVM pressure is the percentage of heap space in use by Elasticsearch. Amazon recommends keeping JVM pressure below roughly 80% to avoid OutOfMemoryError exceptions from Elasticsearch.

    If JVM pressure exceeds 92% for 30 minutes, Amazon Elasticsearch Service blocks all writes to the cluster to prevent it from entering a red state. Once JVM pressure has dropped below 80% for 5 minutes, the restriction is lifted.
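JVM pressure per node can be read from Elasticsearch's node-stats API (GET _nodes/stats/jvm), which reports jvm.mem.heap_used_percent for each node. A minimal sketch of flagging nodes past the thresholds above; the payload here is a hypothetical sample, not output from a real cluster:

```python
# Sketch: pull JVM pressure out of a _nodes/stats/jvm-style payload and flag
# nodes above a threshold. SAMPLE_STATS mimics the shape of a real response;
# node IDs and names are made up for illustration.
SAMPLE_STATS = {
    "nodes": {
        "abc123": {"name": "data-node-1",
                   "jvm": {"mem": {"heap_used_percent": 76}}},
        "def456": {"name": "data-node-2",
                   "jvm": {"mem": {"heap_used_percent": 93}}},
    }
}

def jvm_pressure(stats):
    """Return {node_name: heap_used_percent} from a node-stats payload."""
    return {n["name"]: n["jvm"]["mem"]["heap_used_percent"]
            for n in stats["nodes"].values()}

def nodes_over(stats, threshold):
    """Names of nodes whose JVM pressure exceeds the given percent."""
    return sorted(name for name, pct in jvm_pressure(stats).items()
                  if pct > threshold)

print(nodes_over(SAMPLE_STATS, 80))  # → ['data-node-2']
```

In practice the same payload would come from fetching the domain endpoint's _nodes/stats/jvm path, or from the CloudWatch JVMMemoryPressure metric that the service publishes.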

    Reducing JVM Pressure

    When JVM pressure is high on a data node, first consider whether that node is properly sized for the number of shards it holds. A general guideline is that a node with a 30GB heap (a 64GB AWS instance) should hold around 600 active shards at most. Closing unused indices that no longer need to be queried helps with heap usage. If no indices can be closed, adding nodes can help, since shards get reallocated to the new nodes.
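The guideline above works out to roughly 20 active shards per GB of heap (600 shards / 30GB). A quick sketch of that arithmetic for checking whether a node is over-sharded; the 20-per-GB ratio is derived from this article's numbers, not an official AWS limit:

```python
# Ratio implied by the guideline above: 600 active shards per 30GB heap.
SHARDS_PER_GB_HEAP = 600 / 30  # ~20

def shard_budget(heap_gb):
    """Approximate maximum active shards for a node with the given heap."""
    return int(heap_gb * SHARDS_PER_GB_HEAP)

def is_oversharded(active_shards, heap_gb):
    """True if the node holds more shards than the guideline suggests."""
    return active_shards > shard_budget(heap_gb)

print(shard_budget(30))         # → 600, matching the guideline above
print(is_oversharded(900, 30))  # → True: close indices or add nodes
```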

    The other option for reducing JVM pressure is to scale vertically to get more heap space. You can scale the AWS nodes vertically until the heap size caps out at 32GB; beyond that point you will have to scale horizontally by adding more nodes. The following instance types get you closest to the 32GB heap without overpaying:

    • m4.4xlarge.elasticsearch (64GB)

    • c4.8xlarge.elasticsearch (60GB)

    • r4.2xlarge.elasticsearch (61GB)

    • r3.2xlarge.elasticsearch (61GB)

    • i3.2xlarge.elasticsearch (61GB)
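Combining the two rules above (heap is 50% of instance memory, capped at 32GB), the effective heap for the listed instance types can be sketched as follows; the min-of-half-memory-and-32GB formula is inferred from this article, not taken from AWS documentation:

```python
# Heap stops growing past this point, per the text above.
HEAP_CAP_GB = 32

def effective_heap_gb(instance_mem_gb):
    """Heap a node gets: half of instance memory, capped at 32GB."""
    return min(instance_mem_gb / 2, HEAP_CAP_GB)

# Memory figures from the instance list above.
for name, mem in [("m4.4xlarge", 64), ("c4.8xlarge", 60), ("r4.2xlarge", 61)]:
    print(f"{name}: {effective_heap_gb(mem)}GB heap")
```

This shows why scaling past these sizes stops adding heap: a 122GB instance still gets only 32GB, so further memory buys file-system cache rather than heap.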

    Note that larger instances may still be appropriate for your workload even if the heap size does not increase: compute, storage, and network capacity should all factor into sizing your master and data nodes, in addition to memory.