Elasticsearch powers search engines, log analytics platforms, and real-time data applications across the world. However, many performance issues don’t come from Elasticsearch itself — they come from poorly optimized infrastructure.
In this guide by BeStarHost, we break down how to properly optimize a dedicated server for Elasticsearch, covering hardware selection, operating system tuning, and Elasticsearch performance best practices.
Why Dedicated Servers Are Ideal for Elasticsearch
Elasticsearch is extremely sensitive to CPU availability, memory pressure, and disk latency. Shared or oversold environments often cause:
- Slow query response times
- Unstable indexing performance
- Frequent JVM garbage collection pauses
- Cluster health issues
A dedicated server for Elasticsearch provides isolated resources, predictable performance, and full control over system tuning — all essential for production workloads.
Elasticsearch Hardware Requirements
CPU Selection
Elasticsearch benefits more from multiple CPU cores than from extremely high clock speeds.
- Minimum: 8 CPU cores
- Recommended: 16–32 cores for production
- Use modern processors such as AMD EPYC or Intel Xeon
Memory (RAM) Best Practices
Memory plays a critical role in Elasticsearch performance.
- Minimum: 32 GB RAM
- Recommended: 64–128 GB RAM
- Set the JVM heap to no more than 50% of RAM, and never above 32 GB
Keeping heap below 32 GB ensures compressed object pointers remain enabled.
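Assuming Elasticsearch 7 or later (which reads config fragments from jvm.options.d), the heap can be pinned like this; the values below are a sketch for a hypothetical 64 GB server:

```
# /etc/elasticsearch/jvm.options.d/heap.options
# 64 GB server: roughly half of RAM, kept below the 32 GB
# threshold so compressed object pointers stay enabled
-Xms31g
-Xmx31g
```

Setting -Xms and -Xmx to the same value prevents the JVM from resizing the heap at runtime, which avoids resize pauses.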
Storage and Disk Performance
Disk I/O is often the main bottleneck in Elasticsearch.
- Use NVMe SSD storage
- Avoid HDDs entirely
- Separate OS and data disks when possible
Operating System Optimization for Elasticsearch
Disable Swap
swapoff -a
Swapping causes severe performance degradation and should always be disabled.
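Note that swapoff -a only lasts until the next reboot. One way to make the change permanent (an illustrative sketch; adjust paths to your distribution) is to comment out the swap entries in /etc/fstab:

```shell
# Comment out any swap entries so swap stays off after reboot
# (a backup of the original file is written to /etc/fstab.bak)
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab
```

Alternatively, keep swap enabled at the OS level but set bootstrap.memory_lock: true in elasticsearch.yml so the Elasticsearch process memory cannot be swapped out.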
Increase File Descriptor Limits
ulimit -n 65536
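The ulimit command only affects the current shell session. To persist the limit for the Elasticsearch user (assuming a user named elasticsearch), add entries to /etc/security/limits.conf:

```
# /etc/security/limits.conf -- persist the file descriptor limit
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
```

On systemd-based distributions, the service unit's own limit takes precedence; set LimitNOFILE=65536 in a systemd override for the Elasticsearch unit instead.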
Kernel Parameter Configuration
vm.max_map_count=262144
This setting is mandatory for stable Elasticsearch operation.
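To apply the setting immediately and keep it across reboots (the sysctl.d filename below is an arbitrary choice):

```shell
# Apply immediately
sysctl -w vm.max_map_count=262144

# Persist across reboots
echo "vm.max_map_count=262144" > /etc/sysctl.d/99-elasticsearch.conf
```

Elasticsearch refuses to start in production mode if this limit is too low, because it memory-maps large numbers of index files.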
Elasticsearch Performance Tuning Techniques
Shard Strategy
Too many small shards create unnecessary overhead.
- Target shard size: 20–50 GB
- Avoid over-sharding indices
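As a rough sizing sketch, you can derive a primary shard count from expected index size and the target range above. The figures here (a hypothetical 1 TB index, a 40 GB target near the middle of the 20-50 GB range) are illustrative only:

```shell
# Estimate primary shards for a hypothetical 1 TB (1000 GB) index,
# targeting ~40 GB per shard
data_gb=1000
target_gb=40
# Ceiling division: round up so no shard exceeds the target
shards=$(( (data_gb + target_gb - 1) / target_gb ))
echo "$shards"   # 25
```

You can check the actual size of existing shards with GET _cat/shards?v and rebalance via reindexing if they drift far outside the target range.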
Replica Configuration
Replicas improve availability but increase indexing load.
- Use 1 replica for most production environments
- Temporarily reduce replicas during bulk indexing
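Replica count is a dynamic index setting, so it can be changed without downtime. A sketch using the REST API (the index name logs-2024 and localhost endpoint are placeholders):

```shell
# Drop replicas before a large bulk load
curl -X PUT "localhost:9200/logs-2024/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'

# Restore the replica once indexing finishes
curl -X PUT "localhost:9200/logs-2024/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 1}}'
```

Until the replica is restored, the index has no redundancy, so only do this when the source data can be re-ingested if a node fails mid-load.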
Refresh Interval Optimization
index.refresh_interval: 30s
Longer refresh intervals significantly improve indexing throughput.
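The refresh interval (default: 1s) is also a dynamic setting and can be applied the same way; the index name below is a placeholder:

```shell
curl -X PUT "localhost:9200/logs-2024/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"refresh_interval": "30s"}}'
```

The trade-off is freshness: newly indexed documents only become searchable after the next refresh, so 30s suits write-heavy logging workloads better than near-real-time search.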
Elasticsearch Cluster Optimization
- Use dedicated master nodes for large clusters
- Separate data, ingest, and coordinating nodes
- Balance shards evenly across nodes
- Continuously monitor cluster health
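Node roles are assigned in elasticsearch.yml. A sketch of the three specializations above, assuming Elasticsearch 7.9+ (where the node.roles list replaced the older per-role booleans):

```
# elasticsearch.yml -- dedicated master node
node.roles: [ master ]

# elasticsearch.yml -- data + ingest node
node.roles: [ data, ingest ]

# elasticsearch.yml -- coordinating-only node (empty role list)
node.roles: [ ]
```

Coordinating-only nodes accept client requests and fan out search/aggregation work without holding data, which keeps heavy result merging off the data nodes.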
High-performance Elasticsearch starts with the right infrastructure. By selecting proper hardware, tuning the operating system, and applying Elasticsearch-specific optimizations, you can build a cluster that is fast, stable, and scalable.
At BeStarHost, our dedicated servers are purpose-built to handle demanding Elasticsearch workloads with ease.
