
Java Application Monitoring Guide: JVM Metrics, Spring Boot & APM (2026)

Java's JVM gives you more observability hooks than almost any other runtime — but knowing which metrics matter and how to wire them up takes experience. This guide covers JVM heap monitoring, GC analysis, Spring Boot Actuator setup, and APM tool comparison.

Updated April 2026 · 14 min read · Java / Spring Boot
Staff Pick

📡 Monitor your APIs — know when they go down before your users do

Better Stack checks uptime every 30 seconds with instant Slack, email & SMS alerts. Free tier available.

Start Free →

Affiliate link — we may earn a commission at no extra cost to you

TL;DR — Java Monitoring Checklist

  • ✅ Enable Spring Boot Actuator + Micrometer Prometheus registry
  • ✅ Track JVM heap (used/max), GC pause time, thread count, Metaspace
  • ✅ Enable GC logging with -Xlog:gc* for post-incident analysis
  • ✅ Alert on old-gen heap > 80%, GC overhead > 10%, thread deadlock count > 0
  • ✅ Configure -XX:+HeapDumpOnOutOfMemoryError for production
  • ✅ Add external uptime check — JVM OOM crashes silently from the outside

JVM Memory Architecture

Understanding Java memory regions is prerequisite to reading JVM metrics correctly. The JVM heap is divided into generations, each with different GC behavior:

| Region | Contents | GC Behavior | Alert Signal |
|---|---|---|---|
| Young Gen (Eden + Survivor) | Newly allocated objects | Minor GC (frequent, fast) | High minor GC frequency = allocation rate too high |
| Old Gen (Tenured) | Long-lived objects that survived GCs | Major GC (infrequent, slow) | Steady growth = memory leak |
| Metaspace | Class metadata (Java 8+) | Collected when a classloader unloads | Growth = classloader or dynamic class generation leak |
| Native / Off-heap | DirectByteBuffers, JNI, NIO | Not tracked by heap GC; freed indirectly | RSS far above -Xmx + Metaspace + native estimate |
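Off-heap usage never shows up in heap metrics, so when RSS keeps climbing while heap looks flat, the JDK's Native Memory Tracking gives the breakdown. A minimal sketch (the jar name and `<pid>` are placeholders):

```shell
# Enable Native Memory Tracking at startup (~5-10% overhead; "detail" adds call sites)
java -XX:NativeMemoryTracking=summary -jar app.jar

# Snapshot the native allocation breakdown of a running JVM
# (reports reserved/committed bytes per category: Class, Thread, GC, Code, Internal, ...)
jcmd <pid> VM.native_memory summary
```

Comparing two snapshots over time (`jcmd <pid> VM.native_memory baseline`, then `summary.diff`) shows which category is growing.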

Key rule: Old generation occupancy is your early warning system for memory leaks. If old-gen usage after each Full GC is higher than the previous cycle, you have a leak — objects are surviving GC when they shouldn't.
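The "occupancy after GC" check can also be scripted in-process against the standard `java.lang.management` memory pools. A minimal sketch — pool names vary by collector ("G1 Old Gen", "PS Old Gen", "Tenured Gen"; ZGC exposes a single "ZHeap" pool this code won't match), and `OldGenWatch` is a name chosen here:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class OldGenWatch {

    /**
     * Old-gen occupancy as a fraction of its max, measured at the point of the
     * most recent collection (MemoryPoolMXBean.getCollectionUsage() snapshots
     * usage right after each GC of this pool). Returns 0.0 before the first GC
     * or if no matching old-gen pool is found.
     */
    static double oldGenAfterLastGc() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (pool.getType() == MemoryType.HEAP
                    && (name.contains("Old") || name.contains("Tenured"))) {
                MemoryUsage afterGc = pool.getCollectionUsage();
                if (afterGc != null && afterGc.getMax() > 0) {
                    return (double) afterGc.getUsed() / afterGc.getMax();
                }
            }
        }
        return 0.0;
    }
}
```

Sampling this value periodically and comparing consecutive readings gives exactly the leak signal described above: a baseline that only ever rises.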

Spring Boot Actuator + Micrometer Setup

Spring Boot Actuator is the fastest path to JVM + application metrics. With the Prometheus registry, you get a /actuator/prometheus endpoint ready to scrape in minutes.

<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
</dependency>

# application.properties
management.server.port=8081
management.endpoints.web.exposure.include=health,info,prometheus,metrics
# Spring Boot 3.x property (Boot 2.x used management.metrics.export.prometheus.enabled)
management.prometheus.metrics.export.enabled=true
management.endpoint.health.show-details=always

# Prometheus scrape config
scrape_configs:
  - job_name: 'spring-boot-app'
    metrics_path: '/actuator/prometheus'
    static_configs:
      - targets: ['localhost:8081']
    scrape_interval: 15s

This automatically exposes: jvm_memory_used_bytes, jvm_gc_pause_seconds, jvm_threads_states_threads, http_server_requests_seconds, and 50+ more metrics.

Custom Metrics with Micrometer

@Service
public class OrderService {
    private final MeterRegistry meterRegistry;
    private final Counter ordersProcessed;
    private final Timer orderProcessingTime;

    public OrderService(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        this.ordersProcessed = Counter.builder("orders.processed")
            .description("Total orders processed")
            .tag("environment", "production")
            .register(meterRegistry);

        this.orderProcessingTime = Timer.builder("order.processing.duration")
            .description("Time to process an order")
            .publishPercentiles(0.5, 0.95, 0.99)
            .register(meterRegistry);
    }

    // @Timed annotation alternative — requires a TimedAspect bean registered
    // with the context for the annotation to be picked up
    @Timed(value = "order.processing.duration", percentiles = {0.95, 0.99})
    public Order processOrder(OrderRequest request) {
        ordersProcessed.increment();
        return orderRepository.save(/* ... */); // orderRepository injected elsewhere, omitted for brevity
    }
}
📡
Recommended

Monitor your Java API endpoints with Better Stack

Better Stack runs synthetic checks on your Spring Boot APIs from 30+ global locations — with on-call alerting when the JVM crashes or goes slow.

Try Better Stack Free →

Garbage Collection Analysis

Enable GC logging for every Java service in production. GC logs are invaluable for post-incident analysis and capacity planning.

# Java 11+ JVM flags (add to JAVA_OPTS or Dockerfile CMD)
-Xms2g -Xmx4g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
-Xlog:gc*:file=/var/log/app/gc.log:time,uptime:filecount=5,filesize=20m
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/var/log/app/
-XX:+ExitOnOutOfMemoryError

# G1GC is default since Java 9 — good all-around choice
# For high-throughput services: -XX:+UseParallelGC
# For low-latency (< 10ms pauses): -XX:+UseZGC (Java 15+) or -XX:+UseShenandoahGC

# What to look for in GC logs (Java 11+ unified -Xlog format):
# [12.345s][info][gc] GC(42) Pause Young (Normal) (G1 Evacuation Pause) 2048M->512M(4096M) 123.456ms
#                                                                                         ^^^^^^^^^
#                                                           Pause time — alert if > 500ms
#
# Full GC events: usually a problem
# [98.765s][info][gc] GC(57) Pause Full (G1 Compaction Pause) 3904M->3600M(4096M) 5234.567ms
#                            Heap barely shrank after a Full GC — memory leak likely
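Alongside log analysis, the same "fraction of time in GC" figure is available in-process via JMX. A minimal sketch using the standard `GarbageCollectorMXBean` counters (`GcOverhead` is a name chosen here; values are cumulative since JVM start, so a real monitor would diff two samples):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcOverhead {

    /** Fraction of JVM uptime spent in garbage collection, cumulative since start. */
    static double gcOverhead() {
        long gcMillis = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();   // cumulative ms; -1 if undefined for this collector
            if (t > 0) gcMillis += t;
        }
        long uptimeMillis = ManagementFactory.getRuntimeMXBean().getUptime();
        return uptimeMillis > 0 ? (double) gcMillis / uptimeMillis : 0.0;
    }
}
```

A value persistently above 0.10 matches the "GC overhead > 10%" alert threshold from the checklist.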

Alert Pro

14-day free trial

Stop checking — get alerted instantly

Next time your Java application goes down, you'll know in under 60 seconds — not when your users start complaining.

  • Email alerts for your Java applications + 9 more APIs
  • $0 due today for trial
  • Cancel anytime — $9/mo after trial

Thread Pool Monitoring

Thread contention is the second most common Java performance problem after memory leaks. Monitor thread pool saturation and deadlocks proactively.

# Spring Boot thread pool metrics (via Actuator)
# Tomcat thread pool (for web requests)
# Note: requires server.tomcat.mbeanregistry.enabled=true in Spring Boot 2.2+
tomcat.threads.busy          # Active request-handling threads
tomcat.threads.current       # Total allocated threads
tomcat.threads.config.max    # Maximum configured (default: 200)

# Alert when:
# tomcat.threads.busy / tomcat.threads.config.max > 0.8

# Custom executor monitoring
@Configuration
public class ThreadPoolConfig {
    @Bean
    public ThreadPoolTaskExecutor asyncExecutor(MeterRegistry registry) {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(10);
        executor.setMaxPoolSize(50);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("async-");
        executor.initialize();

        // Register Micrometer metrics for the underlying ThreadPoolExecutor.
        // (ExecutorServiceMetrics.monitor() returns a plain ExecutorService,
        // which can't be returned where a ThreadPoolTaskExecutor is expected.)
        new ExecutorServiceMetrics(executor.getThreadPoolExecutor(),
                "async_executor", Collections.emptyList()).bindTo(registry);
        return executor;
    }
}

# Detect thread deadlocks via JMX
# jstack <pid> | grep -A3 "BLOCKED"
# Or programmatically:
ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
long[] deadlockedThreads = tmx.findDeadlockedThreads();
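`findDeadlockedThreads()` returns `null` when no deadlock exists, which makes a periodic watchdog trivial to write. A self-contained demonstration — the two-lock deadlock here is provoked deliberately, and class/thread names are illustrative:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {

    /** Provokes a classic two-lock deadlock on daemon threads, then asks JMX to find it. */
    static long[] detectAfterProvoking() {
        final Object lockA = new Object();
        final Object lockB = new Object();
        // Latch guarantees each thread holds its first lock before either tries the second
        final CountDownLatch bothHoldFirstLock = new CountDownLatch(2);

        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                bothHoldFirstLock.countDown();
                awaitQuietly(bothHoldFirstLock);
                synchronized (lockB) { /* never reached */ }
            }
        }, "deadlock-demo-1");
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                bothHoldFirstLock.countDown();
                awaitQuietly(bothHoldFirstLock);
                synchronized (lockA) { /* never reached */ }
            }
        }, "deadlock-demo-2");
        t1.setDaemon(true);   // daemon threads so the deadlock doesn't block JVM exit
        t2.setDaemon(true);
        t1.start();
        t2.start();

        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        for (int i = 0; i < 50; i++) {        // poll until both threads are BLOCKED
            long[] found = tmx.findDeadlockedThreads();
            if (found != null) return found;
            sleepQuietly(100);
        }
        return null;
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException ignored) { }
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```

In production you would run the `findDeadlockedThreads()` poll on a scheduled task and export the array length as a gauge, matching the "deadlock count > 0" alert from the checklist.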

Alert Rules for Java Applications

groups:
  - name: java-jvm
    rules:
      # Heap near OOM
      - alert: JVMHeapCritical
        expr: |
          jvm_memory_used_bytes{area="heap"} /
          jvm_memory_max_bytes{area="heap"} > 0.85
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "JVM heap > 85% — OOM risk"

      # Old generation growth (leak signal)
      - alert: JVMOldGenGrowth
        expr: |
          (jvm_memory_used_bytes{id="G1 Old Gen"}
          - jvm_memory_used_bytes{id="G1 Old Gen"} offset 1h)
          / jvm_memory_used_bytes{id="G1 Old Gen"} offset 1h > 0.2
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "JVM old gen grew >20% in 1h — potential memory leak"

      # GC overhead too high
      - alert: JVMGCOverhead
        # rate() of pause seconds = seconds paused per second of wall time,
        # i.e. the fraction of time spent in GC
        expr: |
          sum by (instance) (rate(jvm_gc_pause_seconds_sum[5m])) > 0.1
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "JVM spending >10% of time in GC"

      # Thread pool saturation
      - alert: TomcatThreadPoolSaturated
        expr: |
          tomcat_threads_busy_threads /
          tomcat_threads_config_max_threads > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Tomcat thread pool >80% used — add workers or scale"

      # High p95 latency
      - alert: SpringBootHighLatency
        expr: |
          histogram_quantile(0.95, sum by (le) (
            rate(http_server_requests_seconds_bucket[5m])
          )) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Spring Boot p95 latency > 2 seconds"

Java APM Tool Comparison

| Tool | Java Depth | Standout Feature | Pricing |
|---|---|---|---|
| Dynatrace | Best-in-class | OneAgent auto-discovery, AI root cause, JVM deep dive | $69/host/month |
| New Relic Java Agent | Excellent | GC profiling, thread profiling, free 100GB/mo | Free + $0.35/GB |
| Datadog APM | Excellent | Continuous profiler, JVM dashboard, Spring Boot auto-detect | $31/host/month |
| Prometheus + Grafana | Good | Free, Micrometer-native, flexible dashboards | Open source (self-hosted) |
| Better Stack | Good | Uptime + log monitoring; simple Java log ingestion | Free + $20/mo |
| JDK Mission Control | Excellent | JFR continuous profiling, ~1% overhead, Java native | Free (OpenJDK / Oracle JDK) |

FAQ

What are the most important JVM metrics to monitor?

The eight critical metrics: heap used vs max, GC pause time and frequency, old generation occupancy, thread count and states, class loading count, CPU usage, Metaspace usage, and JIT compilation time. Old generation growth after GC cycles is the most important signal for memory leaks — objects surviving GC when they shouldn't indicates a retention problem.

How do I set up Spring Boot Actuator for monitoring?

Add spring-boot-starter-actuator and micrometer-registry-prometheus to your dependencies. Configure management.endpoints.web.exposure.include=health,info,prometheus in application.properties. Set a separate management port (management.server.port=8081) for security. Prometheus then scrapes /actuator/prometheus every 15 seconds for 50+ JVM and application metrics.

How do I analyze Java garbage collection logs?

Enable GC logging with -Xlog:gc*:file=/var/log/app/gc.log:time,uptime:filecount=5,filesize=20m. Watch for: Full GC events > 1 second (heap too small or leak), increasing GC frequency (old gen filling faster), GC overhead limit exceeded (>98% time in GC). GCeasy.io analyzes GC logs visually without installing anything.

What is Micrometer and how does it differ from Spring Boot Actuator?

Actuator provides the HTTP management endpoints and lifecycle infrastructure. Micrometer is the metrics facade — it defines how you instrument code (counters, timers, gauges) and translates to your chosen backend (Prometheus, Datadog, New Relic, etc.). You use Micrometer APIs in your code; Actuator wires them to the /actuator/prometheus endpoint.

How do I detect memory leaks in a Java application?

Watch old generation heap occupancy after each Full GC. Growing baseline = leak. To investigate: enable -XX:+HeapDumpOnOutOfMemoryError, take manual dumps with jmap -dump:live,format=b,file=heap.hprof <pid>, open in Eclipse MAT or VisualVM. Run the Leak Suspects report. Common sources: static collections, ThreadLocal values not removed, event listener registrations, unclosed connection pool entries.
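Heap dumps can also be triggered programmatically — useful for capturing a dump on a custom signal before memory pressure kills the process. A minimal sketch using the HotSpot diagnostic MXBean (HotSpot-specific; `HeapDumper` is a name chosen here, and a real handler would keep the file instead of deleting it):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumper {

    /**
     * Writes an .hprof dump of the current JVM to a temp file and returns its
     * size in bytes, or -1 on failure. live=true triggers a full GC first so
     * only reachable objects land in the dump.
     */
    static long dumpToTempFile(boolean live) {
        try {
            Path file = Files.createTempFile("heapdump", ".hprof");
            Files.delete(file);   // dumpHeap refuses to overwrite an existing file
            HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            diag.dumpHeap(file.toString(), live);
            long size = Files.size(file);
            Files.delete(file);   // demo cleanup; keep the file when investigating a real leak
            return size;
        } catch (Exception e) {   // demo simplification: report any failure as -1
            return -1;
        }
    }
}
```

The resulting .hprof opens in Eclipse MAT or VisualVM exactly like a `jmap` dump.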

