
Ruby on Rails Monitoring Guide: Metrics, APM & Production Observability (2026)

Rails' opinionated stack, ActiveRecord ORM, and Sidekiq background jobs create specific monitoring requirements. This guide covers how to add Prometheus metrics with yabeda, instrument Rails with OpenTelemetry, monitor Sidekiq queue health, detect N+1 queries, and profile memory growth in production.

Updated April 2026 · 12 min read · Ruby / Rails / Sidekiq


TL;DR — Rails Monitoring Checklist

  • ✅ Add yabeda-rails + yabeda-prometheus for automatic request metrics
  • ✅ Monitor db_query_count per request — high count = N+1 queries
  • ✅ Track Sidekiq queue depth and dead job count
  • ✅ Watch Puma thread saturation — all busy = add workers
  • ✅ Monitor RSS memory per worker — growing RSS = memory leak
  • ✅ Use Bullet gem in staging to catch N+1 before production

Rails-Specific Monitoring Considerations

N+1 queries — the #1 Rails performance killer

ActiveRecord makes it trivially easy to trigger N+1 query patterns: loading a list of posts then calling post.author on each triggers one SQL query per post. With 50 posts that's 51 queries for what should be 2 (one for posts, one for authors with includes(:author)). The APM symptom: high db query count per request with fast individual query times. Use Bullet in development, and track average SQL queries per request in production.
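A minimal illustration (assuming Post belongs_to :author):

# N+1: one query for the posts, then one per post for its author
Post.limit(50).each { |post| puts post.author.name }   # 51 queries

# Eager-loaded: one query for posts, one for all their authors
Post.includes(:author).limit(50).each { |post| puts post.author.name }   # 2 queries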

Puma threading vs Unicorn forking

Puma (multi-threaded) and Unicorn (multi-process) have different monitoring profiles. Puma: monitor thread saturation via Puma.stats — the pool_capacity/max_threads ratio. Unicorn: monitor worker count and request queue depth (the raindrops gem is the usual stats source). The Ruby GIL limits CPU-bound parallelism in Puma — use more workers, not more threads, for CPU-heavy workloads.
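A rough saturation check, assuming it runs inside the Puma server process (Puma.stats returns a JSON string whose shape differs between single and clustered mode):

require "json"

# Fraction of Puma threads currently busy, 0.0–1.0
def puma_saturation
  stats = JSON.parse(Puma.stats)
  # Clustered mode reports per-worker stats under "worker_status"
  workers = stats["worker_status"]&.map { |w| w["last_status"] } || [stats]
  max  = workers.sum { |w| w["max_threads"] }
  busy = max - workers.sum { |w| w["pool_capacity"] }
  busy.to_f / max
end

# Alert when this stays above ~0.8 — requests are about to queue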

Ruby GC — object retention and heap growth

Ruby's GC is generational (young and old generations) with a stop-the-world major GC that runs when the heap fills up. Monitor GC.stat[:major_gc_count] — frequent major GCs indicate retained objects filling the old generation. Reduce heap fragmentation with GC.compact (Ruby 2.7+; automatic compaction via GC.auto_compact in 3.0+).
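The relevant GC.stat keys to sample (all standard in current Rubies):

stat = GC.stat
stat[:major_gc_count]   # full collections since boot — alert on the rate, not the total
stat[:minor_gc_count]   # young-generation collections
stat[:heap_live_slots]  # live objects; steady growth suggests retention
stat[:old_objects]      # objects promoted to the old generation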

Prometheus Metrics with Yabeda

Yabeda is a Ruby metrics framework with ready-made Rails, Sidekiq, and Prometheus adapters. It provides automatic metrics for Rails requests, Sidekiq jobs, ActionCable, and more:

# Gemfile
gem "yabeda-rails"
gem "yabeda-prometheus"
gem "yabeda-sidekiq"  # if using Sidekiq

# config/application.rb — expose /metrics for Prometheus scraping
module MyApp
  class Application < Rails::Application
    config.middleware.use Yabeda::Prometheus::Exporter, path: "/metrics"
  end
end

# This auto-creates:
# rails_requests_total{controller, action, status} — Counter
# rails_request_duration_seconds{controller, action, status} — Histogram
# sidekiq_jobs_executed_total{queue, worker, status} — Counter
# sidekiq_queue_size{queue} — Gauge
# sidekiq_job_runtime{queue, worker} — Histogram

# Custom business metrics
Yabeda.configure do
  group :orders do
    counter :total,
      comment: "Total orders placed",
      tags: [:payment_method, :plan_tier]

    histogram :amount_dollars,
      comment: "Order value in dollars",
      buckets: [10, 50, 100, 500, 1000],
      tags: [:payment_method]
  end
end

# In your controller/service
class OrdersController < ApplicationController
  def create
    order = Order.create!(order_params)
    # Yabeda takes tags as the first positional hash argument
    Yabeda.orders.total.increment(
      { payment_method: order.payment_method, plan_tier: current_user.plan }
    )
    # Pass the observed value as the second argument — the block form of
    # #measure records the block's execution time, not its return value
    Yabeda.orders.amount_dollars.measure(
      { payment_method: order.payment_method },
      order.amount_cents / 100.0
    )
    redirect_to order
  end
end

# Prometheus alert rules
# High error rate:
# sum(rate(rails_requests_total{status=~"5.."}[5m]))
#   / sum(rate(rails_requests_total[5m])) > 0.01

# Slow requests:
# histogram_quantile(0.95, rate(rails_request_duration_seconds_bucket[5m])) > 1

# Sidekiq backlog:
# sidekiq_queue_size{queue="critical"} > 100

OpenTelemetry for Rails

# Gemfile
gem "opentelemetry-sdk"
gem "opentelemetry-instrumentation-rails"
gem "opentelemetry-instrumentation-active_record"
gem "opentelemetry-instrumentation-sidekiq"
gem "opentelemetry-instrumentation-net_http"
gem "opentelemetry-exporter-otlp"

# config/initializers/opentelemetry.rb
require "opentelemetry/sdk"
require "opentelemetry/exporter/otlp"
# The instrumentation gems above are auto-required by Bundler in Rails.
# (Alternatively, depend on opentelemetry-instrumentation-all and
# require "opentelemetry/instrumentation/all" here.)

OpenTelemetry::SDK.configure do |c|
  c.service_name = "my-rails-app"
  c.service_version = ENV.fetch("APP_VERSION", "unknown")

  c.use_all  # Activates every installed instrumentation gem (here: Rails, AR, Sidekiq, Net::HTTP)

  c.add_span_processor(
    OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
      OpenTelemetry::Exporter::OTLP::Exporter.new(
        # OTLP over HTTP; the traces path is /v1/traces
        endpoint: ENV.fetch("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4318/v1/traces")
      )
    )
  )
end

# Manual spans for business logic
class PaymentService
  def charge(order)
    tracer = OpenTelemetry.tracer_provider.tracer("PaymentService")
    tracer.in_span("PaymentService.charge") do |span|
      span.set_attribute("order.id", order.id)
      span.set_attribute("order.amount_cents", order.amount_cents)
      span.set_attribute("payment.provider", "stripe")

      begin
        result = Stripe::PaymentIntent.create(amount: order.amount_cents)
        span.set_attribute("stripe.payment_intent_id", result.id)
        result
      rescue Stripe::CardError => e
        span.record_exception(e)
        span.status = OpenTelemetry::Trace::Status.error("Card declined")
        raise
      end
    end
  end
end


Sidekiq Monitoring

Sidekiq is the de facto Rails background job processor. Monitor queue depth, job latency, and failure rates:

# config/routes.rb — Mount Sidekiq web UI (protect with Devise/HTTP auth)
require "sidekiq/web"

authenticate :user, ->(u) { u.admin? } do
  mount Sidekiq::Web, at: "/sidekiq"
end

# Custom health check for Sidekiq queue depth
class SidekiqHealthCheck
  def self.healthy?
    stats = Sidekiq::Stats.new
    queues = Sidekiq::Queue.all

    # Alert thresholds
    return false if stats.dead_size > 100           # Dead queue filling up
    return false if stats.failed > 1000             # High failure count
    return false if queues.any? { |q| q.name == "critical" && q.size > 50 }

    true
  end

  def self.stats_summary
    stats = Sidekiq::Stats.new
    {
      enqueued: stats.enqueued,
      processed: stats.processed,
      failed: stats.failed,
      dead: stats.dead_size,
      queues: Sidekiq::Queue.all.map { |q| { name: q.name, size: q.size, latency: q.latency } }
    }
  end
end

# Key Sidekiq Prometheus metrics via yabeda-sidekiq:
# sidekiq_queue_size{queue}           — jobs waiting
# sidekiq_queue_latency{queue}        — seconds oldest job has waited
# sidekiq_jobs_executed_total{queue, worker, status}
# sidekiq_job_runtime{queue, worker}  — histogram (seconds)

# Alert rules:
# sidekiq_queue_latency{queue="critical"} > 60  # job waiting >60s
# sidekiq_queue_size{queue="default"} > 5000
# rate(sidekiq_jobs_executed_total{status="failed"}[5m]) > 0
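To expose the health check to a load balancer or external uptime probe, a minimal sketch (route and controller names here are illustrative):

# config/routes.rb
get "/health/sidekiq", to: "health#sidekiq"

# app/controllers/health_controller.rb
class HealthController < ApplicationController
  skip_before_action :authenticate_user!, raise: false

  def sidekiq
    status = SidekiqHealthCheck.healthy? ? :ok : :service_unavailable
    render json: SidekiqHealthCheck.stats_summary, status: status
  end
end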

Ruby on Rails APM Tools Comparison

Tool                 | Rails Support | Standout Feature                                        | Pricing
Datadog APM          | Excellent     | N+1 detection, AR query trace, Sidekiq support          | $31/host/month
New Relic Ruby Agent | Excellent     | 100GB/month free, Sidekiq + Resque support              | Free + $0.35/GB
Scout APM            | Excellent     | Rails-first, N+1 alerts built-in, simpler than Datadog  | Free + $19/mo
AppSignal            | Excellent     | Ruby-native, Sidekiq + Delayed::Job, host metrics       | $18/mo starter
Sentry for Ruby      | Good          | Exception tracking with full request context            | Free 5K errors/mo + $26/mo
Better Stack         | Good          | Uptime monitoring + log shipping from Rails/Lograge     | Free + $20/mo

FAQ

What metrics should I monitor in a Ruby on Rails application?

Key Rails metrics: request throughput (rpm), response time (p50/p95/p99), HTTP error rate, database query count and duration per request (N+1 signal), Sidekiq queue depth and failure rate, Puma thread saturation, Ruby GC stats (major_gc_count, heap_live_slots), and RSS memory per worker. N+1 queries and memory growth are the two most common Rails production issues.

How do I add Prometheus metrics to a Rails application?

Use yabeda-rails + yabeda-prometheus. Add both gems to your Gemfile and add config.middleware.use Yabeda::Prometheus::Exporter, path: "/metrics" to config/application.rb. This auto-creates rails_requests_total (counter) and rails_request_duration_seconds (histogram), tagged by controller/action/status. Add yabeda-sidekiq for Sidekiq queue metrics. Custom business metrics are declared with Yabeda.configure { group(:orders) { counter :total } }.

How do I detect N+1 queries in Rails production?

Use gem "bullet" in development/staging — it logs N+1 patterns and missing .includes() calls. In production: track db_query_count per request in APM (Datadog/Scout/AppSignal). More than 10 queries per typical request usually indicates N+1. The fix: add .includes(:association) or .preload(:association) to your ActiveRecord queries. Use Rails 7's strict_loading! to raise errors on lazy loads in specific scopes.

How do I monitor Sidekiq in production?

Mount Sidekiq::Web at /sidekiq for the built-in dashboard. For metrics: yabeda-sidekiq emits sidekiq_queue_size, sidekiq_queue_latency, and sidekiq_jobs_executed_total per queue and worker class. Alert on: queue latency > 60s for critical queues, dead queue size > 100, and failed job rate > 5%. Monitor Redis connection pool — Sidekiq is Redis-dependent; Redis unavailability stops all job processing.
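A simple Redis reachability probe through Sidekiq's own connection pool — a sketch; on Sidekiq 7 the yielded client is redis-client based, where conn.call("PING") is the portable form:

def sidekiq_redis_up?
  Sidekiq.redis { |conn| conn.ping.to_s.casecmp?("PONG") }
rescue StandardError => e
  Rails.logger.error("Sidekiq Redis check failed: #{e.class}: #{e.message}")
  false
end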

How do I profile memory usage in a Rails application?

Track RSS memory per Puma worker — growing RSS over 24h is the primary leak signal. Tools: rack-mini-profiler with memory_profiler plugin shows per-request allocations in staging. derailed_benchmarks (derailed exec perf:mem) shows memory per route. Common Rails memory leak sources: module-level caches accumulating entries, ActionMailer previews holding references, gems that patch core classes, and poorly scoped ActiveRecord objects held in instance variables.
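A Linux-only sketch of RSS sampling from inside a worker (the get_process_mem gem is the portable alternative):

# VmRSS in /proc/<pid>/status is reported in kB
def rss_megabytes
  File.read("/proc/#{Process.pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i / 1024.0
end

# Log from each Puma worker and alert on steady growth across hours
Thread.new do
  loop do
    Rails.logger.info("worker_rss_mb=#{rss_megabytes.round(1)}")
    sleep 300
  end
end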

