
Error Tracking Guide: Tools, Best Practices & Setup (2026)

How to capture, triage, and fix application errors fast — without drowning in alert noise.

By API Status Check · Updated April 2026 · 12 min read

Every production application throws errors. The question is whether you find out before your users do — or after they've already left. Error tracking closes that gap by automatically capturing exceptions, grouping them by root cause, and alerting your team in real time.

This guide covers everything you need to set up effective error tracking: how it works, which tools to use, how to configure alerting without noise, and how to connect error data to your SLOs.

What Is Error Tracking?

Error tracking (also called exception monitoring or crash reporting) is the practice of automatically capturing runtime errors in your application and centralizing them for triage. When your code throws an unhandled exception, the error tracking SDK intercepts it, enriches it with context (stack trace, request headers, user ID, environment), and sends it to a central platform.
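
To make that pipeline concrete, here is a deliberately simplified sketch of what an SDK does when an exception escapes your code. The ingest URL and the sendToPlatform helper are hypothetical stand-ins; real SDKs add batching, retries, deduplication, and breadcrumbs.

// simplified-sdk.ts — illustrative only, not how any specific SDK is built
interface ErrorEvent {
  message: string;
  stack?: string;
  environment: string;
  release?: string;
  timestamp: string;
}

async function sendToPlatform(event: ErrorEvent): Promise<void> {
  // Real SDKs queue and rate-limit; this just posts one event
  await fetch('https://errors.example.com/api/ingest', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(event),
  });
}

// Hook the process-level handler, enrich the error, ship it
process.on('uncaughtException', (err) => {
  void sendToPlatform({
    message: err.message,
    stack: err.stack,
    environment: process.env.NODE_ENV ?? 'production',
    release: process.env.GIT_SHA,
    timestamp: new Date().toISOString(),
  });
});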

Unlike raw logging, error trackers:

- Group duplicate occurrences of the same exception into a single issue instead of an endless stream of log lines.
- Alert in real time on new errors, regressions, and abnormal spikes.
- Attach context automatically (stack trace, release, environment, user) so triage doesn't start with grep.
- Track issue state across deploys: new, resolved, regressed.

Error Tracking vs. Logging vs. APM

| Tool Type | Best For | Weakness | Example Tools |
|---|---|---|---|
| Error Tracking | Exception capture, grouping, alerting | Doesn't cover performance issues | Sentry, Rollbar, Bugsnag |
| Logging | Audit trails, event history, debugging | Requires manual searching; noisy | Datadog Logs, Loggly, Papertrail |
| APM | Latency, throughput, performance traces | Higher cost; overkill for small teams | Datadog, New Relic, Dynatrace |
| Uptime Monitoring | Availability, endpoint health | Catches outages, not code errors | Better Stack, UptimeRobot, ASC |

The most effective observability stacks use all four layers together. Error tracking and uptime monitoring are the minimum viable combination for any production application.

Error Tracking Tool Comparison (2026)

Sentry

Sentry is the de facto standard for error tracking. With SDKs for 100+ platforms (JavaScript, Python, Go, Rust, Ruby, Java, iOS, Android, and more), it's the most universally applicable option. Sentry's performance monitoring feature bridges the gap between error tracking and APM by capturing transaction traces alongside errors.

Rollbar

Rollbar differentiates with its deploy tracking and flexible error grouping rules. You can write custom fingerprinting logic to group errors exactly as your team expects. Rollbar also integrates deeply with GitHub, GitLab, and Jira for linking errors to commits and issues.
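
As a rough sketch of what custom grouping can look like with rollbar.js, you might set a fingerprint from a transform hook. Treat the option names (transform, fingerprint) and the payload shape as assumptions to verify against Rollbar's current documentation and your SDK version.

// rollbar.ts — illustrative fingerprinting sketch; verify option names for your SDK version
import Rollbar from 'rollbar';

const rollbar = new Rollbar({
  accessToken: process.env.ROLLBAR_ACCESS_TOKEN,
  environment: process.env.NODE_ENV,
  transform: (payload: any) => {
    // Group every upstream payment timeout into a single item,
    // regardless of which endpoint surfaced it
    const message = payload?.body?.trace?.exception?.message ?? '';
    if (/payment/i.test(message) && /timeout/i.test(message)) {
      payload.fingerprint = 'payments-upstream-timeout';
    }
  },
});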

Bugsnag

Bugsnag is the top choice for mobile-first teams. Its iOS and Android SDKs are best-in-class, with support for native crashes, ANRs (Application Not Responding), and battery/memory impact tracking. The stability score metric — percentage of sessions without errors — is a useful product health KPI.

Datadog Error Tracking

If you're already in the Datadog ecosystem, their error tracking feature integrates directly with APM traces, logs, and dashboards. You get correlation across all signals — an error can link directly to the slow DB query that caused a timeout, for example. It's expensive standalone but free if you're paying for APM.

Glitchtip (Open Source)

Glitchtip is a Sentry-compatible open-source alternative. It accepts Sentry SDK events (so migration is zero-code), provides the same error grouping and alerting, and can be self-hosted for free. Trade-off: fewer integrations, slower feature development, and you own the ops burden.
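
Because Glitchtip accepts the Sentry event protocol, migration usually amounts to a DSN swap: the same SDK setup shown later in this guide works unchanged, pointed at your own instance. The hostname below is a placeholder.

// Point the Sentry SDK at a self-hosted Glitchtip instance instead of sentry.io
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: 'https://<public-key>@glitchtip.example.com/1',  // your Glitchtip project DSN
  environment: process.env.NODE_ENV,
});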

| Tool | Free Tier | Starting Price | Mobile | Self-Host | Best For |
|---|---|---|---|---|---|
| Sentry | 5K events/mo | $26/mo | ✅ | ✅ | Most stacks |
| Rollbar | 5K events/mo | $12/mo | | | Deploy tracking |
| Bugsnag | 14-day trial | $47/mo | ✅ (best) | | Mobile apps |
| Datadog | With APM | $31/host/mo | | | Datadog users |
| Glitchtip | Self-hosted | $9/mo | Via Sentry SDK | ✅ | Budget / OSS |

Setting Up Error Tracking: A Practical Example

JavaScript / Node.js (Sentry)

npm install @sentry/node @sentry/profiling-node

// sentry.ts — initialize before any other imports
import * as Sentry from '@sentry/node';
import { nodeProfilingIntegration } from '@sentry/profiling-node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  environment: process.env.NODE_ENV,
  release: process.env.GIT_SHA,           // links errors to commits
  integrations: [nodeProfilingIntegration()],
  tracesSampleRate: process.env.NODE_ENV === 'production' ? 0.1 : 1.0,
  profilesSampleRate: 0.1,
  beforeSend(event) {
    // Drop noise (e.g. healthcheck requests) before it counts against your quota
    if (event.request?.url?.includes('/healthcheck')) return null;
    return event;
  },
});
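
If the app is an Express service, you can also let the SDK register its framework error handler so unhandled route errors are reported automatically. A minimal sketch, assuming @sentry/node v8+ (where setupExpressErrorHandler is available):

// app.ts — import the Sentry init file before anything else
import './sentry';
import * as Sentry from '@sentry/node';
import express from 'express';

const app = express();

app.get('/orders/:id', () => {
  throw new Error('order lookup failed');   // captured and reported by Sentry
});

// Register Sentry's error handler after all routes, before any custom error middleware
Sentry.setupExpressErrorHandler(app);

app.listen(3000);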

Python (Sentry)

pip install sentry-sdk

import os

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.redis import RedisIntegration

sentry_sdk.init(
    dsn=os.environ["SENTRY_DSN"],
    integrations=[
        DjangoIntegration(transaction_style="url"),
        RedisIntegration(),
    ],
    traces_sample_rate=0.1,
    send_default_pii=False,  # GDPR compliance
    environment=os.environ.get("ENVIRONMENT", "production"),
    release=os.environ.get("GIT_SHA"),
)

Go

go get github.com/getsentry/sentry-go

import "github.com/getsentry/sentry-go"

func main() {
    if err := sentry.Init(sentry.ClientOptions{
        Dsn:              os.Getenv("SENTRY_DSN"),
        Environment:      os.Getenv("ENVIRONMENT"),
        Release:          os.Getenv("GIT_SHA"),
        TracesSampleRate: 0.1,
        BeforeSend: func(event *sentry.Event, hint *sentry.EventHint) *sentry.Event {
            // Scrub PII from request bodies
            if event.Request != nil {
                event.Request.Data = "[redacted]"
            }
            return event
        },
    }); err != nil {
        log.Fatalf("sentry.Init: %v", err)
    }
    defer sentry.Flush(2 * time.Second)
}

Error Alert Strategy

Raw error volume is a terrible alert signal. A spike in errors after a deploy is expected during a canary rollout. An error that's been happening for six months at 0.001% rate doesn't need a 3 AM page. Alert on meaningful change, not raw counts.

Alert Types That Actually Work

| Alert Type | When to Use | Example Threshold |
|---|---|---|
| First occurrence | New error never seen before | Alert immediately (always) |
| Regression | Previously resolved error returns | Alert after 1 occurrence |
| Frequency spike | Known error's rate increases abnormally | Alert at 10x baseline rate in 5 min |
| Error rate % | Errors as % of total requests | Alert at > 1%, page at > 5% |
| Error budget burn | SLO-based alerting | Alert when 2% of budget is burned in 1 hr |
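
To make the frequency-spike row concrete, here is a sketch of the comparison such a rule performs. The counts would come from your error tracker's stats API; all names and thresholds here are hypothetical.

// Hypothetical spike check: compare the last 5-minute window against a baseline
interface SpikeCheck {
  countLastFiveMin: number;    // occurrences in the most recent window
  baselinePerFiveMin: number;  // long-run average for the same window size
}

function shouldAlert({ countLastFiveMin, baselinePerFiveMin }: SpikeCheck): boolean {
  const MIN_COUNT = 20;      // ignore spikes with tiny absolute volume
  const SPIKE_FACTOR = 10;   // "10x baseline" from the table above
  return (
    countLastFiveMin >= MIN_COUNT &&
    countLastFiveMin >= SPIKE_FACTOR * Math.max(baselinePerFiveMin, 1)
  );
}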

Reducing Alert Noise

Alert fatigue is the biggest failure mode in error tracking. Teams stop responding to alerts when every deploy triggers a wave of notifications. Strategies to reduce noise:

- Alert only on new errors, regressions, and budget burn; send known, stable errors to a daily digest instead of a page.
- Filter healthchecks, known bots, and third-party script errors in your beforeSend hook so they never become issues.
- Mute or snooze issues you've decided not to fix, and tune grouping rules so near-duplicate errors don't fan out into separate alerts.
- Route by severity: page for error-budget burn and new errors on critical paths, Slack for everything else.
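
A minimal sketch of applying some of these at the SDK level, assuming @sentry/node (ignoreErrors, sampleRate, and beforeSend are documented options; the specific error strings are examples, not recommendations):

// Cut noise before it ever reaches your tracker
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Drop error types you have explicitly decided never to act on
  ignoreErrors: [/ECONNRESET/, /socket hang up/],
  // Keep only a fraction of error events from very chatty services
  sampleRate: 0.5,
  beforeSend(event, hint) {
    const err = hint.originalException;
    // Example: sample down rate-limit errors triggered by known scraper traffic
    if (err instanceof Error && err.message.includes('Rate limit exceeded')) {
      return Math.random() < 0.1 ? event : null;   // keep ~10% as a signal
    }
    return event;
  },
});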

Error Context: What to Capture

A stack trace tells you where the error happened. Context tells you why. The richer your context, the faster your triage.

Always Capture

- Release / commit SHA and environment, so every error maps to a deploy
- Request ID, URL, and HTTP method
- A user or session identifier (an opaque ID, not an email address)
- Runtime details: service name, region, browser or OS version

Be Careful With

- Request and response bodies, which often contain PII
- Auth headers, cookies, session tokens, and API keys
- Anything covered by GDPR or PCI: names, emails, addresses, card data

// Attach context at the request level
app.use((req, res, next) => {
  Sentry.setUser({ id: req.user?.id });
  Sentry.setTag('request_id', req.headers['x-request-id']);
  Sentry.setContext('request', {
    url: req.url,
    method: req.method,
    // DO NOT include: req.body (may contain PII)
  });
  next();
});

// Manual error capture with extra context
try {
  await processPayment(order);
} catch (err) {
  Sentry.withScope((scope) => {
    scope.setTag('payment.provider', 'stripe');
    scope.setContext('order', { id: order.id, amount: order.total });
    Sentry.captureException(err);
  });
  throw err;
}

Connecting Errors to SLOs

Error tracking data becomes strategically valuable when connected to service level objectives. Instead of treating every error as equally important, you measure what fraction of user interactions result in errors and track that against your SLO.

A typical error rate SLO: “99.5% of API requests must return a non-5xx response.” This means your error budget is 0.5% of requests per month — about 3.6 hours at a 100% error rate, or roughly 7.2 hours (432 minutes) at a 50% error rate.
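
The arithmetic behind those numbers, for a 30-day month:

// Error budget for a 99.5% SLO over a 30-day month
const slo = 0.995;
const budgetFraction = 1 - slo;                           // 0.5% of requests may fail
const monthMinutes = 30 * 24 * 60;                        // 43,200 minutes
const fullOutageBudget = budgetFraction * monthMinutes;   // 216 min ≈ 3.6 hours at 100% errors
const halfOutageBudget = fullOutageBudget / 0.5;          // 432 min ≈ 7.2 hours at 50% errors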

| SLO | Error Budget (%) | Monthly Budget | Fast Burn Alert |
|---|---|---|---|
| 99.9% | 0.1% | ~43 min downtime equiv. | 14.4x burn rate |
| 99.5% | 0.5% | ~3.6 hours | 6x burn rate |
| 99% | 1% | ~7.2 hours | 3x burn rate |

Configure error budget burn rate alerts: if you're burning your monthly budget at 14.4x the normal rate, you'll exhaust it in roughly two days — that's a page-level alert. At 3x the rate, you have about ten days of runway — Slack notification only.
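
The same arithmetic gives you the paging thresholds: time to exhaust the budget is simply the month length divided by the burn rate.

// How long until the monthly budget is gone at a given burn rate?
const monthHours = 30 * 24;                                // 720 hours
const hoursToExhaust = (burnRate: number) => monthHours / burnRate;

hoursToExhaust(14.4);   // ≈ 50 hours (~2 days)  → page someone
hoursToExhaust(3);      // 240 hours (10 days)   → Slack / ticket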

Error Tracking Best Practices

- Tag every event with a release (commit SHA) and environment so errors map cleanly to deploys.
- Scrub PII before it leaves your infrastructure: disable default PII collection and redact request bodies in beforeSend.
- Filter healthchecks, bots, and third-party noise at the SDK so it never consumes quota or attention.
- Alert on new errors, regressions, and error-budget burn rather than raw error counts.
- Attach request-level and domain-level context (request ID, order ID, payment provider) so triage doesn't require reproducing the bug.
- Tie error rates to an SLO so severity decisions reflect user impact, not gut feeling.

FAQ: Error Tracking

What is error tracking?

Error tracking automatically captures runtime exceptions in your application, groups similar errors by root cause, attaches stack traces and user context, and alerts your team in real time. Unlike logs, error trackers deduplicate and prioritize so you focus on what matters.

Is Sentry free?

Yes — Sentry has a free tier covering 5,000 errors/month and 10,000 performance transactions/month. It's also open-source and self-hostable for free. Paid plans start at $26/month and scale based on event volume and features.

What's the difference between Sentry and Rollbar?

Both are excellent error trackers with similar core features. Sentry has broader SDK support (100+ platforms) and better performance monitoring. Rollbar is slightly cheaper, has better deploy-based error attribution, and offers more flexible custom error grouping rules. For most teams, Sentry is the default choice; Rollbar is worth evaluating if deploy tracking is a priority.

How do I reduce error tracking costs?

Apply sampling to high-volume, well-understood errors (e.g., rate-limit 429s from known scrapers). Filter out known bot traffic in your beforeSend hook. Ignore errors from third-party scripts. Set up error grouping rules to prevent slight variations from creating separate issues. For very high-traffic apps, consider self-hosting Glitchtip or Sentry.
