LaunchDarkly Outage History
50 incidents reported. Data sourced from the official LaunchDarkly status page.
Total incidents: 50 · Major/critical: 7 · Minor: 36 · Resolved: 50
May 2026
Experiment and multi-armed bandit results delayed
minor · May 4, 07:47 PM → May 4, 08:25 PM · resolved
May 4, 08:25 PM
resolved — All experiment and multi-armed bandit result data is caught up. No data was lost.
May 4, 08:19 PM
monitoring — We have implemented a fix and are catching up on backlogged data. We estimate that we will be fully caught up in about 1 hour.
May 4, 07:50 PM
identified — We have identified the issue and are continuing our work to resolve it.
+1 more update
Elevated Streaming Errors in Europe
minor · May 4, 10:14 AM → May 4, 04:23 PM · resolved
May 4, 04:23 PM
resolved — This incident has been resolved.
May 4, 03:47 PM
monitoring — We've implemented a fix and have mitigated impact as of 8:35am PT. Streaming SDKs should be able to connect successfully at this time.
May 4, 02:30 PM
identified — We're continuing to address the root cause of the elevated error rate. Server SDKs may require additional time to initialize while requests are retried.
+2 more updates
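As the update above notes, server SDKs "may require additional time to initialize while requests are retried" during an incident like this: streaming SDKs typically retry failed connections with exponential backoff and jitter rather than failing outright. The sketch below is a generic illustration of that retry pattern, not LaunchDarkly's actual SDK internals; the function names and parameters are made up for the example.

```python
import random
import time


def connect_with_backoff(connect, max_attempts=6, base_delay=1.0, max_delay=30.0):
    """Retry `connect` with capped exponential backoff and full jitter.

    `connect` is any callable that raises on failure (e.g. an HTTP 503
    from a streaming endpoint). Returns its result on success; re-raises
    the last error once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return connect()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt, capped at max_delay,
            # with full jitter to avoid thundering-herd reconnects.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Under a pattern like this, an incident window of elevated 5xx errors shows up to applications as slower SDK initialization rather than a hard failure, which matches the "additional time to initialize" wording in the update.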
Delay in Experimentation Results and Experimentation Warehouse Data Export
minor · May 2, 12:55 PM → May 2, 07:15 PM · resolved
May 2, 07:15 PM
resolved — Experimentation results processing and data warehouse export were caught up as of 12:28 pm PT.
May 2, 01:51 PM
monitoring — We have implemented a fix and are catching up on backlogged experimentation results and data export. We estimate that we will be fully caught up in about 8 hours.
May 2, 01:17 PM
identified — We've identified the root cause of the experimentation results processing and data export delays and are working on implementing a fix.
+1 more update
Elevated Server SDK Initialization Errors in APAC region
none · May 1, 02:52 AM → May 1, 02:52 AM · resolved
May 1, 04:06 AM
resolved — We've identified elevated error rates with server-side streaming initialization requests in the APAC region between 19:52 and 19:56 PT. Some customers may have experienced initialization timeouts desp...
April 2026
Issue with Experiment Results in case of Flag Variation edits
minor · Apr 29, 08:19 PM → May 1, 12:09 AM · resolved
May 1, 12:09 AM
resolved — We have validated a fix to resolve all remaining experiments with inaccurate sample ratio mismatches (SRMs) or empty experiment results. The vast majority of experiments and experimentation customers ...
Apr 30, 12:04 AM
monitoring — We've released a fix to prevent this issue in new experiments and iterations. We're working to resolve this issue for experiment data recently processed.
Apr 29, 08:49 PM
identified — We've identified the issue and are implementing a fix for this behavior.
+1 more update
Streaming Data Export Delay
minor · Apr 30, 11:14 AM → Apr 30, 11:58 AM · resolved
Apr 30, 11:58 AM
resolved — We've caught up on the streaming data export backlog.
Apr 30, 11:45 AM
monitoring — We've identified latency in Pub/Sub export that was causing streaming data export delays; it has since resolved. We're catching up on the export backlog now.
Apr 30, 11:14 AM
investigating — We are investigating a delay with streaming data export delivery. No data is lost, but customers using streaming data export may notice a multi-minute lag in receiving data.
Errors on streaming SDK initialization
none · Apr 29, 11:45 PM → Apr 29, 11:45 PM · resolved
Apr 30, 12:30 AM
resolved — We've identified an infrastructure issue that caused some 503 responses in feature flagging streaming client initialization on the US LaunchDarkly instance between 16:45 and 17:05 PT. SDKs should auto...
Issues with Observability Alert Notification
minor · Apr 28, 06:34 PM → Apr 28, 07:25 PM · resolved
Apr 28, 07:25 PM
resolved — Observability alerts are functioning and delayed alerts have been redelivered.
Apr 28, 07:10 PM
monitoring — A fix has been deployed for Observability alerting and we are working to redeliver delayed notifications.
Apr 28, 06:38 PM
identified — We've identified the root cause of the delay in Observability alert notifications and are deploying a fix. All other notification types are operational.
+1 more update
Delayed Metrics ingest in EU
minor · Apr 27, 03:39 PM → Apr 27, 05:37 PM · resolved
Apr 27, 05:37 PM
resolved — We've fully caught up on the metrics data backlog.
Apr 27, 04:43 PM
monitoring — We're catching up on backlogged metric data in EU and are monitoring progress. No metric data has been lost.
Apr 27, 03:45 PM
identified — We've identified that metrics ingest was delayed between 11:11 - 11:27am ET. No metric data has been lost. We're working on catching up on the delayed data.
+1 more update
Issue with Warehouse Data Export
minor · Apr 23, 07:00 PM → Apr 23, 07:00 PM · resolved
Apr 29, 10:22 AM
resolved — We've addressed an issue that prevented warehouse data export from 12pm to 6pm PT on April 23. All missing data has been retroactively pushed to customer warehouses.
Observability Alerts are not Delivering Slack Notifications
minor · Apr 20, 09:27 PM → Apr 20, 11:44 PM · resolved
Apr 20, 11:44 PM
resolved — The issue has been resolved.
Apr 20, 10:01 PM
identified — The issue has been identified and a fix is being implemented.
Apr 20, 09:27 PM
investigating — We are currently investigating an issue where observability alerts are not delivering for Slack notifications to all channels. DM alerts are still functional.
Docs site is down
critical · Apr 20, 10:04 PM → Apr 20, 10:27 PM · resolved
Apr 20, 10:27 PM
resolved — This incident has been resolved.
Apr 20, 10:13 PM
identified — The issue has been identified and a fix is being implemented.
Apr 20, 10:13 PM
investigating — We have linked this issue with an outage on our doc provider's side.
+1 more update
Degraded connectivity for server-side SDKs in EU region
none · Apr 10, 04:30 AM → Apr 10, 04:30 AM · resolved
Apr 10, 05:41 AM
resolved — Between approximately 9:37 PM and 9:46 PM PT on April 9, 2026, some customers using server-side SDKs in the EU region may have experienced longer than normal initialization times or timeouts when conn...
Delayed processing of Session Replay and Error data
minor · Apr 6, 05:58 PM → Apr 6, 07:02 PM · resolved
Apr 6, 07:02 PM
resolved — The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. ...
Apr 6, 06:40 PM
monitoring — The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. ...
Apr 6, 06:15 PM
identified — We have identified the cause of the delayed Session Replay and Error event processing and a fix is in progress. The backlog is actively decreasing. No data has been lost — all queued events will be pr...
+1 more update
March 2026
Experiment Data Missing for Some Experiment Iterations
minor · Mar 18, 02:07 AM → Mar 20, 12:29 AM · resolved
Mar 20, 12:29 AM
resolved — Our team completed the data restoration for previously affected experiment iterations.
Complete data is now available for all experiment iterations in the UI.
Mar 19, 04:52 AM
identified — Our team continues to work on restoring data for previously affected experiment iterations.
Complete data restoration expected by tomorrow.
Mar 18, 10:37 PM
identified — Our team continues to work on restoring data for previously affected experiment iterations.
We will provide a further update once full historical data recovery is complete.
+3 more updates
Server-side streaming rejected new connections across all commercial regions
major · Mar 19, 04:58 PM → Mar 19, 04:58 PM · resolved
Mar 19, 04:58 PM
resolved — Server-side streaming began rejecting new connections across all commercial regions, causing 500/503 errors for customers attempting to establish new SDK streaming connections for a brief period of ti...
Event processing delays
minor · Mar 10, 10:59 PM → Mar 11, 04:07 AM · resolved
Mar 11, 04:07 AM
resolved — The issue with Event processing has been resolved. Impacted services have returned to normal operation.
Flag Delivery was not impacted.
There was no data loss experienced.
Mar 10, 11:42 PM
monitoring — We are continuing to monitor for any further issues.
Mar 10, 11:38 PM
monitoring — A fix has been implemented and we are monitoring the results.
+1 more update
Observability Usage Reporting
none · Mar 3, 08:40 PM → Mar 4, 01:13 AM · resolved
Mar 4, 01:13 AM
resolved — We've corrected the issues affecting the Observability "Errors" usage reporting for February and are finalizing reporting corrections for March.
Mar 3, 09:15 PM
identified — We've identified the root cause affecting the Observability "Errors" usage for some customers and are working to correct it.
Mar 3, 08:40 PM
investigating — We're investigating reports of inaccurate usage reporting for some Observability products.
February 2026
Investigating Issues with LaunchDarkly Application and Authentication
major · Feb 26, 03:43 PM → Feb 26, 05:06 PM · resolved
Feb 26, 05:06 PM
resolved — The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.
Feb 26, 05:04 PM
monitoring — The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.
Feb 26, 04:42 PM
monitoring — The issue with our application and authentication has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this...
+2 more updates
Degraded Observability Ingest
none · Feb 20, 10:02 PM → Feb 20, 11:04 PM · resolved
Feb 20, 11:04 PM
resolved — Ingest performance has remained stable following mitigation. We are no longer observing impact.
Feb 20, 10:13 PM
monitoring — Mitigation has been applied and ingest performance has stabilized. We are monitoring to ensure continued stability.
Feb 20, 10:12 PM
identified — We have identified the cause of the degraded ingest performance and are applying mitigation. We are seeing signs of stabilization and continuing to monitor recovery.
+1 more update
Observability Data Delay
minor · Feb 19, 03:39 PM → Feb 19, 04:09 PM · resolved
Feb 19, 04:09 PM
resolved — We've caught up on the session/error data backlog.
Feb 19, 03:54 PM
identified — We've identified the root cause and have deployed a fix. We're catching up on the session/error data backlog and should be caught up in ~15 minutes. No data has been lost.
Feb 19, 03:39 PM
investigating — Sessions and errors may be delayed by up to 1 hour. We are investigating the root cause. No data is lost.
Data attribution issues with Experimentation
major · Feb 13, 12:30 AM → Feb 13, 03:39 AM · resolved
Feb 13, 03:39 AM
resolved — We have fixed the data attribution issues where running experiment iterations were receiving incorrectly attributed data. This incident is now resolved and results for active experiments are now accur...
Feb 13, 02:38 AM
identified — We have fixed the data attribution issues where certain experiment iterations are receiving incorrectly attributed data for any new experiments that are created.
We are fixing the data attribution er...
Feb 13, 01:32 AM
identified — The team has identified the root cause and is working on a fix.
+1 more update
Observability – Degraded OTel Telemetry Processing
minor · Feb 11, 01:03 PM → Feb 12, 03:53 AM · resolved
Feb 12, 03:53 AM
resolved — This incident has been resolved. Telemetry data was dropped between 5:03 PM and 7:30 PM PT. All systems are now operating normally.
Feb 12, 03:50 AM
investigating — We are currently investigating an issue affecting OTel telemetry ingestion in LaunchDarkly Observability. Some telemetry data may not be processed as expected. Our team is actively working to identify...
Degraded Observability for Customers Using AWS CloudWatch Metric Streams or Firehose Log Export
minor · Feb 10, 07:30 AM → Feb 10, 09:01 AM · resolved
Feb 10, 09:01 AM
resolved — This issue has been resolved. Observability ingestion from AWS CloudWatch Metric Streams and CloudWatch Firehose log export has returned to normal. Data sent during the incident window may not have be...
Feb 10, 08:42 AM
investigating — Some customers using LaunchDarkly Observability with AWS CloudWatch Metric Stream and/or CloudWatch Firehose log export may experience delayed ingestion of metrics/logs into Observability. The impact ...
Increased error rate in our UI and API endpoints
minor · Feb 6, 11:55 AM → Feb 6, 12:44 PM · resolved
Feb 6, 12:44 PM
resolved — Customers are seeing expected behavior when accessing the LaunchDarkly UI and API.
Feb 6, 11:55 AM
investigating — We are investigating an issue where a small number of customers may experience errors when accessing the LaunchDarkly UI and API. Feature flag delivery is not impacted. We will provide updates as the ...
January 2026
Data ingestion delays
minor · Jan 31, 10:02 AM → Jan 31, 12:20 PM · resolved
Jan 31, 12:20 PM
resolved — This incident has been resolved and all data processing pipelines are fully caught up. No data was lost.
Jan 31, 11:35 AM
monitoring — A fix has been implemented and our event processing pipelines for Observability and OpenTelemetry are fully caught up. We're continuing to monitor as our event processing pipeline catches up for Flag ...
Jan 31, 10:23 AM
identified — We have identified the issue and are continuing our work to resolve it.
+1 more update
Elevated error rate when configuring Okta SCIM
minor · Jan 26, 05:54 PM → Jan 26, 10:13 PM · resolved
Jan 26, 10:13 PM
resolved — This incident has been resolved.
Jan 26, 08:29 PM
monitoring — We believe the issue is resolved for all customers. We're continuing to monitor the situation.
Jan 26, 07:56 PM
identified — Some customers using Okta SCIM are encountering errors when provisioning and managing LaunchDarkly members. We continue to work on a remediation and have engaged with Okta's support team.
+1 more update
Unable to edit JSON flag variations
minor · Jan 12, 05:48 PM → Jan 12, 07:44 PM · resolved
Jan 12, 07:44 PM
resolved — This incident has been resolved.
Jan 12, 07:16 PM
monitoring — A fix has been implemented for the issue preventing editing some JSON flag variations.
Jan 12, 06:49 PM
identified — We've identified a front-end issue that is causing issues editing certain JSON flag variations and are working on a fix.
+1 more update
Guarded releases event ingestion delays
minor · Jan 8, 06:46 PM → Jan 8, 10:00 PM · resolved
Jan 8, 10:00 PM
resolved — This incident has been resolved.
Jan 8, 07:08 PM
monitoring — Events are caught up.
Jan 8, 06:57 PM
monitoring — A fix has been implemented and we are monitoring the results. We expect to catch up on all events within the next 15 minutes, no data loss is expected.
+1 more update
December 2025
Delay in Observability product data ingest
minor · Dec 17, 06:33 PM → Dec 17, 08:51 PM · resolved
Dec 17, 08:51 PM
resolved — This incident has been resolved.
Dec 17, 08:20 PM
monitoring — A fix has been implemented and we are monitoring the results.
Dec 17, 07:35 PM
identified — We have identified the cause of the ingest delay and are catching up on the backlogged messages. We expect to be caught up on all delayed sessions and errors in the next hour. Data loss is not expecte...
+1 more update
Increase in SDK errors
minor · Dec 15, 02:14 PM → Dec 15, 04:38 PM · resolved
Dec 15, 04:38 PM
resolved — This incident has been resolved.
Dec 15, 04:20 PM
monitoring — A fix has been implemented and we are monitoring the results.
Dec 15, 03:43 PM
identified — We are observing a reduction in SDK errors. We are continuing to work on a fix.
+5 more updates
Event Processing Delays - Experiment Results Utilizing Attribute Filtering affected
minor · Dec 4, 09:16 PM → Dec 5, 03:47 AM · resolved
Dec 5, 03:47 AM
resolved — We have recovered from delays in experimentation results that are sliced by attributes. No data has been lost.
Dec 4, 09:16 PM
investigating — We are investigating an issue with delays in experimentation results that are sliced by attributes. No data has been lost.
Delays in publishing data export events
minor · Dec 3, 10:52 PM → Dec 4, 01:05 PM · resolved
Dec 4, 01:05 PM
resolved — All of the delayed data has been processed and this incident is resolved.
Dec 4, 09:50 AM
identified — We are continuing to process the delayed data. The data is now updated through 2025-12-04, 01:00:00 UTC.
Dec 4, 05:09 AM
identified — We are continuing to process the data and data is current as of 2025-12-03, 08:00:00 UTC. We'll continue to update as we process data.
+1 more update
Elevated error rates in APAC region for server-side SDKs
minor · Dec 3, 04:00 PM → Dec 3, 04:00 PM · resolved
Dec 3, 08:32 PM
resolved — Elevated error rates occurred in the APAC region for server-side SDKs attempting to make new connections to the streaming service from 8:05 AM PT to 8:11 AM PT. The issue is now resolved.
November 2025
Intermittent issues accessing flag details
minor · Nov 26, 08:13 PM → Nov 26, 08:55 PM · resolved
Nov 26, 08:55 PM
resolved — This incident has been resolved.
Nov 26, 08:45 PM
monitoring — We are no longer seeing any errors, and the issue was contained to the euw1 region. We'll continue to monitor and update this as necessary.
Nov 26, 08:13 PM
investigating — We are currently investigating an issue intermittently preventing our flags details pages from loading.
Delayed Event Processing
minor · Nov 24, 07:14 AM → Nov 24, 10:45 AM · resolved
Nov 24, 10:45 AM
resolved — This incident has been resolved.
Nov 24, 10:35 AM
monitoring — A fix has been implemented and our event processing pipeline is fully caught up. We're continuing to monitor.
Nov 24, 09:31 AM
identified — We are continuing to work on a fix for this issue, and remain at an approximate 20 minute delay in flag event processing.
+1 more update
Investigating elevated latency
minor · Nov 13, 09:49 PM → Nov 14, 12:36 AM · resolved
Nov 14, 12:36 AM
resolved — The issue with the AI Configs list page has been resolved. Impacted services have returned to normal operation.
Nov 13, 09:49 PM
investigating — We detected elevated latencies loading the flag list and AI configs list pages. The flag list’s performance has recovered, and we continue to investigate remediation on the AI configs list page.
Elevated error rates for a small number of customers
minor · Nov 12, 04:00 PM → Nov 12, 04:00 PM · resolved
Nov 12, 08:38 PM
resolved — Between 7:37am and 8:19am PT, a small number of customers in the us-east-1 region encountered elevated error rates with Polling SDK and API requests. This was caused by a minor issue affecting a CDN P...
Customers unable to edit custom rules on flags
minor · Nov 5, 04:42 PM → Nov 5, 06:02 PM · resolved
Nov 5, 06:02 PM
resolved — This incident has been resolved.
Nov 5, 04:51 PM
monitoring — A fix has been implemented and we are monitoring the results.
Nov 5, 04:48 PM
identified — The issue has been identified and a fix is being implemented.
+1 more update
AI Configs monitoring page tab failing to load
minor · Nov 3, 05:11 PM → Nov 3, 06:22 PM · resolved
Nov 3, 06:22 PM
resolved — This incident has been resolved.
Nov 3, 06:18 PM
monitoring — A fix has been implemented and we are monitoring the results.
Nov 3, 05:12 PM
identified — We are continuing to work on a fix for this issue.
+2 more updates
Delayed flag updates for small number of customers
minor · Nov 1, 04:05 PM → Nov 1, 04:05 PM · resolved
Nov 1, 05:34 PM
resolved — A limited number of customers (primarily in EU regions) with Polling SDK connections experienced elevated latency and error rates between 9:05am and 9:58am PT, caused by a service incident in our CDN...
October 2025
Live Events not loading
none · Oct 28, 05:04 PM → Oct 28, 05:53 PM · resolved
Oct 28, 05:53 PM
resolved — We've resolved an issue causing Live Events to not load.
Oct 28, 05:40 PM
identified — We've identified an issue that was causing Live Events to not load (starting Oct 23 11:03am PT) and are resolving the issue.
Oct 28, 05:04 PM
investigating — We've received reports of Live Events not loading and are investigating.
Experiment results and metrics unavailable
major · Oct 28, 02:17 AM → Oct 28, 02:50 AM · resolved
Oct 28, 02:50 AM
resolved — We've resolved an issue causing Experiment results to fail to load.
Oct 28, 02:38 AM
identified — We've identified an issue affecting the display of Experiment results and are working on a fix.
Oct 28, 02:28 AM
investigating — We are investigating reports of experiment results and metrics failing to load.
Elevated latencies and delays
major · Oct 20, 07:25 AM → Oct 21, 10:00 AM · resolved
Oct 21, 10:00 AM
resolved — This incident has been resolved.
One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewa...
Oct 21, 09:53 AM
monitoring — One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for ...
Oct 21, 09:19 AM
monitoring — One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update the IP allowlists in their firewalls or proxy servers to ensure...
+27 more updates
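The mitigation above added new IPs for stream.launchdarkly.com, so customers with firewall or proxy allowlists needed to verify what the hostname now resolves to. A generic way to check this from your own environment is a plain DNS lookup; the snippet below is an illustrative sketch using Python's standard library (the hostname comes from the incident text, and this is not a LaunchDarkly-provided tool — the authoritative source is LaunchDarkly's published public IP list).

```python
import socket


def resolve_ips(hostname: str) -> set[str]:
    """Return the set of IP addresses the local resolver returns for hostname:443."""
    infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return {info[4][0] for info in infos}


# Compare the result against your firewall/proxy allowlist, e.g.:
# resolve_ips("stream.launchdarkly.com")
```

Note that DNS answers can differ by region and rotate over time, so a one-off lookup only shows what your resolver returns right now; allowlists should be maintained from the vendor's documented IP ranges rather than ad-hoc lookups.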
Delays in event data
minor · Oct 10, 06:09 PM → Oct 10, 06:56 PM · resolved
Oct 10, 06:56 PM
resolved — The issue with delays in event data has been resolved. Event data is up to date and impacted services have returned to normal operation.
Oct 10, 06:11 PM
identified — Customers are experiencing up to 21 minute delays with product features using event data. We have identified the issue and are continuing our work to resolve it.
Data loss is not expected.
Customer...
Oct 10, 06:09 PM
investigating — All customers are experiencing up to 21 minute delays with product features using event data. We are investigating and will provide updates as they become available.
Data loss is not expected.
Delayed flag updates for small number of customers
none · Oct 7, 10:40 PM → Oct 7, 10:50 PM · resolved
Oct 7, 10:50 PM
resolved — The issue has been resolved. Flag updates have returned to normal operation.
Oct 7, 10:40 PM
monitoring — A small number of customers experienced delayed flag updates made between 15:24 and 15:34 PT. The issue has been mitigated and we will continue monitoring.
Errors generating new client libraries
minor · Oct 3, 06:40 PM → Oct 3, 08:03 PM · resolved
Oct 3, 08:03 PM
resolved — Users are now able to generate new client libraries.
Oct 3, 06:40 PM
investigating — We're aware of intermittent difficulties generating new client libraries. We're investigating.
Delay in event processing
major · Oct 1, 03:12 PM → Oct 1, 03:46 PM · resolved
Oct 1, 03:46 PM
resolved — This incident has been resolved.
Oct 1, 03:28 PM
monitoring — We've implemented a fix and are monitoring the results. Impact to Data Export was limited to our streaming data export product.
Oct 1, 03:21 PM
investigating — We've mitigated the impact on processing events for all features outside of Data Export. We're continuing to investigate.
+1 more update
September 2025
Self-serve legacy customers are unable to check out or modify plan
minor · Sep 30, 05:00 PM → Sep 30, 07:53 PM · resolved
Sep 30, 07:53 PM
resolved — The issue with legacy self-serve check out has been resolved.
Sep 30, 06:44 PM
monitoring — The issue with legacy self-serve plans has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until...
Sep 30, 05:00 PM
identified — Customers on legacy plans (such as Starter, Professional) are unable to check out or modify the plan. We have identified a fix and will provide an update as soon as the fix is ready. Please contact Su...
Increased error rate on flag status API
minor · Sep 22, 06:30 PM → Sep 22, 06:30 PM · resolved
Sep 22, 07:08 PM
resolved — From 11:38 am PT - 11:46 am PT we experienced an elevated error rate on the flag evaluation and flag status APIs, used by flag list, flag targeting, and feature monitoring endpoints.