LaunchDarkly Outage History
Past incidents and downtime events
Complete history of LaunchDarkly outages, incidents, and service disruptions. Showing 50 most recent incidents.
May 2026 (4 incidents)
Experiment and multi-armed bandit results delayed
4 updates
All experiment and multi-armed bandit result data is caught up. No data was lost.
We have implemented a fix and are catching up on backlogged data. We estimate that we will be fully caught up in about 1 hour.
We have identified the issue and are continuing our work to resolve it.
We are currently investigating delays in experiment and multi-armed bandit results. Data is currently delayed by 1-2 hours. No data loss is expected.
Elevated Streaming Errors in Europe
5 updates
This incident has been resolved.
We've implemented a fix and have mitigated impact as of 8:35am PT. Streaming SDKs should be able to connect successfully at this time.
We're continuing to address the root cause of the elevated error rate. Server SDKs may require additional time to initialize while requests are retried.
We've identified an infrastructure issue causing elevated error rates and are working on a fix.
We are investigating elevated error rates for streaming initialization in the Europe region. Some customers using streaming server-side SDKs may have experienced longer than normal initialization times or timeouts when connecting to LaunchDarkly.
Delay in Experimentation Results and Experimentation Warehouse Data Export
4 updates
Experimentation results processing and data warehouse export were caught up as of 12:28 pm PT.
We have implemented a fix and are catching up on backlogged experimentation results and data export. We estimate that we will be fully caught up in about 8 hours.
We've identified the root cause of the experimentation results processing and data export delays and are working on implementing a fix.
We are investigating an issue causing delayed experimentation results processing and delayed experimentation data warehouse export.
Elevated Server SDK Initialization Errors in APAC region
1 update
We've identified elevated error rates with server-side streaming initialization requests in the APAC region between 19:52 and 19:56 PT. Some customers may have experienced initialization timeouts despite SDK retries.
April 2026 (10 incidents)
Issue with Experiment Results in case of Flag Variation edits
4 updates
We have validated a fix to resolve all remaining experiments with inaccurate sample ratio mismatches (SRMs) or empty experiment results. The vast majority of experiments and experimentation customers were not impacted.
We've released a fix to prevent this issue in new experiments and iterations. We're working to resolve this issue for experiment data recently processed.
We've identified the issue and are implementing a fix for this behavior.
We are aware of an issue affecting some customers that may result in sample ratio mismatches (SRMs) or empty experiment results if flag variations are edited between experiment iterations.
Streaming Data Export Delay
3 updates
We've caught up on the streaming data export backlog.
We've identified latency in Pub/Sub export that was causing streaming data export delays; the latency has since resolved. We're catching up on the export backlog now.
We are investigating a delay with streaming data export delivery. No data is lost, but customers using streaming data export may notice a multi-minute lag in receiving data.
Errors on streaming SDK initialization
1 update
We've identified an infrastructure issue that caused some 503 responses in feature flagging streaming client initialization on the US LaunchDarkly instance between 16:45 and 17:05 PT. SDKs should automatically retry initialization in most cases.
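If repeated retries push initialization past the SDK's default wait, the wait can be raised so the client has time to recover from transient 503s. A minimal sketch, assuming the Python server-side SDK; the SDK key and 15-second wait are illustrative placeholders:

```python
# Give the SDK extra time to complete its first streaming connection while
# transient errors are retried; start_wait is how long the constructor
# blocks waiting for initialization before returning.
from ldclient.client import LDClient
from ldclient.config import Config

client = LDClient(config=Config(sdk_key="YOUR_SDK_KEY"), start_wait=15)

if not client.is_initialized():
    # The SDK keeps retrying in the background; until it connects,
    # evaluations serve the in-code defaults passed to variation calls.
    print("SDK not initialized yet; serving default flag values")
```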
Issues with Observability Alert Notification
4 updates
Observability alerts are functioning and delayed alerts have been redelivered.
A fix has been deployed for Observability alerting and we are working to redeliver delayed notifications.
We've identified the root cause of the delayed Observability alert notifications and are deploying a fix. All other notification types are operational.
We are currently investigating an issue preventing Observability alert notifications from being delivered.
Delayed Metrics ingest in EU
4 updates
We've fully caught up on the metrics data backlog.
We're catching up on backlogged metric data in EU and are monitoring progress. No metric data has been lost.
We've identified that metrics ingest was delayed between 11:11 - 11:27am ET. No metric data has been lost. We're working on catching up on the delayed data.
We are investigating an issue causing under-reporting of customer metrics in the EU region.
Issue with Warehouse Data Export
1 update
We've addressed an issue that prevented warehouse data export from 12pm to 6pm PT on April 23. All missing data has been retroactively pushed to customer warehouses.
Observability Alerts are not Delivering Slack Notifications
3 updates
The issue has been resolved.
The issue has been identified and a fix is being implemented.
We are currently investigating an issue where observability alerts are not delivering Slack notifications to all channels. DM alerts are still functional.
Docs site is down
4 updates
This incident has been resolved.
The issue has been identified and a fix is being implemented.
We have linked this issue to an outage on our documentation provider's side.
We are currently investigating this issue.
Degraded connectivity for server-side SDKs in EU region
1 update
Between approximately 9:37 PM and 9:46 PM PT on April 9, 2026, some customers using server-side SDKs in the EU region may have experienced longer than normal initialization times or timeouts when connecting to LaunchDarkly. The issue has been resolved. No action is required from customers. Server-side SDKs automatically reconnect and recover from transient connectivity issues.
Delayed processing of Session Replay and Error data
3 updates
The delayed processing of Session Replay and Error event data has been resolved. All queued events have been processed and data ingestion is operating normally. No data was lost during this incident. Thank you for your patience.
We have identified the cause of the delayed Session Replay and Error event processing and a fix is in progress. The backlog is actively decreasing. No data has been lost — all queued events will be processed.
We are currently experiencing a delay in processing Session Replay and Error event data. Incoming data is being queued and will be processed — no data has been lost. Customers may observe a temporary lag in session and error data appearing in the LaunchDarkly UI. We have identified the root cause as elevated database utilization and are actively working to resolve the backlog. We will provide updates as processing returns to normal.
March 2026 (4 incidents)
Experiment Data Missing for Some Experiment Iterations
6 updates
Our team completed the data restoration for previously affected experiment iterations. Complete data is now available for all experiment iterations in the UI.
Our team continues to work on restoring data for previously affected experiment iterations. Complete data restoration expected by tomorrow.
Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.
We have implemented a mitigation that restores data availability for all current experiment iterations. Affected accounts should now see up-to-date reporting and exposure data in the UI. Our team continues to work on restoring data for previously affected experiment iterations. We will provide a further update once full historical data recovery is complete.
The issue has been identified and a fix is being implemented.
We are aware of an issue affecting experiment reporting data for a subset of accounts. Affected users may see incomplete data for some experiment iterations in the UI.
Server-side streaming rejected new connections across all commercial regions
1 update
Server-side streaming began rejecting new connections across all commercial regions, causing 500/503 errors for customers attempting to establish new SDK streaming connections for a brief period of time. Detailed timeline of the incident:
- us-east-1: 7:53 AM PST to 8:24 AM PST
- ap-southeast-1: 7:53 AM PST to 7:55 AM PST
- eu-west-1: 7:53 AM PST to 8:35 AM PST
The team was able to deploy the mitigation quickly, before an incident could be declared on our status page, which is why we are posting this retroactively. By 8:35 AM PST, connections were successfully re-established in all commercial regions.
Event processing delays
4 updates
The issue with Event processing has been resolved. Impacted services have returned to normal operation. Flag Delivery was not impacted. There was no data loss experienced.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
Event processing is currently delayed, and several product areas will show stale data, including:
- Autogenerated metric creation
- Data Export
- Experimentation
No data has been lost.
Observability Usage Reporting
3 updates
We've corrected the issues affecting the Observability "Errors" usage reporting for February and are finalizing reporting corrections for March.
We've identified the root cause affecting the Observability "Errors" usage for some customers and are working to correct it.
We're investigating reports of inaccurate usage reporting for some Observability products.
February 2026 (7 incidents)
Investigating Issues with LaunchDarkly Application and Authentication
4 updates
The issue with our application and authentication has been resolved. Performance has remained stable following mitigation.
The issue with our application and authentication has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.
Some customers are experiencing issues with accessing the web app and authentication. Some customers may see a low number of errors with flag evaluation, as well, but generally our Flag Delivery Network is functional. We have identified the issue and are continuing our work to resolve it.
Some customers are experiencing issues with accessing the web app and authentication. We are investigating and will provide updates as they become available.
Degraded Observability Ingest
4 updates
Ingest performance has remained stable following mitigation. We are no longer observing impact.
Mitigation has been applied and ingest performance has stabilized. We are monitoring to ensure continued stability.
We have identified the cause of the degraded ingest performance and are applying mitigation. We are seeing signs of stabilization and continuing to monitor recovery.
We are investigating degraded ingest performance beginning around 1:25 PM PST. Some customers may experience delays or gaps in observability data. Updates to follow.
Observability Data Delay
3 updates
We've caught up on the session/error data backlog.
We've identified the root cause and have deployed a fix. We're catching up on the session/error data backlog and should be caught up in ~15 minutes. No data has been lost.
Sessions and errors may be delayed by up to 1 hour. We are investigating the root cause. No data has been lost.
Data attribution issues with Experimentation
4 updates
We have fixed the data attribution issues where running experiment iterations were receiving incorrectly attributed data. This incident is now resolved and results for active experiments are now accurate.
We have fixed the data attribution issue for any new experiments that are created, so their iterations no longer receive incorrectly attributed data. We are fixing the data attribution error for active experiments.
The team has identified the root cause and is working on a fix.
We are investigating an issue where certain experiment iterations are receiving incorrectly attributed data.
Observability – Degraded OTel Telemetry Processing
2 updates
This incident has been resolved. Telemetry data was dropped between 5:03 PM and 7:30 PM PT. All systems are now operating normally.
We are currently investigating an issue affecting OTel telemetry ingestion in LaunchDarkly Observability. Some telemetry data may not be processed as expected. Our team is actively working to identify the root cause and mitigate impact.
Degraded Observability for Customers Using AWS CloudWatch Metric Streams or Firehose Log Export
2 updates
This issue has been resolved. Observability ingestion from AWS CloudWatch Metric Streams and CloudWatch Firehose log export has returned to normal. Data sent during the incident window may not have been ingested, and customers may see gaps in metrics or logs for that period.
Some customers using LaunchDarkly Observability with AWS CloudWatch Metric Stream and/or CloudWatch Firehose log export may experience delayed ingestion of metrics/logs into Observability. The impact is limited to Observability data ingestion and related dashboards/alerts.
Increased error rate in our UI and API endpoints
2 updates
Customers are seeing expected behavior when accessing the LaunchDarkly UI and API.
We are investigating an issue where a small number of customers may experience errors when accessing the LaunchDarkly UI and API. Feature flag delivery is not impacted. We will provide updates as the investigation continues.
January 2026 (4 incidents)
Data ingestion delays
4 updates
This incident has been resolved and all data processing pipelines are fully caught up. No data was lost.
A fix has been implemented and our event processing pipelines for Observability and OpenTelemetry are fully caught up. We're continuing to monitor as our event processing pipeline catches up for Flag Status, Evaluations, and Contexts.
We have identified the issue and are continuing our work to resolve it.
All customers are experiencing data ingestion delays with the following:
- Observability sessions and errors
- OpenTelemetry logs, traces, and metrics
- Flag status
- Evaluations
- Contexts
We are investigating and will provide updates as they become available. No data loss is expected.
Elevated error rate when configuring Okta SCIM
4 updates
This incident has been resolved.
We believe the issue is resolved for all customers. We're continuing to monitor the situation.
Some customers using Okta SCIM are encountering errors when provisioning and managing LaunchDarkly members. We continue to work on a remediation and have engaged with Okta's support team.
Some customers are experiencing errors when configuring LaunchDarkly with Okta SCIM. We have identified the issue and are continuing our work to resolve it.
Unable to edit JSON flag variations
4 updates
This incident has been resolved.
A fix has been implemented for the issue preventing editing some JSON flag variations.
We've identified a front-end issue that is preventing edits to certain JSON flag variations and are working on a fix.
Customers may experience issues editing JSON flag variations. We are investigating the root cause and will provide updates shortly.
Guarded releases event ingestion delays
4 updates
This incident has been resolved.
Events are caught up.
A fix has been implemented and we are monitoring the results. We expect to catch up on all events within the next 15 minutes; no data loss is expected.
We are currently experiencing delays with guarded releases event ingestion. We are investigating and will provide updates as they become available.
December 2025 (5 incidents)
Delay in Observability product data ingest
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified the cause of the ingest delay and are catching up on the backlogged messages. We expect to be caught up on all delayed sessions and errors in the next hour. Data loss is not expected.
Sessions and errors may be delayed by up to 3 hours. We are investigating the root cause.
Investigating - Increase in SDK errors
8 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are observing a reduction in SDK errors. We are continuing to work on a fix.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are also observing a small percentage of timeouts when modifying feature flags via our API or UI. We are continuing to investigate this error.
We are continuing to investigate this issue.
We are investigating an increase in SDK error rates affecting a small portion of requests, currently estimated at less than one percent. SDKs will automatically retry these errors, so the primary customer impact is expected to be longer SDK initialization times rather than request failures. We believe the issue is related to an ongoing incident affecting one of our vendors. Our team is actively working to mitigate the impact and will provide additional updates as more information becomes available.
Event Processing Delays - Experiment Results Utilizing Attribute Filtering affected
2 updates
We have recovered from delays in experimentation results that are sliced by attributes. No data has been lost.
We are investigating an issue with delays in experimentation results that are sliced by attributes. No data has been lost.
Delays in publishing data export events
4 updates
All of the delayed data has been processed and this incident is resolved.
We are continuing to process the delayed data. The data is now updated through 2025-12-04, 01:00:00 UTC.
We are continuing to process the data and data is current as of 2025-12-03, 08:00:00 UTC. We'll continue to update as we process data.
Some customers who have configured Snowflake, BigQuery, or Redshift data export destinations may be experiencing delays in published events. There is no data loss. Exported data events are currently 32 hours behind. We are recovering steadily and will continue to send updates.
Elevated error rates in APAC region for server-side SDKs
1 update
Elevated error rates occurred in the APAC region for server-side SDKs attempting to make new connections to the streaming service from 8:05 AM PT to 8:11 AM PT. The issue is now resolved.
November 2025 (7 incidents)
Intermittent issues accessing flag details
3 updates
This incident has been resolved.
We are no longer seeing any errors, and the issue was contained to the euw1 region. We'll continue to monitor and update this as necessary.
We are currently investigating an issue intermittently preventing our flag details pages from loading.
Delayed Event Processing
4 updates
This incident has been resolved.
A fix has been implemented and our event processing pipeline is fully caught up. We're continuing to monitor.
We are continuing to work on a fix for this issue, and remain at an approximate 20 minute delay in flag event processing.
We have identified a delay in our event processing pipeline, and are working to mitigate the issue. Features that show flag usage metrics are affected, and data is approximately 20 minutes delayed right now.
Investigating elevated latency
2 updates
The issue with the AI Configs list page has been resolved. Impacted services have returned to normal operation.
We detected elevated latencies loading the flag list and AI configs list pages. The flag list’s performance has recovered, and we continue to investigate remediation on the AI configs list page.
Elevated error rates for a small number of customers
1 update
Between 7:37am and 8:19am PT, a small number of customers in the us-east-1 region encountered elevated error rates with Polling SDK and API requests. This was caused by a minor issue affecting a CDN POP, which has since been resolved.
Customers unable to edit custom rules on flags
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
AI Configs monitoring page tab failing to load
5 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We are continuing to work on a fix for this issue.
The issue has been identified and a fix is being implemented.
Delayed flag updates for small number of customers
1 update
A limited number of customers (primarily in EU regions) with Polling SDK connections experienced elevated latency and error rates between 9:05am and 9:58am PT, caused by a service incident in our CDN provider.
October 2025 (7 incidents)
Live Events not loading
3 updates
We've resolved an issue causing Live Events to not load.
We've identified an issue that was causing Live Events to not load (starting Oct 23 11:03am PT) and are resolving the issue.
We've received reports of Live Events not loading and are investigating.
Experiment results and metrics unavailable
3 updates
We've resolved an issue causing Experiment results to fail to load.
We've identified an issue affecting the display of Experiment results and are working on a fix.
We are investigating reports of experiment results and metrics failing to load.
Elevated latencies and delays
30 updates
This incident has been resolved. One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Please refer to the documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information, and to https://app.launchdarkly.com/api/v2/public-ip-list for the complete list of public IPs. Customers who switched from streaming to polling mode as a workaround are clear to revert back to streaming mode.
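For customers who maintain firewall or proxy allowlists, a minimal sketch of automating the check, assuming Python with the requests library; the endpoint URL comes from the update above, while the exact JSON layout is an assumption, so the list values are handled generically:

```python
# Fetch LaunchDarkly's published public IP list and print every CIDR block,
# e.g. to diff against an existing allowlist before applying changes.
import requests

resp = requests.get("https://app.launchdarkly.com/api/v2/public-ip-list", timeout=10)
resp.raise_for_status()

# Assumed shape: a JSON object mapping service names to lists of CIDR strings.
for service, cidrs in resp.json().items():
    if isinstance(cidrs, list):  # skip any non-list metadata fields
        for cidr in cidrs:
            print(f"{service}: {cidr}")
```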
One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update IP allowlists in their firewalls or proxy servers in order for their services to continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Please refer to the documentation at https://docs.launchdarkly.com/home/advanced/public-ip-list for more information, and to https://app.launchdarkly.com/api/v2/public-ip-list for the complete list of public IPs. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We'll provide another update within 60 minutes. The following stable IPs were added: 52.22.11.124/32, 98.90.74.184/32, 44.214.199.141/32, 54.158.1.193/32, 52.20.244.244/32, 3.222.86.128/32, 3.209.231.150/32, 98.87.97.132/32, 54.243.249.198/32, 52.205.29.16/32, 52.200.155.176/32, 72.44.54.239/32, 44.193.41.212/32, 44.193.145.213/32, 3.230.174.47/32, 34.193.141.46/32, 54.145.215.104/32, 54.83.149.69/32, 54.167.133.6/32, 98.86.214.67/32, 3.210.111.117/32, 44.198.65.246/32, 3.223.193.186/32, 54.164.149.203/32, 52.202.164.129/32, 54.211.161.195/32, 52.44.175.163/32, 54.87.94.27/32, 34.196.162.28/32, 3.229.200.95/32, 34.206.243.165/32, 44.198.216.81/32, 98.85.64.100/32, 34.193.205.73/32, 54.82.179.12/32, 35.169.61.114/32, 3.225.212.129/32, 44.214.230.241/32, 44.197.94.28/32, 54.225.42.164/32, 3.232.151.250/32, 98.88.212.98/32, 44.206.106.7/32, 44.219.171.95/32, 54.81.117.83/32, 3.212.29.247/32, 52.207.48.173/32, 52.21.24.75/32, 44.209.163.213/32, 3.212.26.71/32, 3.232.245.239/32, 44.214.85.107/32, 54.85.9.44/32, 3.212.63.158/32, 44.214.25.250/32, 34.225.52.183/32, 54.144.244.40/32, 13.216.151.182/32, 34.205.184.16/32, 54.243.39.147/32, 52.21.118.82/32, 44.208.247.20/32, 44.209.6.233/32, 98.85.24.70/32, 52.206.193.249/32, 52.203.145.124/32, 34.207.21.226/32, 52.6.144.34/32, 3.221.55.92/32, 54.160.1.221/32, 54.236.171.5/32, 3.210.143.243/32, 18.204.254.23/32, 34.224.206.32/32, 54.152.40.39/32, 52.201.30.87/32, 98.86.87.228/32, 52.70.143.213/32, 34.199.166.40/32, 54.225.71.167/32, 100.26.67.253/32, 13.219.10.149/32, 52.203.44.182/32, 3.215.17.57/32, 3.217.93.49/32, 3.215.154.205/32, 3.224.166.159/32, 44.205.194.1/32, 54.162.82.157/32, 54.175.84.251/32, 54.211.58.167/32, 52.22.199.197/32, 35.169.162.188/32, 44.205.162.192/32, 54.224.162.1/32, 50.16.48.228/32, 52.203.187.144/32, 52.22.34.71/32, 52.44.226.138/32, 35.169.87.104/32, 50.17.142.209/32, 34.226.53.28/32, 50.16.209.122/32, 54.173.173.176/32, 54.197.143.76/32, 52.45.14.195/32, 54.84.144.50/32, 52.205.140.231/32, 52.1.64.188/32, 23.22.17.50/32, 44.213.219.16/32, 54.211.63.220/32, 34.236.195.69/32, 100.29.106.41/32, 107.20.48.118/32, 107.22.84.205/32, 107.23.47.163/32, 174.129.120.2/32, 174.129.25.155/32, 18.204.101.179/32, 18.207.77.1/32, 18.214.59.159/32, 3.208.63.99/32, 3.209.142.240/32, 3.210.8.83/32, 3.211.0.174/32, 3.211.171.106/32, 3.211.40.100/32, 3.211.78.169/32, 3.212.153.172/32, 3.212.215.241/32, 3.212.69.145/32, 3.215.132.92/32, 3.215.85.74/32, 3.217.156.217/32, 3.217.33.194/32, 3.222.172.85/32, 3.225.49.136/32, 3.226.201.70/32, 3.232.113.99/32, 3.81.156.201/32, 3.94.227.253/32, 34.192.228.56/32, 34.196.53.78/32, 34.197.220.63/32, 34.197.229.208/32, 34.198.5.248/32, 34.205.180.137/32, 34.206.142.57/32, 34.225.210.63/32, 34.225.44.159/32, 34.232.120.176/32, 34.235.101.237/32, 34.237.149.109/32, 34.237.7.234/32, 35.153.62.144/32, 35.171.42.112/32, 35.172.28.29/32, 35.175.51.91/32, 44.193.160.19/32, 44.193.176.64/32, 44.193.192.114/32, 44.195.178.165/32, 44.205.130.196/32, 44.205.142.202/32, 44.205.242.41/32, 44.207.32.19/32, 44.208.215.105/32, 44.210.2.163/32, 44.221.72.252/32, 44.223.189.67/32, 50.16.53.115/32, 52.0.20.18/32, 52.1.126.54/32, 52.20.44.107/32, 52.200.10.183/32, 52.201.19.0/32, 52.202.18.147/32, 52.205.199.141/32, 52.205.74.149/32, 52.206.123.108/32, 52.21.16.31/32, 52.22.120.141/32, 52.22.75.64/32, 52.23.189.51/32, 52.3.131.52/32, 52.3.164.32/32, 52.3.203.3/32, 52.4.17.19/32, 52.55.197.16/32, 52.6.134.5/32, 52.7.81.224/32, 54.147.67.241/32, 54.156.155.61/32, 54.158.114.255/32, 54.158.201.166/32, 54.167.202.203/32, 54.235.4.229/32, 54.243.165.178/32, 54.243.220.97/32, 54.243.227.67/32, 54.243.238.143/32, 54.243.34.157/32, 54.243.54.147/32, 54.243.58.248/32, 54.243.79.193/32, 54.80.39.21/32, 54.81.213.212/32, 54.84.21.101/32, 54.84.245.230/32, 98.82.52.30/32, 98.82.55.107/32
One of our mitigation steps involved adding new IPs for stream.launchdarkly.com to our public IP list. Some customers may need to update the IP allowlists in their firewalls or proxy servers to ensure that their services can continue establishing streaming connections from server-side SDKs to LaunchDarkly without disruption. Approximately 88% of traffic to stream.launchdarkly.com will continue to be routed to existing stable IPs. We are working with AWS to provide a list of additional stable IPs and will post another update as soon as they become available. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.
Server-side streaming is healthy. The load balancer upgrade, along with the addition of another load balancer, has restored our service to healthy levels. We will continue to actively monitor our services and provide updates if anything changes. We recommend that customers who switched from streaming to polling mode as a workaround remain in polling mode for now. We will continue to provide updates to this recommendation. We’ll provide another update within 60 minutes.
We're seeing signs of recovery: reported error rates for server-side SDKs are dropping significantly. The initial load balancer unit was upgraded and has begun handling traffic successfully. The additional load balancer is online and is beginning to handle traffic. Customers may still experience delayed flag updates. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. An additional load balancer has been brought online and is being configured to receive traffic. When we confirm that this is successful, we'll bring the other additional load balancer units online to handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. We are in the process of deploying additional load balancer units that are about to go online. We expect them to successfully handle the increased volume in traffic and restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage. We're still working on creating additional load balancer units to distribute and handle the increased volume in traffic. AWS is providing active support to LaunchDarkly as we work to restore service to our customers. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the reported error rates for server-side SDKs are reducing. We've added an additional load balancer unit to distribute the traffic which is helping. Based on the volume of traffic, we're going to add five additional load balancer units to give our service enough capacity to handle it. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We've escalated the recovery process with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. They are updating our ALB's load balancer capacity units (LCUs) to accommodate increased levels of inbound traffic to our platform. Customers may still experience timeouts and 5xx errors when connecting to the server-side SDK endpoints. We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're working with our AWS technical support team to accelerate the redeployment of our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Customers connecting their server-side SDKs directly to LD's streaming capabilities can reconfigure their SDKs to use polling to mitigate.
Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream
Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream
Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-
.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html
Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline
We'll provide another update within 60 minutes.
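To make the Python option above concrete, a minimal sketch of the workaround, following the linked Config.stream documentation; the SDK key is a placeholder:

```python
# Switch the Python server-side SDK from streaming to polling by setting
# Config.stream to False, per the workaround described above.
import ldclient
from ldclient.config import Config

ldclient.set_config(Config(sdk_key="YOUR_SDK_KEY", stream=False))
client = ldclient.get()  # now polls for flag updates instead of streaming
```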
Server-side streaming API is still experiencing a Partial outage in our main US region and we're continuing our efforts to restore service. We're redirecting traffic to an EU region to help distribute the load to healthy servers while we work to restore our primary region. Customers connecting their server-side SDKs directly to LD's streaming capabilities can reconfigure their SDKs to use polling to mitigate.
Node:
- Set LDOptions.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-nodejs-server-side-code-sample
- https://launchdarkly.github.io/js-core/packages/sdk/server-node/docs/interfaces/LDOptions.html#stream
Python:
- Set Config.stream to false
- https://launchdarkly.com/docs/sdk/features/config#expand-python-code-sample
- https://launchdarkly-python-sdk.readthedocs.io/en/latest/api-main.html#ldclient.config.Config.stream
Java:
- Use Components.pollingDataSource() instead of the default Components.streamingDataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-java-code-sample
- https://launchdarkly.github.io/java-core/lib/sdk/server/com/launchdarkly/sdk/server/LDConfig.Builder.html#dataSource-com.launchdarkly.sdk.server.subsystems.ComponentConfigurer-
.NET:
- Create a builder with PollingDataSource(), change its properties with the methods of this class, and pass it to DataSource()
- https://launchdarkly.com/docs/sdk/features/config#expand-net-server-side-code-sample
- https://launchdarkly.github.io/dotnet-server-sdk/pkgs/sdk/server/api/LaunchDarkly.Sdk.Server.Integrations.PollingDataSourceBuilder.html
Enterprise customers connecting their server-side SDKs to a Relay Proxy cluster can reconfigure their Relay Proxy to be in Offline Mode to mitigate: https://launchdarkly.com/docs/sdk/relay-proxy/offline
We'll provide another update within 60 minutes.
Server-side streaming API is still experiencing a Partial outage and the error rates for server-side SDKs are still high. We're redeploying our ALB for SDK connections to restore service. As a temporary workaround, we recommend switching server-side SDK configs from streaming to polling. Error rates for client-side streaming SDKs are low, but flag updates are still delayed. All other service components are fully recovered and we've updated their status to Operational. We will provide our next update within 60 minutes.
We're redeploying parts of our service to address the high error rates for client and server side SDK connections that we continue to see. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
Server-side streaming connections continue to be impacted by this incident. The event ingestion pipeline is fully functional again. This means that the following product areas are functional for all customers, while data sent between Sunday Oct 19 11:45pm PT and Monday Oct 20 2:45pm PT may be unrecoverable:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
Additionally, Observability functionality has recovered as mentioned in our previous update. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 30 minutes.
The LaunchDarkly web application is fully recovered for customer traffic. Flag Delivery traffic has been scaled back up to 100% and connection error rates are decreasing but non-zero. Active streaming connections should receive flag updates once successfully connected. If disconnected, these connections will automatically retry in accordance with our SDK behavior until being able to connect successfully. We've currently enabled 7.5% of traffic for the event ingestion pipeline and will continue to enable it progressively. As of 1:40pm PT Observability data is successfully flowing again and we are catching up on data backlog. Observability data between 1:50am PT and 1:40pm PT is unrecoverable due to an outage in the ingest pipeline. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
We've hit our target of healthy, stable nodes that are available for LaunchDarkly web application and are increasing traffic from 10% to 20%. We'll continue to monitor as we scale the web application back up. Recovering the Flag Delivery service for all customers is our top priority. We're working on stabilizing the Flag Delivery Network. We are beginning to progressively enable the event ingestion pipeline for the LaunchDarkly service. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. We will provide our next update within 60 minutes.
The impacted AWS region continues to recover and make resources available which we are using to improve the availability of the LaunchDarkly platform. As we continue to recover and scale up, so do our customers. This increase in traffic is slowing our ability to reduce the impact of the outage. For customers who are using the LaunchDarkly SDKs, we do not recommend making changes to your SDK configuration at this time as doing so will impact our ability to continue service during our recovery. For Flag Delivery, server-side streaming is back online and no longer impacted by the incident for most customers. Customers using big segments or payload filtering are still impacted. The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 60 minutes.
We've made significant progress on our recovery from this incident. Our engineers are continuing to bring the LaunchDarkly web application into a healthy state and have more than tripled the number of healthy nodes to serve our customers. The status of many service components has been upgraded from Major Outage to Partial Outage. The following components are still experiencing a Major Outage:
- Experiment Results Processing
- Global Metrics
- Feature Management Context Processing
- Feature Management Data Export
- Feature Management Flag Usage Metric
The EU and Federal LaunchDarkly instances continue to not be impacted by this incident at this time. The event ingestion pipeline will remain disabled to limit the traffic volume within LaunchDarkly's services during our recovery. We will provide our next update within 30 minutes.
We continue to work towards recovering from this incident. We're actively working towards restoring the LaunchDarkly service into a healthy state. We now have 58% of the LaunchDarkly web application in a healthy state. The EU and FedRAMP LaunchDarkly instances are not impacted by this incident. While working towards a resolution for our customers, we disabled the event ingestion pipeline to limit the traffic volume within LaunchDarkly's services. This means that the following product areas have unrecoverable data loss:
- AI Configs Insights
- Contexts
- Data Export
- Error Monitoring
- Event Explorer
- Experimentation
- Flag Insights
- Guarded rollouts
- Live Events
- Observability
While recovering, there is continued impact to customers using our SDKs to connect to our Flag Delivery network. Our engineers are continuing to recover our service in our main region. We will provide our next update within 30 minutes.
While we continue to resolve this incident, we want to clarify the ongoing impact to our Flag Delivery Network and SDKs:
- Customers using client-side or server-side SDKs should continue to see the last known flag values if a local cache exists, or fall back to in-code values.
- Customers using our Relay Proxy should continue to see last known flag values if a local cache exists.
- Customers using our Edge SDKs should continue to see last known flag values.
Additionally, our event ingestion pipeline is dropping events that power product features such as flag insights, experimentation, observability, and context indexing.
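To make the in-code fallback behavior concrete, a minimal sketch, assuming a recent Python server-side SDK with the Context API; the flag key, context, and default value are illustrative placeholders:

```python
# Every variation call carries an in-code default. When no connection is
# available and no cached value exists, that default is what gets served.
import ldclient
from ldclient import Context

client = ldclient.get()  # assumes ldclient.set_config(...) ran at startup
context = Context.builder("user-123").kind("user").build()

# Returns the last known value if one is cached; otherwise falls back to False.
enabled = client.variation("example-flag", context, False)
```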
We're continuing to work on resolving the immediate impact from this incident. We're actively working on recovering within our AWS us-east-1 region while also working on options to move traffic to a healthier region.
We are continuing to work on a fix for this issue.
We are aware that our web app and API are experiencing high error rates due to scaling issues in the AWS us-east-1 region.
We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, we are experiencing an elevated error rate on the client-side SDK streaming API in the us-east-1 region due to scaling issues in that AWS region.
We are still experiencing delays in flag updates and the event ingestion pipeline, affecting experimentation, data export, flag status metrics, and others. Additionally, observability data (session replays, errors, logs, and traces) has also been impacted starting ~1:50am PT.
We are seeing initial recovery for the following services:
- Flag updates
- SDK requests for environments using Big Segments
We are monitoring for the recovery of the rest of the services.
We are continuing to work on the issue. Additionally impacted services:
- Delayed flag updates to SDKs
- Dropped SDK events impacting Experimentation and Data Export
We have identified an issue with elevated error rates and event pipelines. Currently impacted services are:
- SDK and Relay Proxy requests for environments using Big Segments in the us-east-1 region
- Guarded rollouts
- Scheduled flag changes
- Experimentation
- Data export
- Flag usage metrics
- Emails and notifications
- Integration webhooks
We are investigating elevated latencies and delays in multiple services including scheduled flag changes, flag updates and events processing. We will post updates as they are available.
Delays in event data
3 updates
The issue with delays in event data has been resolved. Event data is up to date and impacted services have returned to normal operation.
Customers are experiencing up to 21 minute delays with product features using event data. We have identified the issue and are continuing our work to resolve it. Data loss is not expected. Customers may begin seeing recovery of affected services at this time.
All customers are experiencing up to 21 minute delays with product features using event data. We are investigating and will provide updates as they become available. Data loss is not expected.
Delayed flag updates for small number of customers
2 updates
The issue has been resolved. Flag updates have returned to normal operation.
A small number of customers experienced delayed flag updates made between 15:24 and 15:34 PT. The issue has been mitigated and we will continue monitoring.
Errors generating new client libraries
2 updates
Users are now able to generate new client libraries.
We're aware of intermittent difficulties generating new client libraries. We're investigating.
Delay in event processing
4 updates
This incident has been resolved.
We've implemented a fix and are monitoring the results. Impact to Data Export was limited to our streaming data export product.
We've mitigated the impact on processing events for all features outside of Data Export. We're continuing to investigate.
We are currently investigating an issue recording events; some flag, metric, and experimentation events won't show in the UI.
September 2025 (2 incidents)
Self-serve legacy customers are unable to check out or modify plan
3 updates
The issue with legacy self-serve check out has been resolved.
The issue with legacy self-serve plans has been identified and a fix has been implemented. We are continuing to monitor the performance of impacted services. We will continue to update this page until it is resolved.
Customers on legacy plans (such as Starter and Professional) are unable to check out or modify the plan. We have identified a fix and will provide an update as soon as the fix is ready. Please contact Support if you need to make an immediate change to your plan.
Increased error rate on flag status API
1 update
From 11:38 am to 11:46 am PT, we experienced an elevated error rate on the flag evaluation and flag status APIs, used by flag list, flag targeting, and feature monitoring endpoints.