
Fly.io Outage History

Past incidents and downtime events

Complete history of Fly.io outages, incidents, and service disruptions. Showing 50 most recent incidents.

May 2026 (1 incident)

minor · resolved · May 4, 06:57 PM — Resolved May 4, 08:42 PM

Log search unavailable

3 updates
resolved · May 4, 08:42 PM

This incident has been resolved.

monitoring · May 4, 08:00 PM

We have a mitigation in place and are monitoring results.

investigating · May 4, 06:57 PM

Log search in Grafana is currently unavailable. You may see `failed to make http request: 502` errors when accessing logs from fly-metrics.net at this time. App logs continue to be available using the `fly logs` command and in the Fly.io dashboard.

April 2026 (13 incidents)

minor · resolved · Apr 28, 11:50 PM — Resolved Apr 29, 12:40 AM

flyctl deploy creating new app instances

4 updates
resolved · Apr 29, 12:40 AM

This incident has been resolved.

monitoring · Apr 29, 12:31 AM

A fix has been implemented and we are monitoring the results.

identified · Apr 29, 12:07 AM

The issue has been identified and a fix is being implemented.

investigating · Apr 28, 11:50 PM

We're investigating an issue where `fly deploy` creates new Fly Machine instances rather than updating existing ones, leaving apps in a mixed state. As a workaround, please try removing the `processes = [ "app" ]` line from your fly.toml configuration file and redeploying. Alternatively, downgrading flyctl to 0.4.40 should resolve the issue in the meantime.
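The fly.toml workaround is a one-line deletion. A hypothetical excerpt (the app name, section, and port are placeholders for illustration, not from the incident):

```toml
# fly.toml (excerpt; values are placeholders)
app = "example-app"

[http_service]
  internal_port = 8080
  # Deleting this line and redeploying was the suggested workaround:
  processes = ["app"]
```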

minor · resolved · Apr 24, 10:45 PM — Resolved Apr 24, 11:31 PM

Slow Machine operations in IAD region

5 updates
resolved · Apr 24, 11:31 PM

This incident has been resolved.

monitoring · Apr 24, 11:19 PM

Network packet loss has returned to normal levels. We are monitoring the Machines API for stability.

investigating · Apr 24, 11:18 PM

We are continuing to investigate this issue.

investigating · Apr 24, 10:58 PM

We are deploying a partial mitigation while we continue investigating.

investigating · Apr 24, 10:45 PM

We are currently investigating the issue. Only a portion of machines within the region are impacted.

minor · resolved · Apr 23, 03:05 PM — Resolved Apr 23, 04:26 PM

Errors when adding or editing GitHub integrations for deployments

5 updates
resolved · Apr 23, 04:26 PM

This incident has been resolved.

monitoring · Apr 23, 03:39 PM

A fix has been implemented and we are monitoring the results.

identified · Apr 23, 03:22 PM

We are continuing to work on a fix for this issue.

identified · Apr 23, 03:22 PM

The issue has been identified and a fix is being implemented.

investigating · Apr 23, 03:05 PM

We're investigating reports of "500" errors when trying to add a new GitHub integration or edit an existing GitHub integration in the Fly.io dashboard. This only affects "Launch an app from GitHub" or trying to change settings for an app set up this way. Existing integrations continue to work normally. It does not affect deploys done with `flyctl` or existing, running apps.

major · resolved · Apr 23, 11:17 AM — Resolved Apr 23, 11:50 AM

Errors (5xx, timeouts) in Fly.io dashboard

4 updates
resolved · Apr 23, 11:50 AM

This incident has been resolved.

monitoring · Apr 23, 11:45 AM

A fix has been implemented and we are monitoring the results.

identified · Apr 23, 11:35 AM

The issue has been identified and a fix is being implemented.

investigating · Apr 23, 11:17 AM

We are investigating issues with the web dashboard.

minor · resolved · Apr 20, 02:29 PM — Resolved Apr 20, 05:38 PM

Increased latency in SIN

2 updates
resolved · Apr 20, 05:38 PM

This incident has been resolved.

identified · Apr 20, 03:29 PM

We are currently working on resolving increased latencies in our Singapore region.

major · resolved · Apr 17, 01:06 PM — Resolved Apr 18, 08:42 PM

TLS certificate issues

3 updates
resolved · Apr 18, 08:42 PM

This incident has been resolved.

monitoring · Apr 17, 03:34 PM

A fix has been implemented and we are monitoring the results.

investigating · Apr 17, 01:06 PM

We are investigating an issue with the Vault server that stores TLS certificates. Provisioning new TLS certificates may fail, and connecting to domains whose existing certificate has not yet been cached may fail.

none · resolved · Apr 15, 11:08 AM — Resolved Apr 16, 10:59 AM

Network issues in SYD

3 updates
resolved · Apr 16, 10:59 AM

This incident has been resolved.

monitoring · Apr 15, 11:40 AM

We've identified the issue and applied a fix. All services should be working as normal.

investigating · Apr 15, 11:08 AM

We're currently investigating some networking issues in SYD. This is affecting a number of our central services.

none · resolved · Apr 12, 06:50 PM — Resolved Apr 12, 11:03 PM

Heightened latency in ORD

3 updates
resolved · Apr 12, 11:03 PM

This incident has been resolved.

monitoring · Apr 12, 07:26 PM

A fix has been implemented and we are monitoring the results.

investigating · Apr 12, 06:50 PM

We are currently investigating heightened network latency in ORD.

minor · resolved · Apr 10, 06:42 PM — Resolved Apr 10, 09:48 PM

Managed Postgres control plane instability in NRT (Tokyo)

4 updates
resolved · Apr 10, 09:48 PM

This incident has been resolved.

monitoring · Apr 10, 08:32 PM

A fix has been implemented and we are seeing MPG performance in NRT normalize. We are continuing to monitor to ensure a stable recovery.

identified · Apr 10, 08:13 PM

The issue has been identified and a fix is being implemented. Users with clusters in NRT may continue to see instability at this time.

investigating · Apr 10, 06:42 PM

We are investigating instability in the MPG control plane in the NRT (Tokyo, Japan) region causing unexpected cluster failovers. Clusters return to health shortly after, but some users with clusters in NRT may see dropped connections or degraded performance at this time.

major · resolved · Apr 9, 07:29 PM — Resolved Apr 9, 08:14 PM

Unavailable hosts in ORD region

2 updates
resolved · Apr 9, 08:14 PM

This incident has been resolved.

investigating · Apr 9, 07:29 PM

Some hosts in our Chicago (ORD) region are currently inaccessible. We are working with our provider to resolve this issue. To see if you are affected, please visit the personalized status page: https://fly.io/status. A small number of Managed Postgres clusters may also be inaccessible at this time.

major · resolved · Apr 9, 03:50 AM — Resolved Apr 9, 05:30 AM

Managed Postgres Control Plane Issues in SYD

4 updates
resolved · Apr 9, 05:30 AM

This incident has been resolved.

monitoring · Apr 9, 05:20 AM

Control plane operations in SYD have returned to normal and all clusters are healthy at this time. We're continuing to monitor to ensure stable recovery.

identified · Apr 9, 04:12 AM

We are seeing an improvement in control plane performance in the SYD region. Some clusters in the region currently are showing degraded standby nodes and we are working to bring those back to full health.

investigating · Apr 9, 03:50 AM

We are investigating elevated control plane issues for Managed Postgres clusters in SYD. The majority of clusters appear to be running fine, but new creates, backup restores, and upgrades may show errors or take longer than usual to complete. Some clusters will have seen a failover event from primary to standby.

major · resolved · Apr 8, 08:34 AM — Resolved Apr 8, 12:23 PM

Metrics currently experiencing issues

4 updates
resolved · Apr 8, 12:23 PM

This incident has been resolved.

monitoring · Apr 8, 11:02 AM

We are continuing to monitor for any further issues.

monitoring · Apr 8, 11:00 AM

We have implemented a fix. We're monitoring the cluster for further issues.

investigating · Apr 8, 08:34 AM

We are currently investigating an issue with our metrics cluster.

critical · resolved · Apr 7, 03:08 PM — Resolved Apr 7, 06:17 PM

GraphQL API / Dashboard Issues

4 updates
resolved · Apr 7, 06:17 PM

This incident has been resolved.

monitoring · Apr 7, 03:39 PM

A fix has been implemented and we are monitoring the results.

identified · Apr 7, 03:17 PM

We have restored GraphQL and dashboard availability, but some actions (e.g. app state updates) may still be delayed.

investigating · Apr 7, 03:08 PM

We are investigating issues with our GraphQL API and web dashboard.

March 2026 (20 incidents)

none · resolved · Mar 29, 03:00 PM — Resolved Mar 29, 04:01 PM

Low Capacity in SIN and AMS regions

6 updates
resolved · Mar 29, 04:01 PM

This incident has been resolved.

monitoring · Mar 29, 03:35 PM

We've freed up additional room in the SIN and AMS regions and are monitoring capacity.

monitoring · Mar 29, 03:33 PM

We've freed up additional room in the SIN and AMS regions and are monitoring capacity.

identified · Mar 29, 03:19 PM

We are currently investigating capacity issues in the SIN and AMS regions that are affecting:
- Machine Create and Start events
- Deployments, due to affected, degraded Remote Builders
- Sprite startup from cold state

identified · Mar 29, 03:13 PM

This may also affect:
- Remote builders in the AMS and SIN regions, which could currently be experiencing degraded performance or failures.
- Sprites starting from a cold state, which may fail to start.

identified · Mar 29, 03:00 PM

We are currently investigating elevated errors when creating and starting machines in the SIN and AMS regions. Choosing other regions to create or deploy may help in the meantime.

minor · resolved · Mar 27, 06:08 PM — Resolved Mar 27, 09:51 PM

Low capacity in IAD

5 updates
resolved · Mar 27, 09:51 PM

This incident has been resolved.

monitoring · Mar 27, 09:09 PM

With the additional capacity we've brought online, machine start failure rates in IAD have now recovered. We'll continue to monitor IAD capacity.

identified · Mar 27, 07:21 PM

We've brought some additional capacity online in IAD and are seeing improvements, and we're continuing to work on adding more and freeing up additional room.

investigating · Mar 27, 06:47 PM

We're continuing to evaluate our options for increasing short-term capacity in the IAD region.

investigating · Mar 27, 06:08 PM

We're currently investigating capacity issues in IAD that are preventing machine starts (machine creates are currently unaffected). This may result in deploys failing to complete (even for apps outside of the IAD region). As a workaround, using legacy Fly builders explicitly located in another region (i.e., `FLY_REMOTE_BUILDER_REGION=lhr fly deploy --depot=false --recreate-builder`) may help in the meantime.

major · resolved · Mar 26, 03:21 PM — Resolved Mar 26, 05:54 PM

Machine Creates Failing in ORD Region

5 updates
resolved · Mar 26, 05:54 PM

This incident has been resolved.

monitoring · Mar 26, 05:28 PM

We've implemented a fix and have seen error rates for machine creates in ORD drop off. We're continuing to monitor the results.

identified · Mar 26, 04:50 PM

We've identified the cause of this increased failure rate and a fix is in progress. We are seeing most creates in ORD succeed at this time, though the failure rate is still above baseline.

investigating · Mar 26, 04:08 PM

We are continuing to investigate this issue. We are seeing 408 errors decreasing in ORD, though still above baseline.

investigating · Mar 26, 03:21 PM

We are currently investigating elevated errors creating machines in the ORD (Chicago, Illinois) region. Users may see `failed to launch VM: request returned non-2xx status: 408` errors when creating, updating, or scaling machines in ORD. Existing, already running machines in the ORD region continue to run as normal.

critical · resolved · Mar 26, 12:37 PM — Resolved Mar 26, 02:19 PM

Network issues in FRA region

4 updates
resolved · Mar 26, 02:19 PM

This incident has been resolved.

identified · Mar 26, 01:16 PM

Some Managed Postgres clusters in the FRA region are still unreachable; we are investigating this issue.

monitoring · Mar 26, 01:14 PM

Apps and Managed Postgres clusters in the FRA region should be back online at this time. We are monitoring for any further issues.

investigating · Mar 26, 12:37 PM

We are investigating network issues in the FRA region. Apps and/or Managed Postgres clusters in the region may be inaccessible at this time.

none · resolved · Mar 23, 03:18 PM — Resolved Mar 23, 04:27 PM

Backend errors when trying to use Grafana to view logs

4 updates
resolved · Mar 23, 04:27 PM

This incident has been resolved; Grafana logs are now working properly.

monitoring · Mar 23, 03:55 PM

We've deployed a fix and are monitoring the results. Logs are now visible in Grafana.

identified · Mar 23, 03:41 PM

Using the Logs panel in Grafana at https://fly-metrics.net/ will show a 502 error from the backend and won't show any logs. You can use `fly logs` or the live log viewer directly on https://fly.io/dashboard to view streaming logs for the time being.

investigating · Mar 23, 03:18 PM

Using the Logs panel in Grafana at https://fly-metrics.net/ will show a 502 error from the backend and won't show any logs. You can use `fly logs` or the live log viewer directly on https://fly.io/dashboard to view streaming logs for the time being.

minor · resolved · Mar 20, 07:26 AM — Resolved Mar 23, 01:19 PM

Machines failing to start in DFW

5 updates
resolved · Mar 23, 01:19 PM

This incident has been resolved.

monitoring · Mar 21, 08:26 AM

Machine start success rates in DFW have improved but we are continuing to monitor and make further adjustments. We will provide updates as the situation progresses.

monitoring · Mar 20, 12:45 PM

In addition to freeing up existing capacity, the team has provisioned new capacity in DFW and we are monitoring the results.

monitoring · Mar 20, 08:08 AM

We freed up some capacity on our workers to allow for successful Machine starts.

investigating · Mar 20, 07:26 AM

The Machine start failure rate is elevated in DFW.

critical · resolved · Mar 19, 06:28 AM — Resolved Mar 19, 10:37 AM

Metrics currently experiencing issues

3 updates
resolved · Mar 19, 10:37 AM

This incident has been resolved. We're unable to recover the lost metrics from that one hour.

monitoring · Mar 19, 07:12 AM

We have implemented a fix. There has been approximately 1h of lost metrics from 06:07 UTC. We're monitoring the cluster for further issues.

investigating · Mar 19, 06:28 AM

We are currently investigating an issue with our metrics cluster.

major · resolved · Mar 18, 09:58 AM — Resolved Mar 18, 06:53 PM

Machines failing to start in DFW

4 updates
resolved · Mar 18, 06:53 PM

This incident has been resolved. Machine creates in DFW continue to work normally.

monitoring · Mar 18, 12:40 PM

A fix has been implemented and we are monitoring the results.

identified · Mar 18, 11:44 AM

The team is currently rolling out additional capacity in DFW which should help ease Machine start failures across the region.

investigating · Mar 18, 09:58 AM

We are investigating reports of machines failing to start in the DFW (Dallas) region with "insufficient memory" errors. This may cause deployment failures for applications running in DFW. Our team is actively working to restore full capacity in the region. If you are affected, deploying to an alternate region may serve as a temporary workaround. We will provide updates as the situation progresses.

major · resolved · Mar 18, 04:12 PM — Resolved Mar 18, 05:02 PM

IPv6 networking issues in SJC region

3 updates
resolved · Mar 18, 05:02 PM

This incident has been resolved.

monitoring · Mar 18, 04:31 PM

A fix has been implemented and we are monitoring the results.

investigating · Mar 18, 04:12 PM

We are investigating intermittent network issues in the SJC region impacting outbound public IPv6 access from Machines. Connecting to IPv6 internet resources from apps hosted in the SJC region may be slow or fail at this time. IPv4 access, as well as 6PN private networking, are unaffected.

minor · resolved · Mar 18, 02:07 PM — Resolved Mar 18, 02:18 PM

Connection Issues in SJC

2 updates
resolved · Mar 18, 02:18 PM

This incident has been resolved.

monitoring · Mar 18, 02:07 PM

Between 13:55 and 14:03 UTC, machines and MPG clusters hosted in the SJC region saw elevated connection errors. Users may have seen errors connecting to or from most machines in the region, as well as with deployments or updates to machines in the region. Networking has returned to normal in the region, and we are continuing to monitor closely to ensure stable recovery.

minor · resolved · Mar 18, 02:12 PM — Resolved Mar 18, 02:18 PM

Fly ssh console command failing

3 updates
resolved · Mar 18, 02:18 PM

This incident has been resolved.

monitoring · Mar 18, 02:17 PM

A fix has been implemented and we are seeing `ssh console` commands succeed as normal.

identified · Mar 18, 02:12 PM

We have identified an issue causing new `fly ssh console` connections to fail with 500 errors. A fix is in progress.

none · resolved · Mar 14, 04:20 AM — Resolved Mar 14, 02:05 PM

Sprites Operations: 401 errors for certain organizations

2 updates
resolved · Mar 14, 02:05 PM

This incident has been resolved.

monitoring · Mar 14, 01:55 PM

Organizations with names prefixed with numerical digits may experience 401 errors. Affected operations include actions such as Sprite creation and listing. A fix has been implemented as of 2026-03-14 12:30 UTC and we are monitoring the results.

major · resolved · Mar 11, 09:19 AM — Resolved Mar 11, 11:37 AM

Setting secrets and creating apps is degraded

4 updates
resolved · Mar 11, 11:37 AM

This incident has been resolved.

monitoring · Mar 11, 11:03 AM

While the secret storage service was in a read-only state, app creation requests queued up, due to the retry logic and insufficient request concurrency limits in our GraphQL API. This prevented our GraphQL API from serving any other requests. We have scaled up the GraphQL API and are continuing to monitor the situation.

monitoring · Mar 11, 10:14 AM

A fix has been implemented and we are monitoring the results.

identified · Mar 11, 09:19 AM

An ongoing data migration in our secret storage service is causing degraded Machines API functionality.
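The monitoring update for this incident describes a classic failure mode: retried requests queueing without a concurrency cap until the API can serve nothing else. As a generic illustration of the mitigation (not Fly.io's actual implementation), a limiter can reject work beyond a fixed in-flight cap instead of queueing it indefinitely:

```python
import threading

class ConcurrencyLimiter:
    """Cap in-flight requests and reject the excess instead of queueing.

    Unbounded queueing of retried requests is the failure mode described
    in the incident update above; failing fast keeps capacity free for
    other traffic. Generic sketch, not Fly.io's code.
    """

    def __init__(self, max_in_flight):
        self._sem = threading.BoundedSemaphore(max_in_flight)

    def try_run(self, fn):
        # Non-blocking acquire: if every slot is busy, refuse immediately
        # so the caller can retry later with backoff.
        if not self._sem.acquire(blocking=False):
            return None, "rejected: at capacity"
        try:
            return fn(), None
        finally:
            self._sem.release()

limiter = ConcurrencyLimiter(max_in_flight=2)
result, err = limiter.try_run(lambda: "app created")
```

Rejected callers see an immediate error rather than a hang, which also makes the saturation visible in metrics instead of manifesting as platform-wide latency.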

major · resolved · Mar 7, 02:42 PM — Resolved Mar 7, 03:56 PM

Private networking issues in SYD region

3 updates
resolved · Mar 7, 03:56 PM

This incident has been resolved.

monitoring · Mar 7, 03:10 PM

A fix has been implemented and we are monitoring the results.

investigating · Mar 7, 02:42 PM

We are investigating a private networking failure between SYD and other regions. Apps continue to run, and private networking within SYD is unaffected.

none · resolved · Mar 5, 07:24 PM — Resolved Mar 5, 07:50 PM

Routing issues in NA regions

3 updates
resolved · Mar 5, 07:50 PM

This incident has been resolved. Due to a BGP issue, we saw some North American traffic routed to edges in Singapore (sin). Users in North America would have seen additional request latency during this period.

monitoring · Mar 5, 07:38 PM

A fix has been implemented and we are monitoring the results.

investigating · Mar 5, 07:24 PM

We're aware of routing issues affecting some customers in North America regions, and we're actively investigating.

major · resolved · Mar 3, 08:18 PM — Resolved Mar 3, 09:15 PM

Elevated GraphQL API errors

3 updates
resolved · Mar 3, 09:15 PM

This incident was caused by a failed Redis node that powers our GraphQL API. We were able to recreate the Redis node and restore service. We are still investigating the root cause of the failure. In the meantime, all API endpoints now appear to be stable and errors have dropped to baseline level.

monitoring · Mar 3, 08:36 PM

A fix has been implemented and we are monitoring the results.

investigating · Mar 3, 08:18 PM

We're investigating elevated GraphQL errors that affect some API endpoints.

minor · resolved · Mar 3, 10:50 AM — Resolved Mar 3, 12:10 PM

Cost Explorer fails to load

2 updates
resolved · Mar 3, 12:10 PM

This incident has been resolved.

investigating · Mar 3, 10:50 AM

We are currently investigating this issue. The page currently displays: "We're having trouble loading the cost breakdown."

none · resolved · Mar 3, 12:54 AM — Resolved Mar 3, 12:54 AM

Certificates issues affecting API and proxy

1 update
resolved · Mar 3, 02:05 AM

Between 19:54 and 20:06 UTC, our Vault cluster serving app certificates was unavailable. This caused various API requests to fail, mainly operations on certificates but also app creates and IP assignments.

As the failure mode was Vault requests hanging rather than failing immediately, TLS requests through fly-proxy for domains whose certificate was not cached on the local node remained open for a long time while the proxy attempted to fetch the certificate; this caused some connections to fail as too many connection slots were taken up by requests waiting on Vault.

The root cause of this incident was a partially completed update to the Vault cluster. We will be implementing safeguards in the proxy for this failure mode, as well as improving certificate storage longer-term.
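The safeguard this postmortem promises, failing fast instead of letting requests hang on an unavailable backend, is typically implemented as a deadline around the blocking call. A generic sketch (not fly-proxy's code; `fetch_certificate` is a stand-in for the Vault lookup, with a simulated hang):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def fetch_certificate(domain):
    # Stand-in for a Vault lookup; simulate a hung backend.
    time.sleep(0.5)
    return f"cert-for-{domain}"

def fetch_with_deadline(domain, timeout_s):
    """Bound the blocking backend call with a deadline so a hung
    dependency cannot pin a connection slot indefinitely (the
    failure mode described in the incident above)."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fetch_certificate, domain)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            # Fail fast: the caller can return an error and free the slot
            # instead of holding the connection open.
            return None

result = fetch_with_deadline("example.com", timeout_s=0.05)
```

With the deadline shorter than the simulated hang, the call returns `None` promptly rather than occupying a slot for the full duration of the backend stall.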

major · resolved · Mar 2, 05:42 PM — Resolved Mar 2, 10:49 PM

Machines failing to boot in EWR

4 updates
resolved · Mar 2, 10:49 PM

This incident has been resolved.

monitoring · Mar 2, 08:35 PM

A fix has been implemented and we are monitoring the results.

identified · Mar 2, 06:21 PM

The issue has been identified and a fix is being implemented.

investigating · Mar 2, 05:42 PM

We are currently investigating this issue.

minor · resolved · Mar 2, 09:19 PM — Resolved Mar 2, 09:50 PM

Issues with the Machines API

4 updates
resolved · Mar 2, 09:50 PM

This incident has been resolved.

monitoring · Mar 2, 09:47 PM

A fix has been implemented and we are monitoring the results.

identified · Mar 2, 09:39 PM

The issue has been identified and a fix is being implemented.

investigating · Mar 2, 09:19 PM

We're currently investigating issues with the Machines API. Customer deployments and the Fly dashboard may be affected.

February 2026 (16 incidents)

major · resolved · Feb 27, 06:50 PM — Resolved Feb 27, 08:21 PM

Slow API requests

9 updates
resolved · Feb 27, 08:21 PM

This incident has been resolved. All platform and API operations are working normally.

monitoring · Feb 27, 08:05 PM

API and platform operations have normalized. We are continuing to monitor to ensure full and stable recovery. Background jobs are almost fully caught up. Users may still see slightly slower requests creating new apps / orgs, but they should complete successfully. Sprite and MPG cluster creations are processing as normal.

identified · Feb 27, 07:41 PM

A second fix has been deployed and database load has returned to normal, resulting in API response times beginning to normalize. Most Machines API requests should succeed as normal, and deploys to existing apps should also work. We are working through a backlog of background jobs. New app / organization creations and other operations that depend on these jobs will continue to see increased latency or failures while we work through the backlog. New MPG cluster and new Sprite creation continues to be impacted.

identified · Feb 27, 07:23 PM

An initial fix has been deployed and we are seeing improvements in load and API performance. Some operations that rely on the GraphQL API, such as new app creations and some deployments, will continue to fail at this time. We are continuing to work on restoring full availability.

identified · Feb 27, 07:05 PM

We are currently seeing full failures for requests to our GraphQL API and elevated failures for the Machines API. Direct calls to these APIs may fail, along with many flyctl commands. We have identified the cause of the issue and are continuing to work on a fix. Existing running machines and apps should continue to be reachable, but creates, deploys, or other features relying on platform API calls will fail at this time.

identified · Feb 27, 06:59 PM

New Sprite creations are also timing out or failing at this time. We are continuing to work on a fix for this issue.

identified · Feb 27, 06:53 PM

We are continuing to work on a fix for this issue.

identified · Feb 27, 06:52 PM

We have identified the cause of the increased latency and are working on a fix. The most common errors we are seeing are timeouts when users attempt to perform an action against a newly created app / machine resource. These may time out or fail with an `app|machine not found` error.

investigating · Feb 27, 06:50 PM

We are investigating an increase in API request latency and timeouts with the main platform API. This is impacting multiple operations, including creating, querying, or performing actions against machines, as well as platform-level operations like adding payment methods.

minor · resolved · Feb 27, 03:34 PM — Resolved Feb 27, 05:54 PM

Capacity issues in iad and dfw

3 updates
resolved · Feb 27, 05:54 PM

This incident has been resolved.

monitoring · Feb 27, 05:31 PM

We have provisioned additional capacity in dfw and iad and are monitoring to ensure machine and builder starts are succeeding consistently.

identified · Feb 27, 03:34 PM

These regions (Dallas, TX dfw and Ashburn, VA iad) are currently low on capacity. New machine creates in these regions might fail temporarily, and Depot builders may be unavailable, causing deploys to hang in "Waiting for Depot builder". If you are having issues with Depot builders, consider moving them to a different non-iad, non-dfw region in your fly.io dashboard's "Settings" page under "App builders", or try `--depot=false`.

none · resolved · Feb 26, 05:00 PM — Resolved Feb 26, 10:28 PM

Capacity issues in iad and dfw

6 updates
resolved · Feb 26, 10:28 PM

This incident has been resolved.

monitoring · Feb 26, 08:19 PM

We're continuing to monitor after having added more capacity to our DFW and IAD regions. Deploys or machine starts using existing volumes in these regions may still hit a capacity issue. Users should use `fly volume fork --vm-memory ` to fork the volume to a host with more capacity, then retry the deploy or start command using the new volume.

identified · Feb 26, 06:57 PM

We have added additional capacity in DFW and IAD regions and are monitoring the impact. New machine creates and deploys without volumes are seeing improved success rates. Deploys using depot builders in those regions are also improving, with much quicker builder start times. Deploys or machine starts using existing volumes in these regions may still hit a capacity issue. Users should use `fly volume fork --vm-memory ` to fork the volume to a host with more capacity, then retry the deploy or start command using the new volume.

identified · Feb 26, 05:18 PM

We've identified some newly created Managed Postgres clusters that are failing to come up healthy in these regions.

identified · Feb 26, 05:05 PM

New machine creates in these regions might fail temporarily, and Depot builders may be unavailable. If you are having issues with Depot builders, consider moving them to a different region, or try `--depot=false`.

identified · Feb 26, 05:00 PM

We have identified the problem and are working on a fix.

none · resolved · Feb 24, 05:23 PM — Resolved Feb 24, 05:51 PM

Sprites API degradation

3 updates
resolved · Feb 24, 05:51 PM

This incident has been resolved.

identified · Feb 24, 05:24 PM

A slow deploy is causing Sprites API degradation. We are implementing a fix.

identified · Feb 24, 05:23 PM

A slow deploy is causing Sprites API degradation. We are implementing a fix.

minor · resolved · Feb 24, 04:33 AM — Resolved Feb 24, 11:06 AM

Metrics are degraded

5 updates
resolved · Feb 24, 11:06 AM

Metrics processing has caught up, and we don't see any data loss.

monitoring · Feb 24, 09:35 AM

Delayed metrics are still being processed.

monitoring · Feb 24, 06:46 AM

Metrics are coming back online, but it will take a little time to process what's backed up in the queues.

identified · Feb 24, 05:49 AM

We're continuing to work with VictoriaMetrics support on a fix for this issue.

identified · Feb 24, 04:33 AM

In some cases data is missing or lagging. We've identified the problem and are working on a fix.

minor · resolved · Feb 24, 09:39 AM — Resolved Feb 24, 10:44 AM

Sprite creations failing

3 updates
resolved · Feb 24, 10:44 AM

This incident has been resolved.

monitoring · Feb 24, 10:25 AM

A fix has been implemented and we are monitoring the results.

investigating · Feb 24, 09:39 AM

We are currently investigating issues creating new Sprites.

none · resolved · Feb 23, 03:00 PM — Resolved Feb 23, 08:30 PM

Degraded Managed Postgres Control Plane

2 updates
resolved · Feb 24, 12:31 AM

This incident has been resolved as of 20:30 UTC.

investigating · Feb 23, 03:00 PM

We are currently investigating issues with the MPG control plane. Users may experience delays or hanging when creating or deleting databases via the dashboard or CLI.

minor · resolved · Feb 20, 04:14 PM — Resolved Feb 20, 08:49 PM

Deploys hanging at waiting for Depot Builder

5 updates
resolved · Feb 20, 08:49 PM

This incident has been resolved.

monitoring · Feb 20, 07:38 PM

The fix has been rolled out and we are seeing deploys using the Depot builder succeeding normally. We continue to monitor to ensure full recovery. Depot builders have been re-enabled as the default option for new deploys.

identified · Feb 20, 05:59 PM

A fix is being rolled out. Fly builders continue to be the default while this is deployed.

identified · Feb 20, 04:39 PM

We are again seeing elevated latency provisioning Depot builders on new deploys. Users may see deploys using Depot builders hang or time out at the "Waiting for Depot Builder" step. We are working on a fix. We are switching all deploys to use the default Fly builders in the meantime. If desired, users can manually switch back to Depot builders using `fly deploy --depot=true` but may continue to see latency issues at this time.

monitoring · Feb 20, 04:14 PM

We have seen elevated latency provisioning Depot builders during deployments over the past hour. This caused some deploys to hang or time out at the "Waiting for Depot Builder" step in this period. Latency has improved and builder provision times are back to normal. We're continuing to monitor to ensure latency remains normal.

minor · resolved · Feb 20, 10:52 AM — Resolved Feb 20, 11:57 AM

Networking issues for users connecting through lhr

3 updates
resolved · Feb 20, 11:57 AM

Network traffic in LHR has been stable for some time now; we are not seeing any further issues.

monitoring · Feb 20, 11:21 AM

A fix has been implemented and we are monitoring the results.

investigating · Feb 20, 10:52 AM

We're currently investigating this issue.

minor · resolved · Feb 19, 09:14 PM — Resolved Feb 20, 12:05 AM

Investigating registry issues affecting deploys

5 updates
resolved · Feb 20, 12:05 AM

This incident has been resolved.

identified · Feb 19, 10:24 PM

While we have seen some improvement from the previous fix, we are still seeing elevated rates of Registry connection issues. Users may continue to see slower machine creates and deploys due to slow image pulls. Deploys may succeed on a retry. We are continuing to work on restoring normal registry performance.

monitoring · Feb 19, 09:49 PM

A fix has been implemented and we are monitoring the results.

identified · Feb 19, 09:43 PM

The issue has been identified and a fix is being implemented.

investigating · Feb 19, 09:14 PM

We are currently investigating this issue.

major · resolved · Feb 18, 04:22 PM — Resolved Feb 18, 04:44 PM

Control plane state delayed on some hosts, possibly causing network or deployment disruption

4 updates
resolved · Feb 18, 04:44 PM

This incident has been resolved.

monitoring · Feb 18, 04:28 PM

A fix has been implemented and we are monitoring the results.

identified · Feb 18, 04:23 PM

We are continuing to work on a fix for this issue.

identified · Feb 18, 04:22 PM

The issue has been identified and a fix is being implemented.

major · resolved · Feb 17, 01:06 PM — Resolved Feb 17, 02:24 PM

flyctl deploy timeouts

3 updates
resolved · Feb 17, 02:24 PM

Earlier today, an issue caused elevated rate limiting and some deployment timeouts. A fix is in place and deployments are back to normal.

monitoring · Feb 17, 01:42 PM

A fix has been implemented and we are monitoring the results.

identified · Feb 17, 01:06 PM

We're investigating elevated 429 errors from flaps causing deployment timeouts. Affected deploys are failing with: `✖ Failed: error waiting for release_command machine XX to finish running: timeout reached waiting for machine's state to change Your machine never reached the state "destroyed"`.
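On the client side, 429 rate-limit responses like these are conventionally handled by retrying with exponential backoff rather than failing outright. A generic sketch of that pattern (not flyctl's actual retry logic; the simulated responses are illustrative):

```python
import time

def call_with_backoff(request, max_attempts=5, base_delay=0.01):
    """Retry a callable that may return HTTP 429, doubling the sleep
    between attempts (0.01 s, 0.02 s, 0.04 s, ...). Returns the first
    non-429 status, or the final 429 once attempts are exhausted."""
    status = None
    for attempt in range(max_attempts):
        status = request()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))
    return status

# Simulated API: rate-limited for the first two calls, then succeeds.
responses = iter([429, 429, 200])
status = call_with_backoff(lambda: next(responses))
```

Backoff spreads retries out over time, which avoids compounding the very rate limiting that caused the failure.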

major · resolved · Feb 14, 11:33 AM — Resolved Feb 14, 02:27 PM

Degraded Managed Postgres Control Plane in ORD

5 updates
resolved · Feb 14, 02:27 PM

This incident has been resolved.

monitoring · Feb 14, 02:07 PM

A fix has been implemented and we are seeing full recovery of the control plane in ORD. With that recovery we are seeing impacted replicas catching up and clusters returning to normal health. We're continuing to monitor for full recovery.

identified · Feb 14, 01:47 PM

We are continuing to work on a fix for this issue.

identified · Feb 14, 11:47 AM

The issue has been identified and we are working on a fix. The majority of MPG clusters in ORD continue to run normally, though some users may still see degraded replicas at this time. Some clusters in the region will have experienced a primary -> replica failover.

investigating · Feb 14, 11:33 AM

We are currently investigating issues with the MPG control plane in ORD. A small number of clusters in the region may be seeing replication lag or PgBouncer connectivity issues at this time.

minor · resolved · Feb 11, 08:44 PM — Resolved Feb 11, 09:30 PM

Issues with deploying apps using Depot builders for new accounts

4 updates
resolved · Feb 11, 09:30 PM

This incident has been resolved.

monitoring · Feb 11, 09:24 PM

A fix has been implemented and we are monitoring the results.

identified · Feb 11, 08:57 PM

The issue has been identified and a fix is being implemented.

investigating · Feb 11, 08:44 PM

Some new Fly.io users may encounter an "upgrade your organization" error message when attempting to deploy apps for the first time. We're currently working with Depot to figure out what's causing the issue. In the meantime, you should be able to work around the issue by using Fly builders with `fly deploy --depot=false`.

minor · resolved · Feb 11, 06:07 AM — Resolved Feb 11, 07:22 AM

Creating new sprites is degraded

6 updates
resolved · Feb 11, 07:22 AM

This incident has been resolved.

monitoring · Feb 11, 06:57 AM

Sprite creation appears to be back to normal operation now.

identified · Feb 11, 06:52 AM

We've identified the cause of the delay following creates and we're deploying a fix.

investigating · Feb 11, 06:09 AM

We are continuing to investigate this issue.

investigating · Feb 11, 06:08 AM

We are continuing to investigate this issue.

investigating · Feb 11, 06:07 AM

Sprite creation generates an error that the sprite "is not assigned to compute." Eventually the sprite transitions from an unknown state to warm, so there is a delay before the sprite is usable.

minor · resolved · Feb 10, 07:00 PM — Resolved Feb 10, 08:44 PM

Degraded MPG clusters in IAD

5 updates
resolved · Feb 10, 08:44 PM

This incident has been resolved.

monitoring · Feb 10, 08:00 PM

We've rolled out a fix for the remaining impacted clusters, and we're now monitoring the results.

identified · Feb 10, 07:53 PM

We've rolled out a fix for some additional impacted clusters, and we're continuing to work on the remaining clusters.

identified · Feb 10, 07:15 PM

We've identified the issue. Some MPG clusters in IAD should be seeing improvements, and we're working on rolling out a fix for the remaining impacted clusters.

investigating · Feb 10, 07:00 PM

We're currently looking into an issue with MPG clusters in the IAD region.
