DigitalOcean Outage History
Past incidents and downtime events
Complete history of DigitalOcean outages, incidents, and service disruptions. Showing 50 most recent incidents.
April 2026 (4 incidents)
Droplet Availability in All Regions
2 updates
Our Engineering team has confirmed full resolution of the issue with creating Droplets in all regions. Users should be able to create Droplets without issue. We apologize for the inconvenience. If you continue to face any issues, please open a support ticket from within your account.
Our Engineering team has identified an issue with Droplet creation in all regions. A root cause has been found, a fix has been put in place, and we are currently monitoring the situation to ensure full resolution. Users should be able to create new Droplets at this time. We will continue to monitor and will post an update as soon as the issue is fully resolved. We apologize for the inconvenience.
Control Plane
1 update
Our Engineering team has resolved the control plane disruption that occurred from 17:06 to 17:18 UTC. During this time, users may have experienced intermittent issues with managing their resources through the Cloud Control Panel or DigitalOcean API. The root cause of the disruption was identified and addressed, and all services are now operating normally. If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience this may have caused.
Serverless Inference - High error rates for open source models (Qwen 3 32B)
3 updates
Service has been fully restored, and the model is now operating normally. We have implemented improvements to enhance stability and reduce the likelihood of similar issues in the future.
We are currently investigating reports of elevated latency affecting requests to this model when using Serverless Inference and Agents. Earlier observations indicated increased error rates for the open-source Qwen 3 32B model. The Ray dashboard also showed multiple workers in a pending state, suggesting capacity constraints. Our analysis determined that the model was experiencing higher-than-expected request volume without sufficient resources to scale accordingly. To address this, the node pool size has been increased to improve available capacity. However, there are still insufficient nodes to fully support the desired number of model replicas. Following the node pool expansion, a new pod-related error has been identified. Our Engineering team is actively working to resolve this issue and restore full service performance.
Serverless inference for alibaba-qwen3-32b (Qwen 3 32B) in tor1 is experiencing high error rates starting at 10:46 UTC.
Serverless Inference Issue
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Our Engineering team is investigating an issue with Serverless Inference. At this time, users may experience high error rates for open source models (Llama 3.3 70B). We apologize for the inconvenience and will share an update once we have more information.
March 2026 (11 incidents)
Gradient AI Platform agents and services Accessibility
5 updates
Our Engineering team has implemented a fix, and the issues impacting the Gradient AI Platform have been resolved. All agents are back up and healthy, and service has been fully restored.
A fix has been implemented and services have been restored. We are continuing to monitor the system to ensure stability. We will provide further updates if needed.
We've identified the issue and are actively working to restore the affected services. We're making steady progress and closely monitoring the situation. Further updates will be shared as they become available.
We’ve identified the issue and are currently working on restoring the services. We’ll continue to provide updates as progress is made.
We are currently investigating an issue affecting the accessibility of agents and services on the Gradient AI Platform. Users may experience failures or unresponsiveness when attempting to use these features. Our Engineering team is actively working to identify the root cause and restore full functionality. We apologize for the inconvenience and will share an update once we have more information.
App Platform seeing delays in deployments across FRA1 region
3 updates
The issue impacting delays in App Platform deployments has been confirmed to be resolved. Between approximately 00:08 UTC and 11:46 UTC, users may have noticed delays while creating or updating apps, or may have encountered failed deployments. For failed deployments, please trigger a redeploy, which should successfully resolve the issue. We confirmed that the service is functioning as expected. Once again, we sincerely apologize for the inconvenience caused and appreciate your understanding. However, if you continue to experience any issues, please don't hesitate to raise a support ticket for further investigation. We'll be happy to assist you.
Our Engineering team has deployed a fix to resolve the issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. We are actively monitoring the situation to ensure stability and will provide an update once the incident has been fully resolved. Thank you for your patience, and we apologize for the inconvenience.
Our engineers are currently investigating an issue impacting new App Platform deployments using Dedicated Egress IP in the FRA1 region. During this time, some users may experience delays when creating new App Platform apps or deploying existing apps. Existing apps are not affected and should continue to function normally. We apologize for any inconvenience, and we'll share more information as it becomes available.
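The updates above recommend triggering a redeploy for failed deployments. One way to do that programmatically is through the App Platform API's `POST /v2/apps/{app_id}/deployments` endpoint. A minimal sketch, assuming a valid API token; the helper name is illustrative, and the `force_build` flag should be checked against the current API documentation:

```python
API_BASE = "https://api.digitalocean.com/v2"

def build_redeploy_request(app_id: str, token: str, force_build: bool = True) -> dict:
    """Assemble the pieces of a POST /v2/apps/{app_id}/deployments call.

    The returned dict can be passed to requests.post(**req), or inspected
    without touching the network.
    """
    return {
        "url": f"{API_BASE}/apps/{app_id}/deployments",
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        # force_build=True asks the platform to rebuild rather than
        # reuse cached build artifacts (assumption: verify in API docs).
        "json": {"force_build": force_build},
    }

# Example use (performs a real API call):
#   import requests
#   resp = requests.post(**build_redeploy_request(app_id, token))
#   resp.raise_for_status()
```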
Gradient AI model availability
2 updates
Our Engineering team has implemented a fix, and the issues impacting model availability and performance have been resolved. All models, including those previously degraded, are back up and healthy. Service has been fully restored.
Our Engineering team is investigating reports of Gradient AI model availability issues impacting multiple models. Users may experience issues with model availability, including Llama3.1-8b and Qwen3-32b, as well as embedding models such as GTE Large (v1.5), All-MiniLM-L6-v2, Multi-QA-mpnet-base-dot-v1, and Qwen3 Embedding 0.6B. Additionally, Guardrails are not available, affecting associated agents, and users attempting to run inference on the Llama3.3-70b model will see degraded performance. We apologize for the inconvenience and will share an update once we have more information.
Cloud Control Panel and API
1 update
From 16:14 to 16:38 UTC, our Engineering team observed an issue impacting the Cloud Control Panel and API. During this time, users may have experienced errors when trying to access the Cloud Control Panel or use the API. Our team fully resolved the issue as of 16:38 UTC. If you continue to experience problems, please open a ticket with our Support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Degraded performance with BYOK Anthropic models
2 updates
The issue is now resolved, and all Anthropic BYOK models in Gradient AI should work normally. Please contact Support if issues persist.
Our Engineering team is investigating an issue related to all Gradient AI agents and serverless inference that require BYOK Anthropic models. Impacted users may experience degraded performance. We will provide an update as soon as possible.
Delay in App Platform Deployments
4 updates
As of 23:00 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps successfully and without any issues. We apologize again for the inconvenience caused. However, if you continue to experience any issues, please don't hesitate to raise a support ticket for further investigation.
After working with our upstream provider, our Engineering team has implemented a fix to resolve the issue that was causing delays in the deployment of new apps, and they are currently monitoring the situation. During this time, users should no longer experience issues with creating new apps and all the stalled creation events should provision completely. We will post an update as soon as the issue is fully resolved.
Our Engineering team is starting to see delays once again with new App Platform deployments. During this time, users may still experience delays with deploying new apps. We're working with our upstream provider to resolve the issue. We again apologize for the inconvenience. We will post further updates once we have more information.
Starting at 20:40 UTC, users may have seen delays with deploying new apps on App Platform. At this time, our Engineering team is seeing signs of recovery, and users should be able to deploy new apps without issue. We're currently monitoring the situation to ensure full recovery. We apologize for the inconvenience. We'll post an update once the issue has been confirmed to be resolved.
Newly Created Managed Kubernetes Nodes
4 updates
Our Engineering team has confirmed the resolution of the issue impacting DNS timeouts for newly provisioned Managed Kubernetes nodes. At this time all cluster services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix to address the issue causing DNS timeouts for newly provisioned Managed Kubernetes nodes. Further investigation has confirmed that this issue primarily affected customers utilizing a NAT Gateway within their VPC and running a VPC-native cluster. We are actively monitoring the situation to ensure overall stability. We appreciate your patience and will provide a further update once the issue is fully confirmed to be resolved.
Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. During this time, only customers who run a NAT Gateway in their VPC with a VPC-native cluster are affected and may experience DNS timeouts. We apologize for the inconvenience and will share an update once we have more information.
Our Engineering team is investigating an issue impacting newly provisioned Managed Kubernetes nodes. During this time, new nodes may experience DNS timeouts, which could temporarily affect cluster services. We apologize for the inconvenience and will share an update once we have more information.
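If a node appears to be hitting the DNS timeouts described above, a resolution probe with an explicit deadline can confirm it from inside the cluster. A minimal standard-library sketch; the function name and probe hostname are illustrative:

```python
import socket
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

def dns_resolves(hostname: str, timeout_s: float = 2.0) -> bool:
    """Return True if `hostname` resolves within `timeout_s` seconds.

    socket.getaddrinfo has no timeout parameter of its own, so the lookup
    runs in a worker thread and we stop waiting once the deadline passes.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(socket.getaddrinfo, hostname, None)
        try:
            future.result(timeout=timeout_s)
            return True
        except FuturesTimeout:
            return False  # lookup hung past the deadline (the timeout symptom)
        except socket.gaierror:
            return False  # name does not resolve at all
    finally:
        pool.shutdown(wait=False)  # don't block on a hung lookup

# Inside a pod you might probe e.g. "kubernetes.default.svc.cluster.local".
```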
Ubuntu/Debian Package Mirror Failure
1 update
From 17:50 to 19:06 UTC, our Engineering team observed an issue with mirrors.digitalocean.com. During this time, users may have experienced errors when trying to update packages on Debian and Ubuntu images. Our team fully resolved the issue as of 19:06 UTC. If you continue to experience problems, please open a ticket with our Support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
HTTP 522 Error on App Platform
1 update
Our Engineering team identified an issue affecting the App Platform. During the incident, users may have experienced HTTP 522 (Connection Timed Out) errors when accessing their apps. The issue seems to be resolved now. We apologize for the inconvenience caused. If you continue to experience any related errors, please contact our Support team by opening a ticket at https://www.digitalocean.com/support/contact/.
App Platform Deployments
3 updates
As of 00:22 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps successfully and without any issues. We apologize again for the inconvenience caused. However, if you continue to experience any issues, please don't hesitate to raise a support ticket for further investigation.
Our Engineering team has implemented a fix to address the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability. We appreciate your patience and will provide a further update once the issue is fully confirmed to be resolved.
Our Engineering team is currently investigating an issue impacting App Platform deployments. During this time, users may experience a delay or failure when deploying new and existing App Platform apps. We apologize for any inconvenience, and we'll share more information as it becomes available.
Internal Load Balancers Connectivity
3 updates
From 19:57 UTC to 01:03 UTC, customers may have experienced connectivity issues between Internal Load Balancers and their associated target droplets, which could have resulted in service disruption or traffic routing failures. Our Engineering team has confirmed full resolution of the issue, and Internal Load Balancers should now be functioning normally. If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience caused.
Our Engineering team has implemented mitigation measures to address the connectivity issues affecting Internal Load Balancers and their associated target droplets. We are actively monitoring the situation to ensure stability and to prevent any recurrence. We will provide a further update once we confirm the issue is fully resolved.
Our Engineering team is investigating an issue affecting Internal Load Balancers. Customers may experience connectivity loss between Internal Load Balancers and their associated target droplets. We apologize for the inconvenience and will share an update as soon as more information becomes available.
February 2026 (9 incidents)
App Platform Deployments
2 updates
Our Engineering team has confirmed that the issue impacting build failures on App Platform has been resolved. Between approximately 14:30 UTC on the 26th and 00:01 UTC on the 27th, users may have experienced errors when attempting to build or deploy applications using older versions of the Node.js buildpack. A fix has been implemented, and build and deployment operations have been restored to normal. All App Platform builds are now succeeding as expected. Customers who previously encountered build failures should now be able to deploy their applications without further issues. If you continue to experience any problems, please open a ticket with our support team. Thank you for your patience, and we apologize for any inconvenience.
As of 14:30 UTC, our Engineering team is investigating reports of build failures on App Platform for customers using older versions of the Node.js buildpack. Users may experience errors when attempting to build their applications, resulting in failed deployments. Our Engineering team is working to fix the issue and will share an update once we have more information. In the meantime, as a workaround, we recommend that customers upgrade to the latest version of the Node.js buildpack. This may help to resolve the build failures and allow for successful deployments. To upgrade, please follow the instructions outlined here: https://docs.digitalocean.com/products/app-platform/how-to/migrate-nodejs-buildpack/. We apologize for the inconvenience this issue may be causing and appreciate your patience as we work to resolve it.
Intermittent Errors with Llama 3.3-70B
3 updates
The issue has been resolved. Cause: a small number of requests made to the Llama 3.3-70B model caused errors. Impact: intermittent errors when interacting with the model through serverless inference and/or agents created using this model. Please contact Support if issues persist.
A fix has been deployed, and we are monitoring resources related to Llama 3.3-70B. Users should no longer experience intermittent errors when making serverless inference requests via APIs and Agents. We are awaiting confirmation before closure.
We are currently investigating an issue affecting the Llama 3.3-70B model. Symptoms: Users may encounter intermittent errors when making serverless inference requests via APIs and Agents. Current Status: Our engineering team is actively investigating the issue to determine the root cause.
Control Panel Visibility
3 updates
The issue impacting the visibility of the Cloud Panel has been confirmed to be resolved. Between approximately 16:08 and 17:56 UTC, users may have noticed unusual behavior when accessing the console, performing resize operations, or viewing the Cloud Panel. Our team has taken the necessary corrective measures to restore the service, and we can confirm that it is now functioning as expected. We sincerely apologize for the inconvenience caused and truly appreciate your understanding throughout this process. However, if you continue to experience any issues, please do not hesitate to raise a support ticket for further investigation. We'll be happy to assist you.
Our team has implemented a fix to address the issue affecting the visibility of the Cloud Panel. We are actively monitoring the situation to ensure overall stability. Users should no longer encounter abnormalities when accessing the console, resizing Droplets, or layout misalignments within the Cloud Panel. We will provide a further update once the issue is fully confirmed to be resolved.
Our team is currently investigating an issue impacting the visibility of the Control Panel. Users may notice unexpected behavior, such as being prompted for login credentials when accessing the console, being unable to select radio buttons for plans during a resize, or columns appearing compressed. We apologize for the inconvenience and will share further updates as soon as more information becomes available.
Spaces Availability in NYC3
2 updates
Our Engineering team has confirmed that the issue impacting the availability of Spaces and the Container Registry in the NYC3 region has been fully resolved. A fix was implemented, and services have been restored. All operations are now succeeding normally. If you experience any further issues, please contact Support by creating a Support ticket from within your account. Thank you for your patience while we worked to resolve this issue.
Between 05:34 and 06:32 UTC, our Engineering team identified the issue that was impacting the availability of Spaces and the Container Registry in the NYC3 region. A fix has been implemented to resolve the issue. During this time, users may have experienced errors while interacting with Spaces. Additionally, CRUD operations (create, read, update, delete) within the Container Registry may have failed or returned errors. We are now monitoring the platform to ensure services remain stable and operating as expected. We will provide a final update once the issue is fully resolved. If you continue to experience any issues, please contact our Support team.
Droplet Limit Increase Feature
3 updates
Our Engineering team has confirmed that the issue affecting the Droplet limit increase feature within the Cloud Control Panel has been fully resolved. Requests to increase Droplet limits submitted through the Control Panel are now being processed correctly, and Support tickets are being generated as expected. If you experience any further issues, please contact Support by creating a Support ticket from within your account. Thank you for your patience while we worked to resolve this issue.
The issue affecting the Droplet limit increase feature within the Cloud Control Panel has been identified and a fix has been implemented. Requests submitted through the Control Panel are now generating Support tickets as expected. Our team is continuing to monitor the system to ensure full functionality and stability. If you experience any further issues with Droplet limit increase requests, please contact Support directly by creating a Support ticket from within your account.
Our Engineering team is currently investigating an issue affecting the Droplet limit increase feature within the Cloud Control Panel. At this time requests to increase Droplet limits submitted through the Control Panel are not being processed. Customer submissions to increase limits are not generating support tickets as expected. We are actively working to identify the root cause and restore normal functionality as quickly as possible. If you urgently require a Droplet limit increase, please contact Support directly by creating a Support ticket from within your account.
Delay in App Platform Deployments
3 updates
As of 18:55 UTC, our Engineering team has confirmed that the issue causing delays in App Platform deployments has been fully resolved. The fix implemented earlier has been successful, and we are no longer seeing any delays or errors with deployments. Users should now be able to deploy their apps successfully and without any issues. We apologize again for the inconvenience caused. However, if you continue to experience any issues, please don't hesitate to raise a support ticket for further investigation. We'll be happy to assist you.
Our Engineering team has implemented a fix to address the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability. Users may already notice improvements while deploying apps. We appreciate your patience throughout the process and will provide a further update once the issue is fully confirmed to be resolved.
Our engineers are currently investigating an issue impacting new App Platform deployments. During this time, some users may experience delay when creating new App Platform apps. Existing apps are not affected and should continue to function normally. We apologize for any inconvenience, and we'll share more information as it becomes available.
MongoDB Cluster Creation
5 updates
Our Engineering team has confirmed the full resolution of the issue with MongoDB Clusters. Thank you for your patience, and we apologize for any inconvenience. If you continue to experience any issues, please open a Support ticket right away.
Our Engineering team has implemented a fix to resolve the issue with MongoDB clusters and at this time, services should be functioning as expected. We're monitoring the situation and will post a final update once we confirm this is fully resolved.
Our Engineering team has identified the cause of the failures of create, fork, and resize events for MongoDB clusters in all of our regions and is actively working on a fix. We will post an update as soon as additional information is available.
Our Engineering team continues to investigate the failures of create, fork, and resize events for MongoDB clusters in all of our regions. We appreciate your patience and will post an update as soon as additional information is available.
Our Engineering team is investigating an issue with all events for MongoDB clusters in all of our regions. During this time, users may face issues with create, fork, and resize operations on MongoDB clusters. We apologize for the inconvenience and will share an update once we have more information.
App Platform seeing delays in deployments across all regions
3 updates
The issue impacting delays in App Platform deployments has been confirmed to be resolved. Between approximately 08:52 UTC & 13:01 UTC, users may have noticed delays while creating or updating apps, or may have encountered failed deployments. For failed deployments, please trigger a redeploy, which should successfully resolve the issue. We confirmed that the service is functioning as expected. Once again, we sincerely apologize for the inconvenience caused and appreciate your understanding. However, if you continue to experience any issues, please don't hesitate to raise a support ticket for further investigation. We’ll be happy to assist you.
Our team has implemented a fix to address the issue causing delays in App Platform deployments. We are actively monitoring the situation to ensure overall stability. Users may already notice improvements while deploying apps. We appreciate your patience throughout the process and will provide a further update once the issue is fully confirmed to be resolved.
Our engineers are currently investigating an issue impacting new App Platform deployments. During this time, some users may experience delay when creating new App Platform apps. Existing apps are not affected and should continue to function normally. We apologize for any inconvenience, and we'll share more information as it becomes available.
Cloud Control Panel
5 updates
As of 17:55 UTC, our Engineering team has resolved the timeouts affecting the Cloud Control Panel and API. The issue was caused by a temporary overload on our infrastructure, resulting in 5xx errors for API requests and gateway timeouts for Cloud Control Panel users. If you continue to experience any problems, please open a ticket with our Support team. We apologize for any inconvenience this may have caused and appreciate your patience and understanding.
Our Engineering team has implemented a fix for the timeouts affecting the Cloud Control Panel and API, and is currently monitoring the situation. Services have recovered, and users should no longer experience 5xx errors when using the API or gateway timeouts when accessing the Cloud Control Panel. We will continue to monitor the situation to ensure that all services are stable and functioning as expected. We will post an update as soon as the issue is fully resolved.
Our Engineering team has identified the cause of the issue impacting the Cloud Control Panel and API and is actively working on deploying a fix. At this time, users will continue to see timeouts/5xx errors, but may intermittently see requests succeeding. We will post further updates as soon as the fix is deployed or there is more information to share.
Our Engineering team is investigating an issue impacting our Cloud Control Panel and API. Users attempting to make API requests could see 5xx errors and users attempting to access the Cloud Control Panel may see gateway timeouts or page timeouts. We apologize for the inconvenience and will share an update once we have more information.
January 2026 (5 incidents)
Kubernetes Clusters and Droplets in FRA1 region
4 updates
Our Engineering team has resolved the issue affecting Kubernetes clusters and Droplet events in the FRA1 region. Between approximately 00:17 UTC and 14:30 UTC, customers may have experienced issues provisioning Kubernetes clusters and mounting volumes. All services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix to address the issue affecting Kubernetes clusters and Droplet events in the FRA1 region and is actively monitoring the situation. Customers should no longer experience issues provisioning Kubernetes clusters or mounting volumes. We will provide an update as soon as the issue is fully resolved.
Our Engineering team has identified the root cause of the issue impacting Kubernetes clusters and Droplet events in the FRA1 region and is actively working on a fix. In the meantime, users may continue to experience issues. We appreciate your patience and will share updates as more information becomes available.
Our Engineering team is investigating an issue with Kubernetes clusters in the FRA1 region. During this time, a subset of users may experience issues while provisioning Kubernetes clusters and mounting volumes. Additionally, users may notice Droplet events appearing to be stuck or delayed in this region. We apologize for the inconvenience and will share an update once we have more information.
Droplet Based Events in FRA1
3 updates
Our Engineering team has confirmed that the issue impacting our Droplet-based products in the FRA1 region has been completely mitigated. Users should no longer see issues with their Droplets and Droplet-related services. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has identified the cause of the issue impacting our Droplet-based products in the FRA1 region and applied a fix. The impact has started to mitigate and users should be able to connect to their Droplets and also start to see events getting processed successfully. We're now monitoring the fix for stability and will post an update once we are confident it is successful.
Our Engineering team is currently investigating an issue affecting events in FRA1. During this time, customers may experience delays or errors when creating or deleting Droplets, as well as when using Droplet-based products such as Load Balancers, Kubernetes Clusters, or Databases. Our teams are actively working to identify the root cause and restore full service as quickly as possible. We apologize for the inconvenience and will provide updates as more information becomes available.
Cloud Control Panel and API
3 updates
From 20:45 UTC to 21:06 UTC, users may have experienced an issue affecting the Cloud Control Panel, API, and related services. Our Engineering team has confirmed that the issue is fully resolved, and all systems are now operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix for the issue affecting the Cloud Control Panel, API, and related services. We are observing recovery, and users should now be able to access their accounts and use the API without errors. We are continuing to monitor the situation closely and will provide an update once full resolution is confirmed.
Our Engineering team is investigating an issue impacting multiple services including the Cloud Control Panel and API. Users may encounter errors when accessing their accounts or using the API. We are actively working to resolve this issue and will provide updates as soon as more information becomes available.
App Platform Deployments
2 updates
The issue impacting App Platform deployments has been successfully resolved. Users should no longer encounter delays during the build phase or have deployments getting stuck. All services are now confirmed to be stable and operating normally. We appreciate your patience throughout this incident. If you continue to experience any issues, please create a support ticket for further analysis.
Our Engineering team has implemented necessary changes to address the issue impacting both new and in-progress App Platform Deployments. Our team is currently monitoring the situation. Users should now notice improvements in deployment performance. We appreciate your patience. We'll update once the issue is confirmed to be resolved.
Account access and Payment
2 updates
Our Engineering team has resolved the issue with payment failures via PayNow. Users should no longer see issues making payments via PayNow or logging in to accounts on our platform. Services should now be operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team is investigating an issue preventing suspended users from accessing their accounts. During this time, users may have experienced issues signing in or accessing accounts, as well as payment failures. We apologize for the inconvenience and will share an update once we have more information.
December 2025(10 incidents)
App Platform Static Websites in NYC3 Region
3 updates
Our Engineering team has confirmed resolution of the issue. Users should no longer experience errors when attempting to deploy new static sites in NYC3 on App Platform. If you experience any further problems or have any questions, please open a support ticket within your account.
Our Engineering team has deployed a fix for the issue. Users should no longer experience errors when attempting to deploy new static sites in NYC3 on App Platform. We will post an update once we've confirmed that the issue is fully resolved.
As of 17:45 UTC, our Engineering team is investigating reports of static site deployment failures in the NYC3 region on App Platform. Users may experience errors when attempting to deploy new static sites, resulting in failed deployments. The existing static sites are still accessible and functioning normally. Our team is actively working on identifying the root cause and implementing the fix. We apologize for the inconvenience and will share an update once we have more information.
API and Cloud Requests
2 updates
Our Engineering team has confirmed full resolution of the issue. Users should no longer experience errors when making requests to the Cloud Control Panel or API. If you experience any further problems or have any questions, please open a support ticket within your account.
As of 17:47 UTC, our Engineering team is investigating reports of intermittent 504 errors when making requests to api.digitalocean.com and cloud.digitalocean.com. Users may experience sporadic errors, resulting in a 504 response code, when attempting to interact with our API or Cloud services. At this point, the issue appears to be intermittent, and not all requests are being affected. We apologize for the inconvenience and will share an update once we have more information.
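Intermittent 504s like these are usually safe for clients to retry. Below is a minimal sketch of retrying with exponential backoff; the exception class and the simulated flaky endpoint are illustrative stand-ins, not part of any official DigitalOcean client.

```python
import time

class TransientServerError(Exception):
    """Raised for retryable responses such as an HTTP 504 (illustrative)."""

def with_retries(request_fn, max_attempts=4, base_delay=0.1):
    """Call request_fn, retrying with exponential backoff on transient errors."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except TransientServerError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky endpoint: fails twice with a 504-style error, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientServerError("504 Gateway Timeout")
    return {"status": 200}

print(with_retries(flaky_request))  # {'status': 200} after two retries
```

Backoff keeps retries from amplifying load on an already-degraded service, which matters during incidents exactly like this one.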
Spaces Access Keys and DigitalOcean Container Registry
1 update
From 08:28 to 13:03 UTC, our Engineering team observed an issue with Spaces access keys for DOCR in the AMS3 region. During this time, users encountered a "403 (InvalidAccessKeyId): The access key ID you provided does not exist in our records" error when using Spaces access keys. Our team fully resolved the issue as of 13:03 UTC. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
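Spaces exposes an S3-compatible API, so errors like the one above arrive as an XML body whose `<Code>` element carries the error name. A small sketch of extracting that code with the standard library (the sample body below is illustrative, not a captured DigitalOcean response):

```python
import xml.etree.ElementTree as ET

# Example S3-style error body as returned by S3-compatible object stores
# (illustrative sample, not a real captured response).
error_body = """<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>InvalidAccessKeyId</Code>
  <Message>The access key ID you provided does not exist in our records.</Message>
  <RequestId>example-request-id</RequestId>
</Error>"""

def s3_error_code(body: str) -> str:
    """Extract the <Code> element from an S3-style XML error response."""
    return ET.fromstring(body).findtext("Code")

code = s3_error_code(error_body)
print(code)  # InvalidAccessKeyId
```

Distinguishing `InvalidAccessKeyId` from other error codes lets a client tell a credentials problem (or, as in this incident, a temporarily unrecognized key) apart from ordinary permission or not-found errors.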
Recovery Console Accessibility
3 updates
From 18:57 UTC to 22:05 UTC, customers may have experienced issues accessing the Recovery Console due to a service interruption. During this time, Droplet functionality remained unaffected, and customers were still able to use the Recovery ISO option via SSH. Our Engineering team has confirmed that the issue is now fully resolved, and Recovery Console access has been fully restored and is operating normally. If you continue to experience any difficulties, please open a ticket with our Support team. We apologize for the inconvenience caused.
Our Engineering team has deployed a fix to resolve the issue causing the Recovery Console to be unavailable. We are currently monitoring the situation to ensure access is fully restored and stable. Please note that Droplet functionality was not impacted by this issue. We will post another update once we confirm the issue is fully resolved.
Our Engineering team is actively investigating an issue causing the Recovery Console to be unavailable. Droplet functionality is not impacted. Customers who need recovery access can still select the "Boot from Recovery ISO" option in the Recovery tab, as described in the guide at https://docs.digitalocean.com/products/droplets/how-to/recovery/recovery-iso/, but will need to use SSH to access their Droplets. We apologize for the inconvenience and will share an update once we have more information.
App Platform Static Websites
3 updates
As of 18:15 UTC, our Engineering team has confirmed the issue impacting accessibility of App Platform static websites has been resolved. Services have been restored and are now functioning normally. We appreciate your patience and regret the inconvenience caused. If you continue to experience any issues, feel free to open a Support ticket for further investigation.
Our Engineering team has implemented a fix for the issue impacting the availability of App Platform static websites. Users should now experience improved performance when accessing their sites. We are actively monitoring the situation and will provide an update once we can confirm the issue has been fully resolved.
Our Engineering team is currently investigating an issue impacting App Platform static websites. During this period, users may notice 404 Not Found errors while accessing the sites. Our team is actively working on identifying the root cause and implementing the fix. We apologize for the inconvenience and will share an update once we have more information.
DOCR Access Errors for New App Creation
3 updates
From 18:02 UTC to 21:10 UTC, customers in the BLR1 region who had not previously created an app may have experienced DOCR (DigitalOcean Container Registry) access errors when attempting to create new apps. Our Engineering team has confirmed that the issue is fully resolved, and all systems are now operating normally. If you continue to experience any problems, please open a ticket with our Support team. We apologize for the inconvenience caused.
Our Engineering team has deployed a fix for the issue affecting the creation of new apps in the BLR1 region, where customers who had not previously created an app encountered DOCR (DigitalOcean Container Registry) access errors. We are currently monitoring the situation to ensure that the issue does not recur and that all functionality remains stable. We will post another update once we confirm the issue is fully resolved.
Our Engineering team is currently investigating an issue affecting the creation of new apps in the BLR1 region. Customers who have not previously created an app may encounter DOCR (DigitalOcean Container Registry) access errors. Our team is actively deploying a fix to restore normal functionality. We apologize for the inconvenience and will share an update once we have more information.
Control Panel Access
4 updates
Our Engineering team has confirmed the full resolution of this issue. From approximately 08:51 UTC – 09:12 UTC, users may have experienced difficulties signing in or accessing resources through the Control Panel and API due to an upstream provider issue. The upstream provider has fixed the issue, and all services are now functioning normally. If you continue to experience problems, please open a ticket with our support team. Thank you for your patience and we apologize for any inconvenience.
We are continuing to monitor for any further issues.
We are continuing to monitor for any further issues.
Our Engineering team has been made aware of an issue with an upstream provider that was affecting the Control Panel and API and has deployed a fix to resolve it. Users may have experienced issues signing in or accessing resources through the Control Panel. We are monitoring the situation closely and will share an update once the issue is resolved completely.
Degradation in Managed Databases
3 updates
From 16:34 to 19:47 UTC, users may have encountered errors listing backup operations for their PostgreSQL, MySQL, OpenSearch, Redis, and Kafka clusters through the API and UI. Our Engineering team has confirmed full resolution of the issue, and users should no longer experience issues with listing backup operations. Thank you for your patience, and we apologize for the inconvenience. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel.
As of 19:47 UTC, our Engineering team has implemented a fix for the errors on Managed Database list backup operations, which was related to a dependency issue. The situation is currently improving, and we are seeing a reduction in error rates. The impact was limited to list backup operations for PostgreSQL, MySQL, OpenSearch, Redis, and Kafka engines, where users may have experienced errors when attempting to retrieve a list of backups through both the API and UI. We are now monitoring the situation to ensure that the fix is fully effective and that all operations are functioning normally. Users should no longer experience errors when listing backups, and all other control plane operations, such as creating, updating, or deleting databases, should continue to function normally. We will continue to monitor the situation to ensure that the issue is fully resolved. We apologize for the disruption and appreciate your patience.
As of 16:34 UTC, our Engineering team is investigating reports of errors and timeouts on control plane operations for Managed Databases. The issue is affecting multiple database engines, including PostgreSQL, MySQL, OpenSearch, Redis, and Kafka. Users may experience errors or timeouts when attempting to list backups through both the API and UI. We want to emphasize that this issue does not currently appear to be impacting the data plane, and databases should continue to be accessible and functional. Our team is working to determine the root cause and will share an update as soon as more information is available. We apologize for the inconvenience and appreciate your patience as we work to resolve this incident.
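The list-backups operation mentioned above is also reachable through the public API. A sketch of building that request with the standard library, assuming the endpoint follows DigitalOcean's v2 API path convention for database backups; the cluster UUID and token below are placeholders.

```python
import urllib.request

API_TOKEN = "your-api-token"        # placeholder, not a real token
CLUSTER_UUID = "your-cluster-uuid"  # placeholder, not a real cluster ID

def build_list_backups_request(cluster_uuid: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a GET request for a cluster's backup list."""
    url = f"https://api.digitalocean.com/v2/databases/{cluster_uuid}/backups"
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="GET",
    )

req = build_list_backups_request(CLUSTER_UUID, API_TOKEN)
print(req.full_url)
# https://api.digitalocean.com/v2/databases/your-cluster-uuid/backups
```

During an incident like this one, a request of this shape would have returned errors or timed out even though the database itself remained reachable, since only the control plane was degraded.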
Gradient AI Platform – Service Degradation
1 update
During the timeframe 06:24 UTC – 13:45 UTC, the Gradient AI platform experienced a period of degraded functionality affecting a limited set of features. While the platform remained accessible, the following components did not perform as expected: Gradient Agent Evaluations, agent trace visibility, access management for traces, and agent deletion for agents with traces enabled. Our Engineering team identified the underlying cause and restored full functionality across all affected components. The platform has remained stable since resolution, and monitoring confirms normal performance.
Guardrails Service Disruption Impacting Customers
4 updates
Our Engineering team has fully implemented and confirmed the effectiveness of the fix for the increased guardrail latency. System performance has returned to normal, and all affected services are operating as expected. We have observed stable metrics during extended monitoring and have not detected any further latency or interruptions. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix that has significantly reduced the impact of the issue, largely mitigating the outage. Although the situation has improved, some users may still experience intermittent latency issues. We are closely monitoring the results and will continue to work towards full resolution. We will provide another update once we have confirmed the issue is fully resolved.
Our Engineering team is currently investigating increased guardrail latency, which may cause long response times for agents with attached guardrails. We apologize for the inconvenience and will share an update once we have more information. Thank you for your patience and understanding as we work to resolve this issue.
As of 20:07 UTC on December 2, 2025, our Engineering team has detected increased guardrail latency which may produce long response times for agents with attached guardrails. Engineering is working on a fix. We apologize for the inconvenience and will share an update once we have more information. Thank you for your patience and understanding as we work to resolve this issue.
November 2025(6 incidents)
Managed Databases
1 update
From 18:55 to 19:32 UTC, our Engineering team observed an issue with all control plane operations (Create, Scale, Fork, etc.) for all non-Mongo Managed Database clusters. During this time, users encountered long-running requests made either through the Cloud Console UI or API, including provisioning new clusters, listing clusters, and similar operations. Our team fully resolved the issue as of 19:32 UTC, and all services for non-Mongo Managed Databases should now be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Multiple Services Disruption
4 updates
Our Engineering team has confirmed that the external network incident affecting multiple DigitalOcean services has been fully mitigated. The impacted services including Gen AI tools, App Platform, Load Balancers, Spaces, and provisioning or management actions for new clusters have recovered and are now operating normally. All requests are completing successfully. Thank you for your patience. If you continue to experience any issues, please open a support ticket from within your account.
Our engineering team has observed that the external network incident impacting multiple DigitalOcean services has been mitigated. The affected services including Gen AI tools, App Platform, Load Balancer, Spaces, and provisioning or management actions for new clusters are showing continued signs of recovery, with most requests now completing successfully. Our engineering team continues to monitor the situation closely. We will post an update as soon as the issue is fully resolved. We apologize for the disruption and appreciate your patience.
Our Engineering team is actively investigating an issue impacting multiple DigitalOcean services caused by an upstream provider incident. This disruption affects a subset of Gen AI tools, the App Platform, Load Balancer, Spaces and provisioning or management actions for new clusters. Existing clusters are not affected. Users may experience degraded performance or intermittent failures within these services. We acknowledge the inconvenience this may cause and are working diligently to restore normal operations. Signs of recovery are starting to appear, with most requests beginning to succeed. We will continue to monitor the situation closely and provide timely updates as more information becomes available. Thank you for your patience as we work towards full service restoration.
Our Engineering team is actively investigating an issue impacting multiple DigitalOcean services caused by an upstream provider incident. This disruption affects a subset of Gen AI tools, the App Platform, Load Balancer, and Spaces. Users may experience degraded performance or intermittent failures within these services. We acknowledge the inconvenience this may cause and are working diligently to restore normal operations. Signs of recovery are starting to appear, with most requests beginning to succeed. We will continue to monitor the situation closely and provide timely updates as more information becomes available. Thank you for your patience as we work towards full service restoration.
Block Storage Volumes in NYC1 and AMS3
1 update
From 16:59 to 17:34 UTC, our Engineering team observed an issue with block storage volumes in the NYC1 and AMS3 regions. During this time, users may have experienced failures when attempting to create, snapshot, attach, detach, or resize volumes. There was no impact to the performance or availability of existing volumes. Our team fully resolved the issue as of 17:34 UTC, and volumes should be operating normally. If you continue to experience problems, please open a ticket with our support team from within your Cloud Control Panel. We apologize for any inconvenience caused.
Creation Events Failing for Spaces, App Platform, DOCR
3 updates
Our Engineering team has confirmed full resolution of the issue with failed create events for Spaces, App Platform, and DOCR services. From 06:15 to 09:53 UTC, the Spaces, App Platform, and DOCR services were impacted and have since been restored to normal operation. If you continue to experience problems, please open a ticket with our support team. Thank you for your patience throughout this incident!
Our Engineering team has implemented a fix to resolve the issue with failed create events for Spaces, App Platform, and DOCR services, and at this time, services should be functioning as expected. We're monitoring the situation and will post a final update once we confirm this is fully resolved.
Our Engineering team is investigating an issue with failed create events for Spaces, App Platform, and DOCR services. At this time, users may experience errors when creating these resources. We apologize for the inconvenience and will share an update once we have more information.
Network Connectivity in BLR1
2 updates
Our Engineering team has confirmed the full resolution of the networking connectivity issue affecting the BLR1 region. Users should experience expected performance when accessing Droplets and other services. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team is still observing intermittent issues impacting network connectivity in the BLR1 region. Between 17:28 and 17:37 UTC, the team observed a major connectivity loss to this region and took immediate steps to reroute traffic via a different upstream provider to alleviate the impact. The issue BLR1 is currently experiencing stems from a broader internet disruption. Network accessibility in the region has since improved, and users should already experience better performance when accessing Droplets and other services. We are closely monitoring the situation to ensure stability. We appreciate your patience and will provide an update once the issue is confirmed as fully resolved.
Network connectivity in BLR1
1 update
From 13:50 to 14:15 UTC, our Engineering team observed an issue with an upstream provider impacting network connectivity in the BLR1 region. During this time, users may have experienced an increase in latency or packet loss when accessing Droplets and Droplet-based services, like Managed Kubernetes and Database Clusters, in the BLR1 region. The impact has now subsided, and as of 14:15 UTC, users should already experience better performance when accessing Droplets and other services. We apologize for the inconvenience. If you are still experiencing any problems or have additional questions, please open a support ticket within your account.
October 2025(5 incidents)
Gradient AI Platform Agent Creation
5 updates
This incident has been resolved.
We are continuing to work on a fix for this issue.
Our Engineering team has identified the cause of the issue with the deployment of Gradient AI Platform Agents in VPCs and is actively working on a fix. We will post an update as soon as the fix has rolled out or there is additional information to share.
Our Engineering team has identified the cause of the issue with the deployment of Gradient AI Platform Agents in VPCs and is actively working on a fix. We will post an update as soon as the fix has rolled out or there is additional information to share.
As of 18:50 UTC, our Engineering team is investigating reports of agent creation issues impacting customers using a VPC on the Gradient AI Platform. At this point, affected users may experience errors where the agent creation process is stuck on "Waiting for Deployment." We apologize for the inconvenience and will share an update once we have more information.
Container Registry Garbage Collection
4 updates
Our Engineering team has resolved the issue affecting Garbage Collection in container registries, and all services are operating normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix for the Garbage Collection issue affecting container registries, and customers should no longer experience Garbage Collection jobs failing or getting stuck. We are currently monitoring the situation and will post an update as soon as the issue is fully resolved. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has identified the root cause of the issue affecting Garbage Collection in the container registries. A fix is being implemented to resolve failures and stuck operations. We will provide an update once the mitigation has been deployed. We apologize for the inconvenience and will share an update once we have more information.
Our Engineering team is investigating an issue with the Garbage Collection in the container registries. At this time, users may experience errors with the Garbage Collection failing or being stuck. We apologize for the inconvenience and will share an update once we have more information.
Availability Across Multiple Services
1 update
Between 14:28 and 14:35 UTC today, our Engineering team identified an issue impacting the availability of multiple services, including Droplets, Volumes, Spaces, Kubernetes, Load Balancers, and Managed Databases. During this period, users might have noticed 500 Internal Server Error or 503 Service Unavailable responses when accessing these and other dependent services. Our team has taken appropriate measures to address the issue. We can confirm that all services have been restored and are now functioning normally. We regret the inconvenience caused. If you continue to experience any issues, please create a support ticket for further analysis.
Spaces Availability, Container Registry creation and App builds
4 updates
Our Engineering team has resolved the performance issue affecting Spaces. From approximately 15:42 UTC to 16:14 UTC, customers may have experienced slow performance or limited availability when accessing Spaces or its objects via the Control Panel or API. Container Registry access and creation, along with App builds, were also affected during this time. All services should now be functioning normally. If you continue to experience problems, please open a ticket with our support team. We apologize for any inconvenience.
Our Engineering team has implemented a fix to address the performance issue affecting Spaces and is monitoring the situation. Customers should no longer experience slow performance or limited availability when accessing Spaces or its objects via the Control Panel or API. App builds and Container Registry access and creation should also be functioning normally. We will post an update as soon as the issue is fully resolved.
Our Engineering team has implemented a fix to address the performance issue affecting Spaces and is monitoring the situation. Customers should no longer experience slow performance or limited availability when accessing Spaces or its objects via the Control Panel or API. App builds and Container Registry access and creation should also be functioning normally. We will post an update as soon as the issue is fully resolved.
Our Engineering team is investigating a performance issue affecting Spaces. During this time, customers may experience slow performance or limited availability when accessing Spaces or its objects via the Control Panel or API. In addition, Container Registry access and creation and App builds may also be affected. We apologize for the inconvenience and will share an update once we have more information.
App Platform Deployments
3 updates
The issue impacting App Platform deployments has been resolved. Users should no longer encounter delays during the build phase or deployments getting stuck. All services are now confirmed to be stable and operating normally. We appreciate your patience throughout this incident; if you continue to experience any issues, please create a support ticket for further analysis.
Our Engineering team has implemented necessary changes to address the issue impacting both new and in-progress App Platform Deployments. Our team is currently monitoring the situation. Users should now notice improvements in deployment performance. We appreciate your patience. We'll update once the issue is confirmed to be resolved.
Our Engineering team is currently investigating an issue impacting both new and in-progress App Platform Deployments. During this period, users may notice app deployments getting stuck in the build phase or experiencing delayed builds. In some cases, builds may fail after retries are exhausted. We apologize for the inconvenience caused and appreciate your patience while we work to resolve this issue. We'll update once we have more information.