Render Outage History
Past incidents and downtime events
Complete history of Render outages, incidents, and service disruptions. Showing 50 most recent incidents.
April 2026 (8 incidents)
Delayed deployment for new services in Oregon region
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are investigating an issue where new services created in the Oregon region are taking longer than usual to deploy.
Some web services and databases in Oregon are unresponsive
2 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
BitBucket account connection issues
3 updates
This incident has been resolved.
The issue has been identified and a fix is being implemented.
We are currently investigating reports of issues connecting BitBucket accounts/repos to Render. Existing services configured with a BitBucket repo are unaffected.
Image pull failures in Oregon
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are seeing image pull failures within the Oregon region. We have identified the problem and are working on a fix.
Infrastructure maintenance operations are currently in progress for outbound IPs in the Oregon region
2 updates
The maintenance has been completed successfully.
Ongoing maintenance is underway for static IPs in the Oregon region. Services in this region may experience changes to their outbound IP ranges.
Disruption affecting the provisioning of new instances in Singapore and partially in Oregon
13 updates
We've confirmed that our fix has addressed the issue and workloads are now scheduling normally in all regions.
Provisioning is recovering for free and paid services across all regions. Free services have resumed serving traffic. We are continuing to monitor for further issues.
Paid services are continuing to see recovery across builds and deploys. Free services remain disabled and will return errors if loaded in a browser. Singapore and part of Oregon remain impacted.
We are continuing to monitor for any further issues.
We are disabling new free services in Singapore and partially in Oregon.
We're seeing recovery across all regions.
We are continuing to work on a fix for this issue.
We are now seeing the same issue partially affecting services in Oregon. We are investigating to remediate this as quickly as possible.
Services with paid compute should now be operating normally. Please note that free services in this region are still intermittently disabled.
We are continuing to investigate this issue.
We are seeing some recovery following our mitigation efforts, but we are still working to identify the root cause for a long-term resolution.
We have temporarily disabled free services in Singapore. We are continuing to investigate and work to mitigate the issue.
This issue may affect new instances during builds, scaling, and similar operations. You may experience build/deploy failures. We are still investigating.
Delays in stateful service creation in Oregon
1 update
Between 12:40 UTC and 18:10 UTC, creation of stateful services in Oregon was delayed. Stateful services are any of Postgres, Key Value, or services with a persistent disk.
Unable to view workflows in the Dashboard
1 update
Between 18:04 and 19:10 UTC, workflows could not be viewed in the Dashboard. All workflows continued to run and were otherwise operational. This issue has been resolved.
March 2026 (6 incidents)
Some Render Workflows tasks are not being scheduled
5 updates
Affected customers have been contacted directly. If you would like more details, please contact support at [support@render.com](mailto:support@render.com) or reach out through the chat widget in the Render dashboard.
This issue is now resolved. A full RCA will be available in the incident page within the next two weeks.
A fix has been implemented and we are monitoring the results.
The issue has been identified, and there should be no issues scheduling tasks at this time. We are still working to ensure it is fully resolved.
We are working on resolving the issue.
Some users may experience delayed logs in the Oregon region
3 updates
The issue is fixed.
We are monitoring recovery.
The issue has been identified, and we are already monitoring recovery.
Degraded builds and deployments in the Singapore region
3 updates
This incident has been resolved.
We are currently monitoring.
We are observing a recurrence of the previous incident (https://status.render.com/incidents/d9k4p51v1y7g). The current impact appears to be significantly lower. We are proactively investigating and closely monitoring the situation internally to ensure minimal impact on users.
Degraded builds and deployments in the Singapore region
4 updates
We believe this incident has been successfully mitigated. If you continue to experience any issues, please contact support@render.com.
We have re-enabled services using the free instance types in the Singapore region and are currently monitoring the impact.
We have disabled services using free instance types in the Singapore region
Builds and deployments in the Singapore region are currently experiencing degraded performance. You may notice longer than usual completion times.
Custom domains are unable to be created
2 updates
This incident has been resolved.
We are currently investigating this issue.
Ohio Services with Disks, Data Persistence failing
3 updates
All affected services have been migrated to new hosts and are running at this time. This incident has been resolved.
Many services have recovered with healthy disks on new hosts, however this issue remains open as some services are still impaired.
Services located in Ohio, including both application types (Web Service, Private Service, etc.) and data types (Postgres, Key Value with persistence), are failing due to disk errors. We are working on mitigations.
February 2026 (5 incidents)
Some deploys may be slow or hanging
5 updates
This incident has been resolved.
Deploy times have decreased and failures have dropped to baseline levels. We are monitoring for other impacts or other issues.
Deploy delays and failures continue to remain elevated. We are continuing to work on this issue.
We have implemented a mitigation and are currently observing positive results. We are continuing to monitor deploy health across the platform.
We are currently investigating this issue.
Degraded deploys in Ohio and Virginia
5 updates
We implemented updates to builds and deploys to improve handling of slow updates. As a result of these changes, build and deploy performance has recovered.
We are continuing to monitor for any further issues.
We are observing improved deploy performance and continue to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
Deploys are experiencing degraded performance and may take longer to complete.
Elevated latency for some new services when using the onrender.com address
4 updates
Latency has returned to expected levels. Affected services were those created between 2026-02-04T16:30Z and 2026-02-04T18:17Z. Services created outside that period were not affected.
We have determined that services are reachable through their onrender.com address. Requests will be successful but may take longer. We are continuing to work on a fix.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Degraded Deploys in Singapore Region
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Degraded deploys in all regions
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
January 2026 (5 incidents)
External connectivity issues with Postgres databases hosted in Singapore
4 updates
This incident has been resolved. Please reach out to support@render.com for any follow-up questions.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Metrics impacted for some services in Oregon
4 updates
This incident has been resolved.
Metrics for impacted services in Oregon are now being displayed. Metrics will be missing from impacted services from 2026-01-23 00:50 to 2026-01-23 01:10 UTC.
The issue has been identified and a remediation is being implemented
Metrics for some services in Oregon are currently impacted and may not be displaying.
Delays in starting instances on services
2 updates
Instance creation times have been restored to expected timeframes. This issue has been resolved.
High demand for new instances has created a backlog for some services in the Oregon region. Services attempting to add new instances (for new deploys, instance scale-ups, restarts, etc.) may see delays doing so.
Some application and build logs are missing on the dashboard
5 updates
This incident has been resolved.
We’re seeing steady recovery now, and logs should be showing again. We are still monitoring to confirm the longer-term recovery.
We’ve identified the issue and are now in recovery. Recovery may be slow due to the large volume of logs involved.
We believe we’ve identified the root cause of the issue, and we’re currently doing some additional investigation to make sure it’s resolved properly.
Some logs, especially build and application logs, may be temporarily missing. We’re actively investigating this and will work to fix it as soon as possible. Builds can still complete successfully even if the logs aren’t showing up.
Deploy delays in Oregon
3 updates
This incident has been resolved.
We have implemented a fix and are monitoring for further issues.
Some users may experience slower build times for services deployed in Oregon.
December 2025 (6 incidents)
Deploy delays in Virginia
3 updates
Deploy performance has returned to expected levels.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Unable to view service events
2 updates
This incident has been resolved.
When viewing service events, an error is returned. We are currently investigating the issue.
Elevated Latency for Requests to Web Services and Static Sites in Frankfurt
4 updates
This incident has been resolved.
Latency has remained stable. We continue to monitor the situation.
Latency has normalized. We continue to investigate with our upstream vendor to identify the cause.
We are currently investigating this issue.
Services not accessible
5 updates
From 08:47 to 09:11 UTC, all incoming web traffic in every region failed to reach services and returned 500 errors instead. Our dashboard and API were down too. Background workers, private services, and cron jobs were not affected. The upstream provider has recovered now, and we’re no longer seeing any issues on our side.
The upstream provider is recovering, and we’re seeing recovery on our side too.
Access to services is now recovering, and we are continuing to monitor.
We're experiencing issues with an upstream provider.
We're investigating services not being accessible
Increased Latency in Updates to Oregon Services
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Creation of services or changes to existing services hosted in our Oregon region are experiencing increased latency. We are currently investigating.
Custom Domains: New certificates stuck on pending
3 updates
We understand the issue is resolved now. If you're still seeing issues, please reach out.
The provider is actively working on the issue and we’re seeing some progress on certificate issuance. We’re still waiting on full confirmation that the fix is complete.
You may see certificates stuck on 'Pending' after adding a custom domain. We’ve located an issue with a provider and are looking into it right now.
November 2025 (8 incidents)
Web services (Oregon) and static sites availability disruption
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results. Impact resulted in intermittent latency, timeouts, and errors for some services for ~6 minutes (11:46-11:52 PST).
We are currently investigating an issue impacting web service and static site availability.
Increased slowness in Dashboard
4 updates
The incident has been resolved.
Dashboard performance remains healthy and we continue to monitor.
Dashboard performance has recovered. We are continuing to investigate the root cause.
We are currently investigating this issue.
Elevated rates of deploy failures
4 updates
This incident has been resolved.
The issue has been identified and a fix is being implemented.
We are continuing to investigate this issue.
We are currently investigating this issue.
GitHub-backed services failing to build in all regions
4 updates
This incident has been resolved.
The upstream provider has implemented a fix and recovery is ongoing. We are continuing to monitor the situation.
An upstream provider is experiencing an outage. We are monitoring the situation.
We are currently investigating this issue.
An upstream provider major incident is affecting some Render services
4 updates
We have observed no further impact and the upstream provider has affirmed full resolution.
The upstream provider has resolved the issue. We’re still checking to see if there’s any remaining impact on our side.
The upstream provider is still suffering from the incident, and we are still waiting for further mitigations from them.
We’re aware of a major incident with an upstream provider that’s impacting some services on Render. You might see some 500s until it’s resolved upstream. We’re also investigating on our side.
Metrics/Logs missing for Oregon services
2 updates
This incident has been resolved.
Metrics and Logs for services hosted in Oregon are missing due to a platform incident. We are working to resolve this issue now.
Cron Job runs cannot be cancelled from our dashboard or the API
2 updates
This incident has been resolved.
The cancel button on a run doesn’t actually stop it right now, and we’re looking into why this is happening. The current workaround is to suspend and then unsuspend the cron to force-cancel the run. If that doesn’t do the trick, please reach out to our support team.
Increased 404s in Oregon (Web Services) and Static Sites
9 updates
# Summary

As an infrastructure provider, providing a reliable platform that allows our customers to build and scale their applications with confidence is our highest obligation. We invest heavily to ensure our platform is highly reliable and secure, including in our routing layer, which handles billions of HTTP requests every day. On November 5, 2025, we inadvertently rolled back a performance improvement that was gated behind a feature flag. This led to disruption in the form of intermittent 404s for some web services and static sites deployed to the Oregon region. We have fully identified the sequence of events that led to this outage and are in the process of taking steps to prevent it from recurring.

# Impact

There were two periods during which some customers hosting web services and static sites in the Oregon region experienced a partial outage with intermittent 404s.

The first period occurred between 10:39 AM PST and 11:25 AM PST. During this time, two Render clusters had slightly degraded service: one cluster returned a negligible number of 404 responses, and the other returned 404 responses for approximately 10% of requests.

The second period occurred between 11:59 AM PST and 12:34 PM PST and saw more significant service degradation. During this period, about 50% of all requests to services in the affected cluster received a 404 response. All newly created services in these clusters were affected and received 404 responses during the incident. Updates to existing services were also slow to propagate. Free tier services that were recently deployed or waking from sleep were also affected.

# Root Cause

Render's routing service depends on a metadata service to receive information about the user services it routes traffic to. When the routing service first starts, and upon occasional reconnection, it requests and receives a large volume of data from the metadata service.

Earlier in 2025, we successfully deployed a memory optimization related to data transfer between the metadata and routing services using a feature flag. In late October, we removed the flag from code and redeployed, but we didn't redeploy the metadata service, which still depended on the flag. On November 5th, we cleaned up unreferenced feature flags from our system. This caused the metadata service to revert to its less efficient data transfer method, leading to memory exhaustion and crashes.

Our routing service is designed to handle metadata service outages and continue serving traffic based on its last known state. However, newly created instances that could not load their initial state were incorrectly sent requests, resulting in 404 errors. During the first period of impact, the metadata service was crashing in two of our clusters, and only a small fraction of routing service instances were impacted. During the second period of impact, we saw a large increase in HTTP requests for services in the affected cluster. This triggered scale-ups of the routing service, all of which returned 404 errors.

# Mitigations

## Completed

* Increased memory available to the metadata service (this has since been reverted)
* Temporarily re-enabled the feature flag to support more efficient data transfer between the routing and metadata services (this has since been removed)
* Deployed the metadata service to no longer rely on the feature flag
* Enhanced our monitoring of the metadata service to alert us of this particular failure mode

## Planned

* Improve our feature flag hygiene practices to prevent the removal of a feature flag while it is still being evaluated
* Prevent the routing service from receiving traffic if it never successfully loaded state from the metadata service
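The planned mitigation of keeping a routing instance out of rotation until it has loaded state at least once can be sketched as follows. This is an illustrative model only, not Render's actual implementation; the `RoutingInstance` class and its methods are hypothetical names for the behavior the post-mortem describes.

```python
class RoutingInstance:
    """Illustrative model of a routing instance that gates traffic on
    having successfully loaded state from the metadata service."""

    def __init__(self):
        self.routes = {}           # hostname -> backend (last known state)
        self.state_loaded = False  # True once an initial sync succeeds

    def sync(self, metadata):
        """Attempt to load routing state. `metadata` is None when the
        metadata service is down (e.g. crash-looping)."""
        if metadata is None:
            return  # keep serving from last known state, if any
        self.routes = dict(metadata)
        self.state_loaded = True

    def ready(self):
        """Readiness check: only instances that loaded state at least once
        should receive traffic, preventing blanket 404s from fresh
        scale-ups during a metadata outage."""
        return self.state_loaded

    def route(self, host):
        if not self.state_loaded:
            return 503  # refuse rather than 404 on every unknown host
        return 200 if host in self.routes else 404


# A fresh instance started during a metadata outage stays out of rotation:
r = RoutingInstance()
r.sync(None)
assert not r.ready()

# Once the metadata service recovers, the instance syncs and can serve:
r.sync({"app.example.com": "backend-1"})
assert r.ready()
```

In this model, the incident's failure mode corresponds to load balancers sending traffic to instances where `ready()` is false, so every lookup against the empty route table returned 404.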
This incident has been resolved.
We are continuing to monitor for any further issues.
A fix has been implemented and we are monitoring the results.
We are continuing to work on a fix for this issue.
We have identified continuing issues in Oregon. A fix is being worked on.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating the issue.
October 2025 (8 incidents)
Failure to spin free web services back up after inactivity
1 update
Between 2025-10-28 at 17:00 UTC and 2025-10-30 at 17:22 UTC, a change was active that caused some free web services to fail to spin back up after inactivity. Most free web services were unaffected. The change has been reverted; any services that remain impacted should redeploy to resolve.
Degraded builds and deploys in Virginia
3 updates
This incident has been resolved.
An upstream provider is experiencing issues provisioning infrastructure. We continue to monitor the situation. Paid services are experiencing less delay than free services.
Builds and deploys may be slower than usual. We are currently investigating this issue.
Pre-deploys are failing in some regions
6 updates
This incident has been resolved.
A fix has been rolled out, and a re-deploy should now work as expected. We’re still keeping an eye on how the fix performs though.
We’ve found the root cause and are rolling out a fix.
Some pre-deploys in Singapore are also affected.
We think some pre-deploys are failing in Oregon and Frankfurt. We haven’t found any other affected regions so far, but we’re actively checking.
We are currently investigating this issue.
An upstream provider is experiencing some issues that are affecting parts of our platform (Virginia)
11 updates
This incident has been resolved.
All Render services have recovered, our upstream provider is continuing to recover. We are continuing to monitor impact.
Web Services (paid and free) and Static Site request latencies have returned to normal levels. Issues involving PostgreSQL database creation and inability to create backups persist.
The upstream provider has not yet recovered. We are still seeing request latency for Web Services and Static Sites in Virginia, and some users are unable to create new databases or backups.
Requests routed to Web Services have begun experiencing issues.
We’re seeing some issues again with a few components. Database creation might be slow in Virginia or appears stuck during the creation process.
We’re no longer seeing any issues related to Postgres databases from this incident on our platform.
We are continuing to monitor for any further issues.
We’re seeing steady recovery and keeping an eye on all components to make sure everything’s fully caught up. The upstream provider is still going through its own recovery process too.
Several of our tools were also affected during that time, including support tools, so responses may have been delayed or missed between 08:00 and 09:30 UTC. We’re working through the requests as quickly as we can.
We started seeing increased errors in our infrastructure around 08:00 UTC. Parts of our platform were affected by an outage with an upstream provider. We know that new database creation and backup creation were impacted, but we’re still assessing if there’s any broader impact. We’re seeing signs of recovery now, but we’re continuing to monitor.
Incorrect IP allowlists configured for new Environments created via REST API
2 updates
Changes were deployed to fix the issue with new Environments created via the REST API. All affected Environments have been updated to use the default Allow-All policy unless otherwise specified in the API call's parameters. This issue has been resolved.
We have identified and are working to fix Environments recently created via the REST API to ensure default IP allowlists are configured correctly. Until then, new Services created in these Environments may be responding to requests with unexpected errors.
Increased latency in Oregon region
3 updates
Latency has returned to baseline levels since 16:40 UTC and no further impact has been observed.
Peak impact occurred between 16:20 and 16:40 UTC. We are currently monitoring.
We are currently investigating increased latency in our Oregon region
Unable to create Postgres services or update their instance type in Oregon
3 updates
This incident has now been resolved. A subset of customers in Oregon, but not all, were impacted. Affected customers were unable to create Postgres services or update the instance type of Postgres services between 19:14 and 20:15 UTC.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
Partial degradation of service creation and deploys in Oregon
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
September 2025 (4 incidents)
Small number of users impacted by stuck builds
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are aware of an issue resulting in stuck builds impacting a small minority of users with the "Wait" setting for their Overlapping Deploy Policy.
Image-based deploys failing due to upstream provider
3 updates
This incident has been resolved.
The upstream provider has rolled out a fix and is monitoring the issue. We are monitoring our systems as well.
Due to an outage from an upstream provider, users with image-based services are seeing failed deploys with reports of 401 errors.
Some Postgres databases can’t be created in Frankfurt
4 updates
This incident has been resolved.
We are continuing to work on the issue.
We’ve identified the issue, but we’re still investigating.
This doesn’t impact Postgres databases that are already running. It only partially affects Frankfurt. Any affected database that gets created will show a status of 'unknown'.
Dashboard operations degraded or failing
1 update
Dashboard operations were degraded for ~30 minutes, and within that period operations were mostly failing for ~5 mins.