Render Outage History

Past incidents and downtime events

Complete history of Render outages, incidents, and service disruptions. Showing 50 most recent incidents.
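The incident URLs in this history (e.g. status.render.com/incidents/…) indicate the status page runs on Atlassian Statuspage, which exposes a standard JSON API; the `/api/v2/incidents.json` endpoint and field names below follow that convention and should be verified against the live page. A minimal sketch of tallying incidents per month from such a payload:

```python
from collections import Counter
from datetime import datetime

def summarize_incidents(payload: dict) -> Counter:
    """Count incidents per calendar month from a Statuspage-style
    incidents payload (field names assume the standard /api/v2 schema)."""
    counts: Counter = Counter()
    for incident in payload.get("incidents", []):
        created = datetime.fromisoformat(incident["created_at"])
        counts[created.strftime("%B %Y")] += 1
    return counts

# Illustrative sample shaped like the Statuspage API response:
sample = {
    "incidents": [
        {"name": "Image pull failures in Oregon",
         "impact": "major", "created_at": "2026-04-13T22:03:00+00:00"},
        {"name": "BitBucket account connection issues",
         "impact": "minor", "created_at": "2026-04-14T14:31:00+00:00"},
    ]
}
print(summarize_incidents(sample))  # Counter({'April 2026': 2})
```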

April 2026 (8 incidents)

minor · resolved · Apr 23, 04:51 PM — Resolved Apr 23, 05:21 PM

Delayed deployment for new services in Oregon region

4 updates

resolved · Apr 23, 05:21 PM

This incident has been resolved.

monitoring · Apr 23, 05:12 PM

A fix has been implemented and we are monitoring the results.

identified · Apr 23, 05:11 PM

The issue has been identified and a fix is being implemented.

investigating · Apr 23, 05:10 PM

We are investigating an issue where new services created in the Oregon region are taking longer than usual to deploy.

major · resolved · Apr 16, 11:36 PM — Resolved Apr 16, 11:51 PM

Some web services and databases in Oregon are unresponsive

2 updates

resolved · Apr 16, 11:51 PM

This incident has been resolved.

monitoring · Apr 16, 11:36 PM

A fix has been implemented and we are monitoring the results.

minor · resolved · Apr 14, 02:31 PM — Resolved Apr 14, 04:04 PM

Bitbucket account connection issues

3 updates

resolved · Apr 14, 04:04 PM

This incident has been resolved.

identified · Apr 14, 03:12 PM

The issue has been identified and a fix is being implemented.

investigating · Apr 14, 02:31 PM

We are currently investigating reports of issues connecting Bitbucket accounts/repos to Render. Existing services configured with a Bitbucket repo are unaffected.

major · resolved · Apr 13, 10:03 PM — Resolved Apr 13, 10:26 PM

Image pull failures in Oregon

3 updates

resolved · Apr 13, 10:26 PM

This incident has been resolved.

monitoring · Apr 13, 10:18 PM

A fix has been implemented and we are monitoring the results.

identified · Apr 13, 10:03 PM

We are seeing image pull failures within the Oregon region. We have identified the problem and are working on a fix.

none · resolved · Apr 10, 02:41 PM — Resolved Apr 10, 02:56 PM

Infrastructure maintenance in progress for outbound IPs in the Oregon region

2 updates

resolved · Apr 10, 02:56 PM

The maintenance has been completed successfully.

investigating · Apr 10, 02:41 PM

Maintenance is underway for static IPs in the Oregon region. Services in this region may experience changes to their outbound IP ranges.

major · resolved · Apr 8, 09:57 AM — Resolved Apr 8, 10:11 PM

Disruption affecting the provisioning of new instances in Singapore and parts of Oregon

13 updates

resolved · Apr 8, 10:11 PM

We've confirmed that our fix has addressed the issue and workloads are now scheduling normally in all regions.

monitoring · Apr 8, 07:44 PM

Provisioning is recovering for free and paid services across all regions. Free services have resumed serving traffic. We are continuing to monitor for further issues.

monitoring · Apr 8, 06:48 PM

Paid services are continuing to see recovery across builds and deploys. Free services remain disabled and will return errors if loaded in a browser. Singapore and part of Oregon remain impacted.

monitoring · Apr 8, 06:08 PM

We are continuing to monitor for any further issues.

monitoring · Apr 8, 04:05 PM

We are disabling new free services in Singapore and parts of Oregon.

monitoring · Apr 8, 03:02 PM

We're seeing recovery across all regions.

identified · Apr 8, 03:02 PM

We are continuing to work on a fix for this issue.

identified · Apr 8, 02:42 PM

We are now seeing the same issue partially affecting services in Oregon. We are working to remediate this as quickly as possible.

identified · Apr 8, 11:23 AM

Services with paid compute should now be operating normally. Please note that free services in this region are still intermittently disabled.

investigating · Apr 8, 11:22 AM

We are continuing to investigate this issue.

investigating · Apr 8, 10:54 AM

We are seeing some recovery following our mitigation efforts, but we are still working to identify the root cause for a long-term resolution.

investigating · Apr 8, 10:31 AM

We have temporarily disabled free services in Singapore. We are continuing to investigate and work to mitigate the issue.

investigating · Apr 8, 09:57 AM

This issue may affect new instances during builds, scaling, and similar operations. You may experience build/deploy failures. We are still investigating.

minor · resolved · Apr 8, 12:30 PM — Resolved Apr 8, 12:30 PM

Delays in stateful service creation in Oregon

1 update

resolved · Apr 8, 07:17 PM

Between 12:40 UTC and 18:10 UTC, creation of stateful services in Oregon was delayed. Stateful services are any of Postgres, Key Value, or services with a persistent disk.

minor · resolved · Apr 7, 06:00 PM — Resolved Apr 7, 06:00 PM

Unable to view workflows in the Dashboard

1 update

resolved · Apr 7, 07:33 PM

Between 18:04 and 19:10 UTC, workflows could not be viewed in the Dashboard. All workflows continued to run and were otherwise operational. This issue has been resolved.

March 2026 (6 incidents)

minor · postmortem · Mar 25, 02:45 PM — Resolved Mar 26, 07:51 PM

Some Render Workflows tasks are not being scheduled

5 updates

postmortem · Apr 14, 01:23 PM

Affected customers have been contacted directly. If you would like more details, please contact support at [support@render.com](mailto:support@render.com) or reach out through the chat widget in the Render dashboard.

resolved · Mar 26, 07:51 PM

This issue is now resolved. A full RCA will be available on the incident page within the next two weeks.

monitoring · Mar 26, 05:54 PM

A fix has been implemented and we are monitoring the results.

identified · Mar 25, 04:40 PM

The issue has been identified, and there should be no issues scheduling tasks at this time. We are still working to ensure it is fully resolved.

investigating · Mar 25, 02:45 PM

We are working on resolving the issue.

minor · resolved · Mar 24, 01:09 PM — Resolved Mar 24, 01:31 PM

Some users may experience delayed logs in the Oregon region

3 updates

resolved · Mar 24, 01:31 PM

The issue is fixed.

monitoring · Mar 24, 01:12 PM

We are monitoring recovery.

identified · Mar 24, 01:09 PM

The issue has been identified, and we are already monitoring recovery.

minor · resolved · Mar 13, 03:51 PM — Resolved Mar 13, 06:07 PM

Degraded builds and deployments in the Singapore region

3 updates

resolved · Mar 13, 06:07 PM

This incident has been resolved.

monitoring · Mar 13, 03:53 PM

We are currently monitoring.

investigating · Mar 13, 03:51 PM

We are observing a recurrence of the previous incident (https://status.render.com/incidents/d9k4p51v1y7g). The current impact appears to be significantly lower. We are proactively investigating and closely monitoring the situation internally to ensure minimal impact on users.

major · resolved · Mar 13, 08:52 AM — Resolved Mar 13, 11:09 AM

Degraded builds and deployments in the Singapore region

4 updates

resolved · Mar 13, 11:09 AM

We believe this incident has been successfully mitigated. If you continue to experience any issues, please contact support@render.com.

monitoring · Mar 13, 10:18 AM

We have re-enabled services using the free instance types in the Singapore region and are currently monitoring the impact.

investigating · Mar 13, 09:27 AM

We have disabled services using free instance types in the Singapore region.

investigating · Mar 13, 08:52 AM

Builds and deployments in the Singapore region are currently experiencing degraded performance. You may notice longer than usual completion times.

major · resolved · Mar 9, 08:19 PM — Resolved Mar 9, 08:39 PM

Custom domains cannot be created

2 updates

resolved · Mar 9, 08:39 PM

This incident has been resolved.

investigating · Mar 9, 08:19 PM

We are currently investigating this issue.

major · resolved · Mar 3, 12:41 AM — Resolved Mar 3, 01:00 AM

Ohio services with disks and data persistence failing

3 updates

resolved · Mar 3, 01:00 AM

All affected services have been migrated to new hosts and are running at this time. This incident has been resolved.

identified · Mar 3, 12:59 AM

Many services have recovered with healthy disks on new hosts; however, this issue remains open as some services are still impaired.

identified · Mar 3, 12:41 AM

Services located in Ohio, including both application types (Web Service, Private Service, etc.) and data types (Postgres, Key Value with persistence), are failing due to disk errors. We are working on mitigations.

February 2026 (5 incidents)

major · resolved · Feb 11, 04:16 PM — Resolved Feb 11, 07:25 PM

Some deploys may be slow or hanging

5 updates

resolved · Feb 11, 07:25 PM

This incident has been resolved.

monitoring · Feb 11, 07:15 PM

Deploy times have decreased and failures have dropped to baseline levels. We are monitoring for any further impact.

identified · Feb 11, 06:24 PM

Deploy delays and failures remain elevated. We are continuing to work on this issue.

monitoring · Feb 11, 04:39 PM

We have implemented a mitigation and are currently observing positive results. We are continuing to monitor deploy health across the platform.

investigating · Feb 11, 04:16 PM

We are currently investigating this issue.

minor · resolved · Feb 3, 03:30 PM — Resolved Feb 5, 10:36 PM

Degraded deploys in Ohio and Virginia

5 updates

resolved · Feb 5, 10:36 PM

We implemented updates to builds and deploys to improve handling of slow updates. As a result of these changes, build and deploy performance has recovered.

monitoring · Feb 4, 09:34 PM

We are continuing to monitor for any further issues.

monitoring · Feb 4, 07:04 PM

We are observing improved deploy performance and continue to monitor for any further issues.

monitoring · Feb 3, 11:58 PM

A fix has been implemented and we are monitoring the results.

investigating · Feb 3, 11:05 PM

Deploys are experiencing degraded performance and may take longer to complete.

major · resolved · Feb 4, 05:13 PM — Resolved Feb 4, 06:42 PM

Elevated latency for some new services when using the onrender.com address

4 updates

resolved · Feb 4, 06:42 PM

Latency has returned to expected levels. Affected services were those created between 2026-02-04T16:30Z and 2026-02-04T18:17Z. Services created outside that period were not affected.

identified · Feb 4, 05:29 PM

We have determined that services are reachable through their onrender.com address. Requests will be successful but may take longer. We are continuing to work on a fix.

identified · Feb 4, 05:21 PM

The issue has been identified and a fix is being implemented.

investigating · Feb 4, 05:13 PM

We are currently investigating this issue.

minor · resolved · Feb 3, 10:32 AM — Resolved Feb 3, 11:09 AM

Degraded Deploys in Singapore Region

3 updates

resolved · Feb 3, 11:09 AM

This incident has been resolved.

monitoring · Feb 3, 10:49 AM

A fix has been implemented and we are monitoring the results.

investigating · Feb 3, 10:32 AM

We are currently investigating this issue.

minor · resolved · Feb 2, 10:04 PM — Resolved Feb 2, 10:54 PM

Degraded deploys in all regions

3 updates

resolved · Feb 2, 10:54 PM

This incident has been resolved.

monitoring · Feb 2, 10:40 PM

A fix has been implemented and we are monitoring the results.

investigating · Feb 2, 10:04 PM

We are currently investigating this issue.

January 2026 (5 incidents)

major · resolved · Jan 30, 11:28 AM — Resolved Jan 30, 12:25 PM

External connectivity issues with Postgres databases hosted in Singapore

4 updates

resolved · Jan 30, 12:25 PM

This incident has been resolved. Please reach out to support@render.com for any follow-up questions.

monitoring · Jan 30, 12:20 PM

A fix has been implemented and we are monitoring the results.

identified · Jan 30, 12:09 PM

The issue has been identified and a fix is being implemented.

investigating · Jan 30, 11:28 AM

We are currently investigating this issue.

minor · resolved · Jan 23, 12:49 AM — Resolved Jan 23, 01:46 AM

Metrics impacted for some services in Oregon

4 updates

resolved · Jan 23, 01:46 AM

This incident has been resolved.

monitoring · Jan 23, 01:30 AM

Metrics for impacted services in Oregon are now being displayed. Metrics will be missing for impacted services from 2026-01-23 00:50 to 2026-01-23 01:10 UTC.

identified · Jan 23, 01:16 AM

The issue has been identified and a remediation is being implemented.

investigating · Jan 23, 01:12 AM

Metrics for some services in Oregon are currently impacted and may not be displaying.

minor · resolved · Jan 17, 02:29 AM — Resolved Jan 17, 02:41 AM

Delays in starting instances on services

2 updates

resolved · Jan 17, 02:41 AM

Instance creation times have been restored to expected timeframes. This issue has been resolved.

identified · Jan 17, 02:29 AM

High demand for new instances has created a backlog for some services in the Oregon region. Services attempting to add new instances, including for new deploys, scale-ups, and restarts, may see delays.

minor · resolved · Jan 12, 02:45 PM — Resolved Jan 12, 07:56 PM

Some application and build logs are missing on the dashboard

5 updates

resolved · Jan 12, 07:56 PM

This incident has been resolved.

monitoring · Jan 12, 06:04 PM

We're seeing steady recovery now, and logs should be showing again. We are still monitoring to confirm the longer-term recovery.

identified · Jan 12, 05:16 PM

We've identified the issue and are now in recovery. Recovery may be slow due to the large volume of logs involved.

investigating · Jan 12, 03:31 PM

We believe we've identified the root cause of the issue, and we're currently doing some additional investigation to make sure it's resolved properly.

investigating · Jan 12, 02:45 PM

Some logs, especially build and application logs, may be temporarily missing. We're actively investigating this and will work to fix it as soon as possible. Builds can still complete successfully even if the logs aren't showing up.

minor · resolved · Jan 8, 04:47 PM — Resolved Jan 8, 06:48 PM

Deploy delays in Oregon

3 updates

resolved · Jan 8, 06:48 PM

This incident has been resolved.

monitoring · Jan 8, 06:26 PM

We have implemented a fix and are monitoring for further issues.

investigating · Jan 8, 04:47 PM

Some users may experience slower build times for services deployed in Oregon.

December 2025 (6 incidents)

minor · resolved · Dec 10, 08:58 PM — Resolved Dec 12, 12:05 AM

Deploy delays in Virginia

3 updates

resolved · Dec 12, 12:05 AM

Deploy performance has returned to expected levels.

monitoring · Dec 10, 10:05 PM

A fix has been implemented and we are monitoring the results.

investigating · Dec 10, 08:58 PM

We are currently investigating this issue.

major · resolved · Dec 10, 09:50 PM — Resolved Dec 10, 09:57 PM

Unable to view service events

2 updates

resolved · Dec 10, 09:57 PM

This incident has been resolved.

investigating · Dec 10, 09:50 PM

When viewing service events, an error is returned. We are currently investigating the issue.

minor · resolved · Dec 5, 11:03 PM — Resolved Dec 5, 11:52 PM

Elevated Latency for Requests to Web Services and Static Sites in Frankfurt

4 updates

resolved · Dec 5, 11:52 PM

This incident has been resolved.

monitoring · Dec 5, 11:29 PM

Latency has remained stable. We continue to monitor the situation.

investigating · Dec 5, 11:19 PM

Latency has normalized. We continue to investigate with our upstream vendor to identify the cause.

investigating · Dec 5, 11:03 PM

We are currently investigating this issue.

major · resolved · Dec 5, 09:01 AM — Resolved Dec 5, 09:36 AM

Services not accessible

5 updates

resolved · Dec 5, 09:36 AM

From 08:47 to 09:11 UTC, all incoming web traffic in every region failed to reach services and returned 500 errors instead. Our dashboard and API were down too. Background workers, private services, and cron jobs were not affected. The upstream provider has recovered now, and we're no longer seeing any issues on our side.

monitoring · Dec 5, 09:20 AM

The upstream provider is recovering, and we're seeing recovery on our side too.

monitoring · Dec 5, 09:20 AM

Access to services is now recovering, and we are continuing to monitor.

identified · Dec 5, 09:07 AM

We're experiencing issues with an upstream provider.

investigating · Dec 5, 09:01 AM

We're investigating services not being accessible.

minor · resolved · Dec 2, 08:48 PM — Resolved Dec 2, 10:26 PM

Increased Latency in Updates to Oregon Services

3 updates

resolved · Dec 2, 10:26 PM

This incident has been resolved.

monitoring · Dec 2, 09:12 PM

A fix has been implemented and we are monitoring the results.

investigating · Dec 2, 09:08 PM

Creation of services and changes to existing services hosted in our Oregon region are experiencing increased latency. We are currently investigating.

minor · resolved · Dec 2, 02:32 PM — Resolved Dec 2, 04:00 PM

Custom Domains: New certificates stuck on pending

3 updates

resolved · Dec 2, 04:00 PM

We believe the issue is now resolved. If you're still seeing issues, please reach out.

identified · Dec 2, 03:30 PM

The provider is actively working on the issue and we're seeing some progress on certificate issuance. We're still waiting on full confirmation that the fix is complete.

investigating · Dec 2, 02:32 PM

You may see certificates stuck on 'Pending' after adding a custom domain. We've located an issue with a provider and are looking into it right now.

November 2025 (8 incidents)

major · resolved · Nov 25, 08:13 PM — Resolved Nov 25, 08:26 PM

Web services (Oregon) and static sites availability disruption

3 updates

resolved · Nov 25, 08:26 PM

This incident has been resolved.

monitoring · Nov 25, 08:26 PM

A fix has been implemented and we are monitoring the results. Impact consisted of intermittent latency, timeouts, and errors for some services for ~6 minutes (11:46-11:52 PST).

investigating · Nov 25, 08:13 PM

We are currently investigating an issue impacting web service and static site availability.

minor · resolved · Nov 20, 05:53 PM — Resolved Nov 21, 01:23 AM

Increased slowness in Dashboard

4 updates

resolved · Nov 21, 01:23 AM

The incident has been resolved.

monitoring · Nov 20, 08:23 PM

Dashboard performance remains healthy and we continue to monitor.

investigating · Nov 20, 06:29 PM

Dashboard performance has recovered. We are continuing to investigate the root cause.

investigating · Nov 20, 05:53 PM

We are currently investigating this issue.

major · resolved · Nov 20, 05:28 PM — Resolved Nov 20, 07:31 PM

Elevated rates of deploy failures

4 updates

resolved · Nov 20, 07:31 PM

This incident has been resolved.

identified · Nov 20, 06:52 PM

The issue has been identified and a fix is being implemented.

investigating · Nov 20, 06:21 PM

We are continuing to investigate this issue.

investigating · Nov 20, 05:28 PM

We are currently investigating this issue.

major · resolved · Nov 18, 09:00 PM — Resolved Nov 18, 09:56 PM

GitHub-backed services failing to build in all regions

4 updates

resolved · Nov 18, 09:56 PM

This incident has been resolved.

monitoring · Nov 18, 09:39 PM

The upstream provider has implemented a fix and recovery is ongoing. We are continuing to monitor the situation.

identified · Nov 18, 09:09 PM

An upstream provider is experiencing an outage. We are monitoring the situation.

investigating · Nov 18, 09:00 PM

We are currently investigating this issue.

major · resolved · Nov 18, 12:10 PM — Resolved Nov 18, 06:37 PM

A major incident at an upstream provider is affecting some Render services

4 updates

resolved · Nov 18, 06:37 PM

We have observed no further impact and the upstream provider has affirmed full resolution.

monitoring · Nov 18, 03:02 PM

The upstream provider has resolved the issue. We're still checking to see if there's any remaining impact on our side.

identified · Nov 18, 01:57 PM

The upstream provider is still suffering from the incident, and we are still waiting for further mitigations from them.

investigating · Nov 18, 12:10 PM

We're aware of a major incident with an upstream provider that's impacting some services on Render. You might see some 500s until it's resolved upstream. We're also investigating on our side.

minor · resolved · Nov 14, 08:55 PM — Resolved Nov 14, 09:18 PM

Metrics/Logs missing for Oregon services

2 updates

resolved · Nov 14, 09:18 PM

This incident has been resolved.

identified · Nov 14, 08:55 PM

Metrics and logs for services hosted in Oregon are missing due to a platform incident. We are working to resolve this issue now.

minor · resolved · Nov 13, 03:40 PM — Resolved Nov 13, 11:00 PM

Cron Job runs cannot be cancelled from our dashboard or the API

2 updates

resolved · Nov 13, 11:00 PM

This incident has been resolved.

investigating · Nov 13, 03:40 PM

We're looking into why this is happening. The cancel button on a run doesn't actually stop it right now. The current workaround is to suspend and then unsuspend the cron job to force-cancel the run. If that doesn't do the trick, please reach out to our support team.
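The suspend-then-unsuspend workaround above can be sketched against Render's public REST API, which documents `/suspend` and `/resume` endpoints for services. Applying them to force-cancel a stuck cron run is an assumption drawn purely from the workaround text, and the service ID is hypothetical; verify against the API reference before relying on this.

```python
# Sketch of the documented workaround: suspend, then resume, a cron job
# service. The transport is injected so the call sequence is testable
# without real credentials or network access.

API = "https://api.render.com/v1"

def force_cancel_cron_run(service_id: str, send) -> list:
    """Issue suspend then resume for the given service.
    `send` is a callable (method, url) -> response."""
    calls = [
        ("POST", f"{API}/services/{service_id}/suspend"),
        ("POST", f"{API}/services/{service_id}/resume"),
    ]
    for method, url in calls:
        send(method, url)
    return calls

# Example with a stub transport that just records requests
# ("crn-abc123" is a made-up service ID):
log = []
force_cancel_cron_run("crn-abc123", lambda m, u: log.append((m, u)))
```

With a real HTTP client you would also pass an `Authorization: Bearer <api key>` header, per Render's API authentication scheme.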

major · postmortem · Nov 5, 07:19 PM — Resolved Nov 5, 09:52 PM

Increased 404s in Oregon (Web Services) and Static Sites

9 updates

postmortem · Nov 18, 06:48 PM

# Summary

As an infrastructure provider, providing a reliable platform that allows our customers to build and scale their applications with confidence is our highest obligation. We invest heavily to ensure our platform is highly reliable and secure, including in our routing layer that handles billions of HTTP requests every day. On November 5, 2025, we inadvertently rolled back a performance improvement that was gated behind a feature flag. This led to disruption in the form of intermittent 404s for some web services and static sites deployed to the Oregon region. We have fully identified the sequence of events that led to this outage and are in the process of taking steps to prevent it from recurring.

# Impact

There were two periods where some customers hosting web services and static sites in the Oregon region experienced a partial outage with intermittent 404s.

The first period occurred between 10:39 AM PST and 11:25 AM PST. At this time, two Render clusters had slightly degraded service. One cluster returned a negligible number of 404 responses, and the other cluster returned 404 responses for approximately 10% of requests.

The second period occurred between 11:59 AM PST and 12:34 PM PST and saw more significant service degradation. During this period, about 50% of all requests to services in the affected cluster received a 404 response. All newly created services in these clusters were affected and received 404 responses during the incident. Updates to existing services were also slow to propagate. Free tier services that were recently deployed or waking from sleep were also affected.

# Root Cause

Render's routing service depends on a metadata service to receive information about the user services it routes traffic to. When the routing service first starts, and upon occasional reconnection, it will request and receive a large volume of data from the metadata service.

Earlier in 2025, we successfully deployed a memory optimization related to data transfer between the metadata and routing services using a feature flag. In late October, we removed the flag from code and redeployed, but we didn't redeploy the metadata service, which still depended on the flag. On November 5th, we cleaned up unreferenced feature flags from our system. This caused the metadata service to revert to its less efficient data transfer method, leading to memory exhaustion and crashes.

Our routing service is designed to handle metadata service outages and continue serving traffic based on its last known state. However, newly created instances that could not load their initial state were incorrectly sent requests, resulting in 404 errors.

During the first period of impact, the metadata service was crashing in two of our clusters, and only a small fraction of routing service instances were impacted. During the second period of impact, we saw a large increase in HTTP requests for services in the affected cluster. This triggered scale-ups of the routing service, all of which returned 404 errors.

# Mitigations

## Completed

* Increased memory available to the metadata service (this has since been reverted)
* Temporarily re-enabled the feature flag to support more efficient data transfer between the routing and metadata services (this has since been removed)
* Deployed the metadata service to no longer rely on the feature flag
* Enhanced our monitoring of the metadata service to alert us of this particular failure mode

## Planned

* Improve our feature flag hygiene practices to prevent the removal of a feature flag while it is still being evaluated
* Prevent the routing service from receiving traffic if it never successfully loaded state from the metadata service
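The planned mitigation of keeping never-initialized router instances out of rotation can be sketched as a readiness gate: an instance fails its readiness check and answers 503 (not a misleading 404) until it has completed one successful state load. All class and method names here are hypothetical illustrations of the pattern, not Render's actual implementation.

```python
# Hypothetical sketch of the readiness-gating pattern described in the
# postmortem's planned mitigations. An instance that has never loaded
# routing state should not receive traffic; if it does, it answers 503
# rather than 404, so load balancers and clients can tell "not ready"
# apart from "route does not exist".

class Router:
    def __init__(self):
        self.routes: dict = {}
        self.state_loaded = False  # set only after one successful full sync

    def sync_from_metadata(self, snapshot: dict) -> None:
        """Apply a full state snapshot from the metadata service."""
        self.routes = dict(snapshot)
        self.state_loaded = True   # later outages leave stale-but-servable state

    def ready(self) -> bool:
        # A load balancer should withhold traffic until this is True.
        return self.state_loaded

    def route(self, host: str):
        if not self.state_loaded:
            return (503, None)     # never a 404 before the first sync
        target = self.routes.get(host)
        return (200, target) if target else (404, None)

r = Router()
assert r.route("app.onrender.com") == (503, None)   # before first sync
r.sync_from_metadata({"app.onrender.com": "instance-1"})
assert r.route("app.onrender.com") == (200, "instance-1")
```

The key design choice is that `state_loaded` latches true after the first successful sync: a subsequent metadata outage leaves the router serving its last known state, which matches the behavior the postmortem describes as intended.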

resolved · Nov 5, 09:52 PM

This incident has been resolved.

monitoring · Nov 5, 09:03 PM

We are continuing to monitor for any further issues.

monitoring · Nov 5, 08:48 PM

A fix has been implemented and we are monitoring the results.

identified · Nov 5, 08:21 PM

We are continuing to work on a fix for this issue.

identified · Nov 5, 08:08 PM

We have identified continuing issues in Oregon. A fix is being worked on.

monitoring · Nov 5, 07:58 PM

A fix has been implemented and we are monitoring the results.

identified · Nov 5, 07:24 PM

The issue has been identified and a fix is being implemented.

investigating · Nov 5, 07:19 PM

We are currently investigating the issue.

October 2025 (8 incidents)

minor · resolved · Oct 30, 05:00 PM — Resolved Oct 30, 05:00 PM

Failure to spin free web services back up after inactivity

1 update

resolved · Oct 30, 07:08 PM

Between 2025-10-28 at 17:00 UTC and 2025-10-30 at 17:22 UTC, a change was active that caused some free web services to fail to spin back up after inactivity. Most free web services were unaffected. The change has been reverted; any services that remain impacted should redeploy to resolve.

minor · resolved · Oct 28, 05:08 PM — Resolved Oct 28, 05:59 PM

Degraded builds and deploys in Virginia

3 updates

resolved · Oct 28, 05:59 PM

This incident has been resolved.

identified · Oct 28, 05:34 PM

An upstream provider is experiencing issues provisioning infrastructure. We continue to monitor the situation. Paid services are experiencing less delay than free services.

investigating · Oct 28, 05:08 PM

Builds and deploys may be slower than usual. We are currently investigating this issue.

major · resolved · Oct 23, 04:27 PM — Resolved Oct 23, 06:15 PM

Pre-deploys are failing in some regions

6 updates

resolved · Oct 23, 06:15 PM

This incident has been resolved.

monitoring · Oct 23, 05:22 PM

A fix has been rolled out, and a re-deploy should now work as expected. We're still keeping an eye on how the fix performs, though.

identified · Oct 23, 04:56 PM

We've found the root cause and are rolling out a fix.

investigating · Oct 23, 04:47 PM

Some pre-deploys in Singapore are also affected.

investigating · Oct 23, 04:29 PM

We think some pre-deploys are failing in Oregon and Frankfurt. We haven't found any other affected regions so far, but we're actively checking.

investigating · Oct 23, 04:27 PM

We are currently investigating this issue.

major · resolved · Oct 20, 09:58 AM — Resolved Oct 20, 11:16 PM

An upstream provider is experiencing issues affecting parts of our platform (Virginia)

11 updates

resolved · Oct 20, 11:16 PM

This incident has been resolved.

monitoring · Oct 20, 09:10 PM

All Render services have recovered; our upstream provider is continuing to recover. We are continuing to monitor impact.

identified · Oct 20, 05:54 PM

Web Services (paid and free) and Static Site request latencies have returned to normal levels. Issues involving PostgreSQL database creation and the inability to create backups persist.

identified · Oct 20, 05:10 PM

The upstream provider has not yet recovered. We are still seeing request latency for Web Services and Static Sites in Virginia, and some users are unable to create new databases or backups.

identified · Oct 20, 04:17 PM

Requests routed to Web Services have begun experiencing issues.

monitoring · Oct 20, 03:58 PM

We're seeing some issues again with a few components. Database creation may be slow in Virginia or appear stuck during the creation process.

monitoring · Oct 20, 02:15 PM

We're no longer seeing any issues related to Postgres databases from this incident on our platform.

monitoring · Oct 20, 12:46 PM

We are continuing to monitor for any further issues.

monitoring · Oct 20, 12:43 PM

We're seeing steady recovery and keeping an eye on all components to make sure everything's fully caught up. The upstream provider is still going through its own recovery process too.

monitoring · Oct 20, 10:13 AM

Several of our tools were also affected during that time, including support tools, so responses may have been delayed or missed between 08:00 and 09:30 UTC. We're working through the requests as quickly as we can.

monitoring · Oct 20, 09:58 AM

We started seeing increased errors in our infrastructure around 08:00 UTC. Parts of our platform were affected by an outage with an upstream provider. We know that new database creation and backup creation were impacted, but we're still assessing whether there's any broader impact. We're seeing signs of recovery now, but we're continuing to monitor.

minor · resolved · Oct 10, 07:34 PM — Resolved Oct 10, 09:17 PM

Incorrect IP allowlists configured for new Environments created via REST API

2 updates

resolved · Oct 10, 09:17 PM

Changes were deployed to fix the issue with new Environments created via the REST API. All affected Environments have been updated to their default allow-all configuration if not otherwise specified in the API call's parameters. This issue has been resolved.

identified · Oct 10, 07:34 PM

We have identified the issue and are working to fix Environments recently created via the REST API to ensure default IP allowlists are configured correctly. Until then, new services created in these Environments may respond to requests with unexpected errors.

minor · resolved · Oct 7, 04:57 PM — Resolved Oct 7, 05:38 PM

Increased latency in Oregon region

3 updates

resolved · Oct 7, 05:38 PM

Latency has returned to baseline levels since 16:40 UTC and no further impact has been observed.

monitoring · Oct 7, 05:06 PM

Peak impact occurred between 16:20 and 16:40 UTC. We are currently monitoring.

investigating · Oct 7, 04:57 PM

We are currently investigating increased latency in our Oregon region.

major · resolved · Oct 1, 07:15 PM — Resolved Oct 1, 08:15 PM

Unable to create Postgres services or update their instance type in Oregon

3 updates

resolved · Oct 1, 10:42 PM

This incident has now been resolved. A subset of customers in Oregon, but not all, were impacted. Affected customers were unable to create Postgres services or update the instance type of Postgres services between 19:14 and 20:15 UTC.

monitoring · Oct 1, 08:17 PM

A fix has been implemented and we are monitoring the results.

investigating · Oct 1, 08:15 PM

We are currently investigating this issue.

minor · resolved · Oct 1, 02:08 PM — Resolved Oct 1, 02:57 PM

Partial degradation of service creation and deploys in Oregon

3 updates

resolved · Oct 1, 02:57 PM

This incident has been resolved.

monitoring · Oct 1, 02:24 PM

A fix has been implemented and we are monitoring the results.

investigating · Oct 1, 02:08 PM

We are currently investigating this issue.

September 2025 (4 incidents)

major · resolved · Sep 23, 06:30 PM — Resolved Sep 26, 06:19 PM

Small number of users impacted by stuck builds

4 updates

resolved · Sep 26, 06:19 PM

This incident has been resolved.

monitoring · Sep 25, 11:43 PM

A fix has been implemented and we are monitoring the results.

identified · Sep 24, 10:41 PM

The issue has been identified and a fix is being implemented.

investigating · Sep 23, 06:30 PM

We are aware of an issue resulting in stuck builds impacting a small minority of users with the "Wait" setting for their Overlapping Deploy Policy.

major · resolved · Sep 25, 12:02 AM — Resolved Sep 25, 01:31 AM

Image-based deploys failing due to upstream provider

3 updates

resolved · Sep 25, 01:31 AM

This incident has been resolved.

monitoring · Sep 25, 01:14 AM

The upstream provider has rolled out a fix and is monitoring the issue. We are monitoring our systems as well.

identified · Sep 25, 12:02 AM

Due to an outage at an upstream provider, users with image-based services are seeing failed deploys with reports of 401 errors.

critical · resolved · Sep 22, 02:31 PM — Resolved Sep 22, 03:41 PM

Some Postgres databases can't be created in Frankfurt

4 updates

resolved · Sep 22, 05:49 PM

This incident has been resolved.

identified · Sep 22, 03:40 PM

We are continuing to work on the issue.

identified · Sep 22, 03:13 PM

We've identified the issue, but we're still investigating.

investigating · Sep 22, 02:31 PM

This doesn't impact Postgres databases that are already running. It only partially affects Frankfurt. Any affected database that gets created will show a status of 'unknown'.

minor · resolved · Sep 21, 10:00 PM — Resolved Sep 21, 10:00 PM

Dashboard operations degraded or failing

1 update

resolved · Sep 23, 08:57 PM

Dashboard operations were degraded for ~30 minutes, and within that period operations were mostly failing for ~5 minutes.
