
Render Outage History

Past incidents and downtime events

Complete history of Render outages, incidents, and service disruptions. Showing the 50 most recent incidents.

February 2026 (3 incidents)

Minor · Monitoring · Feb 3, 03:30 PM

Degraded deploys in Ohio

2 updates
Monitoring · Feb 3, 11:58 PM

A fix has been implemented and we are monitoring the results.

Investigating · Feb 3, 11:05 PM

Deploys are experiencing degraded performance and may take longer to complete.

Minor · Feb 3, 10:32 AM — Resolved Feb 3, 11:09 AM

Degraded Deploys in Singapore Region

3 updates
Resolved · Feb 3, 11:09 AM

This incident has been resolved.

Monitoring · Feb 3, 10:49 AM

A fix has been implemented and we are monitoring the results.

Investigating · Feb 3, 10:32 AM

We are currently investigating this issue.

Minor · Feb 2, 10:04 PM — Resolved Feb 2, 10:54 PM

Degraded deploys in all regions

3 updates
Resolved · Feb 2, 10:54 PM

This incident has been resolved.

Monitoring · Feb 2, 10:40 PM

A fix has been implemented and we are monitoring the results.

Investigating · Feb 2, 10:04 PM

We are currently investigating this issue.

January 2026 (5 incidents)

Major · Jan 30, 11:28 AM — Resolved Jan 30, 12:25 PM

External connectivity issues with Postgres databases hosted in Singapore

4 updates
Resolved · Jan 30, 12:25 PM

This incident has been resolved. Please reach out to support@render.com for any follow-up questions.

Monitoring · Jan 30, 12:20 PM

A fix has been implemented and we are monitoring the results.

Identified · Jan 30, 12:09 PM

The issue has been identified and a fix is being implemented.

Investigating · Jan 30, 11:28 AM

We are currently investigating this issue.

Minor · Jan 23, 12:49 AM — Resolved Jan 23, 01:46 AM

Metrics impacted for some services in Oregon

4 updates
Resolved · Jan 23, 01:46 AM

This incident has been resolved.

Monitoring · Jan 23, 01:30 AM

Metrics for impacted services in Oregon are now being displayed. Metrics will be missing from impacted services from 2026-01-23 00:50 to 2026-01-23 01:10 UTC.

Identified · Jan 23, 01:16 AM

The issue has been identified and a remediation is being implemented.

Investigating · Jan 23, 01:12 AM

Metrics for some services in Oregon are currently impacted and may not be displaying.

Minor · Jan 17, 02:29 AM — Resolved Jan 17, 02:41 AM

Delays in starting instances on services

2 updates
Resolved · Jan 17, 02:41 AM

Instance creation times have been restored to expected timeframes. This issue has been resolved.

Identified · Jan 17, 02:29 AM

High demand for new instances has created a backlog for some services in the Oregon region. Services attempting to add new instances, including for new deploys, instance scale-ups, and restarts, may see delays.

Minor · Jan 12, 02:45 PM — Resolved Jan 12, 07:56 PM

Some application and build logs are missing on the dashboard

5 updates
Resolved · Jan 12, 07:56 PM

This incident has been resolved.

Monitoring · Jan 12, 06:04 PM

We’re seeing steady recovery now, and logs should be showing again. We are still monitoring to confirm the longer-term recovery.

Identified · Jan 12, 05:16 PM

We’ve identified the issue and are now in recovery. Recovery may be slow due to the large volume of logs involved.

Investigating · Jan 12, 03:31 PM

We believe we’ve identified the root cause of the issue, and we’re currently doing some additional investigation to make sure it’s resolved properly.

Investigating · Jan 12, 02:45 PM

Some logs, especially build and application logs, may be temporarily missing. We’re actively investigating this and will work to fix it as soon as possible. Builds can still complete successfully even if the logs aren’t showing up.

Minor · Jan 8, 04:47 PM — Resolved Jan 8, 06:48 PM

Deploy delays in Oregon

3 updates
Resolved · Jan 8, 06:48 PM

This incident has been resolved.

Monitoring · Jan 8, 06:26 PM

We have implemented a fix and are monitoring for further issues.

Investigating · Jan 8, 04:47 PM

Some users may experience slower build times for services deployed in Oregon.

December 2025 (6 incidents)

Minor · Dec 10, 08:58 PM — Resolved Dec 12, 12:05 AM

Deploy delays in Virginia

3 updates
Resolved · Dec 12, 12:05 AM

Deploy performance has returned to expected levels.

Monitoring · Dec 10, 10:05 PM

A fix has been implemented and we are monitoring the results.

Investigating · Dec 10, 08:58 PM

We are currently investigating this issue.

Major · Dec 10, 09:50 PM — Resolved Dec 10, 09:57 PM

Unable to view service events

2 updates
Resolved · Dec 10, 09:57 PM

This incident has been resolved.

Investigating · Dec 10, 09:50 PM

When viewing service events, an error is returned. We are currently investigating the issue.

Minor · Dec 5, 11:03 PM — Resolved Dec 5, 11:52 PM

Elevated Latency for Requests to Web Services and Static Sites in Frankfurt

4 updates
Resolved · Dec 5, 11:52 PM

This incident has been resolved.

Monitoring · Dec 5, 11:29 PM

Latency has remained stable. We continue to monitor the situation.

Investigating · Dec 5, 11:19 PM

Latency has normalized. We continue to investigate with our upstream vendor to identify the cause.

Investigating · Dec 5, 11:03 PM

We are currently investigating this issue.

Major · Dec 5, 09:01 AM — Resolved Dec 5, 09:36 AM

Services not accessible

5 updates
Resolved · Dec 5, 09:36 AM

From 08:47 to 09:11 UTC, all incoming web traffic in every region failed to reach services and returned 500 errors instead. Our dashboard and API were down too. Background workers, private services, and cron jobs were not affected. The upstream provider has recovered now, and we’re no longer seeing any issues on our side.

Monitoring · Dec 5, 09:20 AM

The upstream provider is recovering, and we’re seeing recovery on our side too.

Monitoring · Dec 5, 09:20 AM

Access to services is now recovering, and we are continuing to monitor.

Identified · Dec 5, 09:07 AM

We're experiencing issues with an upstream provider.

Investigating · Dec 5, 09:01 AM

We're investigating services not being accessible.

Minor · Dec 2, 08:48 PM — Resolved Dec 2, 10:26 PM

Increased Latency in Updates to Oregon Services

3 updates
Resolved · Dec 2, 10:26 PM

This incident has been resolved.

Monitoring · Dec 2, 09:12 PM

A fix has been implemented and we are monitoring the results.

Investigating · Dec 2, 09:08 PM

Creation of services or changes to existing services hosted in our Oregon region are experiencing increased latency. We are currently investigating.

Minor · Dec 2, 02:32 PM — Resolved Dec 2, 04:00 PM

Custom Domains: New certificates stuck on pending

3 updates
Resolved · Dec 2, 04:00 PM

We understand the issue is resolved now. If you're still seeing issues, please reach out.

Identified · Dec 2, 03:30 PM

The provider is actively working on the issue and we’re seeing some progress on certificate issuance. We’re still waiting on full confirmation that the fix is complete.

Investigating · Dec 2, 02:32 PM

You may see certificates stuck on 'Pending' after adding a custom domain. We’ve located an issue with a provider and are looking into it right now.

November 2025 (8 incidents)

Major · Nov 25, 08:13 PM — Resolved Nov 25, 08:26 PM

Web services (Oregon) and static sites availability disruption

3 updates
Resolved · Nov 25, 08:26 PM

This incident has been resolved.

Monitoring · Nov 25, 08:26 PM

A fix has been implemented and we are monitoring the results. Impact resulted in intermittent latency, timeouts, and errors for some services for ~6 minutes (11:46-11:52 PST).

Investigating · Nov 25, 08:13 PM

We are currently investigating an issue impacting web service and static site availability.

Minor · Nov 20, 05:53 PM — Resolved Nov 21, 01:23 AM

Increased slowness in Dashboard

4 updates
Resolved · Nov 21, 01:23 AM

The incident has been resolved.

Monitoring · Nov 20, 08:23 PM

Dashboard performance remains healthy and we continue to monitor.

Investigating · Nov 20, 06:29 PM

Dashboard performance has recovered. We are continuing to investigate the root cause.

Investigating · Nov 20, 05:53 PM

We are currently investigating this issue.

Major · Nov 20, 05:28 PM — Resolved Nov 20, 07:31 PM

Elevated rates of deploy failures

4 updates
Resolved · Nov 20, 07:31 PM

This incident has been resolved.

Identified · Nov 20, 06:52 PM

The issue has been identified and a fix is being implemented.

Investigating · Nov 20, 06:21 PM

We are continuing to investigate this issue.

Investigating · Nov 20, 05:28 PM

We are currently investigating this issue.

Major · Nov 18, 09:00 PM — Resolved Nov 18, 09:56 PM

GitHub-backed services failing to build in all regions

4 updates
Resolved · Nov 18, 09:56 PM

This incident has been resolved.

Monitoring · Nov 18, 09:39 PM

The upstream provider has implemented a fix and recovery is ongoing. We are continuing to monitor the situation.

Identified · Nov 18, 09:09 PM

An upstream provider is experiencing an outage. We are monitoring the situation.

Investigating · Nov 18, 09:00 PM

We are currently investigating this issue.

Major · Nov 18, 12:10 PM — Resolved Nov 18, 06:37 PM

An upstream provider major incident is affecting some Render services

4 updates
Resolved · Nov 18, 06:37 PM

We have observed no further impact and the upstream provider has affirmed full resolution.

Monitoring · Nov 18, 03:02 PM

The upstream provider has resolved the issue. We’re still checking to see if there’s any remaining impact on our side.

Identified · Nov 18, 01:57 PM

The upstream provider is still suffering from the incident, and we are still waiting for further mitigations from them.

Investigating · Nov 18, 12:10 PM

We’re aware of a major incident with an upstream provider that’s impacting some services on Render. You might see some 500s until it’s resolved upstream. We’re also investigating on our side.

Minor · Nov 14, 08:55 PM — Resolved Nov 14, 09:18 PM

Metrics/Logs missing for Oregon services

2 updates
Resolved · Nov 14, 09:18 PM

This incident has been resolved.

Identified · Nov 14, 08:55 PM

Metrics and Logs for services hosted in Oregon are missing due to a platform incident. We are working to resolve this issue now.

Minor · Nov 13, 03:40 PM — Resolved Nov 13, 11:00 PM

Cron Job runs cannot be cancelled from our dashboard or the API

2 updates
Resolved · Nov 13, 11:00 PM

This incident has been resolved.

Investigating · Nov 13, 03:40 PM

We’re looking into why this is happening. The cancel button on a run doesn’t actually stop it right now. The current workaround is to suspend and then unsuspend the cron to force-cancel the run. If that doesn’t do the trick, please reach out to our support team.
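For anyone scripting the workaround, the same suspend/unsuspend cycle can be driven through Render's REST API. The sketch below is illustrative only; it assumes the service suspend/resume endpoints from the public API reference, and the service ID is a placeholder.

```python
# Hedged sketch: force-cancel a stuck cron run by suspending and then resuming
# the cron job service via Render's REST API. Endpoints are assumed from the
# public API reference; the service ID below is a placeholder.
import os
import time

import requests

API = "https://api.render.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['RENDER_API_KEY']}",  # your API key
    "Accept": "application/json",
}

def suspend_then_resume(service_id: str, pause_seconds: int = 10) -> None:
    """Suspend the cron job service, wait briefly, then resume it."""
    requests.post(f"{API}/services/{service_id}/suspend", headers=HEADERS).raise_for_status()
    time.sleep(pause_seconds)  # give the platform time to stop the running instance
    requests.post(f"{API}/services/{service_id}/resume", headers=HEADERS).raise_for_status()

if __name__ == "__main__":
    suspend_then_resume("crn-xxxxxxxxxxxxxxxxxxxx")  # placeholder cron job service ID
```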

Major · Postmortem · Nov 5, 07:19 PM — Resolved Nov 5, 09:52 PM

Increased 404s in Oregon (Web Services) and Static Sites

9 updates
Postmortem · Nov 18, 06:48 PM

# Summary

As an infrastructure provider, providing a reliable platform that allows our customers to build and scale their applications with confidence is our highest obligation. We invest heavily to ensure our platform is highly reliable and secure, including in our routing layer that handles billions of HTTP requests every day. On November 5, 2025, we inadvertently rolled back a performance improvement that was gated behind a feature flag. This led to disruption in the form of intermittent 404s for some web services and static sites deployed to the Oregon region. We have fully identified the sequence of events that led to this outage and are in the process of taking steps to prevent it from recurring.

# Impact

There were two periods where some customers hosting web services and static sites in the Oregon region experienced a partial outage with intermittent 404s.

The first period occurred between 10:39 AM PST and 11:25 AM PST. At this time, two Render clusters had slightly degraded service. One cluster returned a negligible number of 404 responses, and the other cluster returned 404 responses for approximately 10% of requests.

The second period occurred between 11:59 AM PST and 12:34 PM PST and saw more significant service degradation. During this period, about 50% of all requests to services in the affected cluster received a 404 response.

All newly created services in these clusters were affected and received 404 responses during the incident. Updates to existing services were also slow to propagate. Free tier services that were recently deployed or waking from sleep were also affected.

# Root Cause

Render's routing service depends on a metadata service to receive information about the user services it routes traffic to. When the routing service first starts, and upon occasional reconnection, it requests and receives a large volume of data from the metadata service.

Earlier in 2025, we successfully deployed a memory optimization related to data transfer between the metadata and routing services using a feature flag. In late October, we removed the flag from code and redeployed, but we didn't redeploy the metadata service, which still depended on the flag. On November 5th, we cleaned up unreferenced feature flags from our system. This caused the metadata service to revert to its less efficient data transfer method, leading to memory exhaustion and crashes.

Our routing service is designed to handle metadata service outages and continue serving traffic based on its last known state. However, newly created instances that could not load their initial state were incorrectly sent requests, resulting in 404 errors.

During the first period of impact, the metadata service was crashing in two of our clusters, and only a small fraction of routing service instances were impacted. During the second period of impact, we saw a large increase in HTTP requests for services in the affected cluster. This triggered scale-ups of the routing service, all of which returned 404 errors.

# Mitigations

## Completed

* Increased memory available to the metadata service (this has since been reverted)
* Temporarily re-enabled the feature flag to support more efficient data transfer between the routing and metadata services (this has since been removed)
* Deployed the metadata service to no longer rely on the feature flag
* Enhanced our monitoring of the metadata service to alert us of this particular failure mode

## Planned

* Improve our feature flag hygiene practice to prevent the removal of a feature flag while it is still being evaluated
* Prevent the routing service from receiving traffic if it never successfully loaded state from the metadata service
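The last planned mitigation is a common readiness-gating pattern: a routing instance should not receive traffic until it has loaded its initial state at least once. A minimal illustrative sketch of that pattern (not Render's actual implementation; the `/healthz/ready` path and `load_initial_state` helper are hypothetical):

```python
# Illustrative readiness-gate sketch (not Render's code): a proxy refuses traffic
# until it has loaded its initial routing state from the metadata service.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

state_loaded = threading.Event()  # set once the first metadata snapshot arrives

def load_initial_state() -> None:
    # Hypothetical: fetch a full snapshot from the metadata service, retrying on
    # failure, and only then mark this instance as ready.
    state_loaded.set()

class Router(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz/ready":
            # Load balancers should only send traffic to instances that report ready.
            self.send_response(200 if state_loaded.is_set() else 503)
            self.end_headers()
            return
        if not state_loaded.is_set():
            # Without this guard, an empty routing table would answer unknown
            # hosts with 404s -- the failure mode described in the root cause.
            self.send_response(503)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"routed\n")

if __name__ == "__main__":
    threading.Thread(target=load_initial_state, daemon=True).start()
    HTTPServer(("0.0.0.0", 8080), Router).serve_forever()
```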

Resolved · Nov 5, 09:52 PM

This incident has been resolved.

Monitoring · Nov 5, 09:03 PM

We are continuing to monitor for any further issues.

Monitoring · Nov 5, 08:48 PM

A fix has been implemented and we are monitoring the results.

Identified · Nov 5, 08:21 PM

We are continuing to work on a fix for this issue.

Identified · Nov 5, 08:08 PM

We have identified continuing issues in Oregon. A fix is being worked on.

Monitoring · Nov 5, 07:58 PM

A fix has been implemented and we are monitoring the results.

Identified · Nov 5, 07:24 PM

The issue has been identified and a fix is being implemented.

Investigating · Nov 5, 07:19 PM

We are currently investigating the issue.

October 2025 (8 incidents)

Minor · Oct 30, 05:00 PM — Resolved Oct 30, 05:00 PM

Failure to spin free web services back up after inactivity

1 update
Resolved · Oct 30, 07:08 PM

Between 2025-10-28 at 17:00 UTC and 2025-10-30 at 17:22 UTC, a change was active that caused some free web services to fail to spin back up after inactivity. Most free web services were unaffected. The change has been reverted; any services that remain impacted should redeploy to resolve the issue.
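For anyone scripting the suggested redeploy, a minimal sketch using Render's REST API deploy-trigger endpoint is below. The endpoint is assumed from the public API reference, and the service ID is a placeholder.

```python
# Hedged sketch: trigger a redeploy for an affected service via Render's REST API.
# The POST /v1/services/{serviceId}/deploys endpoint is assumed from the public
# API reference; the service ID below is a placeholder.
import os

import requests

API = "https://api.render.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['RENDER_API_KEY']}",
    "Accept": "application/json",
}

def trigger_redeploy(service_id: str) -> str:
    """Kick off a new deploy for the service and return the deploy ID."""
    r = requests.post(f"{API}/services/{service_id}/deploys", headers=HEADERS)
    r.raise_for_status()
    return r.json()["id"]

if __name__ == "__main__":
    print(trigger_redeploy("srv-xxxxxxxxxxxxxxxxxxxx"))  # placeholder service ID
```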

Minor · Oct 28, 05:08 PM — Resolved Oct 28, 05:59 PM

Degraded builds and deploys in Virginia

3 updates
Resolved · Oct 28, 05:59 PM

This incident has been resolved.

Identified · Oct 28, 05:34 PM

An upstream provider is experiencing issues provisioning infrastructure. We continue to monitor the situation. Paid services are experiencing less delay than free services.

Investigating · Oct 28, 05:08 PM

Builds and deploys may be slower than usual. We are currently investigating this issue.

Major · Oct 23, 04:27 PM — Resolved Oct 23, 06:15 PM

Pre-deploys are failing in some regions

6 updates
Resolved · Oct 23, 06:15 PM

This incident has been resolved.

Monitoring · Oct 23, 05:22 PM

A fix has been rolled out, and a re-deploy should now work as expected. We’re still keeping an eye on how the fix performs though.

Identified · Oct 23, 04:56 PM

We’ve found the root cause and are rolling out a fix.

Investigating · Oct 23, 04:47 PM

Some pre-deploys in Singapore are also affected.

Investigating · Oct 23, 04:29 PM

We think some pre-deploys are failing in Oregon and Frankfurt. We haven’t found any other affected regions so far, but we’re actively checking.

Investigating · Oct 23, 04:27 PM

We are currently investigating this issue.

Major · Oct 20, 09:58 AM — Resolved Oct 20, 11:16 PM

An upstream provider is experiencing some issues that are affecting parts of our platform (Virginia)

11 updates
Resolved · Oct 20, 11:16 PM

This incident has been resolved.

Monitoring · Oct 20, 09:10 PM

All Render services have recovered; our upstream provider is continuing to recover. We are continuing to monitor impact.

Identified · Oct 20, 05:54 PM

Web Services (paid and free) and Static Site request latencies have returned to normal levels. Issues with PostgreSQL database creation and backup creation persist.

Identified · Oct 20, 05:10 PM

The upstream provider has not yet recovered. We are still seeing request latency for Web Services and Static Sites in Virginia, and some users are unable to create new databases or backups.

Identified · Oct 20, 04:17 PM

Requests routed to Web Services have begun experiencing issues.

Monitoring · Oct 20, 03:58 PM

We’re seeing some issues again with a few components. Database creation in Virginia might be slow or appear stuck during the creation process.

Monitoring · Oct 20, 02:15 PM

We’re no longer seeing any issues related to Postgres databases from this incident on our platform.

Monitoring · Oct 20, 12:46 PM

We are continuing to monitor for any further issues.

Monitoring · Oct 20, 12:43 PM

We’re seeing steady recovery and keeping an eye on all components to make sure everything’s fully caught up. The upstream provider is still going through its own recovery process too.

Monitoring · Oct 20, 10:13 AM

Several of our tools were also affected during that time, including support tools, so responses may have been delayed or missed between 08:00 and 09:30 UTC. We’re working through the requests as quickly as we can.

Monitoring · Oct 20, 09:58 AM

We started seeing increased errors in our infrastructure around 08:00 UTC. Parts of our platform were affected by an outage with an upstream provider. We know that new database creation and backup creation were impacted, but we’re still assessing if there’s any broader impact. We’re seeing signs of recovery now, but we’re continuing to monitor.

Minor · Oct 10, 07:34 PM — Resolved Oct 10, 09:17 PM

Incorrect IP allowlists configured for new Environments created via REST API

2 updates
Resolved · Oct 10, 09:17 PM

Changes were deployed to fix the issue with new Environments created via the REST API. All affected Environments have been updated to the default Allow-All configuration where not otherwise specified in the API call's parameters. This issue has been resolved.

Identified · Oct 10, 07:34 PM

We have identified and are working to fix Environments recently created via the REST API to ensure default IP allowlists are configured correctly. Until then, new Services created in these Environments may be responding to requests with unexpected errors.

Minor · Oct 7, 04:57 PM — Resolved Oct 7, 05:38 PM

Increased latency in Oregon region

3 updates
Resolved · Oct 7, 05:38 PM

Latency has returned to baseline levels since 16:40 UTC and no further impact has been observed.

Monitoring · Oct 7, 05:06 PM

Peak impact occurred between 16:20 and 16:40 UTC. We are currently monitoring.

Investigating · Oct 7, 04:57 PM

We are currently investigating increased latency in our Oregon region.

Major · Oct 1, 07:15 PM — Resolved Oct 1, 08:15 PM

Unable to create Postgres services or update their instance type in Oregon

3 updates
Resolved · Oct 1, 10:42 PM

This incident has now been resolved. A subset of customers in Oregon, but not all, were impacted. Affected customers were unable to create Postgres services or update the instance type of Postgres services between 19:14 and 20:15 UTC.

Monitoring · Oct 1, 08:17 PM

A fix has been implemented and we are monitoring the results.

Investigating · Oct 1, 08:15 PM

We are currently investigating this issue.

Minor · Oct 1, 02:08 PM — Resolved Oct 1, 02:57 PM

Partial degradation of service creation and deploys in Oregon

3 updates
Resolved · Oct 1, 02:57 PM

This incident has been resolved.

Monitoring · Oct 1, 02:24 PM

A fix has been implemented and we are monitoring the results.

Investigating · Oct 1, 02:08 PM

We are currently investigating this issue.

September 2025 (9 incidents)

Major · Sep 23, 06:30 PM — Resolved Sep 26, 06:19 PM

Small number of users impacted by stuck builds

4 updates
Resolved · Sep 26, 06:19 PM

This incident has been resolved.

Monitoring · Sep 25, 11:43 PM

A fix has been implemented and we are monitoring the results.

Identified · Sep 24, 10:41 PM

The issue has been identified and a fix is being implemented.

Investigating · Sep 23, 06:30 PM

We are aware of an issue resulting in stuck builds impacting a small minority of users with the "Wait" setting for their Overlapping Deploy Policy.

Major · Sep 25, 12:02 AM — Resolved Sep 25, 01:31 AM

Image-based deploys failing due to upstream provider

3 updates
Resolved · Sep 25, 01:31 AM

This incident has been resolved.

Monitoring · Sep 25, 01:14 AM

The upstream provider has rolled out a fix and is monitoring the issue. We are monitoring our systems as well.

Identified · Sep 25, 12:02 AM

Due to an outage from an upstream provider, users with image-based services are seeing failed deploys with reports of 401 errors.

Critical · Sep 22, 02:31 PM — Resolved Sep 22, 03:41 PM

Some Postgres databases can’t be created in Frankfurt

4 updates
Resolved · Sep 22, 05:49 PM

This incident has been resolved.

Identified · Sep 22, 03:40 PM

We are continuing to work on the issue.

Identified · Sep 22, 03:13 PM

We’ve identified the issue, but we’re still investigating.

Investigating · Sep 22, 02:31 PM

This doesn’t impact Postgres databases that are already running. It only partially affects Frankfurt. Any affected database that gets created will show a status of 'unknown'.

Minor · Sep 21, 10:00 PM — Resolved Sep 21, 10:00 PM

Dashboard operations degraded or failing

1 update
Resolved · Sep 23, 08:57 PM

Dashboard operations were degraded for ~30 minutes, and within that period operations were mostly failing for ~5 mins.

Minor · Sep 17, 08:30 PM — Resolved Sep 17, 08:30 PM

Issues with deploys and spinning down free services in Virginia

1 update
Resolved · Sep 17, 09:43 PM

Between 20:35 UTC and 20:53 UTC today, the process responsible for port detection and for deploys waiting on a workspace's overlapping deploy policy was unavailable. Queued deploys will proceed as expected in most cases. Additionally, free services may not have spun down when idle during that period.

Minor · Sep 15, 03:57 PM — Resolved Sep 16, 03:42 PM

Failure to pull some images from GitHub Container Registry

4 updates
Resolved · Sep 16, 03:42 PM

We no longer see elevated failure rates for images pulled from GitHub Container Registry. If the problem persists please contact GitHub or Render's support team.

Monitoring · Sep 15, 09:39 PM

Intermittent failures continue to occur for images pulled from GitHub Container Registry. We recommend pulling the image locally and contacting GitHub if the issue persists locally. We are continuing to monitor the situation.

Investigating · Sep 15, 05:36 PM

We have determined that this affects images from GitHub Container Registry only. We are continuing to investigate.

Investigating · Sep 15, 03:57 PM

We are currently investigating this issue.

Minor · Sep 15, 05:49 PM — Resolved Sep 15, 08:25 PM

Deploy failure when using some public repos from GitHub

3 updates
Resolved · Sep 15, 08:25 PM

This incident has been resolved.

Monitoring · Sep 15, 07:09 PM

Deploys are beginning to succeed when using a public repo and we are monitoring for any further issues.

Investigating · Sep 15, 05:49 PM

Deploys may fail when using some public repos from GitHub. This affects some, but not all, public repos. We are investigating.

Minor · Sep 11, 07:46 PM — Resolved Sep 11, 08:49 PM

Some Slack notifications failing August 1 - September 11

3 updates
Resolved · Sep 11, 08:49 PM

We have completed monitoring our fix, and will publish an RCA attached to this incident.

Monitoring · Sep 11, 08:39 PM

Engineers have implemented a fix and are monitoring.

Investigating · Sep 11, 07:46 PM

We are investigating reports that a subset of Slack notifications were not being delivered. Fixing this has resulted in a high volume of notifications being delivered. Some notifications may be delayed. We apologize for the noise that this may be creating in your configured channel(s).

Minor · Sep 9, 09:49 PM — Resolved Sep 9, 10:20 PM

Builds and Deploys erroring on Oregon services

3 updates
Resolved · Sep 9, 10:20 PM

This incident has been resolved.

Monitoring · Sep 9, 09:58 PM

The root cause of these failures has been addressed and build failure rates have decreased substantially. We are monitoring for any further issues.

Identified · Sep 9, 09:49 PM

Services hosted in Oregon are seeing Builds and Deploys failing due to a platform issue. We have already begun steps to resolve this issue.

August 2025 (9 incidents)

None · Aug 26, 06:28 PM — Resolved Aug 27, 07:41 PM

Incorrect bandwidth billing data in the Dashboard

3 updates
Resolved · Aug 27, 07:41 PM

This incident has been resolved.

Identified · Aug 26, 06:51 PM

We have identified the issue, understand why it's occurring, and are working on a fix.

Investigating · Aug 26, 06:28 PM

We are investigating reports of incorrect bandwidth billing data in the Dashboard.

Minor · Aug 21, 06:02 PM — Resolved Aug 21, 08:01 PM

Degraded network performance in Virginia region

4 updates
Resolved · Aug 21, 08:01 PM

This incident has been resolved.

Monitoring · Aug 21, 07:48 PM

A fix has been implemented by our upstream provider and we are monitoring the results.

Identified · Aug 21, 06:09 PM

We have identified that this is tied to network performance issues from an upstream provider.

Investigating · Aug 21, 06:02 PM

We are currently investigating this issue.

Major · Aug 20, 06:17 PM — Resolved Aug 20, 06:56 PM

Degraded Webhooks and Queued Deploys

4 updates
Resolved · Aug 20, 06:56 PM

This incident has been resolved.

Monitoring · Aug 20, 06:32 PM

A fix has been implemented and we are monitoring the results.

Investigating · Aug 20, 06:26 PM

We are continuing to investigate this issue.

Investigating · Aug 20, 06:17 PM

We are currently investigating this issue.

Minor · Aug 13, 11:08 PM — Resolved Aug 13, 11:58 PM

Delays in deploy completion

3 updates
Resolved · Aug 13, 11:58 PM

This incident has been resolved.

Monitoring · Aug 13, 11:48 PM

Performance has returned to expected levels; we are monitoring for any further issues.

Identified · Aug 13, 11:08 PM

We have been alerted to performance issues causing delayed deployments for Oregon-hosted services. Deploys are taking longer than usual to complete. We have begun steps toward remediating this issue.

Minor · Aug 12, 05:00 PM — Resolved Aug 12, 05:00 PM

Unable to issue certificate for wildcard custom domains

1 update
Resolved · Aug 12, 05:31 PM

A change on our end inadvertently prevented wildcard custom domains from getting certificates. We've implemented and confirmed a fix. For any wildcard custom domains added to a service between approximately 2025-08-08T16:00Z and 2025-08-12T17:00Z, delete the custom domain and re-add it to the service.
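For anyone scripting the delete-and-re-add step across several services, a minimal sketch using Render's REST API custom-domains endpoints is below. The endpoints are assumed from the public API reference; the service ID and domain are placeholders.

```python
# Hedged sketch: delete and re-add a wildcard custom domain so a fresh certificate
# is issued. The custom-domains endpoints are assumed from Render's public REST
# API reference; the service ID and domain below are placeholders.
import os

import requests

API = "https://api.render.com/v1"
HEADERS = {
    "Authorization": f"Bearer {os.environ['RENDER_API_KEY']}",
    "Accept": "application/json",
}

def readd_custom_domain(service_id: str, domain: str) -> None:
    """Remove the custom domain from the service, then add it back."""
    requests.delete(
        f"{API}/services/{service_id}/custom-domains/{domain}", headers=HEADERS
    ).raise_for_status()
    requests.post(
        f"{API}/services/{service_id}/custom-domains",
        headers=HEADERS,
        json={"name": domain},
    ).raise_for_status()

if __name__ == "__main__":
    readd_custom_domain("srv-xxxxxxxxxxxxxxxxxxxx", "*.example.com")  # placeholders
```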

None · Aug 7, 07:32 AM — Resolved Aug 7, 08:15 AM

Singapore region issues

3 updates
Resolved · Aug 7, 08:15 AM

This incident has been resolved.

Monitoring · Aug 7, 07:58 AM

A fix has been implemented and we are monitoring the results.

Investigating · Aug 7, 07:32 AM

We are currently investigating reports of issues with services in the Singapore region.

Major · Aug 5, 07:47 PM — Resolved Aug 6, 12:08 AM

Some services may have runtime errors

6 updates
Resolved · Aug 6, 12:08 AM

This incident has been resolved.

Monitoring · Aug 5, 11:25 PM

We have not observed a recurrence of the issue after our fix and continue to monitor for any further issues.

Monitoring · Aug 5, 10:25 PM

A fix has been implemented and we are monitoring the results.

Identified · Aug 5, 09:26 PM

We are continuing to look into this and determine the best remediation strategy.

Identified · Aug 5, 07:54 PM

An underlying system package upgrade in our native environment runtime is causing a segfault for a small percentage of users. We are working on a fix.

Investigating · Aug 5, 07:47 PM

We are currently investigating this issue.

Major · Aug 4, 06:26 PM — Resolved Aug 4, 07:06 PM

Deploy failure on downloading bun

2 updates
Resolved · Aug 4, 07:06 PM

This incident has been resolved.

Investigating · Aug 4, 06:26 PM

Services, even those that don't use bun, may be experiencing deploy failures because the bun download cannot be found. We're investigating the issue.

None · Aug 1, 04:00 PM — Resolved Aug 1, 04:00 PM

Point-In-Time Recovery restores degraded

1 update
Resolved · Aug 1, 11:47 PM

Some Point-In-Time Recovery (PITR) backups of Render Postgres services hosted in Oregon began to error at approximately 10 AM Pacific today (August 1). Attempts to restore backups from times later than 10 AM may fail due to the inability to retrieve the data necessary to accomplish the restore. Restores for timeframes prior to 10 AM today will succeed. We have already restored PITR coverage for the vast majority of Postgres services. Fewer than 10 services remain, and it may take until mid-day tomorrow for their coverage to be restored.

July 2025 (2 incidents)

Major · Jul 22, 03:41 PM — Resolved Jul 22, 04:44 PM

The Render dashboard and REST API are slow

4 updates
Resolved · Jul 22, 04:44 PM

This incident has been resolved.

Monitoring · Jul 22, 03:56 PM

A fix has been implemented and we are monitoring the results.

Investigating · Jul 22, 03:54 PM

We are continuing to investigate this issue.

Investigating · Jul 22, 03:41 PM

We are currently investigating this issue.

Major · Jul 18, 07:56 PM — Resolved Jul 18, 08:18 PM

Inability to create or update Postgres services for some users in Oregon

2 updates
Resolved · Jul 18, 08:18 PM

This incident has been resolved.

Monitoring · Jul 18, 07:56 PM

We have identified the issue, implemented a fix, and are monitoring for full restoration. This affects some users in the Oregon region, but not all.