Supabase Outage History
Past incidents and downtime events
Complete history of Supabase outages, incidents, and service disruptions. Showing 50 most recent incidents.
February 2026(15 incidents)
Users Experiencing Network Connectivity Problems (India Region)
6 updates
We are currently aware of an issue affecting reachability of Supabase projects for a subset of users based in India. Supabase infrastructure remains fully operational, and we have confirmed that the projects of impacted users remain accessible from regions outside India. Our investigation has confirmed that a service provider in the region is not serving the correct DNS responses for Supabase projects from their internal DNS resolvers. We are following up through all available channels to work with the ISP to resolve this issue, and we advise affected customers to also report the issue to their ISP. The best workaround we have currently is for affected users to use an alternative DNS provider such as Cloudflare (https://1.1.1.1), Google (https://developers.google.com/speed/public-dns), or Quad9 (https://quad9.net/). Alternatively, users can use a VPN to avoid this issue. We will post a further update once the ISP has confirmed that the issue is resolved or additional information becomes available.
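For readers who want to check whether the workaround above applies to them, one quick test is to resolve their project hostname through an explicit alternative resolver using Node's `dns` module. This is a minimal sketch; the hostname in the comment is a placeholder, not a real project.

```typescript
// Minimal sketch: resolve hostnames through an explicit DNS server,
// bypassing the ISP's default resolver. Requires Node.js.
import { promises as dns } from "node:dns";

function makeResolver(servers: string[]) {
  const resolver = new dns.Resolver();
  resolver.setServers(servers); // e.g. Cloudflare's public resolver
  return resolver;
}

// Usage (needs network access; substitute your real project hostname):
// const r = makeResolver(["1.1.1.1"]);
// const addrs = await r.resolve4("example-ref.supabase.co");
// console.log(addrs);
```

If the lookup succeeds through 1.1.1.1 but fails through the system resolver, the problem is almost certainly the ISP's DNS rather than the project itself.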
We have confirmed that the projects of impacted users remain accessible from regions outside India. Our investigation is currently focused on an ISP-level block affecting users within India. We advise affected customers to also report the issue to their ISP. Supabase infrastructure remains fully operational. The best workaround we have currently is for affected users to use an alternative DNS provider such as Cloudflare (https://1.1.1.1), Google (https://developers.google.com/speed/public-dns), or Quad9 (https://quad9.net/).
We have confirmed that the projects of impacted users remain accessible from regions outside India. Our investigation is currently focused on a potential ISP-level block affecting users within India.
We continue to work with customers to address their connectivity issues. We are also contacting the relevant network providers to work with them to resolve the issue.
A DNS resolution issue is affecting customers in AP-South-1 and AP-Southeast-2. Certain ISP DNS servers appear to be unavailable, preventing connections to Supabase endpoints. Some users reported that switching to Cloudflare DNS (1.1.1.1) may restore connectivity. We advise affected customers to also report the issue to their ISP. Supabase infrastructure remains fully operational.
We have reports of customers having difficulties connecting to Supabase from locations accessing AP-south-1, including a number of reports from India. We are actively investigating the issue.
Resource Metrics Collection Delays in US-West-2
6 updates
Metrics collection continues to operate normally in US-West-2 and has been fully restored, with no further instability observed.
The team has deployed additional capacity and has restored metrics collection operations. All metrics collection in region is fully operational and the team is actively monitoring the fixes deployed.
The team has deployed a fix and is actively monitoring recovery of partially degraded metrics collection. Some US-West-2 users may see metrics periodically available, but others may still see missing metrics. This does not impact project availability or functionality, which remain healthy.
The team is continuing to work on a fix for metrics collection. Some US-West-2 users may see metrics periodically available, but others may still see missing metrics. We'll continue to update here as we have more information. This does not impact project availability or functionality, which remain healthy.
We have identified the components resulting in the delayed metrics collection, and the team is working on a fix. This does not impact project availability or functionality, which remain healthy.
We have discovered an issue with metrics collection for projects in US-West-2. Some users may not be able to see metrics for these projects in the dashboard, and functions such as automatic disk resizes may be delayed as well. This does not impact project availability or functionality, which remain healthy. The team is currently investigating.
Errors across logs and observability services
10 updates
Things have remained stable, and we are confident that things are now resolved. The impact of this event was limited to log visibility and log retention. Some logs between 20:20 UTC and 23:23 UTC on Friday, Feb 20, 2026 may not be available.
The configuration update has brought error rates and stability back to normal, all logging and observability data should now be accessible. The impact of this event was limited to log visibility and log retention. Some logs between 20:20 UTC and 23:23 UTC on Friday, Feb 20, 2026 may not be available. We will continue to monitor to ensure things continue to look good.
The team is currently pushing a configuration change we hope will finish stabilizing the analytics services. We will continue to update as we have more information. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information.
The analytics service continues to be periodically degraded, which means some users may still periodically see issues seeing logging and observability information. We have added additional resources to the logging service to increase stability. The team is continuing their work to stabilize all analytics functionality. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information.
We are still seeing logging and observability services continue to stabilize, but some users may continue to see some issues. The team is continuing to work on full stabilization efforts. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information.
We are still seeing logging and observability services continue to stabilize, but some users may continue to see some issues. The team is continuing to work on full stabilization efforts. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information.
We've implemented a fix, and we are seeing the affected services begin to stabilize, but access to logs, observability metrics, and edge function invocation information may still be spotty. The team is continuing to work on full stabilization efforts. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information.
We have identified an issue resulting in missing log, observability, and edge function information. Projects and services are up and running, edge function invocations are unaffected, and project creation/adjustments are unaffected. This only affects visibility of the above mentioned information. We will have another update within 20 minutes.
We are investigating errors across our logs and observability services and will provide an update soon.
We are investigating errors with Edge Functions and will provide an update soon.
Degraded Supavisor performance in us-east-1
3 updates
Performance has returned to normal levels. This incident has been resolved.
We've identified that degraded performance in a Supavisor cluster resulted in elevated connection latency and increased p99 response times. The problematic node has been replaced and performance is returning to normal levels. We will continue to monitor.
Starting from Feb 22:55 UTC, degraded performance in one of our Supavisor clusters resulted in elevated connection latency and increased query p99 response times.
Elevated Supavisor query response times in us-east-1
1 update
Degraded performance in a Supavisor cluster resulted in elevated connection latency and increased p99 response times between 2026-02-15 22:00 and 2026-02-16 01:00. The problematic node was replaced and performance returned to normal levels.
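For context, the "p99" figure cited in updates like the one above is the 99th-percentile latency: the value at or below which 99% of sampled requests fall, so a handful of slow connections can inflate it while the median stays flat. A small illustrative computation using the nearest-rank method (the sample data is invented for demonstration):

```typescript
// Illustrative: computing percentile latencies from request samples
// using the nearest-rank method. Sample data is made up.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  // nearest rank: smallest value at or above the p-th percentile position
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Ten hypothetical connection latencies in milliseconds; one slow outlier.
const latenciesMs = [12, 15, 11, 14, 500, 13, 16, 12, 15, 13];
const p50 = percentile(latenciesMs, 50); // median stays low
const p99 = percentile(latenciesMs, 99); // dominated by the outlier
```

This is why an incident can show badly elevated p99 response times while most queries still feel normal.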
Outage in US-East-2 (Ohio)
8 updates
We’ve published a [post-mortem on our blog](https://supabase.com/blog/supabase-incident-on-february-12-2026).
Service has been fully restored. All impacted jobs have been requeued and are currently processing normally. We will be publishing a public post-mortem with additional details about this incident.
The revert of the change helped and most of the metrics are back to the pre incident levels. We are requeuing failed jobs and monitoring to make sure the issue doesn’t come back.
We identified a potential internal networking configuration that may have caused the incident. We have since reverted that change and it appears services are recovering.
We are still investigating the root cause for this incident. us-east-2 region isn’t receiving any network traffic at this point. We are also seeing some API request errors in other US regions, but not as high as us-east-2.
We continue to see increased levels of 500 errors across US-West and US-East regions. Our engineering team is investigating the issue.
We have identified the issue as a problem in US-West with some impact in US-East; the impact appears to be primarily on reads rather than writes.
We have identified increasing 500 errors in some US regions and are actively investigating the cause.
Regional network issues in Yemen
4 updates
This issue is now resolved.
We continue to work with network vendors to mitigate this issue. In the interim, using a VPN will give you access to your Supabase project.
We are actively working with network vendors to mitigate this issue.
We have noticed increased connection failures to supabase.co domains from connections originating in Yemen. Projects are up and running; this only impacts connections from this region. We are working to resolve this issue with the appropriate parties and will provide an update soon. We have specifically had reports of connection issues via these ISPs: Yemen Mobile, Sabafon, Y-Telecom, and Spacetel.
High connection latency via shared Pooler in us-west-1
4 updates
This incident has been resolved.
The team noticed that some connection pools had workers stuck as a consequence of the previous issue, which could cause query failures. The stuck workers have now been restarted.
We've removed the problematic cluster node and latency returned to the normal level. We are now monitoring.
We’re investigating high latency in us-west-1 affecting some connections to databases via our shared connection pooler.
Edge Function issues when using supabase-js@2.95.0
6 updates
Function deploys are working normally now. We are resolving this incident.
esm.sh maintainers have implemented a fix which should unblock deploys of Edge Functions that were importing supabase-js from esm.sh.
We’ve attempted several fixes on the npm registry side, but the esm.sh issue persists. We are continuing to investigate. In the meantime, use `npm:` or `jsr:` specifiers, or jsDelivr as an alternative CDN: `import { createClient } from "npm:@supabase/supabase-js@2.95.0"` or `import { createClient } from "https://cdn.jsdelivr.net/npm/@supabase/supabase-js@2.95.0/+esm"`
We are removing the v2.95.0 release, so the @2 tag will resolve to v2.94.1. This should resolve the majority of issues. If you previously switched to using v2.95.0 directly, please switch to v2.94.1.
We recommend importing via `npm:` or `jsr:` specifiers instead of CDN imports: `import { createClient } from "npm:@supabase/supabase-js@2.95.0"` These are more reliable than third-party CDNs. We’ve reached out to the esm.sh maintainer and are tracking the issue upstream.
Edge Functions using @supabase/supabase-js@2.95.0 from esm.sh are failing. The issue is specific to the esm.sh CDN. As a workaround while we investigate, pin to v2.94.0 or use cdn.jsdelivr.net (https://cdn.jsdelivr.net/npm/@supabase/supabase-js@2.95.0/+esm).
Storage upload via S3 Protocol is degraded
2 updates
We have deployed a fix and confirmed that the issue is fully resolved.
We are aware of issues with S3 upload protocols affecting projects in us-east-1 and ap-northeast-2. We are working on a fix.
Realtime cluster instability
3 updates
This incident has been resolved.
We are observing a recovery in latency and error rates after the capacity increase. We will be monitoring to make sure the problem is fully resolved.
We're seeing elevated errors and latency in Realtime service. We are already working on increasing cluster capacity to stabilize it.
Reports of DNS lookup errors for some customers in the United States
6 updates
This incident has been resolved.
Our upstream provider has pushed a fix, and requests appear to be back to normal. According to data from our partner, users in the US South were most affected, particularly in and around Texas, though some users in and around Georgia may also have seen issues; these should all be resolved at this point. Users connecting to project URLs should no longer see "DNS address could not be found" errors. We are continuing to monitor the situation.
Our upstream provider has identified the specific issues with DNS resolution and has begun working on a fix. Projects and the underlying infrastructure are unaffected, and remain running and available for users without affected DNS providers. Affected users will see errors similar to "DNS address could not be found".
We are continuing to work on this with our upstream network provider. Projects and the underlying infrastructure are unaffected, and remain running and available for users without affected DNS Providers. Affected users will see errors similar to "DNS address could not be found"
Our upstream network provider has identified an issue and is working on a fix. Projects and the underlying infrastructure are unaffected, and remain running and available for users without affected DNS Providers. Affected users will see errors similar to "DNS address could not be found"
We are seeing reports of DNS Lookup failures for users based in the United States. The issue does not seem to affect all DNS providers, and we are still working to narrow down affected areas. Projects and the underlying infrastructure are unaffected, and remain running and available for users without affected DNS Providers. Affected users will see errors similar to "DNS address could not be found"
Issues with setting DNS for newly created and unpaused projects
4 updates
This issue is now resolved.
New project creation and unpauses are now available. We are working on remediation of projects impacted during the incident and will continue to monitor.
We have identified the issue and are working towards a resolution.
We are investigating issues affecting instance operations globally. This may impact new project creation and project unpauses. Existing projects remain unaffected.
Project Clone Failures (Beta Feature)
4 updates
This issue is now resolved. Affected customers should delete any failed cloned projects and try again with a new clone.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating an issue with our beta project cloning feature (Restore to a New Project). This has been disabled while we conduct our investigations and we will provide an update soon.
Degraded performance for Supavisor (multi-tenant connection pooler) in us-east-1
3 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are currently investigating this issue.
January 2026(7 incidents)
Errors when Updating auth configs with github branching
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified the issue causing errors for these changes, and the team is working on a fix.
Some users are seeing errors when attempting to update auth configs on branches with github branching. This is only affecting branches, and is not affecting production projects. The team is currently investigating.
Project Creation Issues - eu-west-1
7 updates
This incident has been resolved.
Project actions (creation, unpauses, read replica creation, upgrades) have been re-enabled, and the team is monitoring.
We have received word from our upstream partner that their issue has been resolved. The team is currently testing all functionality on our side and cleaning up previously errored requests. Once we're confident things are back to normal, we'll resume project actions in the region.
We are continuing to monitor for resolution of the upstream provider incident.
We continue to monitor for resolution of the upstream provider incident.
Instance operations in eu-west-1 are impacted by an ongoing infrastructure provider incident causing network issues in the region. Affected operations include new project creation, unpauses, read replica creation, and upgrades. Existing projects are unaffected. Our provider is aware of the issue and we already see error rates and latency trending down. We will update as the situation progresses.
We are investigating issues affecting instance operations in the eu-west-1 region. This may impact new project creation, project unpauses, read replica creation, and upgrades. Existing projects remain unaffected.
Degraded Performance for Supavisor
1 update
Between 6am and 7am UTC our Supavisor cluster experienced instability, resulting in elevated error rates and increased client reconnects. The issue has since been resolved, and the service is operating normally. We are currently reviewing preventative measures to reduce the likelihood of recurrence.
Edge Function increased error rate in EU-West-1
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Elevated Realtime error rates in ap-southeast-1
4 updates
We've confirmed that the elevated 5xx errors and high response times were isolated to the ap-southeast-1 region. Other regions remained healthy throughout the incident. Dashboards showed a sharp spike in 5xx errors and response times for ap-southeast-1, with metrics rapidly recovering after remediation steps. For more details, see the realtime-service combined dashboard. The incident was the result of a resource constraint. We have added the required resources and will take this into account in our future planning. We're continuing to monitor system health to confirm stability.
We are continuing to monitor for any further issues.
The performance of the impacted components has improved.
We're currently investigating increased error rates across all regions
Some edge function deployment operations are timing out
4 updates
ESM has resolved their issue. All function deploy actions should now be working.
To address function deploy issues, replace esm.sh with npm: or jsr imports instead. This workaround is confirmed to work.
For users experiencing this issue: esm.sh is having issues. Users can replace esm.sh with npm: or jsr imports to get function deploys to work.
We are seeing increased timeouts when deploying edge functions and are currently investigating. Already deployed functions are not affected.
Degraded log ingestion
6 updates
This incident has been resolved.
Log ingestion has been fully restored across all services. We will continue to monitor the system to ensure stability.
We are continuing to work on restoring full log ingestion for Postgres, PostgREST, and Auth services.
We have stabilized the ingestion servers. Error rates are back to normal and the Logflare dashboard is functioning. Our team is now working to restore log ingestion for Postgres, PostgREST, and Auth services.
We continue to see degraded log ingestion across all regions. Logflare dashboard may be temporarily unavailable as well while we take steps to mitigate the issue and restore normal service. Our engineering team is actively working on a resolution. We will provide further updates as we make progress.
We are seeing degradation in log ingestion in all regions, our engineering team is investigating this issue.
December 2025(10 incidents)
Shared SMTP service has low deliverability for some users
11 updates
This incident has been resolved.
A fix has been deployed and all new projects are fully operational. We've finished backporting the fix to affected projects and are now monitoring.
A fix has been deployed, all new projects are fully operational. We're currently backporting the fix to affected projects (69% complete). An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We've deployed a fix and confirmed all new projects are fully operational. Remediation for previously affected projects is underway and 50% complete. An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
A fix has been deployed, all new projects are now fully operational. We're currently backporting the fix to affected projects (30% complete). An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We have implemented a fix, and all new projects should now be fully operational. We are rolling out a backporting fix to the affected projects; this is going to take around 3 hours. An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We have implemented a fix, and all new projects should now be fully operational. We are now working on backporting it to the affected projects. An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We have identified the issue, and we are working on the fix. An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We are continuing to investigate this issue. An immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We are currently investigating an issue where some users have reported failures when sending email via the Shared SMTP service. This is also impacting newly created projects. If you are impacted and need email services working immediately, you can work around this by following best practice for production use cases and configuring custom SMTP: https://supabase.com/docs/guides/auth/auth-smtp
We are currently investigating an issue where some users have reported failures when sending email via the Shared SMTP service. If you are impacted and need email services working immediately, you can work around this by following best practice for production use cases and configuring custom SMTP: https://supabase.com/docs/guides/auth/auth-smtp
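The custom-SMTP workaround recommended throughout this incident is configured per project; for teams scripting it, the shape of such a change might look like the sketch below. The field names and Management API endpoint here are assumptions for illustration, not documented values; follow the linked auth-smtp guide for the authoritative steps.

```typescript
// Illustrative only: field names and endpoint are assumptions, not a
// documented Supabase API surface. See the auth-smtp guide for real steps.
interface SmtpConfig {
  smtp_host: string;
  smtp_port: number;
  smtp_user: string;
  smtp_pass: string;
  smtp_admin_email: string; // "from" address for auth emails
}

function buildSmtpPayload(cfg: SmtpConfig): string {
  return JSON.stringify(cfg);
}

// Hypothetical usage against a hypothetical endpoint (requires auth token):
// await fetch(`https://api.supabase.com/v1/projects/${projectRef}/config/auth`, {
//   method: "PATCH",
//   headers: { Authorization: `Bearer ${accessToken}`, "Content-Type": "application/json" },
//   body: buildSmtpPayload({
//     smtp_host: "smtp.example.com",
//     smtp_port: 587,
//     smtp_user: "apikey",
//     smtp_pass: "secret",
//     smtp_admin_email: "auth@example.com",
//   }),
// });
```

Moving production email off the shared service also sidesteps deliverability incidents like this one entirely, which is why the updates repeatedly call it best practice.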
Shared SMTP service has low deliverability
9 updates
We worked with our email provider to restore the shared SMTP service, and email delivery has now fully returned to normal. All emails are being delivered as expected. We’ll be following up internally to implement preventative measures to reduce the likelihood of this happening again.
We are continuing to work on a solution for this issue. To clarify the impact: this only affects the shared SMTP service, which is intended for testing use only. In parallel, the team is continuing to work with our mail provider and is preparing a migration to a different service; we will go with whichever solution lands first, but the migration must be done carefully to avoid further issues. If you need email services working immediately, you can still work around this by following best practice for production use cases and configuring custom SMTP: https://supabase.com/docs/guides/auth/auth-smtp We expect to have an update no later than December 22 at 0900 UTC; however, we will share more information if it becomes available earlier.
We are continuing to work on a solution for this issue. You can still work around this immediately by following best practice for production use cases and configuring custom SMTP: https://supabase.com/docs/guides/auth/auth-smtp
We are working with the email provider for shared SMTP services. For now, users without custom SMTP will continue to see email deliverability issues for auth-related emails. We are also exploring other provider options in the event we need to migrate in the near term. You can still work around this immediately by following best practice for production use cases and configuring custom SMTP: https://supabase.com/docs/guides/auth/auth-smtp
We have identified the issues with sending mail via the shared SMTP service and are working on a fix. For any production workloads, we would still strongly encourage setting up custom SMTP on your projects. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
All projects using Supabase shared SMTP are experiencing emails going undelivered. We are currently investigating, but an immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
We are continuing to investigate this issue.
All projects using Supabase shared SMTP are experiencing emails going undelivered. We are currently investigating, but an immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
All projects using Supabase shared SMTP are experiencing emails going undelivered. We are currently investigating, but an immediate workaround is to use custom SMTP. More information available on configuring that here: https://supabase.com/docs/guides/auth/auth-smtp
Elevated 5xx error rates
1 update
An upstream provider was experiencing an issue which led to increased 5xx error rates between 11:19 UTC and 12:34 UTC.
Increased latency and occasional timeouts on requests to Data APIs for projects in US-East-1
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We have identified increased errors with an upstream provider. We are currently working with them on resolution.
We are currently investigating this issue.
Project creation and restoration intermittent issues
5 updates
This incident has been resolved.
We’ve identified the potential root cause of today’s disruption and are validating a long-term fix to prevent recurrence. All systems are operational.
Project creation and restoration are working again as usual. Our testing confirms normal operation. We’re continuing to investigate the root cause and will provide further updates soon.
The problem has been identified and we're working towards a resolution. Existing projects and running services continue to operate as usual.
New project creation and project restores are currently disabled due to an ongoing issue. Our engineering team is investigating the root cause and working on a fix. Existing projects and running services continue to operate as usual.
Project creation failing due to AMI permissions issues in ap-northeast-1 (Tokyo)
1 update
We briefly noticed increased error rates in ap-northeast-1 (Tokyo). The problem has been identified and resolved.
Connectivity issues with projects
5 updates
This incident has been resolved.
The problem was fixed and we are now monitoring. We had a complete outage of services going through the API Gateway due to issues with our upstream provider. Database connections were not affected; the Dashboard was partially affected.
Our upstream provider has implemented a fix, and traffic levels have recovered a few minutes ago.
There is an issue with our upstream provider. We are still identifying the exact scope and engaging with our provider to understand the root cause of the problem.
We've spotted issues with our upstream provider. We're currently investigating the impact, and will provide an update soon.
Edge Function invocations are experiencing high error rates in ap-northeast-2 (Seoul) region
3 updates
This incident has been resolved.
Issues were encountered with the latest deploy. We've reverted the change and error rates have reduced. We'll continue to monitor.
We are currently investigating reports that Edge Function invocations are experiencing high error rates in ap-northeast-2 (Seoul) region. We will provide additional information as soon as it is available.
Edge Function invocations are experiencing high error rates in ap-northeast-2 (Seoul) region
2 updates
This incident has been resolved.
We’ve reverted the latest deployment and are continuing to monitor.
Degraded Supavisor performance in US-West-1 (Northern California)
5 updates
This incident has been resolved.
The fix was deployed and the latency stabilized. We are now monitoring to make sure that the issue was fully resolved.
We have identified the problematic Supavisor node and we are working on the fix.
We are currently investigating reports of increased Supavisor connection and query execution times in us-west-1 (Northern California). We'll provide further updates as soon as we have additional information.
We are currently investigating reports of increased Supavisor connection times in us-west-1 (Northern California). We'll provide further updates as soon as we have additional information.
November 2025(8 incidents)
We are currently investigating an issue where users are unable to view the Auth config in the dashboard
4 updates
This incident has been resolved.
The fix was deployed, and the Auth config is now accessible again. We will continue to monitor to make sure that the issue is fully resolved.
We have identified the issue and our Engineering team is working to resolve. Existing auth traffic is not impacted.
We are currently investigating this issue.
We are investigating reports of issues with requests failing across the platform
6 updates
Between 09:26 UTC and 09:55 UTC on November 24th 2025, customer projects returned HTTP 556 and 500 errors for 90% of requests to multiple products. ### Timeline | Time | Description | | --- | --- | | 09:26 UTC | API Gateway deployment. Customer impact starts. | | 09:30 UTC \(\+04 mins\) | Multiple product teams report monitoring anomalies and an influx of support tickets indicating problems with the Management API. | | 09:38 UTC \(\+12 mins\) | Incident is declared internally. | | 09:40 UTC \(\+14 mins\) | Status page post is created. | | 09:43 UTC \(\+17 mins\) | A recent deploy was rolled back as a precautionary step. | | 09:43 UTC \(\+17 mins\) | The team continued investigating various causes to the issue. We saw varied reports of Management API downtime, as well as errors manifesting in the Product services. | | 09:52 UTC \(\+26 mins\) | Root cause identified as a recent API Gateway release. | | 09:55 UTC \(\+29 mins\) | Rollback of API Gateway release. Customer impact ends. | ### Who was affected? 90% of requests via our API Gateway resulted in errors, which includes requests for our Data API, Auth, Storage, Realtime and Functions products - affecting customer projects in all regions. Direct Database connections and Supavisor connections were not impacted. Additionally our Management API was affected, and customers may have experienced errors when taking actions to manage their Supabase organization or projects. | **Service** | **Impact** | | --- | --- | | Data API | Impacted | | Auth | Impacted | | Storage | Impacted | | Realtime | Impacted | | Functions | Impacted | | Management API | Partially Impacted | | Direct Database connections | Not impacted | | Supavisor connections | Not impacted | ### What happened? At 09:26 UTC we deployed a release to our API Gateway service, which uses feature flags: configuration settings that allow us to enable or disable features without redeploying code. 
In this deployment there was a missing feature flag: the value was undefined in production, and the code did not gracefully handle undefined feature flag values. The release had been tested extensively in our staging environment since November 20th 2025 with no issues; however, the staging tests did not account for how an undefined feature flag value in production could affect the release. Because of this overconfidence in our testing, we enabled an immediate global release rather than a gradual rollout. The immediate global rollout caused the issue to have a larger impact than it otherwise would have had.

The team rolled back a recent deploy relating to the Management API as a precautionary step while they continued to investigate. This did not solve the issue. Once the offending commit was identified, the API Gateway release was rolled back, resolving the issue.

### Fixing the root cause

To guarantee that our code properly supports undefined feature flags, we now apply TypeScript's `Partial` utility type to the `FeatureFlags` interface. This ensures that every feature flag property is automatically marked as optional, eliminating the risk of runtime errors from missing values:

```
// Make all feature flags optional: enforced by type system
type FeatureFlagPartial = Partial<FeatureFlags>

// Internal type that includes feature flags: only used by getFeatureFlags
type EnvironmentInternal = Config & FeatureFlagPartial
```

### What will we do to mitigate problems like this in the future?

1. **Feature Flag Resilience** - we have improved our API Gateway to ensure that we gracefully support undefined feature flag values. We are making the same changes to all systems across the platform.
2. **Improved Deployment Strategy** - we are enforcing changes to our Release Management process, so that global rollouts are not possible except in the explicit case of an override due to a hotfix or security concern. All services will follow a gradual staged rollout.
3. **Enhanced Observability** - we are integrating additional API Gateway metrics and alerts into our monitoring stack. This work has been prioritized to ensure similar incidents are detected immediately. This will bring the API Gateway in line with our standard for service monitoring.
4. **Fixing second-order effects** - during the incident, there were reports of BigQuery errors. This was a downstream effect: as more customers logged in to check their logs and observability, we hit rate limits on our logging service. We are already in the process of migrating to a new logging backend that will prevent these errors in the future.

### To our customers

I'm sorry if this affected you. Supabase is the core infrastructure for millions of developers. This outage was avoidable, and the relaxed processes were unacceptable. Our focus as a company is to become the most reliable service available. This outage is our responsibility and we’ll use it as an important lesson for several areas of improvement. On behalf of the Supabase team - we’re sorry, and we’ll do better.

Paul, CEO & cofounder
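The pattern described in the postmortem above can be sketched in a few lines of TypeScript. This is an illustrative example only, not Supabase's actual gateway code; the flag names and defaults are hypothetical. Once flags are typed with `Partial`, the compiler forces every read site to handle a missing flag:

```typescript
// Hypothetical flag set, for illustration only.
interface FeatureFlags {
  newRoutingEnabled: boolean;
  maxRetries: number;
}

// Partial<FeatureFlags> marks every flag optional, so the compiler
// rejects any read that assumes a flag is always defined.
type FeatureFlagPartial = Partial<FeatureFlags>;

// Resolve flags with explicit safe defaults instead of crashing on undefined.
function getFeatureFlags(raw: FeatureFlagPartial): FeatureFlags {
  return {
    newRoutingEnabled: raw.newRoutingEnabled ?? false, // default: feature off
    maxRetries: raw.maxRetries ?? 3,                   // default: 3 retries
  };
}

// An empty environment (every flag undefined) still yields usable values.
const flags = getFeatureFlags({});
console.log(flags.newRoutingEnabled, flags.maxRetries); // false 3
```

The nullish-coalescing defaults mean a flag missing from production configuration degrades to a known-safe behavior rather than a runtime error.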
This incident has been resolved.
We rolled back a deployment to our API Gateway and we can see requests recovering. We will continue to monitor.
We are continuing to work on a fix for this issue.
We have identified issues with failing requests across a range of services and customer projects. Our Engineering team is working to resolve this now.
We are currently investigating this issue.
Disruption in new project creation and project configuration updates
9 updates
This incident has been resolved.
Things are continuing to look healthy, but out of an abundance of caution, we're going to keep this incident open and continue to monitor.
A fix has been implemented and we are monitoring the results.
We're continuing to spread load across multiple regions. Management API responsiveness should be improving as we continue this process. New project creation, configuration updates, and certain dashboard actions should now be working, but you may still see occasional slowness. All existing projects, APIs, and databases continue to operate normally.
The underlying issues are improving, and we are spreading load across globally distributed services. This should result in lower latencies as the rebalancing completes. You may still see occasional issues with new project creation, configuration updates, and certain dashboard actions until this is done. All existing projects, APIs, and databases continue to operate normally.
The team is rolling out another change which we hope will reduce error rates, though it will result in longer response latencies for the affected activities. This will be done gradually so as not to cause any further disruption.
The team is continuing to work on recovery. New project creation, Configuration updates, and certain dashboard actions are still affected. All existing projects, APIs, and databases continue to operate normally.
We have identified the source of the issue and a fix has already been pushed. We are seeing much lower load on the management layer, and are keeping an eye on its effects on recovery.
We are currently investigating an issue affecting some management-layer services, which may impact new project creation, configuration updates, and certain dashboard actions. All existing projects, APIs, and databases continue to operate normally. Our engineering team is actively working to resolve this. We will provide updates as soon as more information is available.
Global upstream provider outage, platform-level and project-level services impacted
10 updates
This incident has been resolved.
We received reports about increased error rates for PostgREST for several projects. After an initial investigation, we shared this with our upstream provider.
All services are operational. We continue to monitor.
The fix was implemented, and we were able to confirm that error rates are back to normal for all services. We are monitoring to make sure the issue doesn’t resurface.
We continue to monitor the issue resolution process with our upstream provider.
We still see elevated error rates, but the overall volume of errors is stabilizing gradually. We continue to monitor the issue's resolution with our upstream provider.
We are continuing to work on a fix for this issue.
We have observed that services slowly recover, but error rates are still elevated. We continue to engage and monitor the issue with the upstream provider.
A global upstream provider is currently experiencing an outage which is impacting platform-level and project-level services.
Some users seeing storage request failures in us-east-2
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
The issue has been identified and a fix is being implemented.
We are currently investigating this issue.
Increased error rates for Edge functions
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
Edge Functions are globally affected across all regions, and a fix is being rolled out.
We are currently investigating this issue.
Delays and intermittent drops in log ingestion across multiple projects
5 updates
This incident has been resolved.
We’ve implemented a temporary workaround and 503 errors are back to normal levels. Our upstream provider is still working on a permanent fix. We will continue to follow the situation closely and provide further updates.
Our upstream provider has identified the issue and is working on a fix. Log ingestion is slowly getting back to normal, however we are still seeing elevated error levels. We will continue to monitor and provide updates as the situation improves.
Our upstream provider has acknowledged the issue and their team is actively investigating the root cause. In the meantime, we are exploring workarounds to minimize the impact on log ingestion. We will continue to monitor the situation closely and provide updates as we receive more information.
We’re experiencing delays and intermittent drops in log ingestion across multiple projects.

Impact: All products are impacted to varying degrees. Where possible we will look to retry and backfill, but some logs may be permanently lost. The cause of the issue looks to be rate limiting by an upstream provider, and we are working with them to resolve the issue.
Some Users seeing Auth failures with deep-linked redirect URLs after sign-in
5 updates
This incident has been resolved.
The fix has been completed across all regions. Email link and OAuth sign-in with deep-link redirect URLs should now be working correctly on iOS and Android apps. We are continuing to monitor. If you’re still experiencing issues with your project, please contact support.
The fix rollout is in progress across all regions. We’re aiming to complete it today.
Email link and web-based OAuth sign-in on iOS and Android apps might be broken if the redirect URL used is a deep link that only has a scheme, like com.example.app://. Email + password, and native Apple (on iOS) or Google (on Android) sign-in are not affected. The team is rolling out a fix.
We have identified the issue and are working on a fix. Email link and OAuth sign-in on iOS and Android apps might be broken if the redirect URL is a deep link that only has a scheme, like com.example.app://. The team is rolling out a fix.
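The affected redirect shape described in this incident is a deep link with only a scheme and nothing after the `://`. A minimal TypeScript sketch of how an app could detect that shape; the helper name and regex are ours, not part of any Supabase API:

```typescript
// Returns true for scheme-only deep links like "com.example.app://",
// false for URLs with a host or path like "com.example.app://callback".
function isSchemeOnlyDeepLink(redirectTo: string): boolean {
  // Capture the scheme, then everything after "://".
  const match = redirectTo.match(/^([a-z][a-z0-9+.-]*):\/\/(.*)$/i);
  if (!match) return false;      // no "scheme://" prefix at all
  return match[2].length === 0;  // nothing after "://"
}

console.log(isSchemeOnlyDeepLink("com.example.app://"));          // true
console.log(isSchemeOnlyDeepLink("com.example.app://callback"));  // false
console.log(isSchemeOnlyDeepLink("https://example.com/auth"));    // false
```

A check like this can be used to flag redirect URLs that carry no host or path component before they are passed to a sign-in flow.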
October 2025(9 incidents)
Some Users Unable to see Project Metrics in the Dashboard
2 updates
There was a very brief interruption, which is now resolved. We are seeing no unusual error rates at this time.
We are currently investigating this issue.
"Too Many Requests" errors in the Supabase Dashboard for Some Users
5 updates
This incident has been resolved.
We identified the change causing this and have rolled it back. Error rates are reducing, and we're continuing to monitor
We are continuing to investigate this issue.
We are currently investigating this issue.
Platform Service Degradation - Increased Latency and High Error Rates
4 updates
This incident has been resolved.
A fix has been implemented and we are monitoring the results.
We are continuing to investigate this issue.
We are currently investigating reports of degraded performance across our platform.
Degraded Supavisor performance in sa-east-1 region.
3 updates
This incident has been resolved. The impact began at 2:00 UTC causing a small fraction of queries in the affected region to be slower. At 11:00 UTC, the issue escalated and became more apparent as p95 latency increased. By 15:00 UTC, the degradation became widespread, with median query performance showing high variance and p95 consistently elevated. Higher than usual connection times could also be perceived during the incident.
We identified and replaced a faulty node in the sa-east-1 region. We are seeing query and connection times return to normal and will continue to monitor.
We are investigating degraded Supavisor performance in the sa-east-1 region.
Degraded Management API performance in EU and US regions.
5 updates
This incident has been resolved.
Management API performance has stabilized in EU and US regions, though some write requests may experience slightly elevated latency. We continue to monitor for sporadic spikes while implementing additional improvements to ensure full stability.
Management API performance has largely stabilized in EU and US regions. However, we are still observing sporadic latency spikes and occasional errors. Our team continues to work on a permanent fix to fully resolve the issue.
We’ve rebalanced request traffic to stabilise the service. Initial indicators look healthy. We’ll continue to monitor over the next few hours to ensure the issue remains resolved.
We are experiencing degraded Management API performance in EU and US regions. As a result, users may experience dashboard latency and increased error rates. Existing projects are not impacted.
Elevated query latency and connection times for Shared Pooler in us-west-1
3 updates
This incident has been resolved.
We've implemented a fix. Both query latency and connection times have returned to normal values. We continue to monitor the situation to ensure the issue is fully resolved.
We are currently experiencing elevated p99 query latency and increased connection times for Shared Pooler users in the us-west-1 region. Our team is actively investigating the issue and working toward a resolution.
Regional outage: Service disruption in us-east-1 (N. Virginia)
3 updates
The upstream issues have been resolved, and we are seeing normal behavior across the fleet. We've re-enabled platform-level operations, and all systems look stable.
We have received reports from users experiencing issues with the dashboard as well as some in-dashboard actions. While our core infrastructure and APIs remain healthy, these issues are linked to an ongoing incident affecting services hosted in the us-east-1 region, where our dashboard infrastructure and a subset of our APIs are hosted. This may impact users globally, including those with projects outside of that region. We’re monitoring the situation closely and will provide updates as the upstream provider resolves the incident. Thank you for your patience.
We are currently seeing elevated error rates from our cloud provider’s API in the us-east-1 region. To safeguard existing projects, we have temporarily paused all platform-level operations in this region, including the creation of new projects.
Upstream provider issue affecting repo sync in US-EAST-1 (N. Virginia)
2 updates
We’re closing this incident in favour of the higher-impact event now tracked here: status.supabase.com/incidents/595nsxm9hj59. Please follow that incident for further updates.
We’re seeing impact in US-EAST-1 (N. Virginia) due to an upstream provider issue. GitHub pushes are not syncing to existing branches. You can still create new projects and branches, but new branches will start in an empty state as they will not be synced from GitHub. Once the issue is resolved, please push again to bring affected branches up to date.
Project and branch lifecycle workflows affected in EU and APAC regions
12 updates
We are no longer observing capacity issues with our cloud provider, and there have been no capacity related errors with new project creation, restores, restarts, or upgrades in the past 24 hours. Some projects remain stuck and require manual intervention. Our team is actively working through these requests.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. Note: Branches are deployed as Micro by default, so branching from larger compute sizes may be impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are enabled in all regions. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are enabled in all regions. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are disabled in eu-central-1, eu-north-1, eu-west-2 while we work to provision additional capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are disabled in eu-west-2 while we work to provision additional capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are disabled in ap-south-1, eu-north-1 while we work to provision additional capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are disabled in ap-south-1 while we work to provision additional capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation We are continuously monitoring available capacity across all regions and will enable or disable project and branch lifecycle workflows (listed above) as needed based on current capacity. At this time, these workflows are disabled in ap-south-1, eu-central-1, eu-central-2, eu-north-1, eu-west-2 while we work to provision additional capacity. Our team is actively collaborating with our cloud provider to increase capacity and restore normal operations. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU and APAC regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation The above workflows have been disabled in eu-north-1, eu-central-2, and ap-south-1 while we work to make additional capacity available. Our team is actively working with our cloud provider to increase capacity and restore normal operation. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU regions due to a surge in project creation requests. This is impacting the availability of Nano and Micro instance types. Larger compute sizes are not impacted. As a result, users may encounter errors when attempting the following operations on a Nano or Micro instance: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation The above workflows have been disabled in eu-north-1 and eu-central-2 while we work to make additional capacity available. Our team is actively working with our cloud provider to increase capacity and restore normal operation. Existing active projects in the region remain unaffected.
We are currently experiencing capacity issues across all EU regions due to a surge in project creation requests. As a result, users may encounter errors when attempting the following operations: - Project creation - Project restore - Project restart - Project resize - Unpausing projects - Database upgrade - Read replica creation - Branch creation The above workflows have been disabled in eu-north-1 and eu-central-2 while we work to make additional capacity available. Our team is actively working with our cloud provider to increase capacity and restore normal operation. Existing active projects in the region remain unaffected.
September 2025(1 incident)
Project and branch lifecycle workflows affected in ap-south-1
5 updates
We are no longer observing capacity issues with our cloud provider, and there have been no capacity related errors with new project creation, restores, restarts, or upgrades in the past 24 hours. Some projects remain stuck and require manual intervention. Our team is actively working through these requests.
The following project and branch workflows have been re-enabled - project creation - project restore - project restart - project resize - database upgrade - read replica creation - branch creation We may temporarily disable these workflows in this region as needed based on capacity constraints while we continue working with our cloud provider on a permanent solution. Existing active projects in the region remain unaffected.
We have temporarily disabled project creation in ap-south-1 again. We are working with our cloud provider to re-enable the region soon. Existing active projects in the region are unaffected.
Project creation in ap-south-1 has been enabled now. We are monitoring to ensure stability.
We temporarily disabled project creation in ap-south-1. We are working with our cloud provider to re-enable the region soon. Existing active projects in the region are unaffected.