
GitHub Outage History

Past incidents and downtime events

Complete history of GitHub outages, incidents, and service disruptions. Showing the 50 most recent incidents.

February 2026 (32 incidents)

minor · resolved · Feb 23, 07:59 PM — Resolved Feb 24, 12:46 AM

Code search experiencing degraded performance

7 updates
resolved · Feb 24, 12:46 AM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 24, 12:38 AM

We have identified a cause for the latency and timeouts and have implemented a fix. We are observing initial recovery now.

investigating · Feb 23, 11:10 PM

Customers using code search continue to see increased latency and timeout errors. We are working to mitigate issues on the affected shard.

investigating · Feb 23, 10:22 PM

Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are taking steps to isolate and mitigate the affected shard.

investigating · Feb 23, 09:18 PM

Elevated latency and timeouts for code search are isolated to a single shard experiencing elevated CPU. We are continuing to investigate the cause and possible mitigations.

investigating · Feb 23, 08:33 PM

We are continuing to investigate elevated latency and timeouts for code search.

investigating · Feb 23, 07:59 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 23, 09:16 PM — Resolved Feb 23, 09:30 PM

Incident with Issues and Pull Requests Search

3 updates
resolved · Feb 23, 09:30 PM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 23, 09:24 PM

Some customers are seeing timeout errors when searching for issues or pull requests. The team is currently investigating a fix.

investigating · Feb 23, 09:16 PM

We are investigating reports of degraded performance for Issues and Pull Requests

minor · resolved · Feb 23, 04:17 PM — Resolved Feb 23, 05:03 PM

Incident with Actions

2 updates
resolved · Feb 23, 05:03 PM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 23, 04:17 PM

We are investigating reports of degraded performance for Actions

minor · resolved · Feb 23, 02:56 PM — Resolved Feb 23, 04:19 PM

Incident with Copilot

6 updates
resolved · Feb 23, 04:19 PM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 23, 03:59 PM

Copilot is operating normally.

investigating · Feb 23, 03:59 PM

The issues with our upstream model provider have been resolved, and Haiku 4.5 is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.

investigating · Feb 23, 03:13 PM

Our provider has recovered and we are not seeing errors, but we are awaiting a signal from them that the issue will not regress before we go green.

investigating · Feb 23, 02:56 PM

We are experiencing degraded availability for the Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.

investigating · Feb 23, 02:56 PM

We are investigating reports of degraded performance for Copilot

minor · resolved · Feb 20, 08:00 PM — Resolved Feb 20, 08:41 PM

Extended job start delays for larger hosted runners

4 updates
resolved · Feb 20, 08:41 PM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 20, 08:36 PM

The team continues to investigate issues with some larger runner jobs being queued for a long time. We are, however, seeing improvement in queue times. We will continue providing updates on the progress towards mitigation.

investigating · Feb 20, 08:01 PM

We are investigating reports of degraded performance for Larger Hosted Runners

investigating · Feb 20, 08:00 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 20, 10:02 AM — Resolved Feb 20, 11:41 AM

Incident with Copilot GPT-5.1-Codex

5 updates
resolved · Feb 20, 11:41 AM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 20, 11:19 AM

The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations [VS Code, Visual Studio, JetBrains]. We will continue monitoring to ensure stability, but mitigation is complete.

investigating · Feb 20, 10:36 AM

We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

investigating · Feb 20, 10:02 AM

We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.

investigating · Feb 20, 10:02 AM

We are investigating reports of degraded performance for Copilot

minor · resolved · Feb 18, 06:25 PM — Resolved Feb 18, 07:20 PM

Degraded performance in merge queue

5 updates
resolved · Feb 18, 07:20 PM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigating · Feb 18, 07:18 PM

We have seen significant recovery in the merge queue and are continuing to monitor for any other degraded services.

investigating · Feb 18, 06:27 PM

We are investigating reports of issues with merge queue. We will continue to keep users updated on progress towards mitigation.

investigating · Feb 18, 06:26 PM

Pull Requests is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 18, 06:25 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 17, 05:46 PM — Resolved Feb 17, 07:06 PM

Intermittent authentication failures on GitHub

5 updates
resolved · Feb 17, 07:06 PM

On February 17, 2026, between 17:07 UTC and 19:06 UTC, some customers experienced intermittent authentication failures affecting GitHub Actions, parts of Git operations, and other authentication-dependent requests. On average, the Actions error rate was approximately 0.6% of affected API requests. The SSH read error rate for Git operations was approximately 0.29%, while SSH writes and HTTP operations were not impacted. During the incident, a subset of requests failed because token verification lookups intermittently failed, leading to 401 errors and degraded reliability for impacted workflows. The issue was caused by elevated replication lag in the token verification database cluster. In the days leading up to the incident, the token store’s write volume grew enough to exceed the cluster’s available capacity. Under peak load, older replica hosts were unable to keep up, replica lag increased, and some token lookups became inconsistent, resulting in intermittent authentication failures. We mitigated the incident by adjusting the database replica topology to route reads away from lagging replicas and by bringing additional replica capacity online. Service health improved progressively after the change, with GitHub Actions recovering by ~19:00 UTC and the incident resolved at 19:06 UTC. We are working to prevent recurrence by improving the resilience and scalability of our underlying token verification data stores to better handle continued growth.
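
The mitigation above, routing reads away from lagging replicas, follows a common pattern. Below is a minimal sketch of lag-aware read routing; the replica names, the lag threshold, and the fallback-to-primary behavior are illustrative assumptions, not GitHub's actual topology code.

    # Hypothetical lag-aware read routing; thresholds and names are invented.
    import random
    from dataclasses import dataclass

    MAX_LAG_SECONDS = 2.0  # assumed freshness requirement for token lookups

    @dataclass
    class Replica:
        name: str
        lag_seconds: float

    def pick_read_replica(replicas, primary):
        """Prefer replicas within the lag budget; fall back to the primary."""
        healthy = [r for r in replicas if r.lag_seconds <= MAX_LAG_SECONDS]
        return random.choice(healthy) if healthy else primary

    primary = Replica("primary", 0.0)
    replicas = [Replica("replica-a", 0.4), Replica("replica-b", 9.3)]
    print(pick_read_replica(replicas, primary).name)  # never the lagging replica-b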

investigating · Feb 17, 06:55 PM

We are continuing to monitor the mitigation and continuing to see signs of recovery.

investigating · Feb 17, 06:18 PM

We have rolled out a mitigation and are seeing signs of recovery and are continuing to monitor.

investigating · Feb 17, 05:46 PM

We have identified a low rate of authentication failures affecting GitHub App server to server tokens, GitHub Actions authentication tokens, and git operations. Some customers may experience intermittent API request failures when using these tokens. We believe we've identified the cause and are working to mitigate impact.

investigating · Feb 17, 05:46 PM

We are investigating reports of degraded performance for Actions and Git Operations

minor · resolved · Feb 13, 10:30 PM — Resolved Feb 13, 10:58 PM

Disruption with some GitHub services regarding file upload

2 updates
resolved · Feb 13, 10:58 PM

On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service. We mitigated the incident by reverting the code change that introduced the issue. We are working to improve automated testing for browser-side request changes and to add monitoring and automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.
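
Since the root cause was a client-side change that violated CORS policy, a browser-style preflight check is the kind of regression test that catches this class of bug. The sketch below simulates the browser's decision; the origin, methods, and header names are invented for the example and do not reflect GitHub's actual policy.

    # Simulate the browser's CORS preflight decision for an upload request.
    def preflight_allowed(response_headers, origin, method, request_headers):
        allowed_origin = response_headers.get("Access-Control-Allow-Origin", "")
        allowed_methods = {m.strip() for m in
                           response_headers.get("Access-Control-Allow-Methods", "").split(",")}
        allowed_headers = {h.strip().lower() for h in
                           response_headers.get("Access-Control-Allow-Headers", "").split(",")}
        return (allowed_origin in (origin, "*")
                and method in allowed_methods
                and all(h.lower() in allowed_headers for h in request_headers))

    server = {
        "Access-Control-Allow-Origin": "https://github.com",
        "Access-Control-Allow-Methods": "POST, PUT",
        "Access-Control-Allow-Headers": "content-type",
    }
    # The original request passes; a change that adds an unapproved header fails
    # the preflight, so the browser blocks it before it reaches the service.
    assert preflight_allowed(server, "https://github.com", "PUT", ["content-type"])
    assert not preflight_allowed(server, "https://github.com", "PUT",
                                 ["content-type", "x-new-upload-header"])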

investigating · Feb 13, 10:30 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 12, 06:36 PM — Resolved Feb 12, 08:34 PM

Disruption with some GitHub services

5 updates
resolved · Feb 12, 08:34 PM

Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency. The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

investigating · Feb 12, 07:59 PM

Next Edit Suggestions availability is recovering. We are continuing to monitor until fully restored.

investigating · Feb 12, 07:18 PM

We are experiencing degraded availability in Australia and Brazil for Copilot completions and suggestions. We are working to resolve the issue.

investigating · Feb 12, 06:46 PM

We are experiencing degraded availability in Australia for Copilot completions and suggestions. We are working to resolve the issue.

investigating · Feb 12, 06:36 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 12, 02:06 PM — Resolved Feb 12, 04:50 PM

Intermittent disruption with Copilot completions and inline suggestions

4 updates
resolved · Feb 12, 04:50 PM

Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency. The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.

investigating · Feb 12, 03:33 PM

We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.

investigating · Feb 12, 02:08 PM

We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.

investigating · Feb 12, 02:06 PM

We are investigating reports of impacted performance for some GitHub services.

major · resolved · Feb 12, 10:38 AM — Resolved Feb 12, 11:12 AM

Disruption with some GitHub services

4 updates
resolved · Feb 12, 11:12 AM

From Feb 12, 2026 09:16 UTC to Feb 12, 2026 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% of requests to the service, peaking at 0.0339%. This was caused by deploying a corrupt configuration bundle, which left the service missing data it uses for network interface connections. We mitigated the incident by applying the correct configuration to each site. We have added checks for corruption in this deployment, and will add auto-rollback detection for this service to prevent issues like this in the future.
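
The "checks for corruption" follow-up lends itself to a simple illustration: verify a bundle's checksum before applying it, and roll back otherwise. This is a sketch under assumed file names and deploy hooks, not the actual deployment tooling.

    # Hypothetical integrity gate for a configuration bundle.
    import hashlib, json, pathlib, tempfile

    def sha256(path):
        return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

    def deploy_bundle(bundle_path, expected_sha, apply_fn, rollback_fn):
        if sha256(bundle_path) != expected_sha:
            rollback_fn()  # auto-rollback instead of shipping a corrupt bundle
            raise ValueError(f"checksum mismatch for {bundle_path}")
        apply_fn(json.loads(pathlib.Path(bundle_path).read_text()))

    # Demo with a throwaway bundle file:
    tmp = pathlib.Path(tempfile.mkstemp(suffix=".json")[1])
    tmp.write_text(json.dumps({"interfaces": ["eth0"]}))
    deploy_bundle(tmp, sha256(tmp), apply_fn=print,
                  rollback_fn=lambda: print("rolling back"))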

investigating · Feb 12, 11:01 AM

We have resolved the issue and are seeing full recovery.

investigating · Feb 12, 10:39 AM

We are investigating an issue with downloading repository archives that include Git LFS objects.

investigating · Feb 12, 10:38 AM

We are investigating reports of impacted performance for some GitHub services.

major · resolved · Feb 12, 07:53 AM — Resolved Feb 12, 09:56 AM

Incident with Codespaces

8 updates
resolved · Feb 12, 09:56 AM

On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia and Australia, peaking at a 90% failure rate. The failures were triggered by a bad configuration rollout in a core networking dependency, which led to internal resource provisioning failures. We are working to improve our alerting thresholds to catch issues before they impact customers and to strengthen rollout safeguards to prevent similar incidents.

investigating · Feb 12, 09:56 AM

Recovery looks consistent with Codespaces creating and resuming successfully across all regions. Thank you for your patience.

investigating · Feb 12, 09:42 AM

Codespaces is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 12, 09:39 AM

We are seeing widespread recovery across all our regions. We will continue to monitor progress and will resolve the incident when we are confident on durable recovery.

investigating · Feb 12, 09:04 AM

We have identified the issue causing Codespace create/resume actions to fail and are applying a fix. This is estimated to take ~2 hours to complete, but impact will begin to reduce sooner than that. We will continue to monitor recovery progress and will report back when more information is available.

investigating · Feb 12, 08:32 AM

We now understand the source of the VM create/resume failures and are working with our partners to mitigate the impact.

investigating · Feb 12, 08:02 AM

We are seeing an increase in Codespaces creation and resuming failures across multiple regions, primarily in EMEA. Our team is analysing the situation and working to mitigate this impact. While we are working, customers are advised to create Codespaces in US East and US West regions via the "New with options..." button when creating a Codespace. More updates as we have them.

investigating · Feb 12, 07:53 AM

We are investigating reports of degraded availability for Codespaces

minor · resolved · Feb 11, 06:58 PM — Resolved Feb 12, 12:59 AM

Disruption with some GitHub services

5 updates
resolved · Feb 12, 12:59 AM

On February 11 between 16:37 UTC and 00:59 UTC the following day, 4.7% of workflows running on GitHub Larger Hosted Runners were delayed by an average of 37 minutes. Standard Hosted and self-hosted runners were not impacted. This incident was caused by capacity degradation in Central US for Larger Hosted Runners. Workloads not pinned to that region were picked up by other regions, but were delayed as those regions became saturated. Workloads configured with private networking in that region were delayed until compute capacity in that region recovered. The issue was mitigated by rebalancing capacity across internal and external workloads and by generally increasing capacity in affected regions to speed recovery. In addition to working with our compute partners on the core capacity degradation, we are working to ensure other regions are better able to absorb load with less delay to customer workloads. For pinned workflows using private networking, we will soon ship support for customers to fail over when private networking is configured in a paired region.

investigating · Feb 11, 09:33 PM

Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted. The issue is mitigated and we are monitoring recovery.

investigating · Feb 11, 07:37 PM

We're continuing to work toward mitigation with our capacity provider, and adding capacity.

investigating · Feb 11, 07:00 PM

Actions is experiencing capacity constraints with larger hosted runners, leading to high wait times. Standard hosted labels and self-hosted runners are not impacted. We're working with the capacity provider to mitigate the impact.

investigating · Feb 11, 06:58 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 11, 03:26 PM — Resolved Feb 11, 05:15 PM

Incident with API Requests

6 updates
resolved · Feb 11, 05:15 PM

On February 11, 2026, between 13:51 UTC and 17:03 UTC, the GraphQL API experienced degraded performance due to elevated resource utilization. This resulted in incoming client requests waiting longer than normal and, in certain cases, timing out. During the impact window, approximately 0.65% of GraphQL requests experienced these issues, peaking at 1.06%. The increased load was due to an increase in query patterns that drove higher than expected resource utilization of the GraphQL API. We mitigated the incident by scaling out resource capacity and limiting the capacity available to these query patterns. We're improving our telemetry to identify slow usage growth and changes in GraphQL workloads. We've also added capacity safeguards to prevent similar incidents in the future.
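
Limiting "the capacity available to these query patterns" amounts to a per-pattern concurrency cap. Here is a minimal sketch of that idea; the pattern names and the limit of two in-flight requests are made up for the example.

    # Hypothetical per-pattern concurrency limiter.
    import threading
    from collections import defaultdict

    class PatternLimiter:
        """Allow at most `limit` in-flight requests per query pattern."""
        def __init__(self, limit):
            self.limit = limit
            self.inflight = defaultdict(int)
            self.lock = threading.Lock()

        def try_acquire(self, pattern):
            with self.lock:
                if self.inflight[pattern] >= self.limit:
                    return False  # shed load for this pattern only
                self.inflight[pattern] += 1
                return True

        def release(self, pattern):
            with self.lock:
                self.inflight[pattern] -= 1

    limiter = PatternLimiter(limit=2)
    assert limiter.try_acquire("expensive-search")
    assert limiter.try_acquire("expensive-search")
    assert not limiter.try_acquire("expensive-search")  # third call is rejected
    assert limiter.try_acquire("cheap-viewer-query")    # other patterns unaffected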

investigating · Feb 11, 05:13 PM

We've observed recovery for the GraphQL service latency.

investigating · Feb 11, 04:54 PM

We're continuing to remediate the service degradation and scaling out to further mitigate the potential for latency impact.

investigating · Feb 11, 03:54 PM

We've identified a dependency of GraphQL that is in a degraded state and are working on remediating the issue.

investigating · Feb 11, 03:27 PM

We're investigating increased latency for GraphQL traffic.

investigating · Feb 11, 03:26 PM

We are investigating reports of degraded performance for API Requests

minor · resolved · Feb 11, 03:26 PM — Resolved Feb 11, 03:46 PM

Incident with Copilot

5 updates
resolved · Feb 11, 03:46 PM

On February 11, 2026, between 14:30 UTC and 15:30 UTC, the Copilot service experienced degraded availability for requests to Claude Haiku 4.5. During this time, on average 10% of requests failed, with 23% of sessions impacted. The issue was caused by an upstream problem at multiple external model providers that affected our ability to serve requests. The incident was mitigated once one of the providers resolved the issue and we rerouted capacity fully to that provider. We have enhanced our telemetry to improve incident observability and implemented an automated retry mechanism for requests to this model to mitigate similar upstream incidents in the future.
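
The combination of rerouting capacity and automated retries can be pictured as a provider-failover loop. The sketch below shows the general shape; the provider functions, retry counts, and backoff values are assumptions, not Copilot's implementation.

    # Hypothetical retry-with-failover across model providers.
    import time

    def complete_with_failover(prompt, providers, retries=2, backoff=0.1):
        last_error = None
        for provider in providers:
            for attempt in range(retries):
                try:
                    return provider(prompt)
                except ConnectionError as err:  # stand-in for an upstream failure
                    last_error = err
                    time.sleep(backoff * (2 ** attempt))
        raise RuntimeError("all providers failed") from last_error

    def flaky_provider(prompt):
        raise ConnectionError("upstream model provider unavailable")

    def healthy_provider(prompt):
        return f"response to: {prompt}"

    print(complete_with_failover("hello", [flaky_provider, healthy_provider]))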

investigating · Feb 11, 03:46 PM

Copilot is operating normally.

investigating · Feb 11, 03:46 PM

The issues with our upstream model provider have been resolved, and Claude Haiku 4.5 is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.

investigating · Feb 11, 03:27 PM

We are experiencing degraded availability for the Claude Haiku 4.5 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue. Other models are available and working as expected.

investigating · Feb 11, 03:26 PM

We are investigating reports of degraded performance for Copilot

minor · resolved · Feb 10, 03:07 PM — Resolved Feb 10, 03:58 PM

Disruption with some GitHub services

8 updates
resolved · Feb 10, 03:58 PM

On February 10th, 2026, between 14:35 UTC and 15:58 UTC, web experiences on GitHub.com, including Pull Requests and Authentication, were degraded, resulting in intermittent 5xx errors and timeouts. The error rate on web traffic peaked at approximately 2%. This was due to increased load on a critical database, which caused significant memory pressure resulting in intermittent errors. We mitigated the incident by applying a configuration change to the database to increase available memory on the host. We are working to identify changes in load patterns and are reviewing the configuration of our databases to ensure there is sufficient capacity to meet growth. Additionally, we are improving monitoring and self-healing functionality for database memory issues to reduce our time to detection and mitigation.

investigating · Feb 10, 03:58 PM

Pull Requests is operating normally.

investigating · Feb 10, 03:51 PM

We have deployed a mitigation for the issue and are observing what we believe is the start of recovery. We will continue to monitor.

investigating · Feb 10, 03:47 PM

We believe we have found the cause of the problem and are working on mitigation.

investigating · Feb 10, 03:33 PM

We continue investigating intermittent timeouts on some pages.

investigating · Feb 10, 03:08 PM

Pull Requests is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 10, 03:08 PM

We are seeing intermittent timeouts on some pages and are investigating.

investigating · Feb 10, 03:07 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 9, 04:29 PM — Resolved Feb 10, 09:57 AM

Copilot Policy Propagation Delays

10 updates
resolved · Feb 10, 09:57 AM

This incident has been resolved.

investigating · Feb 10, 12:51 AM

Copilot is operating normally.

investigating · Feb 10, 12:26 AM

We're continuing to address an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. The issue is understood and we are working to get the mitigation applied. Next update in one hour.

investigating · Feb 9, 10:09 PM

We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

investigating · Feb 9, 08:39 PM

We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

investigating · Feb 9, 06:49 PM

We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them. Next update in two hours.

investigating · Feb 9, 06:06 PM

We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.

investigating · Feb 9, 05:24 PM

We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for all customers. This may prevent newly enabled models from appearing when users try to access them.

investigating · Feb 9, 04:30 PM

We’ve identified an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them. The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.

investigating · Feb 9, 04:29 PM

We are investigating reports of degraded performance for Copilot

major · resolved · Feb 9, 07:01 PM — Resolved Feb 9, 08:09 PM

Incident with Issues, Actions and Git Operations

13 updates
resolved · Feb 9, 08:09 PM

On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents. During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. During the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, triggering cascading failures. Increased load caused the service responsible for proxying Git operations over HTTPS to exhaust available connections, preventing it from accepting new requests. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

An additional source of updates to the same cache circumvented our initial mitigations and caused the second incident. This generated a high volume of synchronous writes, causing replication delays that cascaded in a similar pattern and again exhausted the Git proxy’s connection capacity, degrading availability across multiple services. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.

We know these incidents disrupted the workflows of millions of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, GitHub's availability is not yet meeting our expectations. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:

1. We have already optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
2. We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
3. We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.

GitHub is critical infrastructure for your work, your teams, and your businesses. We're focusing on these mitigations and long-term infrastructure work so GitHub is available, at scale, when and where you need it.
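
The first remediation item, avoiding write amplification by self-throttling bulk cache updates, is easiest to see in miniature: spread the rewrites over time with jitter instead of issuing them all at once. The rate, key names, and cache call below are invented for the sketch.

    # Hypothetical self-throttled bulk cache rewrite.
    import random, time

    def rewrite_cache_entries(keys, rewrite_one, max_per_second=100):
        """Spread a bulk rewrite over time instead of issuing it all at once."""
        interval = 1.0 / max_per_second
        for key in keys:
            rewrite_one(key)
            # Jitter keeps many workers from falling into lockstep.
            time.sleep(interval * random.uniform(0.5, 1.5))

    rewrite_cache_entries([f"user:{i}:settings" for i in range(5)],
                          rewrite_one=lambda k: print("rewrote", k),
                          max_per_second=50)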

investigating · Feb 9, 08:09 PM

Actions, Codespaces, Git Operations, Issues, Packages, Pages, Pull Requests and Webhooks are operating normally.

investigating · Feb 9, 08:08 PM

We are seeing that all services have returned to normal processing.

investigating · Feb 9, 07:54 PM

A number of services have recovered, but we are continuing to investigate issues with Dependabot, Actions, and a number of other services. We will continue to investigate and monitor for full recovery.

investigating · Feb 9, 07:31 PM

Codespaces is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 07:29 PM

We have applied mitigations and are seeing signs of recovery. We will continue to monitor for full recovery.

investigating · Feb 9, 07:10 PM

Packages is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 07:07 PM

Pull Requests is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 07:07 PM

We are seeing impact to several systems including Actions, Copilot, Issues, and Git. Customers may see slow and failed requests, and Actions jobs being delayed. We are investigating.

investigating · Feb 9, 07:07 PM

Webhooks is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 07:05 PM

Pages is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 07:02 PM

Actions is experiencing degraded availability. We are continuing to investigate.

investigating · Feb 9, 07:01 PM

We are investigating reports of degraded performance for Actions, Git Operations and Issues

minor · resolved · Feb 9, 03:54 PM — Resolved Feb 9, 07:29 PM

Notifications are delayed

8 updates
resolved · Feb 9, 07:29 PM

On February 9th, the notifications service started showing degradation around 13:50 UTC, resulting in an increase in notification delivery delays. Our team started investigating. Around 14:30 UTC the service started to recover as the team continued investigating the incident. Around 15:20 UTC degradation resurfaced, with increasing delays in notification deliveries and a small error rate (below 1%) on UI and API endpoints related to notifications. At 16:30 UTC, we mitigated the incident by reducing contention through throttling workloads and performing a database failover. The median delay for notification deliveries was 80 minutes at this point, and queues started emptying. Around 19:30 UTC the backlog of notifications was processed, bringing the service back to normal, at which point we declared the incident closed.

The incident was caused by the notifications database showing degradation under intense load. Most notifications-related asynchronous workloads, including notification deliveries, were stopped to try to reduce the pressure on the database. To ensure system stability, a database failover was executed. Following the failover, we applied a configuration change to improve performance. The service started recovering after these changes.

We are reviewing the configuration of our databases to understand the performance drop and prevent similar issues from happening in the future. We are also investing in monitoring to detect and mitigate this class of incidents faster.

investigating · Feb 9, 07:14 PM

We continue to observe recovery of notifications. Notification delivery delays have been resolved.

investigating · Feb 9, 06:33 PM

We are continuing to recover from notification delivery delays. Notifications are currently being delivered with an average delay of approximately 15 minutes. We are working through the remaining backlog.

investigating · Feb 9, 05:57 PM

We are continuing to recover from notification delivery delays. Notifications are currently being delivered with an average delay of approximately 30 minutes. We are working through the remaining backlog.

investigating · Feb 9, 05:25 PM

We are seeing recovery in notification delivery. Notifications are currently being delivered with an average delay of approximately 1 hour as we work through the backlog. We continue to monitor the situation closely.

investigating · Feb 9, 04:51 PM

We continue to investigate delays in notification delivery, with average delivery latency now nearing 1 hour 20 minutes. We are just now starting to see some signs of recovery.

investigating · Feb 9, 04:12 PM

We are investigating notification delivery delays with the current delay being around 50 minutes. We are working on mitigation.

investigating · Feb 9, 03:54 PM

We are investigating reports of impacted performance for some GitHub services.

major · resolved · Feb 9, 04:19 PM — Resolved Feb 9, 05:40 PM

Incident with Pull Requests

17 updates
resolved · Feb 9, 05:40 PM

On February 9, 2026, GitHub experienced two related periods of degraded availability affecting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents. During both incidents, users encountered errors loading pages on GitHub.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. During the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, triggering cascading failures. Increased load caused the service responsible for proxying Git operations over HTTPS to exhaust available connections, preventing it from accepting new requests. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

An additional source of updates to the same cache circumvented our initial mitigations and caused the second incident. This generated a high volume of synchronous writes, causing replication delays that cascaded in a similar pattern and again exhausted the Git proxy’s connection capacity, degrading availability across multiple services. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.

We know these incidents disrupted the workflows of millions of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, GitHub's availability is not yet meeting our expectations. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. We are taking the following immediate steps:

1. We have already optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
2. We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
3. We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.

GitHub is critical infrastructure for your work, your teams, and your businesses. We're focusing on these mitigations and long-term infrastructure work so GitHub is available, at scale, when and where you need it.

investigating · Feb 9, 05:40 PM

Pull Requests is operating normally.

investigating · Feb 9, 05:39 PM

Webhooks is operating normally.

investigating · Feb 9, 05:37 PM

Actions is operating normally.

investigating · Feb 9, 05:32 PM

We are seeing recovery across all products and are continuing to monitor service health.

investigating · Feb 9, 05:29 PM

Pages is operating normally.

investigating · Feb 9, 05:26 PM

Git Operations is operating normally.

investigating · Feb 9, 05:25 PM

Issues is operating normally.

investigating · Feb 9, 05:08 PM

Pages is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 04:58 PM

We have identified the cause of high error rates and taken steps to mitigate. We see early signs of recovery but are continuing to monitor impact.

investigating · Feb 9, 04:50 PM

Issues is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 04:40 PM

Webhooks is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 04:40 PM

Git Operations is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 04:22 PM

Actions is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 04:21 PM

We are seeing intermittent errors on many pages and API requests and are investigating.

investigating · Feb 9, 04:20 PM

Issues is experiencing degraded availability. We are continuing to investigate.

investigating · Feb 9, 04:19 PM

We are investigating reports of degraded performance for Pull Requests

minor · resolved · Feb 9, 02:17 PM — Resolved Feb 9, 03:46 PM

Incident with Actions

7 updates
resolved · Feb 9, 03:46 PM

On February 9th, 2026, between 09:16 UTC and 15:12 UTC, GitHub Actions customers experienced run start delays. Approximately 0.6% of runs across 1.8% of repos were affected, with an average delay of 19 minutes for those delayed runs. The incident occurred when increased load exposed a bottleneck in our event publishing system, causing one compute node to fall behind on processing Actions Jobs. We mitigated by rebalancing traffic and increasing timeouts for event processing. We have since isolated performance-critical events to a new, dedicated publisher to prevent contention between events and added safeguards to better tolerate processing timeouts.

investigating · Feb 9, 03:46 PM

Actions is operating normally.

investigating · Feb 9, 03:46 PM

Actions run delays have returned to normal levels.

investigating · Feb 9, 03:26 PM

We identified a bottleneck in our processing pipeline and have applied mitigations. We will continue to monitor for full recovery.

investigating · Feb 9, 02:54 PM

We continue to investigate an issue causing Actions run start delays, impacting approximately 4% of users.

investigating · Feb 9, 02:17 PM

We are investigating an issue with Actions run start delays, impacting approximately 4% of users.

investigating · Feb 9, 02:17 PM

We are investigating reports of degraded performance for Actions

minor · resolved · Feb 9, 10:01 AM — Resolved Feb 9, 12:12 PM

Degraded performance for Copilot Coding Agent

4 updates
resolved · Feb 9, 12:12 PM

On February 9, 2026, between ~06:00 UTC and ~12:12 UTC, Copilot Coding Agent and related Copilot API endpoints experienced degraded availability. The primary impact was to agent-based workflows (requests to /agents/swe/*, including custom agent configuration checks), where 154k users saw failed requests and error responses in their editor/agent experience. Impact was concentrated among users and integrations actively using Copilot Coding Agent with VS Code. The degradation was caused by an unexpected surge in traffic to the related API endpoints that exceeded an internal secondary rate limit. That resulted in upstream request denials, which were surfaced to users as elevated 500 errors. We mitigated the incident by deploying a change that increased the applicable rate limit for this traffic, which allowed requests to complete successfully and returned the service to normal operation. After the mitigation, we deployed guardrails with applicable caching to avoid a repeat of similar incidents. We also temporarily increased infrastructure capacity to better handle backlog recovery from the rate limiting. We are improving monitoring around growing agentic API endpoints.
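
"Guardrails with applicable caching" suggests serving repeated agent configuration checks from a short-lived cache so bursts never reach the rate-limited backend. A minimal sketch, with an invented endpoint and TTL:

    # Hypothetical read-through cache in front of a rate-limited endpoint.
    import time

    class CachedGate:
        def __init__(self, ttl_seconds=30):
            self.ttl = ttl_seconds
            self.cache = {}  # key -> (expires_at, value)

        def get(self, key, fetch):
            now = time.monotonic()
            hit = self.cache.get(key)
            if hit and hit[0] > now:
                return hit[1]       # cache hit: no rate-limit cost upstream
            value = fetch()         # at most one upstream call per TTL window
            self.cache[key] = (now + self.ttl, value)
            return value

    gate = CachedGate(ttl_seconds=30)
    calls = []
    fetch = lambda: calls.append(1) or {"agents": ["custom-agent"]}
    for _ in range(1000):           # burst of identical configuration checks
        gate.get("/agents/swe/config", fetch)
    print(len(calls))  # 1 -- the other 999 requests never hit the upstream limit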

investigating · Feb 9, 11:14 AM

We are continuing to investigate the degraded availability for Copilot Coding Agent.

investigating · Feb 9, 10:04 AM

We are investigating degraded availability for Copilot Coding Agent. We will continue to keep users updated on progress towards mitigation.

investigating · Feb 9, 10:01 AM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 9, 08:15 AM — Resolved Feb 9, 11:26 AM

Degraded Performance in Webhooks API and UI, Pull Requests

16 updates
resolved · Feb 9, 11:26 AM

On February 9, 2026, between 07:05 UTC and 11:26 UTC, GitHub experienced intermittent degradation across Issues, Pull Requests, Webhooks, Actions, and Git operations. Approximately every 30 minutes, users encountered brief periods of elevated errors and timeouts lasting roughly 15 seconds each. During the incident window, approximately 1–2% of requests were impacted across these services, with Git operations experiencing up to 7% error rates during individual spikes. GitHub Actions saw up to 2% of workflow runs delayed by a median of approximately 7 minutes due to backlogs created during these periods. The cause was multiple resource-intensive workloads running simultaneously, which led to intermittent processing delays on the data storage layer. We mitigated the incident by scaling storage to a larger compute capacity, which resolved the processing delays. We are working to improve detection of resource-intensive queries, identify changes in load patterns, and enhance our monitoring to reduce our time to detection and mitigation of issues like this one in the future.

investigating · Feb 9, 11:26 AM

Actions is operating normally.

investigating · Feb 9, 11:26 AM

Issues is operating normally.

investigating · Feb 9, 11:26 AM

Webhooks is operating normally.

investigating · Feb 9, 11:26 AM

Pull Requests is operating normally.

investigating · Feb 9, 11:11 AM

We have identified a faulty infrastructure component and have failed over to a healthy instance. We are continuing to monitor the system for recovery.

investigating · Feb 9, 11:04 AM

Git Operations is operating normally.

investigating · Feb 9, 10:48 AM

We are continuing to investigate intermittent elevated timeouts across the service.

investigating · Feb 9, 10:33 AM

Git Operations is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 10:09 AM

We are continuing to investigate intermittent elevated timeouts across the service.

investigating · Feb 9, 09:31 AM

We are continuing to investigate intermittent elevated timeouts across the service. Current impact is estimated around 1% or less of requests.

investigating · Feb 9, 09:23 AM

Actions is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 08:52 AM

We are continuing to investigate intermittent elevated timeouts.

investigating · Feb 9, 08:17 AM

We are investigating intermittent latency and errors with Webhooks API, Webhooks UI, and PRs. We will continue to keep users updated on progress towards mitigation.

investigating · Feb 9, 08:17 AM

Issues is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 9, 08:15 AM

We are investigating reports of degraded performance for Pull Requests and Webhooks

minor · resolved · Feb 6, 05:49 PM — Resolved Feb 6, 06:36 PM

Incident with Pull Requests

5 updates
resolved · Feb 6, 06:36 PM

On February 6, 2026, between 17:49 UTC and 18:36 UTC, the GitHub Mobile service was degraded, and some users were unable to create pull request review comments on deleted lines (and in some cases, comments on deleted files). This impacted users on the newer comment-positioning flow available in version 1.244.0 of the mobile apps. Telemetry indicated that the failures increased as the Android rollout progressed. This was due to a defect in the new comment-positioning workflow that could result in the server rejecting comment creation for certain deleted-line positions. We mitigated the incident by halting the Android rollout and implementing interim client-side fallback behavior while a platform fix is in progress. The client-side fallback is scheduled to be published early this week. We are working to (1) add clearer client-side error handling (avoid infinite spinners), (2) improve monitoring/alerting for these failures, and (3) adopt stable diff identifiers for diff-based operations to reduce the likelihood of recurrence.

investigating · Feb 6, 06:36 PM

Some GitHub Mobile app users may be unable to add review comments on deleted lines in pull requests. We're working on a fix and expect to release it early next week.

investigating · Feb 6, 06:04 PM

Pull Requests is operating normally.

investigating · Feb 6, 06:00 PM

We're currently investigating an issue affecting the Mobile app that can prevent review comments from being posted on certain pull requests when commenting on deleted lines.

investigating · Feb 6, 05:49 PM

We are investigating reports of degraded performance for Pull Requests

minor · resolved · Feb 6, 11:16 AM — Resolved Feb 6, 11:58 AM

Incident with Copilot

5 updates
resolved · Feb 6, 11:58 AM

On February 6, 2026, between 10:28 and 11:54 UTC, Visual Studio Code users experienced a degraded experience on GitHub Copilot when using the Claude Opus 4.6 model. During this time, approximately 50% of users encountered agent turn failures due to the model being unable to serve the volume of incoming requests. The issue was caused by rate limits set too low for actual demand. While the initial deployment showed no concerns, a surge in traffic from Europe the following day caused VS Code to begin hitting rate limit errors. Additionally, a degradation message intended to notify users of high usage failed to trigger due to a misconfiguration. We mitigated the incident by adjusting rate limits for the model. We have improved our rate limiting to prevent future models from experiencing similar issues. We are also improving our capacity planning processes to reduce the risk of similar incidents in the future, and enhancing our detection and mitigation capabilities to reduce impact to customers.

investigating · Feb 6, 11:58 AM

Copilot is operating normally.

investigating · Feb 6, 11:57 AM

We have increased capacity and are seeing recovery.

investigating · Feb 6, 11:21 AM

Opus 4.6 is currently experiencing high demand and we are working on adding capacity.

investigating · Feb 6, 11:16 AM

We are investigating reports of degraded performance for Copilot

minor · resolved · Feb 3, 04:10 PM — Resolved Feb 3, 07:28 PM

Delays in UI updates for Actions Runs

4 updates
resolved · Feb 3, 07:28 PM

On February 3, 2026, between 14:00 UTC and 17:40 UTC, customers experienced delays in Webhook delivery for push events and delayed GitHub Actions workflow runs. During this window, Webhook deliveries for push events were delayed by up to 40 minutes, with an average delay of 10 minutes. GitHub Actions workflows triggered by push events experienced similar job start delays. Additionally, between 15:25 UTC and 16:05 UTC, all GitHub Actions workflow runs experienced status update delays of up to 11 minutes, with a median delay of 6 minutes. The issue stemmed from connection churn in our eventing service, which caused CPU saturation and delays for reads and writes, with subsequent downstream delivery delays for Actions and Webhooks. We have added observability tooling and metrics to accelerate detection, and are correcting stream processing client configuration to prevent recurrence.

investigating · Feb 3, 06:06 PM

Our telemetry shows improvement in latency for job status updates. We will continue monitoring until full recovery.

investigating · Feb 3, 04:51 PM

We've applied a mitigation to improve system throughput and are monitoring for reduced latency for job status updates.

investigating · Feb 3, 04:10 PM

We are investigating reports of degraded performance for Actions

minor · resolved · Feb 3, 10:16 AM — Resolved Feb 3, 10:56 AM

Incident with Copilot

4 updates
resolved · Feb 3, 10:56 AM

On February 3, 2026, between 09:35 UTC and 10:15 UTC, GitHub Copilot experienced elevated error rates, with an average of 4% of requests failing. This was caused by a capacity imbalance that led to resource exhaustion on backend services. The incident was resolved by infrastructure rebalancing, and we subsequently deployed additional capacity. We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.

investigating · Feb 3, 10:55 AM

We are now seeing recovery.

investigating · Feb 3, 10:21 AM

We are investigating elevated 500s across Copilot services.

investigating · Feb 3, 10:16 AM

We are investigating reports of degraded performance for Copilot

major · resolved · Feb 2, 07:03 PM — Resolved Feb 3, 12:56 AM

Incident with Actions

16 updates
resolved · Feb 3, 12:56 AM

On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted. This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out. We are working with our compute provider to improve our incident response and engagement time, to detect issues earlier before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub’s workloads, and we apologize for the impact this had.

investigating · Feb 3, 12:56 AM

Actions is operating normally.

investigating · Feb 2, 11:50 PM

Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners. We are monitoring closely to confirm complete recovery. Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.

investigating · Feb 2, 11:43 PM

Actions is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 2, 11:42 PM

Copilot is operating normally.

investigating · Feb 2, 11:31 PM

Pages is operating normally.

investigating · Feb 2, 10:53 PM

Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners. Telemetry shows improvement, and we are monitoring closely for full recovery.

investigating · Feb 2, 10:10 PM

We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.

investigating · Feb 2, 09:27 PM

Copilot is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 2, 09:13 PM

We continue to investigate failures impacting GitHub Actions hosted-runner jobs. We have identified the root cause and are working with our upstream provider to mitigate. This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).

investigating · Feb 2, 08:27 PM

The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.

investigating · Feb 2, 07:48 PM

Pages is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 2, 07:44 PM

The team continues to investigate issues causing GitHub Actions jobs on hosted runners to remain queued for extended periods, with a percentage of jobs failing. We will continue to provide updates as we make progress toward mitigation.

investigating · Feb 2, 07:43 PM

Actions is experiencing degraded availability. We are continuing to investigate.

investigating · Feb 2, 07:07 PM

GitHub Actions hosted runners are experiencing high wait times across all labels. Self-hosted runners are not impacted.

investigating · Feb 2, 07:03 PM

We are investigating reports of degraded performance for Actions

major · resolved · Feb 2, 08:17 PM — Resolved Feb 3, 12:54 AM

Incident with Codespaces

6 updates
resolved · Feb 3, 12:54 AM

On February 2, 2026, GitHub Codespaces were unavailable between 18:55 and 22:20 UTC and degraded until the service fully recovered at February 3, 2026 00:15 UTC. During this time, Codespaces creation and resume operations failed in all regions. This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change, which started at 22:15 UTC. As VMs came back online, our systems worked through the backlog of requests that hadn’t timed out. We are working with our compute provider to improve our incident response and engagement time, to detect issues earlier before they impact our customers, and to ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub’s workloads, and we apologize for the impact this had.

investigating · Feb 3, 12:54 AM

Codespaces is operating normally.

investigating · Feb 3, 12:25 AM

Codespaces is experiencing degraded performance. We are continuing to investigate.

investigating · Feb 2, 11:52 PM

Codespaces is seeing steady recovery.

investigating · Feb 2, 08:19 PM

Users may see errors creating or resuming codespaces. We are investigating and will provide further updates as we have them.

investigating · Feb 2, 08:17 PM

We are investigating reports of degraded availability for Codespaces

major · resolved · Feb 2, 05:41 PM — Resolved Feb 2, 06:46 PM

Disruption with some GitHub services

3 updates
resolved · Feb 2, 06:46 PM

From Jan 31, 2026 00:30 UTC to Feb 2, 2026 18:00 UTC, the Dependabot service was degraded and failed to create 10% of automated pull requests. This was due to a cluster failover that connected to a read-only database. We mitigated the incident by pausing Dependabot queues until traffic was properly routed to healthy clusters. We are working on identifying and rerunning all jobs that failed during this time. We are adding new monitors and alerts to reduce our time to detection and prevent this in the future.
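
The failure mode, writes sent to a cluster that had become read-only after failover, can be guarded against at the job-processing layer. A rough sketch with stand-in objects (not Dependabot's internals):

    # Hypothetical writability guard before taking work off a queue.
    class Connection:
        def __init__(self, read_only):
            self.read_only = read_only

        def is_writable(self):
            # A real check might run `SELECT @@global.read_only` on MySQL.
            return not self.read_only

    def process_job(conn, job, requeue):
        if not conn.is_writable():
            requeue(job)  # pause the queue rather than fail the PR creation
            raise RuntimeError("connected to a read-only replica; pausing")
        print("created pull request for", job)

    process_job(Connection(read_only=False), "bundler update", requeue=print)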

investigating · Feb 2, 05:58 PM

Dependabot is currently experiencing an issue that may cause scheduled update jobs to fail when creating pull requests. Our team has identified the problem and deployed a fix. We’re seeing signs of recovery and expect full resolution within the next few hours.

investigating · Feb 2, 05:41 PM

We are investigating reports of impacted performance for some GitHub services.

minor · resolved · Feb 2, 05:34 PM — Resolved Feb 2, 05:43 PM

Disruption with some GitHub services

4 updates
resolved · Feb 2, 05:43 PM

From Feb 2, 2026 17:13 UTC to Feb 2, 2026 17:36 UTC, we experienced failures on ~0.02% of Git operations. While deploying an internal service, a misconfiguration caused a small subset of traffic to route to a service that was not ready. During the incident we observed the degradation and posted public status updates. To mitigate the issue, traffic was redirected to healthy instances and we resumed normal operation. We are improving our monitoring and deployment processes in this area to avoid future routing issues.
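
Routing traffic only to instances that pass a readiness probe is the standard guard against this class of misconfiguration. A minimal sketch; the backend names and the probe are hypothetical:

    # Hypothetical readiness gate for a routing pool.
    def build_pool(backends, probe):
        """Admit only instances that pass the readiness probe."""
        ready = [b for b in backends if probe(b)]
        if not ready:
            raise RuntimeError("no ready backends; keeping the previous pool")
        return ready

    probe = lambda b: b != "git-proxy-new"  # the not-yet-ready instance fails
    print(build_pool(["git-proxy-1", "git-proxy-2", "git-proxy-new"], probe))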

investigatingFeb 2, 05:42 PM

We’ve observed a low rate (~0.01%) of 5xx errors for HTTP-based fetches and clones. We’re currently routing traffic away from the affected location and are seeing recovery.

investigatingFeb 2, 05:35 PM

Git Operations is experiencing degraded performance. We are continuing to investigate.

investigatingFeb 2, 05:34 PM

We are investigating reports of impacted performance for some GitHub services.

January 2026(18 incidents)

minorresolvedJan 30, 08:59 PM — Resolved Jan 30, 09:22 PM

Degraded Experience - Failing to finalize some CCA Jobs

3 updates
resolvedJan 30, 09:22 PM

Between 2026-01-30 19:06 UTC and 2026-01-30 20:04 UTC, Copilot Coding Agent sessions could get stuck, with a mismatch between the UI-reported session status and the underlying Actions and job execution state. Impacted users could see Actions finish successfully while the session UI continued to show an in-progress state, or sessions remained stuck in a queued state. The issue was caused by a feature flag that resulted in events being published to a new Kafka topic. Publishing failures led to buffer/queue overflows in the shared event publishing client, preventing other critical events from being emitted. We mitigated the incident by disabling the feature flag and redeploying production pods, which resumed normal event delivery. We are working to improve safeguards and detection around event publishing failures to reduce time to mitigation for similar issues in the future.
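
The postmortem describes a shared publishing client whose buffers filled when one new topic failed, blocking unrelated events. A minimal sketch of the usual isolation fix, one bounded queue per topic so backpressure stays local, is below; IsolatedPublisher and its publish callback are illustrative, not GitHub's event client.

```python
# Minimal sketch: one bounded queue per topic so publish failures on a new
# topic cannot back up unrelated critical events.
import queue
import threading

class IsolatedPublisher:
    def __init__(self, topics, publish, maxsize=1000):
        self._queues = {t: queue.Queue(maxsize=maxsize) for t in topics}
        for t in topics:
            threading.Thread(target=self._drain, args=(t, publish), daemon=True).start()

    def emit(self, topic, event) -> None:
        try:
            self._queues[topic].put_nowait(event)
        except queue.Full:
            pass  # drop (and ideally count) events for the failing topic only

    def _drain(self, topic, publish) -> None:
        q = self._queues[topic]
        while True:
            publish(topic, q.get())  # a stalled topic blocks only its own thread
```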

investigatingJan 30, 09:05 PM

Customers may experience misreported Copilot Coding Agent tasks in the GitHub UI. Although the underlying actions are completing as requested, surfaces like Agent Sessions on the GitHub website, or Agent Hub in VS Code, will show that an agent is still working on a task, even if that work has completed. We are working to understand the root cause and mitigate these discrepancies.

investigatingJan 30, 08:59 PM

We are investigating reports of degraded performance for Actions

minorresolvedJan 28, 03:12 PM — Resolved Jan 28, 03:54 PM

Actions Workflows Run Start Delays

3 updates
resolvedJan 28, 03:54 PM

On Jan 28, 2026, between 14:56 UTC and 15:44 UTC, GitHub Actions experienced degraded performance. During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes. The root cause was an atypical load pattern that overwhelmed system capacity and caused resource contention. Recovery began once additional resources came online at 15:25 UTC, with full recovery at 15:44 UTC. We are implementing safeguards to prevent this failure mode and enhancing our monitoring to detect and address similar patterns more quickly in the future.

investigatingJan 28, 03:37 PM

Actions workflow run starts are delayed. We are actively investigating to find a mitigation.

investigatingJan 28, 03:12 PM

We are investigating reports of degraded performance for Actions

minorresolvedJan 26, 07:25 PM — Resolved Jan 26, 11:51 PM

Regression in windows runners for public repositories

8 updates
resolvedJan 26, 11:51 PM

On Jan 26, 2026, from approximately 14:03 UTC to 23:42 UTC, GitHub Actions experienced job failures on some Windows standard hosted runners. This was caused by a configuration difference in a new Windows runner type that left the expected D: drive missing. About 2.5% of all Windows standard runner jobs were impacted. Re-runs of failed workflows had a high chance of succeeding given the limited rollout of the change. The job failures were mitigated by rolling back the affected configuration and removing the provisioned runners that had this configuration. To reduce the chance of recurrence, we are expanding runner telemetry and improving validation of runner configuration changes. We are also evaluating options to accelerate mitigation of any similar future events.

investigatingJan 26, 11:49 PM

At 23:45 UTC we applied a mitigation to take remaining impacted capacity offline and are seeing improvement. We will update again once we've confirmed the issue is resolved.

investigatingJan 26, 11:04 PM

Our investigation into GitHub Actions 4-core Windows runner failures in public repositories is ongoing. If you have a failing GitHub Actions run, please retry it and it is likely to succeed.

investigatingJan 26, 10:02 PM

We're continuing to investigate failures in GitHub Actions 4-core Windows runners in public repositories. If you have a failing GitHub Actions run, please retry it and it is likely to succeed.

investigatingJan 26, 09:20 PM

Rollback has been completed, but we are still seeing failures on about 11% of GitHub Actions runs on 4-core Windows runners in public repositories. If your workflow fails to start, try re-running and it is likely to work a second time.

investigatingJan 26, 08:10 PM

Mitigation for failing GitHub Actions jobs on 4-core Windows runners is still in progress. You should start to see more runs succeeding. If you do see failing runs, please retry and they might succeed.

investigatingJan 26, 07:32 PM

We've applied a mitigation to unblock running Actions. A regression occurred for Windows runners in public repositories which caused Actions workflows to fail. A mitigation is in place and customers should expect to see resolution soon. If you have a failing Actions workflow on a Windows runner, please retry and it is likely to work.

investigatingJan 26, 07:25 PM

We are investigating reports of impacted performance for some GitHub services.

minorresolvedJan 25, 02:43 AM — Resolved Jan 25, 03:08 AM

Disruption with repo creation

4 updates
resolvedJan 25, 03:08 AM

Between January 24, 2026, 19:56 UTC and January 25, 2026, 02:50 UTC, repository creation and cloning were degraded. On average, the error rate for repository creation was 25%, peaking at 55% of requests. This was due to increased latency on the repositories database, which exposed a read-after-write problem during repo creation. We mitigated the incident by stopping an operation that was generating load on the database, restoring throughput. We have identified the underlying repository creation problem and are working to address it, and to improve our observability to reduce our time to detection and mitigation of issues like this one in the future.
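
The read-after-write problem mentioned above is a classic replication-lag hazard: a row is written to the primary, then immediately read from a lagging replica that does not have it yet. A minimal sketch of one standard fix, pinning the follow-up read to the primary, is below; the db_primary and db_replica objects are hypothetical, not GitHub's data layer.

```python
# Minimal sketch of a standard read-after-write fix: pin the read that
# immediately follows a write to the primary.
def create_repository(db_primary, db_replica, attrs):
    repo_id = db_primary.insert("repositories", attrs)
    # Under replication lag, db_replica.fetch_one(...) here can miss the row
    # that was just written, so the follow-up read goes to the primary.
    return db_primary.fetch_one("repositories", id=repo_id)
```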

investigatingJan 25, 03:08 AM

The issue has been resolved. We will continue to monitor to ensure stability.

investigatingJan 25, 02:58 AM

Repo creation failure rate increased above 50%. We have mitigated the problem and are monitoring for recovery.

investigatingJan 25, 02:43 AM

We are investigating reports of impacted performance for some GitHub services.

minorresolvedJan 22, 02:12 PM — Resolved Jan 22, 03:22 PM

Disruption with some GitHub services

5 updates
resolvedJan 22, 03:22 PM

On January 22, 2026, our authentication service experienced an issue between 14:00 UTC and 14:50 UTC, resulting in downstream disruptions for users. From 14:00 UTC to 14:23 UTC, authenticated API requests experienced higher-than-normal error rates, averaging 16.9% and occasionally peaking at 22.2%, with affected requests receiving HTTP 401 responses. From 14:00 UTC to 14:50 UTC, git operations over HTTP were impacted, with error rates averaging 3.8% and peaking at 10.8%. As a result, some users may have been unable to run git commands as expected. This was due to the authentication service reaching the maximum allowed number of database connections. We mitigated the incident by increasing the maximum number of database connections in the authentication service. We are adding additional monitoring around database connection pool usage and improving our traffic projections to reduce our time to detection and mitigation of issues like this one in the future.
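
Connection pool exhaustion tends to fail abruptly rather than gracefully, which is why the corrective action above focuses on monitoring pool usage. A minimal sketch of such a check follows; the thresholds and return values are assumptions, not GitHub's monitoring stack.

```python
# Minimal sketch: alert on connection pool utilization well before the hard
# cap, since exhaustion fails abruptly (401/5xx) rather than gracefully.
def check_pool(in_use: int, max_size: int, warn_at: float = 0.8) -> str:
    utilization = in_use / max_size
    if utilization >= 1.0:
        return "page"  # new connections are being refused right now
    if utilization >= warn_at:
        return "warn"  # headroom is shrinking; raise the cap or shed load
    return "ok"

assert check_pool(850, 1000) == "warn"
```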

investigatingJan 22, 03:22 PM

We have identified an issue in one of our services and mitigated it. Services have recovered; the mitigation is holding while we work on a longer-term solution.

investigatingJan 22, 02:27 PM

Issues is operating normally.

investigatingJan 22, 02:23 PM

Issues is experiencing degraded performance. We are continuing to investigate.

investigatingJan 22, 02:12 PM

We are investigating reports of impacted performance for some GitHub services.

minorresolvedJan 21, 07:31 PM — Resolved Jan 21, 08:53 PM

Policy pages for Copilot are timing out

5 updates
resolvedJan 21, 08:53 PM

On January 21, between 17:50 and 20:53 UTC, around 350 enterprises and organizations experienced slower load times or timeouts when viewing Copilot policy pages. The issue was traced to performance degradation under load, caused by an issue in an upstream database caching capability within our billing infrastructure, which increased the latency of queries retrieving billing and policy information from approximately 300ms to up to 1.5s. To restore service, we disabled the affected caching feature, which immediately returned performance to normal. We then addressed the issue in the caching capability, re-enabled our use of the database cache, and observed continued recovery. Moving forward, we're tightening our procedures for deploying performance optimizations, adding test coverage, and improving cross-service visibility and alerting so we can detect upstream degradations earlier and reduce impact to customers.
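
The mitigation pattern described above, disabling a caching feature to restore the direct query path, works when the cache sits behind a kill switch. A minimal sketch follows; flag_enabled() and the cache/db objects are illustrative, not GitHub's billing code.

```python
# Minimal sketch: a caching layer behind a kill switch, so disabling one flag
# restores the direct query path if the cache itself degrades.
def get_policy(org_id, cache, db, flag_enabled):
    if flag_enabled("billing-policy-cache"):
        hit = cache.get(org_id)
        if hit is not None:
            return hit
        value = db.query_policy(org_id)
        cache.set(org_id, value)
        return value
    # Kill switch off: bypass the cache entirely and query directly.
    return db.query_policy(org_id)
```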

investigatingJan 21, 08:47 PM

We are rolling out a fix to reduce latency and timeouts on policy pages and are continuing to monitor impact.

investigatingJan 21, 08:12 PM

We are continuing to investigate latency and timeout issues affecting Copilot policy pages.

investigatingJan 21, 07:37 PM

We are investigating timeouts for customers visiting the Copilot policy pages for organizations and enterprises.

investigatingJan 21, 07:31 PM

We are investigating reports of impacted performance for some GitHub services.

minorresolvedJan 21, 11:33 AM — Resolved Jan 21, 12:38 PM

Copilot Chat - Grok Code Fast 1 Outage

3 updates
resolvedJan 21, 12:38 PM

On Jan 21st, 2026, between 11:15 UTC and 13:00 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, more than 90% of requests to this model failed due to an issue with an upstream provider. No other models were impacted. The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.

investigatingJan 21, 12:09 PM

We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.Other models are available and working as expected.

investigatingJan 21, 11:33 AM

We are investigating reports of degraded performance for Copilot

minorresolvedJan 20, 07:49 PM — Resolved Jan 20, 08:10 PM

Run start delays in Actions

3 updates
resolvedJan 20, 08:10 PM

On January 20, 2026, between 19:08 UTC and 20:18 UTC, manually dispatched GitHub Actions workflows saw delayed job starts. GitHub products built on Actions such as Dependabot, Pages builds, and Copilot coding agent experienced similar delays. All jobs successfully completed despite the delays. At peak impact, approximately 23% of workflow runs were affected, with an average delay of 11 minutes. This was caused by a load pattern shift in Actions scheduled jobs that saturated a shared backend resource. We mitigated the incident by temporarily throttling traffic and scaling up resources to account for the change in load pattern. To prevent recurrence, we have scaled resources appropriately and implemented optimizations to prevent this load pattern in the future.
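
Throttling traffic to protect a saturated shared backend, as described above, is commonly implemented as a token bucket. A minimal sketch follows; the rates are illustrative, and this is not the Actions scheduler.

```python
# Minimal sketch: a token bucket smooths a sudden load shift so a shared
# backend is not saturated.
import time

class TokenBucket:
    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller re-queues the dispatch instead of overloading the backend
```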

investigatingJan 20, 07:56 PM

We are investigating delays in manually dispatched Actions workflows as well as other GitHub products which run on Actions. We have identified a fix and are working on mitigating the delays.

investigatingJan 20, 07:49 PM

We are investigating reports of degraded performance for Actions

minorresolvedJan 20, 04:02 PM — Resolved Jan 20, 04:23 PM

Incident affecting actions-runner-controller

3 updates
resolvedJan 20, 04:23 PM

On January 20, 2026, between 14:39 UTC and 16:03 UTC, actions-runner-controller users experienced a 1% failure rate for API requests managing GitHub Actions runner scale sets. This caused delays in runner creation, resulting in delayed job starts for workflows targeting those runners. The root cause was a service-to-service circuit breaker that incorrectly tripped for all users when a single user hit rate limits for runner registration. The issue was mitigated by bypassing the circuit breaker, and users saw immediate and full service recovery following the fix. We have updated our circuit breakers to exclude individual customer rate limits from their triggering logic and are continuing work to improve detection and mitigation times.
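
The corrective action above, excluding individual customer rate limits from circuit-breaker triggering logic, amounts to not counting per-tenant 429s as service failures. A minimal sketch of that distinction follows; the threshold and bookkeeping are illustrative, not actions-runner-controller internals.

```python
# Minimal sketch: count only infrastructure errors (5xx) toward tripping, so
# one tenant hitting a 429 rate limit cannot open the circuit for everyone.
class CircuitBreaker:
    def __init__(self, threshold: int = 50):
        self.failures = 0
        self.threshold = threshold

    @property
    def open(self) -> bool:
        return self.failures >= self.threshold

    def record(self, status: int) -> None:
        if status == 429:
            return              # per-customer rate limiting is not a service fault
        if status >= 500:
            self.failures += 1  # only real service errors count toward tripping
        else:
            self.failures = 0   # a success closes the window
```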

investigatingJan 20, 04:03 PM

GitHub Actions customers who use actions-runner-controller are experiencing errors from the APIs that inform auto-scaling. We are investigating the issue and working on mitigating the impact.

investigatingJan 20, 04:02 PM

We are investigating reports of degraded performance for Actions

minorresolvedJan 16, 11:53 PM — Resolved Jan 17, 02:54 AM

Disruption with some GitHub services

8 updates
resolvedJan 17, 02:54 AM

Between 2026-01-16 16:17 and 2026-01-17 02:54 UTC, some Copilot Business users were unable to access and use certain Copilot features and models. This was due to a bug in how we determine whether a user has access to a feature, which inadvertently marked features and models as inaccessible for users whose enterprise(s) had not configured the policy. We mitigated the incident by reverting the problematic deployment. We are improving our internal monitoring and mitigation processes to reduce the risk and duration of similar incidents in the future.
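
The bug class described above, an unconfigured policy being treated as a denial, usually comes down to conflating "not set" with "set to false". A minimal sketch of the distinction follows; the function name and default are hypothetical, not GitHub's policy engine.

```python
# Minimal sketch: distinguish "policy not configured" (None) from "policy set
# to deny" (False); treating the two the same blocks unconfigured tenants.
def feature_enabled(policy: "bool | None", default: bool = True) -> bool:
    if policy is None:
        return default  # unconfigured falls back to the documented default
    return policy       # an explicit enterprise setting always wins

assert feature_enabled(None) is True    # unconfigured: feature stays available
assert feature_enabled(False) is False  # explicitly disabled: denied
```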

investigatingJan 17, 02:54 AM

The fix has been deployed and the issue resolved. We will continue to monitor any incoming reports.

investigatingJan 17, 02:25 AM

The deployment of the fix is still ongoing. We are now targeting 3:00 AM UTC for full resolution.

investigatingJan 17, 02:21 AM

The deployment is still in progress. We are still targeting 2:00 AM UTC for full resolution.

investigatingJan 17, 01:28 AM

Deployment of the fix is in progress. We are still targeting 2:00 AM UTC for full resolution.

investigatingJan 17, 12:08 AM

Some enterprise Copilot CLI users may encounter a "You are not authorized to use this Copilot feature" error. We have identified the root cause and are currently deploying a fix. Expected resolution: within 2 hours.

investigatingJan 16, 11:53 PM

We received multiple reports of 403s when attempting to use the Copilot CLI. We have identified the root cause and are rolling out a fix for affected customers.

investigatingJan 16, 11:53 PM

We are investigating reports of impacted performance for some GitHub services.

majorresolvedJan 15, 04:56 PM — Resolved Jan 15, 06:54 PM

Incident with Issues and Pull Requests

12 updates
resolvedJan 15, 06:54 PM

On January 15, 2026, between 16:40 UTC and 18:20 UTC, we observed increased latency and timeouts across Issues, Pull Requests, Notifications, Actions, Repositories, API, Account Login, and Alive. An average of 1.8% of combined web and API requests failed, peaking briefly at 10% early on. The majority of impact was observed for unauthenticated users, but authenticated users were impacted as well. This was caused by an infrastructure update to some of our data stores. Upgrading this infrastructure to a new major version resulted in unexpected resource contention, leading to distributed impact in the form of slow queries and increased timeouts across services that depend on these datasets. We mitigated this by rolling back to the previous stable version. We are working to improve our validation process for these types of upgrades to catch issues that only occur under high load before full release, improve detection time, and reduce mitigation times in the future.

investigatingJan 15, 06:54 PM

Pull Requests is operating normally.

investigatingJan 15, 06:42 PM

Issues and Pull Requests are experiencing degraded performance. We are continuing to investigate.

investigatingJan 15, 06:36 PM

We are seeing recovery across all services, but will continue to monitor before resolving.

investigatingJan 15, 05:51 PM

API Requests is operating normally.

investigatingJan 15, 05:44 PM

We are seeing some signs of recovery, particularly for authenticated users. Unauthenticated users may continue to see impact across multiple services. Mitigation efforts continue.

investigatingJan 15, 05:35 PM

API Requests is experiencing degraded performance. We are continuing to investigate.

investigatingJan 15, 05:14 PM

Actions is operating normally.

investigatingJan 15, 05:07 PM

A number of services are currently degraded, especially Issues, Pull Requests, and the API. Investigation and mitigation are underway.

investigatingJan 15, 05:06 PM

Actions is experiencing degraded availability. We are continuing to investigate.

investigatingJan 15, 04:57 PM

API Requests is experiencing degraded availability. We are continuing to investigate.

investigatingJan 15, 04:56 PM

We are investigating reports of degraded availability for API Requests, Actions, Issues and Pull Requests

minorresolvedJan 15, 02:24 PM — Resolved Jan 15, 03:26 PM

Actions workflow run and job status updates are experiencing delays

4 updates
resolvedJan 15, 03:26 PM

On January 15th, between 14:18 UTC and 15:26 UTC, customers experienced delays in status updates for workflow runs and checks. Status updates were delayed by up to 20 minutes, with a median delay of 11 minutes. The issue stemmed from an infrastructure upgrade to our database cluster. The new version introduced resource contention under production load, causing slow query times. We mitigated this by rolling back to the previous stable version. We are working to strengthen our upgrade validation process to catch issues that only manifest under high load. We are also adding new monitors to reduce detection time for similar issues in the future.

investigatingJan 15, 03:12 PM

We are continuing to monitor as the system recovers and expect full recovery within the next 20-30 minutes. Impacted users will see that job status appears queued, though the job itself is actually running.

investigatingJan 15, 02:55 PM

We are seeing signs of recovery and are continuing to monitor as we process the backlog of events.

investigatingJan 15, 02:24 PM

We are investigating reports of degraded performance for Actions

minorresolvedJan 14, 08:21 PM — Resolved Jan 14, 09:38 PM

Incident with Webhooks

3 updates
resolvedJan 14, 09:38 PM

On January 14, 2026, between 19:34 UTC and 21:36 UTC, the Webhooks service experienced a degradation that delayed delivery of some webhooks. During this window, a subset of webhook deliveries that encountered proxy tunnel errors on their initial delivery attempt were delayed by more than two minutes. The root cause was a recent code change that added additional retry attempts for this specific error condition, which increased delivery times for affected webhooks. Previously, webhook deliveries encountering this error would not have been delivered. The incident was mitigated by rolling back the change, restoring normal webhook delivery. As a corrective action, we will update our monitoring to measure the webhook delivery latency critical path, ensuring that incidents are accurately scoped to this workflow.
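
A retry policy that adds attempts for a new error class can silently stretch delivery latency, which is what the postmortem describes. A minimal sketch of retries bounded by an overall delivery deadline follows; deliver() is illustrative, ConnectionError stands in for the proxy tunnel errors described above, and the two-minute budget mirrors the delay quoted.

```python
# Minimal sketch: retries bounded by an overall delivery deadline, so adding
# attempts for a flaky error class cannot stretch webhook latency unboundedly.
import time

def deliver_with_deadline(deliver, payload, budget_s: float = 120.0):
    deadline = time.monotonic() + budget_s
    delay = 1.0
    while True:
        try:
            return deliver(payload)
        except ConnectionError:
            if time.monotonic() + delay > deadline:
                raise  # stay inside the latency budget; surface for redelivery
            time.sleep(delay)
            delay = min(delay * 2.0, 30.0)  # exponential backoff, capped
```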

investigatingJan 14, 08:41 PM

Some webhook deliveries are delayed, but we don’t expect meaningful user impact. The delays are currently scoped only to deliveries that, until recently, would have failed more quickly. We will update status if conditions change.

investigatingJan 14, 08:21 PM

We are investigating reports of degraded performance for Webhooks

minorresolvedJan 14, 06:00 PM — Resolved Jan 14, 06:00 PM

[Retroactive] Incident with GitHub Copilot (GPT-5 model)

1 update
resolvedJan 16, 06:56 PM

From January 14, 2026, at 18:15 UTC until January 15, 2026, at 11:30 UTC, GitHub Copilot users were unable to select the GPT-5 model for chat features in VS Code, JetBrains IDEs, and other IDE integrations. Users running GPT-5 in Auto mode experienced errors. Other models were not impacted. We mitigated this incident by deploying a fix that corrected a misconfiguration in available models, rendering the GPT-5 model available again. We are improving our testing processes to reduce the risk of similar incidents in the future, and refining our model availability alerting to improve detection time. We did not status before we completed the fix, and the incident is currently resolved. We are sorry for the delayed post on githubstatus.com.

minorresolvedJan 14, 10:56 AM — Resolved Jan 14, 12:23 PM

Claude Opus 4.5 model experiencing degraded performance

4 updates
resolvedJan 14, 12:23 PM

On January 14th, 2026, between approximately 10:20 and 11:25 UTC, the Copilot service experienced a degradation of the Claude Opus 4.5 model due to an issue with our upstream provider. During this time period, users encountered a 4.5% error rate when using Claude Opus 4.5. No other models were impacted. The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.

investigatingJan 14, 11:45 AM

We are continuing to investigate issues with Claude Opus 4.5 and are working to restore performance across our model providers.

investigatingJan 14, 11:00 AM

We are experiencing issues with our Claude Opus 4.5 providers and are investigating remediation.

investigatingJan 14, 10:56 AM

We are investigating reports of impacted performance for some GitHub services.

minorresolvedJan 14, 09:24 AM — Resolved Jan 14, 10:52 AM

Copilot's GPT-5.1 model has degraded performance

5 updates
resolvedJan 14, 10:52 AM

This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.

investigatingJan 14, 10:32 AM

We are continuing to investigate issues with the GPT-5.1 model. We are also seeing an increase in failures for Copilot Code Reviews.

investigatingJan 14, 09:53 AM

We are continuing to investigate issues with the GPT-5.1 model with our model provider. Use of other models is not impacted.

investigatingJan 14, 09:26 AM

Copilot is experiencing degraded performance when using the GPT-5.1 model. We are investigating the issue.

investigatingJan 14, 09:24 AM

We are investigating reports of degraded performance for Copilot

minorresolvedJan 13, 10:21 PM — Resolved Jan 14, 12:18 AM

Disruption with some GitHub services

4 updates
resolvedJan 14, 12:18 AM

Between 2026-01-13 22:20 and 2026-01-14 00:18 UTC, GitHub Code Search experienced increased latency and request timeouts. This was caused by some network transit links between GitHub and Azure ExpressRoute experiencing a small error rate, which caused application requests to fail and increased application latency and timeouts. Less than 1% of requests failed due to timeouts during the incident. We mitigated the incident by disabling the links in question. Monitoring each unique network path across providers would have allowed us to mitigate this earlier; we are running root cause analysis with network providers to help us reduce time-to-discover and time-to-mitigate.

investigatingJan 13, 11:36 PM

We are continuing to investigate increased latency with the code search service.

investigatingJan 13, 10:53 PM

We are investigating reports of increased latency with code search. We will continue to keep users updated on progress towards mitigation.

investigatingJan 13, 10:21 PM

We are investigating reports of impacted performance for some GitHub services.

majorresolvedJan 13, 09:38 AM — Resolved Jan 13, 10:46 AM

GitHub Copilot failures

9 updates
resolvedJan 13, 10:46 AM

On January 13th, 2026, between 09:25 UTC and 10:11 UTC, GitHub Copilot was unavailable. During this window, error rates averaged 18% and peaked at 100% of service requests, leading to an outage of chat features across Copilot Chat, VS Code, JetBrains IDEs, and other Copilot-dependent products. The incident was triggered by a configuration error during a model update. We mitigated the incident by rolling back the change; however, a second recovery phase lasted until 10:46 UTC due to unexpected latency with the GPT-4.1 model. To prevent recurrence, we are investing in new monitors and more robust testing environments to reduce future misconfigurations and to improve our time to detection and mitigation.

investigatingJan 13, 10:46 AM

Copilot is operating normally.

investigatingJan 13, 10:44 AM

We are seeing recovery in the GPT-4.1 model. We continue to monitor for full recovery.

investigatingJan 13, 10:11 AM

We are seeing continued recovery across Copilot services but continue to see issues with the GPT-4.1 model that we are investigating.

investigatingJan 13, 10:02 AM

We have identified what we believe to be a configuration issue that may explain the issue. We have rolled back this change and are starting to see signs of recovery.

investigatingJan 13, 09:45 AM

We are investigating an issue that is causing failures in all Copilot requests.

investigatingJan 13, 09:44 AM

Copilot is experiencing degraded availability. We are continuing to investigate.

investigatingJan 13, 09:38 AM

We are investigating reports of impacted performance for some GitHub services.