GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Status key: Operational, Degraded Performance, Partial Outage, Major Outage, Maintenance
Dec 3, 2025

No incidents reported today.

Dec 2, 2025

No incidents reported.

Dec 1, 2025

No incidents reported.

Nov 30, 2025

No incidents reported.

Nov 29, 2025

No incidents reported.

Nov 28, 2025
Resolved - On November 28th, 2025, between approximately 05:51 and 08:04 UTC, Copilot experienced an outage affecting the Claude Sonnet 4.5 model. Users attempting to use this model received an HTTP 400 error, resulting in 4.6% of total chat requests during this timeframe failing. Other models were not impacted.

The issue was caused by a misconfiguration deployed to an internal service which made Claude Sonnet 4.5 unavailable. The problem was identified and mitigated by reverting the change. GitHub is working to improve cross-service deploy safeguards and monitoring to prevent similar incidents in the future.
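
As a general defensive pattern for API clients, a request that fails for one specific model can be retried against a fallback model. The sketch below is purely illustrative: the endpoint, client shape, and model identifiers are hypothetical placeholders and not a documented GitHub or Copilot API.

import requests

# Hypothetical endpoint and model names -- illustration only, not a documented GitHub/Copilot API.
CHAT_URL = "https://example.invalid/v1/chat/completions"
MODEL_PREFERENCE = ["claude-sonnet-4.5", "fallback-model"]

def chat_with_fallback(messages, token):
    """Send a chat request, falling back to the next model if the service rejects one (HTTP 400)."""
    last_error = None
    for model in MODEL_PREFERENCE:
        resp = requests.post(
            CHAT_URL,
            headers={"Authorization": f"Bearer {token}"},
            json={"model": model, "messages": messages},
            timeout=30,
        )
        if resp.status_code == 400:
            # Model-specific rejection, as seen in this incident: try the next model in the list.
            last_error = resp.text
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"All models failed; last error: {last_error}")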

Nov 28, 08:23 UTC
Update - We have rolled out a fix and are monitoring for recovery.
Nov 28, 07:52 UTC
Update - We are investigating degraded performance with the Claude Sonnet 4.5 model in Copilot.
Nov 28, 07:04 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Nov 28, 06:59 UTC
Nov 27, 2025

No incidents reported.

Nov 26, 2025

No incidents reported.

Nov 25, 2025

No incidents reported.

Nov 24, 2025
Resolved - On November 24, 2025, between 12:15 and 15:04 UTC, Codespaces users encountered connection issues when attempting to create a codespace after choosing the recently released VS Code Codespaces extension, version 1.18.1. Users were able to work around the issue during this period by downgrading to version 1.18.0 of the extension. At peak, 19% of connection requests failed. This was caused by mismatched version dependencies in the released VS Code Codespaces extension.

The connection issues were mitigated by releasing version 1.18.2 of the VS Code Codespaces extension, which addressed the issue. Users on version 1.18.1 of the extension are advised to upgrade to version 1.18.2 or later.
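
For users who manage extensions from the command line rather than the Command Palette, the VS Code CLI can install a pinned extension version. A minimal sketch, assuming the `code` CLI is on your PATH, that it supports the extension@version install syntax, and that the extension ID is GitHub.codespaces (verify with `code --list-extensions`):

import subprocess

# Assumed extension identifier; confirm with `code --list-extensions` in your environment.
EXTENSION_ID = "GitHub.codespaces"
TARGET_VERSION = "1.18.2"

def pin_codespaces_extension():
    """Install a specific version of the Codespaces extension via the VS Code CLI."""
    # `code --install-extension <id>@<version> --force` installs the given version without prompting.
    subprocess.run(
        ["code", "--install-extension", f"{EXTENSION_ID}@{TARGET_VERSION}", "--force"],
        check=True,
    )

if __name__ == "__main__":
    pin_codespaces_extension()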

We are improving our validation and release process for this extension to ensure that functional issues are caught before release to customers, and to reduce detection and mitigation times for future extension issues.

Nov 24, 15:04 UTC
Update - Version 1.18.2 of the GitHub Codespaces VSCode extension has been released. This version should resolve the connection issues, and we are continuing to monitor success rate for Codespaces creation.
Nov 24, 14:26 UTC
Update - We are testing a new version of the GitHub Codespaces VSCode extension that should resolve the connection issues, and expect that to be available in the next 30 minutes.
Nov 24, 14:00 UTC
Update - Codespaces is experiencing degraded performance. We are continuing to investigate.
Nov 24, 13:26 UTC
Update - We are seeing Codespaces connection issues related to the latest version of the VSCode Codespaces extension (1.18.1). Users can select the 1.18.0 version of the extension to avoid issues (View -> Command Palette, run "Extensions: Install specific version of Extension..."), while we work to remove the affected version.
Nov 24, 13:25 UTC
Investigating - We are currently investigating this issue.
Nov 24, 13:10 UTC
Nov 23, 2025

No incidents reported.

Nov 22, 2025

No incidents reported.

Nov 21, 2025
Resolved - Between November 19th, 16:13 UTC and November 21st, 12:22 UTC, the GitHub Enterprise Importer (GEI) service was in a degraded state, during which customers experienced delays when reclaiming mannequins post-migration.

We have taken steps to prevent similar incidents from occurring in the future.

Nov 21, 00:22 UTC
Update - Processing of these jobs has resumed.
Nov 21, 00:22 UTC
Update - GitHub Enterprise Importer migration systems are currently impacted by a pause to Migration Mannequin Reclaiming.
At 19:43 UTC on 2025-11-19, we paused the queue that processes Mannequin Reclaiming work done at the end of a migration.
This was done after observing load that threatened the health of the overall system. The cause has been identified, and efforts to fix are underway.

In the current state:
- all requests to Reclaim Mannequins will be held in a queue
- those requests will be processed once repair work is complete and the queue is unpaused, at which time the incident will be closed

This does not impact processing of migration runs using GitHub Enterprise Importer, only mannequin reclamation.

Nov 19, 16:13 UTC
Investigating - We are currently investigating this issue.
Nov 19, 16:13 UTC
Nov 20, 2025
Resolved - Between 17:16 and 19:08 UTC on November 20, 2025, some users experienced delayed or failed Git Operations for raw file downloads. On average, the error rate was less than 0.2%. This was due to a sustained increase in unauthenticated repository traffic.

We mitigated the incident by applying regional rate limiting and are taking steps to improve our monitoring and time to mitigation for similar issues in the future.
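
For clients that download raw file content in volume, authenticating requests and backing off on rate-limit responses makes traffic easier to attribute and less likely to be throttled. A minimal sketch against the REST repository contents endpoint using the raw media type; the owner, repository, path, and token below are placeholders:

import time
import requests

def fetch_raw_file(owner, repo, path, token, ref="main"):
    """Fetch a file's raw contents via the GitHub REST API, backing off on rate-limit responses."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    headers = {
        "Authorization": f"Bearer {token}",
        # Ask for the raw file body instead of the JSON metadata wrapper.
        "Accept": "application/vnd.github.raw+json",
    }
    for attempt in range(5):
        resp = requests.get(url, headers=headers, params={"ref": ref}, timeout=30)
        if resp.status_code in (403, 429):
            # Honor Retry-After when present; otherwise back off exponentially.
            delay = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(delay)
            continue
        resp.raise_for_status()
        return resp.content
    raise RuntimeError(f"Rate limited fetching {path} after multiple retries")

# Example (placeholder values):
# data = fetch_raw_file("octocat", "hello-world", "README.md", token="ghp_...")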

Nov 20, 19:24 UTC
Update - Mitigation has been applied and operations have returned to normal.
Nov 20, 19:24 UTC
Update - We continue to see a small number of errors when accessing raw file content. We are deploying a mitigation.
Nov 20, 18:44 UTC
Update - We're investigating elevated error rates for a small amount of customers when accessing raw file content.
Nov 20, 18:05 UTC
Investigating - We are currently investigating this issue.
Nov 20, 18:04 UTC
Nov 19, 2025
Resolved - On November 19, between 17:36 UTC and 18:04 UTC, the GitHub Actions service experienced degraded performance that caused excessive latency in queueing and updating workflow runs and job statuses. Operations related to artifacts, cache, job steps, and logs also saw significantly increased latency. At peak, 67% of workflow jobs queued during that timeframe were impacted, and the median latency for impacted operations increased by up to 35x.

This was caused by a significant change in the load pattern on Actions Cache-related operations, which saturated a shared backend resource. The impact was mitigated by addressing the new load pattern.

To reduce the likelihood of a recurrence, we are improving rate-limiting measures in this area to ensure a more consistent experience for all customers. We are also evaluating changes to reduce the scope of impact.
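
Repository administrators who want to understand their own Actions cache footprint can list caches through the REST API. A minimal sketch, assuming a token with access to the repository's Actions caches; owner and repo are placeholders:

import requests

def list_actions_caches(owner, repo, token):
    """List GitHub Actions caches for a repository, largest first, to audit cache usage."""
    url = f"https://api.github.com/repos/{owner}/{repo}/actions/caches"
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }
    resp = requests.get(url, headers=headers, params={"per_page": 100}, timeout=30)
    resp.raise_for_status()
    caches = resp.json().get("actions_caches", [])
    for cache in sorted(caches, key=lambda c: c["size_in_bytes"], reverse=True):
        print(f'{cache["key"]}: {cache["size_in_bytes"] / 1_048_576:.1f} MiB')

# Example (placeholder values):
# list_actions_caches("octocat", "hello-world", token="ghp_...")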

Nov 19, 18:07 UTC
Update - We have applied mitigation and are seeing recovery
Nov 19, 17:59 UTC
Update - We are investigating delays in actions runs and possible errors in artifact and cache creation.
Nov 19, 17:56 UTC
Investigating - We are investigating reports of degraded performance for Actions
Nov 19, 17:48 UTC