GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations Operational
Webhooks Operational
Visit www.githubstatus.com for more information Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Jun 14, 2025

No incidents reported today.

Jun 13, 2025

No incidents reported.

Jun 12, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 12, 21:07 UTC
Update - All impacted chat models have recovered, and users should no longer experience reduced availability.
Jun 12, 21:07 UTC
Update - We are seeing recovery in success rates for impacted Claude models (Sonnet 4 and Opus 4), and limited recovery in Gemini models (2.5 Pro and 2.0 Flash). We will continue to monitor and provide updates until full recovery.
Jun 12, 20:39 UTC
Update - Copilot is experiencing degraded performance. We are continuing to investigate.
Jun 12, 20:21 UTC
Update - Claude Sonnet 4 and Opus 4 models continue to have degraded availability in Copilot Chat, VS Code, and other Copilot products. Gemini 2.5 Pro and 2.0 Flash are currently unavailable. Our upstream model provider has indicated that they have identified the problem and are applying mitigations.
Jun 12, 20:05 UTC
Update - Gemini (2.5 Pro and 2.0 Flash) and Claude (Sonnet 4 and Opus 4) chat models in Copilot are still experiencing reduced availability. We are actively communicating with our upstream model provider to resolve the issue and restore full service. We will provide another update by 20:15 UTC.
Jun 12, 19:14 UTC
Update - We redirected requests for Claude 3.7 Sonnet to additional partners and users should see recovery when using that model. We still are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products.
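
As a purely hypothetical sketch of the kind of redirection described above (not GitHub's actual routing code), a request for a model can be tried against an ordered list of upstream partners and fall back when one is degraded; the provider list and the send_chat function below are invented for illustration.

    # Hypothetical failover sketch; provider names and send_chat() are illustrative only.
    class UpstreamUnavailable(Exception):
        pass

    def send_chat(provider: str, model: str, prompt: str) -> str:
        # Placeholder for a real API call to the named upstream provider.
        raise UpstreamUnavailable(f"{provider} is degraded")

    def route_chat(model: str, prompt: str, providers: list[str]) -> str:
        # Try each configured partner in order and return the first successful response.
        last_error = None
        for provider in providers:
            try:
                return send_chat(provider, model, prompt)
            except UpstreamUnavailable as err:
                last_error = err  # degraded upstream: fall through to the next partner
        raise RuntimeError(f"all providers failed for {model}") from last_error
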
Jun 12, 18:37 UTC
Update - We are experiencing degraded availability for the Gemini (2.5 Pro, 2.0 Flash) and Claude (Sonnet 3.7, Sonnet 4, Opus 4) models in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Jun 12, 18:23 UTC
Investigating - We are currently investigating this issue.
Jun 12, 18:19 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 12, 20:26 UTC
Update - Customers are currently unable to generate attestations from public repositories due to a broader outage with our partners.
Jun 12, 18:56 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 12, 18:50 UTC
Jun 11, 2025
Resolved - Between 2025-06-10 12:25 UTC and 2025-06-11 01:51 UTC, GitHub Enterprise Cloud (GHEC) customers with approximately 10,000 or more users saw performance degradation and 5xx errors when loading the Enterprise Settings’ People management page. Less than 2% of page requests resulted in an error. The issue was caused by a database change that replaced an index required for the page load. The issue was resolved by reverting the database change.

To prevent similar incidents, we are improving the testing and validation process for replacing database indexes.
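
As a hedged illustration of this class of regression (using SQLite and an invented table, not GitHub's actual schema), dropping an index that a page query relies on changes the query plan from an index search to a full table scan:

    # Illustrative only: invented schema showing how losing an index degrades a lookup.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE members (enterprise_id INTEGER, login TEXT)")
    conn.execute("CREATE INDEX idx_members_enterprise ON members (enterprise_id)")

    query = "SELECT login FROM members WHERE enterprise_id = ?"

    # With the index in place, the planner can use an index search.
    print(conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall())

    # Replacing the index without an equivalent one forces a full table scan,
    # which is slow for enterprises with roughly 10,000 or more members.
    conn.execute("DROP INDEX idx_members_enterprise")
    print(conn.execute("EXPLAIN QUERY PLAN " + query, (1,)).fetchall())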

Jun 11, 01:51 UTC
Update - Fix is currently rolling out to production. We will update here once we verify.
Jun 11, 01:08 UTC
Update - We are working to deploy the fix for this issue. We will update again once it is deployed and as we monitor recovery.
Jun 10, 23:32 UTC
Update - We have the fix ready. Once it is deployed, we will provide another update confirming that it has resolved the issue.
Jun 10, 22:42 UTC
Update - We have identified the solution to the performance issue and are working on the mitigation. Impact continues to be limited to very large enterprise customers when viewing the People page.
Jun 10, 21:04 UTC
Update - The mitigation to add a supporting index to improve the performance of the People page did not resolve the issue, and we are continuing to investigate a solution.
Jun 10, 20:09 UTC
Update - We are working on the mitigation and anticipate recovery within an hour.
Jun 10, 18:57 UTC
Update - Large enterprise customers may encounter issues loading the People page
Jun 10, 18:35 UTC
Investigating - We are currently investigating this issue.
Jun 10, 18:17 UTC
Jun 10, 2025
Resolved - This incident has been resolved.
Jun 10, 19:08 UTC
Update - We've increased capacity to process the Codespaces billing jobs and are seeing recovery; we expect full mitigation within the hour.
Jun 10, 18:21 UTC
Investigating - We are currently investigating this issue.
Jun 10, 17:47 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jun 10, 14:46 UTC
Investigating - We are investigating reports of degraded performance for Pull Requests
Jun 10, 14:28 UTC
Jun 9, 2025

No incidents reported.

Jun 8, 2025

No incidents reported.

Jun 7, 2025

No incidents reported.

Jun 6, 2025
Resolved - On June 6, 2025, an update to mitigate a previous incident led to automated scaling of the database infrastructure used by Copilot Coding Agent. The service's clients were not built to handle the additional partition automatically, so they were unable to retrieve data across partitions, resulting in unexpected 404 errors.

As a result, approximately 17% of coding sessions displayed an incorrect final state - such as sessions appearing in-progress when they were actually completed. Additionally, some Copilot-authored pull requests were missing timeline events indicating task completion. Importantly, this did not affect Copilot Coding Agent’s ability to finish code tasks and submit pull requests.

To prevent similar issues in the future we are taking steps to improve our systems and monitoring.
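
A minimal, hypothetical sketch of the failure mode described above (all data and names are invented): a client that reads only its original partition misses sessions stored on a newly added partition, while a partition-aware lookup fans out across all of them.

    # Hypothetical sketch of the cross-partition read issue; the data is invented.
    partitions = [
        {"session-1": "completed"},   # original partition
        {"session-2": "completed"},   # partition added by automated scaling
    ]

    def lookup_single_partition(session_id):
        # Pre-fix client behavior: only the first partition is consulted.
        return partitions[0].get(session_id)   # None for session-2, surfaced as a 404

    def lookup_all_partitions(session_id):
        # Partition-aware lookup: fan out across every partition.
        for partition in partitions:
            if session_id in partition:
                return partition[session_id]
        return None

    print(lookup_single_partition("session-2"))  # None -> session appears unfinished
    print(lookup_all_partitions("session-2"))    # "completed"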

Jun 6, 23:00 UTC
Resolved - On June 6, 2025, between 00:21 UTC and 12:40 UTC, the Copilot service was degraded and a subset of Copilot Free users were unable to sign up for or use the Copilot Free service on github.com. This was due to a change in licensing code that resulted in some users losing access despite being eligible for Copilot Free.
We mitigated this through a rollback of the offending change at 11:39 UTC, after which users were once again able to use their Copilot Free access.
As a result of this incident, we have improved monitoring of Copilot changes during rollout. We are also working to reduce our time to detect and mitigate issues like this one in the future.

Jun 6, 12:40 UTC
Update - Copilot is operating normally.
Jun 6, 12:40 UTC
Update - We are continuing to monitor recovery and expect a complete resolution very shortly.
Jun 6, 12:18 UTC
Update - The changes have been reverted and we are seeing signs of recovery. We expect impact to be largely mitigated, but are continuing to monitor and will update further as progress continues.
Jun 6, 11:31 UTC
Update - We have identified changes that may be causing the issue and are working to revert the offending changes. We will continue to keep users updated as we work toward mitigation.
Jun 6, 10:39 UTC
Update - We are investigating reports of users unable to utilize Copilot Free after a trial subscription has ended for Copilot Pro. We will continue to keep users updated on progress towards mitigation.
Jun 6, 10:04 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jun 6, 09:58 UTC
Jun 5, 2025
Resolved - On June 5th, 2025, between 17:47 UTC and 19:20 UTC the Actions service was degraded, leading to run start delays and intermittent job failures. During this period, 47.2% of runs had delayed starts, and 21.0% of runs failed. The impact extended beyond Actions itself - 60% of Copilot Coding Agent sessions were cancelled, and all Pages sites using branch-based builds failed to deploy (though Pages serving remained unaffected). The issue was caused by a spike in load between internal Actions services exposing a misconfiguration that caused throttling of requests in the critical path of run starts. We mitigated the incident by correcting the service configuration to prevent throttling and have updated our deployment process to ensure the correct configuration is preserved moving forward.
Jun 5, 19:29 UTC
Update - We have applied a mitigation and we are beginning to see recovery. We are continuing to monitor for recovery.
Jun 5, 19:02 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Jun 5, 18:35 UTC
Update - Users of Actions will see delays in jobs starting or job failures. Users of Pages will see slow or failed deployments.
Jun 5, 18:30 UTC
Update - Pages is experiencing degraded performance. We are continuing to investigate.
Jun 5, 18:01 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 5, 18:00 UTC
Jun 4, 2025
Resolved - On June 4, 2025, between 14:35 UTC and 15:50 UTC, the Actions service experienced degradation, leading to run start delays. During the incident, about 15.4% of all workflow runs were delayed by an average of 16 minutes. An unexpected load pattern revealed a scaling issue in our backend infrastructure. We mitigated the incident by blocking the requests that triggered this pattern.

We are improving our rate limiting mechanisms to better handle unexpected load patterns while maintaining service availability. We are also strengthening our incident response procedures to reduce the time to mitigate for similar issues in the future.
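
As a hedged sketch of the general technique (not Actions' actual implementation), a token-bucket limiter is one way to absorb short bursts from an unexpected load pattern while capping sustained request rates; the parameters below are invented.

    # Illustrative token-bucket rate limiter; the rates are invented, not Actions' config.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec        # steady-state refill rate
            self.capacity = burst           # maximum burst size
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False                    # request would be throttled

    bucket = TokenBucket(rate_per_sec=100, burst=200)
    admitted = sum(bucket.allow() for _ in range(500))
    print(f"{admitted} of 500 burst requests admitted")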

Jun 4, 15:55 UTC
Update - We have applied mitigations and are monitoring for recovery.
Jun 4, 15:39 UTC
Update - We are currently investigating delays with Actions triggering for some users.
Jun 4, 15:19 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jun 4, 15:15 UTC
Jun 3, 2025

No incidents reported.

Jun 2, 2025

No incidents reported.

Jun 1, 2025

No incidents reported.

May 31, 2025
Completed - The scheduled maintenance has been completed.
May 31, 04:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 29, 21:30 UTC
Scheduled - Codespaces will be undergoing global maintenance from May 29, 2025 21:30 UTC to May 31, 2025 4:30 UTC. Maintenance will begin in our Europe, Asia, and Australia regions. Once it is complete, maintenance will start in our US regions. Each batch of regions will take approximately three to four hours to complete.

During this time period, users may experience intermittent connectivity issues when creating new Codespaces or accessing existing ones.

To avoid disruptions, ensure that any uncommitted changes are committed and pushed before the maintenance starts. Codespaces with uncommitted changes will remain accessible as usual after the maintenance is complete.
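
For example, from a terminal inside the codespace, work in progress can be preserved with standard Git commands before the window begins (the commit message is just a placeholder):

    git add -A
    git commit -m "WIP before Codespaces maintenance"
    git push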

May 29, 21:01 UTC