GitHub Status

All Systems Operational

About This Site

For the status of GitHub Enterprise Cloud - EU, please visit: eu.githubstatus.com
For the status of GitHub Enterprise Cloud - Australia, please visit: au.githubstatus.com
For the status of GitHub Enterprise Cloud - US, please visit: us.githubstatus.com

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Oct 26, 2025

No incidents reported today.

Oct 25, 2025

No incidents reported.

Oct 24, 2025
Resolved - On Oct 24 from 2:55 to 3:15 AM UTC, githubstatus.com was unreachable due to a service interruption with our status page provider.
During this time, GitHub systems were not experiencing any outages or disruptions.
We are working with our vendor to understand how to improve the availability of githubstatus.com.

Oct 24, 14:17 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 24, 10:10 UTC
Update - We have found the source of the slowness and mitigated it. We are monitoring recovery before declaring the incident resolved, but no user impact is currently observed.
Oct 24, 10:07 UTC
Investigating - We are currently investigating this issue.
Oct 24, 09:31 UTC
Oct 23, 2025
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Oct 23, 20:25 UTC
Update - Actions is operating normally.
Oct 23, 20:25 UTC
Update - Actions larger runner job start delays and failure rates are recovering. Many jobs should be starting as normal. We're continuing to monitor and confirm full recovery.
Oct 23, 19:33 UTC
Update - We continue to investigate problems with Actions larger runners. We're continuing to see signs of improvement, but customers are still experiencing jobs queueing or failing due to timeout.
Oct 23, 18:17 UTC
Update - We continue to investigate problems with Actions larger runners. We're seeing limited signs of recovery, but customers are still experiencing jobs queueing or failing due to timeout.
Oct 23, 17:36 UTC
Update - We continue to investigate problems with Actions larger runners. Some customers are experiencing jobs queueing or failing due to timeout.
Oct 23, 16:59 UTC
Update - We're investigating problems with larger hosted runners in Actions. Our team is working to identify the cause. We'll post another update by 17:03 UTC.
Oct 23, 16:36 UTC
Investigating - We are investigating reports of degraded performance for Actions
Oct 23, 16:33 UTC
Oct 22, 2025
Resolved - On October 22, 2025, between 14:06 UTC and 15:17 UTC, less than 0.5% of web users experienced intermittent slow page loads on GitHub.com. During this time, API requests showed increased latency, with up to 2% timing out.

The issue was caused by elevated load on one of our databases, driven by a poorly performing query, which impacted performance for a subset of requests.

We identified the source of the load and optimized the query to restore normal performance. We’ve added monitors for early detection of query performance regressions, and we continue to monitor the system closely to ensure ongoing stability.

Oct 22, 15:53 UTC
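
As a rough illustration of the kind of query-performance monitor mentioned above (the query fingerprinting, thresholds, and sample sizes below are assumptions, not GitHub's actual tooling), such a check might track per-query p95 latency and flag regressions:

```python
# Minimal sketch only: flag normalized queries whose p95 latency exceeds a budget.
# The budget, minimum sample size, and the idea of query "fingerprints" are assumed
# for illustration; this is not GitHub's monitoring implementation.
from collections import defaultdict
from statistics import quantiles

LATENCY_P95_BUDGET_MS = 250           # assumed per-query latency budget
MIN_SAMPLES = 20                      # don't judge a query on too few executions

samples: dict[str, list[float]] = defaultdict(list)

def record(query_fingerprint: str, latency_ms: float) -> None:
    """Record one execution of a normalized (fingerprinted) query."""
    samples[query_fingerprint].append(latency_ms)

def slow_queries() -> list[str]:
    """Return fingerprints whose p95 latency exceeds the budget."""
    offenders = []
    for fingerprint, latencies in samples.items():
        if len(latencies) < MIN_SAMPLES:
            continue
        p95 = quantiles(latencies, n=20)[-1]   # 95th percentile cut point
        if p95 > LATENCY_P95_BUDGET_MS:
            offenders.append(fingerprint)
    return offenders
```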
Update - API Requests is operating normally.
Oct 22, 15:53 UTC
Update - We have identified a possible source of the issue, and there is currently no user impact. We are continuing to investigate and will not resolve this incident until we have more confidence in our mitigations and investigation results.
Oct 22, 15:17 UTC
Update - Some users may see slow or timed-out requests, or "not found" errors, when browsing repositories. We have identified slowness in our platform and are investigating.
Oct 22, 14:37 UTC
Investigating - We are investigating reports of degraded performance for API Requests
Oct 22, 14:29 UTC
Oct 21, 2025
Resolved - On October 21, 2025, between 13:30 and 17:30 UTC, GitHub Enterprise Cloud Organization SAML Single Sign-On experienced degraded performance. Customers may have been unable to successfully authenticate into their GitHub Organizations during this period. Organization SAML recorded a maximum of 0.4% of SSO requests failing during this timeframe.

This incident stemmed from a failure in a read replica database partition responsible for storing license usage information for GitHub Enterprise Cloud Organizations. This partition failure resulted in users from affected organizations, whose license usage information was stored on this partition, being unable to access SSO during the aforementioned window. A successful SSO requires an available license for the user who is accessing a GitHub Enterprise Cloud Organization backed by SSO.
The failing partition was subsequently taken out of service, thereby mitigating the issue.

Remedial actions are currently underway to ensure that a read replica failure does not compromise the overall service availability.

Oct 21, 17:39 UTC
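
One way to read the remediation above, purely as a sketch (the class and method names here are invented, not GitHub's code), is to make the license-usage lookup tolerate a failed read replica instead of failing the SSO request:

```python
# Hypothetical sketch: fall back from a failed read replica to the primary so a
# replica outage does not block SSO. Names and interfaces are illustrative only.
class ReplicaUnavailable(Exception):
    """Raised when the read replica partition cannot serve the lookup."""

class LicenseStore:
    def __init__(self, replica, primary):
        self.replica = replica    # read replica partition (may fail)
        self.primary = primary    # authoritative primary database

    def seat_available(self, org_id: int) -> bool:
        """Check license availability without letting a replica failure block sign-in."""
        try:
            usage = self.replica.get_license_usage(org_id)
        except ReplicaUnavailable:
            usage = self.primary.get_license_usage(org_id)   # degrade gracefully
        return usage.consumed < usage.purchased
```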
Update - Mitigation continues. The impact is limited to Enterprise Cloud customers who have configured SAML at the organization level.
Oct 21, 17:18 UTC
Update - We are continuing to work on mitigating this issue.
Oct 21, 17:11 UTC
Update - We’ve identified the issue affecting some users with SAML/OIDC authentication and are actively working on mitigation. Some users may not be able to authenticate during this time.
Oct 21, 16:33 UTC
Update - We're seeing issues with SAML/OIDC authentication for a small number of GitHub.com customers. We are investigating.
Oct 21, 16:03 UTC
Investigating - We are currently investigating this issue.
Oct 21, 16:00 UTC
Resolved - On October 21, 2025, between 07:55 UTC and 12:20 UTC, GitHub Actions experienced degraded performance. During this time, 2.11% of workflow runs failed to start within 5 minutes, with an average delay of 8.2 minutes. The root cause was increased latency on a node in one of our Redis clusters, triggered by resource contention after a patching event became stuck.

Recovery began once the patching process was unstuck and normal connectivity to the Redis cluster was restored at 11:45 UTC, but it took until 12:20 UTC to clear the backlog of queued work. We are implementing safeguards to prevent this failure mode and enhancing our monitoring to detect and address problems like this more quickly in the future.

Oct 21, 12:28 UTC
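
As an illustration of the kind of safeguard and monitoring mentioned above (host names, the latency budget, and the alerting shown here are assumptions, not details from the incident), a simple latency probe against each Redis node can surface a slow member of the cluster early:

```python
# Illustrative probe only: measure PING round-trip time per Redis node and flag
# nodes that are slow or unreachable. Requires the `redis` package; the hosts and
# the 50 ms budget are hypothetical.
import time
import redis

NODES = ["redis-node-1.internal", "redis-node-2.internal"]   # hypothetical hosts
LATENCY_BUDGET_S = 0.05                                      # assumed 50 ms budget

def probe(host: str) -> float | None:
    """Return PING round-trip time in seconds, or None if the node is unreachable."""
    client = redis.Redis(host=host, port=6379,
                         socket_connect_timeout=1, socket_timeout=1)
    start = time.perf_counter()
    try:
        client.ping()
    except redis.RedisError:
        return None
    return time.perf_counter() - start

for node in NODES:
    rtt = probe(node)
    if rtt is None or rtt > LATENCY_BUDGET_S:
        print(f"ALERT: {node} is slow or unreachable (rtt={rtt})")
```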
Update - We were able to apply a mitigation and we are now seeing recovery.
Oct 21, 11:59 UTC
Update - We are seeing about 10% of Actions runs taking longer than 5 minutes to start. We're still investigating and will provide an update by 12:00 UTC.
Oct 21, 11:37 UTC
Update - We are still seeing delays in starting some Actions runs and are currently investigating. We will provide updates as we have more information.
Oct 21, 09:59 UTC
Update - We are seeing delays in starting some Actions runs and are currently investigating.
Oct 21, 09:25 UTC
Investigating - We are investigating reports of degraded performance for Actions
Oct 21, 09:12 UTC
Oct 20, 2025
Resolved - From October 20th at 14:10 UTC until 16:40 UTC, the Copilot service experienced degradation due to an infrastructure issue that impacted the Grok Code Fast 1 model, leading to a spike in errors affecting 30% of users. No other models were impacted. The incident was caused by an outage at an upstream provider.
Oct 20, 16:40 UTC
Update - The issues with our upstream model provider continue to improve, and Grok Code Fast 1 is once again stable in Copilot Chat, VS Code and other Copilot products.
Oct 20, 16:39 UTC
Update - We are continuing to work with our provider on resolving the incident with Grok Code Fast 1, which is impacting 6% of users. We’ve been informed they are implementing fixes, but users can expect some requests to intermittently fail until all issues are resolved.

Oct 20, 16:07 UTC
Update - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Oct 20, 14:47 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Oct 20, 14:46 UTC
Resolved - On October 20, 2025, between 08:05 UTC and 10:50 UTC, the Codespaces service was degraded, with users experiencing failures creating new codespaces and resuming existing ones. On average, the error rate for codespace creation was 39.5%, peaking at 71% of requests to the service during the incident window. Resume operations averaged a 23.4% error rate, with a peak of 46%. This was due to a cascading failure triggered by an outage in a 3rd-party dependency required to build devcontainer images.

The impact was mitigated when the 3rd-party dependency recovered.

We are investigating opportunities to remove this dependency from the critical path of our container build process, and we are working to improve our monitoring and alerting systems to reduce our time to detection for issues like this in the future.

Oct 20, 11:01 UTC
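
A sketch of the "take the dependency off the critical path" idea described above (all names here are hypothetical; this is not Codespaces' build pipeline) is to prefer a cached or mirrored base image and treat the third-party source as best effort:

```python
# Hypothetical sketch: try the third-party source briefly, but fall back to a
# last-known-good cached image so an upstream outage does not fail the build.
import logging

def resolve_base_image(image_ref: str, cache, upstream):
    """Resolve a devcontainer base image without hard-failing on the upstream source."""
    try:
        return upstream.pull(image_ref, timeout_s=5)      # best-effort refresh
    except Exception as exc:                              # outage, timeout, etc.
        logging.warning("upstream pull of %s failed (%s); using cached copy",
                        image_ref, exc)
        cached = cache.get(image_ref)
        if cached is not None:
            return cached                                 # serve last known-good image
        raise                                             # nothing cached: surface the error
```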
Update - We are now seeing sustained recovery. As we continue to make our final checks, we hope to resolve this incident in the next 10 minutes.
Oct 20, 10:56 UTC
Update - We are seeing early signs of recovery for Codespaces. The team will continue to monitor and keep this incident active as a line of communication until we are confident of full recovery.
Oct 20, 10:15 UTC
Update - We are continuing to monitor Codespaces error rates and will report further as we have more information.
Oct 20, 09:34 UTC
Update - We are seeing increased error rates with Codespaces generally. This is due to a third party provider experiencing problems. This impacts both creation of new Codespaces and resumption of existing ones.

We continue to monitor and will report with more details as we have them.

Oct 20, 09:01 UTC
Investigating - We are investigating reports of degraded availability for Codespaces
Oct 20, 08:56 UTC
Oct 19, 2025

No incidents reported.

Oct 18, 2025

No incidents reported.

Oct 17, 2025
Resolved - On October 17th, 2025, between 12:51 UTC and 14:01 UTC, mobile push notifications failed to be delivered for a total duration of 70 minutes. This affected github.com and GitHub Enterprise Cloud in all regions. The disruption was related to an erroneous configuration change to cloud resources used for mobile push notification delivery.

We are reviewing our procedures and management of these cloud resources to prevent such an incident in the future.

Oct 17, 14:12 UTC
Update - We're investigating an issue with mobile push notifications. All notification types are affected, but notifications remain accessible in the app's inbox. For 2FA authentication, please open the GitHub mobile app directly to complete login.
Oct 17, 14:01 UTC
Investigating - We are currently investigating this issue.
Oct 17, 13:11 UTC
Oct 16, 2025

No incidents reported.

Oct 15, 2025

No incidents reported.

Oct 14, 2025
Resolved - On October 14th, 2025, between 18:26 UTC and 18:57 UTC, a subset of unauthenticated requests to the commit endpoint for certain repositories received 503 errors. During the event, the average error rate was 3%, peaking at 3.5% of total requests.

This event was triggered by a recent configuration change and some traffic pattern shifts on the service. We were alerted to the issue immediately and made changes to the configuration in order to mitigate the problem. We are working on automatic mitigation solutions and better traffic handling in order to prevent issues like this in the future.

Oct 14, 18:57 UTC
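
The "automatic mitigation" direction mentioned above could look roughly like the sketch below (the window size, threshold, and rollback hook are assumptions, not GitHub's traffic-handling code): watch a sliding window of requests and revert the most recent configuration change when the error rate crosses a limit.

```python
# Illustrative sketch: automatically trigger a configuration rollback when the
# error rate over the last WINDOW requests exceeds a threshold.
from collections import deque

WINDOW = 1000              # number of recent requests considered (assumed)
ERROR_RATE_LIMIT = 0.03    # roughly the 3% average error rate seen in this incident

recent = deque(maxlen=WINDOW)    # True = request returned an error

def record_request(errored: bool, rollback_config) -> None:
    """Track request outcomes and invoke the (hypothetical) rollback hook if needed."""
    recent.append(errored)
    if len(recent) == WINDOW and sum(recent) / WINDOW >= ERROR_RATE_LIMIT:
        rollback_config()        # revert the most recent configuration change
        recent.clear()           # start a fresh window after mitigation
```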
Investigating - We are currently investigating this issue.
Oct 14, 18:26 UTC
Resolved - On Oct 14th, 2025, between 13:34 UTC and 16:00 UTC, the Copilot service was degraded for the GPT-5 mini model. On average, 18% of requests to GPT-5 mini failed due to an issue with our upstream provider.

We notified the upstream provider of the problem as soon as it was detected and mitigated the issue by failing over to other providers. The upstream provider has since resolved the issue.

We are working to improve our failover logic to mitigate similar upstream failures more quickly in the future.

Oct 14, 16:00 UTC
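
The failover described above might, in outline, look like the sketch below (the provider objects, their complete() method, and the error type are assumed for illustration; this is not Copilot's actual routing logic):

```python
# Minimal sketch: try upstream model providers in priority order and return the
# first successful completion. All interfaces here are hypothetical.
class ProviderError(Exception):
    """Raised by a provider when a completion request fails."""

def complete_with_failover(prompt: str, providers: list) -> str:
    """Return the first successful completion, falling over between providers."""
    last_error = None
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderError as exc:
            last_error = exc          # remember the failure and try the next provider
    raise RuntimeError("all upstream providers failed") from last_error
```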
Update - GPT-5-mini is once again available in Copilot Chat and across IDE integrations.

We will continue monitoring to ensure stability, but mitigation is complete.

Oct 14, 16:00 UTC
Update - We are continuing to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue.
Other models continue to be available and working as expected.

Oct 14, 15:42 UTC
Update - We continue to see degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We continue to work with the model provider to resolve the issue.
Other models continue to be available and working as expected.

Oct 14, 14:50 UTC
Update - We are experiencing degraded availability for the GPT-5-mini model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.
Other models are available and working as expected.

Oct 14, 14:07 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Oct 14, 14:05 UTC
Oct 13, 2025

No incidents reported.

Oct 12, 2025

No incidents reported.