GitHub Status

All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region (a scripted check is sketched after this list):
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com
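
These pages are hosted on Atlassian Statuspage, which exposes a public JSON API. Below is a minimal sketch of a scripted check across the regional domains, assuming each one serves the standard /api/v2/status.json endpoint (adjust if a region differs):

```python
# Minimal sketch: poll each regional status page's public Statuspage JSON
# endpoint. Assumes the standard /api/v2/status.json path is exposed on
# every domain listed above.
import json
import urllib.request

REGIONS = {
    "Australia": "au.githubstatus.com",
    "EU": "eu.githubstatus.com",
    "Japan": "jp.githubstatus.com",
    "US": "us.githubstatus.com",
    "Global": "www.githubstatus.com",
}

def fetch_status(domain: str) -> str:
    """Return the overall status description (e.g. 'All Systems Operational')."""
    url = f"https://{domain}/api/v2/status.json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return payload["status"]["description"]

if __name__ == "__main__":
    for region, domain in REGIONS.items():
        try:
            print(f"{region:>10}: {fetch_status(domain)}")
        except Exception as exc:  # network errors, unexpected payloads, etc.
            print(f"{region:>10}: lookup failed ({exc})")
```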

Git Operations Operational
Webhooks Operational
API Requests Operational
Issues Operational
Pull Requests Operational
Actions Operational
Packages Operational
Pages Operational
Codespaces Operational
Copilot Operational
Legend: Operational · Degraded Performance · Partial Outage · Major Outage · Maintenance
Jan 24, 2026

No incidents reported today.

Jan 23, 2026

No incidents reported.

Jan 22, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jan 22, 15:22 UTC
Update - We have identified an issue in one of our services and have mitigated it. Services have recovered, and we are working on a longer-term solution.
Jan 22, 15:22 UTC
Update - Issues is operating normally.
Jan 22, 14:27 UTC
Update - Issues is experiencing degraded performance. We are continuing to investigate.
Jan 22, 14:23 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 22, 14:12 UTC
Jan 21, 2026
Resolved - On January 21, between 17:50 and 20:53 UTC, around 350 enterprises and organizations experienced slower load times or timeouts when viewing Copilot policy pages. The issue was traced to performance degradation under load, caused by an issue in an upstream database caching capability within our billing infrastructure, which increased the latency of queries retrieving billing and policy information from approximately 300 ms to as much as 1.5 s.

To restore service, we disabled the affected caching feature, which immediately returned performance to normal. We then fixed the issue in the caching capability, re-enabled the database cache, and observed continued recovery.

Moving forward, we’re tightening our procedures for deploying performance optimizations, adding test coverage, and improving cross-service visibility and alerting so we can detect upstream degradations earlier and reduce impact to customers.
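
The mitigation above amounts to a kill switch in front of the cache: when the caching layer itself degrades, reads fall back to the primary store. A minimal sketch of that pattern; the names (CACHE_ENABLED, fetch_policy_from_db) are illustrative, not GitHub's billing code:

```python
# Illustrative cache with an operational kill switch, as described in the
# mitigation above. All names here are hypothetical, not GitHub's code.
import time

CACHE_ENABLED = True          # flipped off during the incident to restore latency
CACHE_TTL_SECONDS = 60

_policy_cache: dict[str, tuple[float, dict]] = {}

def fetch_policy_from_db(org_id: str) -> dict:
    """Stand-in for the ~300 ms primary-store lookup."""
    return {"org_id": org_id, "copilot_policy": "enabled"}

def get_policy(org_id: str) -> dict:
    # With the kill switch off, every read goes straight to the primary store,
    # trading cache hits for predictable latency.
    if not CACHE_ENABLED:
        return fetch_policy_from_db(org_id)

    hit = _policy_cache.get(org_id)
    if hit and time.monotonic() - hit[0] < CACHE_TTL_SECONDS:
        return hit[1]

    policy = fetch_policy_from_db(org_id)
    _policy_cache[org_id] = (time.monotonic(), policy)
    return policy
```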

Jan 21, 20:53 UTC
Update - We are rolling out a fix to reduce latency and timeouts on policy pages and are continuing to monitor impact.
Jan 21, 20:47 UTC
Update - We are continuing to investigate latency and timeout issues affecting Copilot policy pages.
Jan 21, 20:12 UTC
Update - We are investigating timeouts for customers visiting the Copilot policy pages for organizations and enterprises.
Jan 21, 19:37 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 21, 19:31 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jan 21, 12:38 UTC
Update - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Other models are available and working as expected.

Jan 21, 12:09 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jan 21, 11:33 UTC
Jan 20, 2026
Resolved - On January 20, 2026, between 19:08 UTC and 20:18 UTC, manually dispatched GitHub Actions workflows saw delayed job starts. GitHub products built on Actions such as Dependabot, Pages builds, and Copilot coding agent experienced similar delays. All jobs successfully completed despite the delays. At peak impact, approximately 23% of workflow runs were affected, with an average delay of 11 minutes.

This was caused by a load pattern shift in Actions scheduled jobs that saturated a shared backend resource. We mitigated the incident by temporarily throttling traffic and scaling up resources to account for the change in load pattern. To prevent recurrence, we have scaled resources appropriately and implemented optimizations to prevent this load pattern in the future.
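
To illustrate the throttling half of that mitigation, a token-bucket limiter is a common way to smooth a sudden load shift on a shared backend. This is a generic sketch, not the Actions scheduler:

```python
# Generic token-bucket throttle, illustrating the "temporarily throttle
# traffic" mitigation. Not GitHub Actions' actual scheduling code.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec      # sustained requests per second allowed
        self.capacity = burst         # short bursts above the sustained rate
        self.tokens = burst
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=50, burst=100)
# A dispatcher would check bucket.allow() before sending work to the shared
# backend and requeue (delay) the job otherwise.
```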

Jan 20, 20:10 UTC
Update - We are investigating delays in manually dispatched Actions workflows as well as other GitHub products which run on Actions. We have identified a fix and are working on mitigating the delays.
Jan 20, 19:56 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jan 20, 19:49 UTC
Resolved - On January 20, 2026, between 14:39 UTC and 16:03 UTC, actions-runner-controller users experienced a 1% failure rate for API requests managing GitHub Actions runner scale sets. This caused delays in runner creation, resulting in delayed job starts for workflows targeting those runners. The root cause was a service-to-service circuit breaker that incorrectly tripped for all users when a single user hit rate limits for runner registration. The issue was mitigated by bypassing the circuit breaker, and users saw immediate and full service recovery following the fix.

We have updated our circuit breakers to exclude individual customer rate limits from their triggering logic and are continuing work to improve detection and mitigation times.
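
The corrective action above, excluding individual customer rate limits from the breaker's triggering logic, roughly means not counting rate-limit responses as failures. A hedged sketch (class name and thresholds are illustrative, not the actual service code):

```python
# Sketch of a circuit breaker that does not count per-customer rate limiting
# (HTTP 429) toward its trip threshold. Names and thresholds are illustrative.
class CircuitBreaker:
    def __init__(self, failure_threshold: int = 20):
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.open = False

    def record_response(self, status_code: int) -> None:
        if status_code == 429:
            # One customer hitting their own rate limit says nothing about the
            # downstream service's health, so it must not trip the breaker.
            return
        if status_code >= 500:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.open = True
        else:
            self.failures = 0  # a healthy response resets the count

    def allow_request(self) -> bool:
        return not self.open
```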

Jan 20, 16:23 UTC
Update - GitHub Actions customers that use actions-runner-controller are experiencing errors from the APIs that inform auto-scaling. We are investigating the issue and working on mitigating the impact.
Jan 20, 16:03 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jan 20, 16:02 UTC
Jan 19, 2026

No incidents reported.

Jan 18, 2026

No incidents reported.

Jan 17, 2026
Resolved - Between 2026-01-16 16:17 and 2026-01-17 02:54 UTC, some Copilot Business users were unable to access and use certain Copilot features and models. This was due to a bug in how we determine whether a user has access to a feature, which inadvertently marked features and models as inaccessible for users whose enterprise(s) had not configured the policy.

We mitigated the incident by reverting the problematic deployment. We are improving our internal monitoring and mitigation processes to reduce the risk and duration of similar incidents in the future.
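
The bug described above is a common failure mode for access checks: an enterprise that has never configured a policy should fall back to the feature's default rather than being treated as denied. A minimal sketch of the intended behavior, with hypothetical names:

```python
# Illustrative access check: an enterprise that has never configured a policy
# should fall back to the feature's default, not be treated as "denied".
# Names (Policy, FEATURE_DEFAULTS, get_enterprise_policy) are hypothetical.
from enum import Enum
from typing import Optional

class Policy(Enum):
    ENABLED = "enabled"
    DISABLED = "disabled"

FEATURE_DEFAULTS = {"copilot_cli": Policy.ENABLED}

def get_enterprise_policy(enterprise_id: str, feature: str) -> Optional[Policy]:
    """Stand-in for the policy lookup; returns None when unconfigured."""
    return None

def has_access(enterprise_id: str, feature: str) -> bool:
    configured = get_enterprise_policy(enterprise_id, feature)
    if configured is None:
        # The regression effectively treated this branch as DISABLED; the
        # intended behavior is to fall back to the feature's default.
        return FEATURE_DEFAULTS.get(feature, Policy.DISABLED) is Policy.ENABLED
    return configured is Policy.ENABLED
```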

Jan 17, 02:54 UTC
Update - The fix has been deployed and the issue resolved. We will continue to monitor any incoming reports.
Jan 17, 02:54 UTC
Update - The deployment of the fix is still ongoing. We are now targeting 3:00 AM UTC for full resolution.
Jan 17, 02:25 UTC
Update - The deployment is still in progress. We are still targeting 2:00 AM UTC for full resolution.
Jan 17, 02:21 UTC
Update - Deployment of the fix is in progress. We are still targeting 2:00 AM UTC for full resolution.
Jan 17, 01:28 UTC
Update - Some enterprise Copilot CLI users may encounter a "You are not authorized to use this Copilot feature" error. We have identified the root cause and are currently deploying a fix. Expected resolution: within 2 hours.
Jan 17, 00:08 UTC
Update - We received multiple reports of 403s when attempting to use the Copilot CLI. We have identified the root cause and are rolling out a fix for affected customers.
Jan 16, 23:53 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 16, 23:53 UTC
Jan 16, 2026
Jan 15, 2026
Resolved - On January 15, 2026, between 16:40 UTC and 18:20 UTC, we observed increased latency and timeouts across Issues, Pull Requests, Notifications, Actions, Repositories, API, Account Login and Alive. On average, 1.8% of combined web and API requests failed, briefly peaking at 10% early in the incident. The majority of impact was observed for unauthenticated users, but authenticated users were impacted as well.

This was caused by an infrastructure update to some of our data stores. Upgrading this infrastructure to a new major version resulted in unexpected resource contention, leading to distributed impact in the form of slow queries and increased timeouts across services that depend on these datasets. We mitigated this by rolling back to the previous stable version.

We are working to improve our validation process for these types of upgrades to catch issues that only occur under high load before full release, improve detection time, and reduce mitigation times in the future.

Jan 15, 18:54 UTC
Update - Pull Requests is operating normally.
Jan 15, 18:54 UTC
Update - Issues and Pull Requests are experiencing degraded performance. We are continuing to investigate.
Jan 15, 18:42 UTC
Update - We are seeing recovery across all services, but will continue to monitor before resolving.
Jan 15, 18:36 UTC
Update - API Requests is operating normally.
Jan 15, 17:51 UTC
Update - We are seeing some signs of recovery, particularly for authenticated users. Unauthenticated users may continue to see impact across multiple services. Mitigation efforts continue.
Jan 15, 17:44 UTC
Update - API Requests is experiencing degraded performance. We are continuing to investigate.
Jan 15, 17:35 UTC
Update - Actions is operating normally.
Jan 15, 17:14 UTC
Update - A number of services are currently degraded, especially Issues, Pull Requests, and the API. Investigation and mitigation are underway.
Jan 15, 17:07 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Jan 15, 17:06 UTC
Update - API Requests is experiencing degraded availability. We are continuing to investigate.
Jan 15, 16:57 UTC
Investigating - We are investigating reports of degraded availability for API Requests, Actions, Issues and Pull Requests
Jan 15, 16:56 UTC
Resolved - On January 15th, between 14:18 UTC and 15:26 UTC, customers experienced delays in status updates for workflow runs and checks. Status updates were delayed by up to 20 minutes, with a median delay of 11 minutes.

The issue stemmed from an infrastructure upgrade to our database cluster. The new version introduced resource contention under production load, causing slow query times. We mitigated this by rolling back to the previous stable version. We are working to strengthen our upgrade validation process to catch issues that only manifest under high load. We are also adding new monitors to reduce detection time for similar issues in the future.

Jan 15, 15:26 UTC
Update - We are continuing to monitor as the system recovers and expect full recovery within the next 20-30 minutes. Impacted users will see that job status appears queued, though the job itself is actually running.
Jan 15, 15:12 UTC
Update - We are seeing signs of recovery and are continuing to monitor as we process the backlog of events.
Jan 15, 14:55 UTC
Investigating - We are investigating reports of degraded performance for Actions
Jan 15, 14:24 UTC
Jan 14, 2026
Resolved - On January 14, 2026, between 19:34 UTC and 21:36 UTC, the Webhooks service experienced a degradation that delayed delivery of some webhooks. During this window, a subset of webhook deliveries that encountered proxy tunnel errors on their initial delivery attempt were delayed by more than two minutes. The root cause was a recent code change that added additional retry attempts for this specific error condition, which increased delivery times for affected webhooks. Previously, webhook deliveries encountering this error would not have been delivered.

The incident was mitigated by rolling back the change, restoring normal webhook delivery.

As a corrective action, we will update our monitoring to measure the webhook delivery latency critical path, ensuring that incidents are accurately scoped to this workflow.
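
The trade-off described here, retrying transient proxy errors without letting delivery latency grow unbounded, is commonly handled with a per-delivery time budget. A generic sketch; the error type and 30-second budget are assumptions, not the Webhooks service's actual code:

```python
# Generic retry-with-deadline sketch for transient delivery errors. The
# ProxyTunnelError type and 30-second budget are assumptions for illustration.
import time

class ProxyTunnelError(Exception):
    """Stand-in for the transient proxy error mentioned in the incident."""

def deliver(payload: bytes) -> None:
    """Stand-in for a single webhook delivery attempt."""
    ...

def deliver_with_budget(payload: bytes, budget_seconds: float = 30.0) -> bool:
    """Retry transient errors, but never let one delivery exceed its budget."""
    deadline = time.monotonic() + budget_seconds
    delay = 0.5
    while True:
        try:
            deliver(payload)
            return True
        except ProxyTunnelError:
            if time.monotonic() + delay > deadline:
                return False          # hand off to the normal failure path
            time.sleep(delay)
            delay = min(delay * 2, 5.0)  # capped exponential backoff
```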

Jan 14, 21:38 UTC
Update - Some webhook deliveries are delayed, but we don’t expect meaningful user impact. The delays are currently scoped only to deliveries that, until recently, would have failed more quickly. We will update status if conditions change.
Jan 14, 20:41 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Jan 14, 20:21 UTC
Resolved - From January 14, 2026, at 18:15 UTC until January 15, 2026, at 11:30 UTC, GitHub Copilot users were unable to select the GPT-5 model for chat features in VS Code, JetBrains IDEs, and other IDE integrations. Users running GPT-5 in Auto mode experienced errors. Other models were not impacted.

We mitigated this incident by deploying a fix that corrected a misconfiguration in available models, making the GPT-5 model available again.

We are improving our testing processes to reduce the risk of similar incidents in the future, and refining our model availability alerting to improve detection time.

We did not post a public status update before the fix was completed, and the incident is now resolved. We are sorry for the delayed post on githubstatus.com.
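
A misconfiguration of the available-model list, as described above, is the kind of inconsistency a small pre-deploy validation can catch. A hypothetical sketch; the config shapes and model identifiers are assumptions for illustration only:

```python
# Hypothetical pre-deploy check: every model offered to clients must exist in
# the set of models the configuration marks as available.
def validate_model_config(offered_models: list[str], available_models: set[str]) -> list[str]:
    """Return the models that clients would list but could not actually use."""
    return [m for m in offered_models if m not in available_models]

offered = ["gpt-5", "gpt-4.1", "claude-opus-4.5"]
available = {"gpt-4.1", "claude-opus-4.5"}   # 'gpt-5' accidentally dropped

missing = validate_model_config(offered, available)
if missing:
    raise SystemExit(f"refusing to deploy: unavailable models offered: {missing}")
```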

Jan 14, 18:00 UTC
Resolved - On January 14th, 2026, between approximately 10:20 and 11:25 UTC, the Copilot service experienced a degradation of the Claude Opus 4.5 model due to an issue with our upstream provider. During this time period, users encountered a 4.5% error rate when using Claude Opus 4.5. No other models were impacted.
The issue was resolved by a mitigation put in place by our provider. GitHub is working with our provider to further improve the resiliency of the service to prevent similar incidents in the future.

Jan 14, 12:23 UTC
Update - We are continuing to investigate issues with Claude Opus 4.5 and are working to restore performance across our model providers.
Jan 14, 11:45 UTC
Update - We are experiencing issues with our Claude Opus 4.5 providers and are investigating remediation.
Jan 14, 11:00 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 14, 10:56 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jan 14, 10:52 UTC
Update - We are continuing to investigate issues with the GPT-5.1 model. We are also seeing an increase in failures for Copilot Code Reviews.
Jan 14, 10:32 UTC
Update - We are continuing to investigate issues with the GPT-5.1 model with our model provider. Uses of other models are not impacted.
Jan 14, 09:53 UTC
Update - Copilot is experiencing degraded performance when using the GPT-5.1 model. We are investigating the issue.
Jan 14, 09:26 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Jan 14, 09:24 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Jan 14, 00:18 UTC
Update - We are continuing to investigate increased latency with the code search service.
Jan 13, 23:36 UTC
Update - We are investigating reports of increased latency with code search. We will continue to keep users updated on progress towards mitigation.
Jan 13, 22:53 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 13, 22:21 UTC
Jan 13, 2026
Resolved - On January 13th, 2026, between 09:25 UTC and 10:11 UTC, GitHub Copilot experienced unavailability. During this window, error rates averaged 18% and peaked at 100% of service requests, leading to an outage of chat features across Copilot Chat, VS Code, JetBrains IDEs, and other Copilot-dependent products.

This incident was triggered by a configuration error during a model update. We mitigated the incident by rolling back this change. However, a second recovery phase lasted until 10:46 UTC, due to unexpected latency with the GPT-4.1 model. To prevent recurrence, we are investing in new monitors and more robust testing environments to reduce further misconfigurations, and to improve our time to detection and mitigation of future issues.

Jan 13, 10:46 UTC
Update - Copilot is operating normally.
Jan 13, 10:46 UTC
Update - We are seeing recovery in the GPT-4.1 model. We continue to monitor for full recovery.
Jan 13, 10:44 UTC
Update - We are seeing continued recovery across Copilot services but continue to see issues with the GPT-4.1 model that we are investigating.
Jan 13, 10:11 UTC
Update - We have identified what we believe to be a configuration issue that may explain the issue. We have rolled back this change and are starting to see signs of recovery.
Jan 13, 10:02 UTC
Update - We are investigating an issue that is causing failures in all Copilot requests.
Jan 13, 09:45 UTC
Update - Copilot is experiencing degraded availability. We are continuing to investigate.
Jan 13, 09:44 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 13, 09:38 UTC
Jan 12, 2026
Resolved - From January 9 at 13:11 UTC to January 12 at 10:17 UTC, new Linux Custom Images generated for Larger Hosted Runners were broken and unable to run jobs. Customers who did not generate new Custom Images during this period were not impacted. This issue was caused by a change intended to improve the reliability of the image creation process. Due to a bug, the change triggered an unrelated protection mechanism that determines whether setup has already been attempted on the VM, causing the VM to be marked unhealthy. Only Linux images generated while the change was enabled were impacted. The issue was mitigated by rolling back the change.

We are improving our testing around Custom Image generation as part of our GA readiness process for this public preview feature. This includes expanding our canary suite to detect this and similar interactions as part of a controlled rollout in staging prior to any customer impact.
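
The interaction described above, a reliability change tripping an unrelated "setup already attempted" guard, can be illustrated with a marker-file check. The paths and function names here are hypothetical, not the actual image pipeline:

```python
# Illustration of the failure mode: a guard that infers "setup was already
# attempted" from a marker left behind by an unrelated reliability change.
# The marker path and both functions are hypothetical.
from pathlib import Path

SETUP_MARKER = Path("runner-setup.attempted")   # illustrative location

def reliability_prestep() -> None:
    # Buggy interaction: the pre-step leaves the marker behind during image
    # generation, before any real setup has run on the VM.
    SETUP_MARKER.touch()

def start_runner_setup() -> None:
    if SETUP_MARKER.exists():
        # Protection mechanism: a repeated setup attempt marks the VM unhealthy.
        raise RuntimeError("setup already attempted; marking VM unhealthy")
    SETUP_MARKER.touch()
    # ... real setup would continue here ...
```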

Jan 12, 10:17 UTC
Update - Actions jobs that use custom Linux images are failing to start. We've identified the underlying issue and are working on mitigation.
Jan 12, 10:09 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Jan 12, 10:06 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 12, 10:02 UTC
Jan 11, 2026

No incidents reported.

Jan 10, 2026
Resolved - From January 5, 2026, 00:00 UTC to January 10, 2026, 02:30 UTC, customers using the AI Controls public preview feature experienced delays in viewing Copilot agent session data. Newly created sessions took progressively longer to appear, initially by hours and eventually by more than 24 hours. Since the page displays only the most recent 24 hours of activity, once processing delays exceeded this threshold, no recent data was visible. Session data remained available in audit logs throughout the incident.

Inefficient database queries in the data processing pipeline caused significant processing latency, creating a multi-day backlog. As the backlog grew, the delay between when sessions occurred and when they appeared on the page increased, eventually exceeding the 24-hour display window.

The issue was resolved on January 10, 2026, 02:30 UTC, after query optimizations and a database index were deployed. We are implementing enhanced monitoring and automated testing to detect inefficient queries before deployment to prevent recurrence.
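
The visible symptom, sessions dropping out of a 24-hour view, follows directly from the queue dynamics: once events arrive faster than they are processed, lag grows without bound. A small worked example with assumed rates (not measured values):

```python
# Worked example of backlog growth: if events arrive faster than the pipeline
# processes them, display lag grows until it exceeds the 24-hour window.
# The rates below are assumptions for illustration, not measured values.
ARRIVAL_PER_HOUR = 1000      # new agent sessions per hour (assumed)
PROCESSED_PER_HOUR = 800     # pipeline throughput with the slow queries (assumed)
DISPLAY_WINDOW_HOURS = 24

backlog = 0.0
for hour in range(1, 24 * 6):            # simulate about six days
    backlog += ARRIVAL_PER_HOUR - PROCESSED_PER_HOUR
    lag_hours = backlog / PROCESSED_PER_HOUR
    if lag_hours > DISPLAY_WINDOW_HOURS:
        print(f"after ~{hour} hours, lag is {lag_hours:.1f} h: the page shows no recent data")
        break
```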

Jan 10, 02:33 UTC
Update - Our queue has cleared. The last 24 hours of agent session history should now be visible on the AI Controls UI. No data was lost due to this incident.
Jan 10, 02:33 UTC
Update - We estimate the backlogged queue will take 3 hours to process. We will post another update once it is completed, or if anything changes with the recovery process.
Jan 9, 23:56 UTC
Update - We have deployed an additional fix and are beginning to see recovery in the queue that was preventing AI Sessions from showing in the AI Controls UI. We are working on an estimate for when the queue will be fully processed, and will post another update once we have that information.
Jan 9, 23:44 UTC
Update - We are seeing delays processing the AI Session event queue, which is causing sessions to not be displayed on the AI Controls UI. We have deployed a fix to improve the queue processing and are monitoring for effectiveness. We continue to investigate other mitigation paths.
Jan 9, 22:41 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 21:36 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 21:08 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 20:07 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 19:35 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 19:02 UTC
Update - We continue to investigate the problem with Copilot agent sessions not rendering in AI Controls.
Jan 9, 18:39 UTC
Update - Agent Session activity is still observable in audit logs, and this only impacts the AI Controls UI.
Jan 9, 18:08 UTC
Update - We are investigating missing Agent Session data on the AI Settings page of the Agent Control Plane.
Jan 9, 17:57 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Jan 9, 17:53 UTC