
All Systems Operational

About This Site

Check GitHub Enterprise Cloud status by region:
- Australia: au.githubstatus.com
- EU: eu.githubstatus.com
- Japan: jp.githubstatus.com
- US: us.githubstatus.com

Component status and uptime over the past 90 days:
- Git Operations: Operational, 99.93% uptime
- Webhooks: Operational, 99.67% uptime
- Visit www.githubstatus.com for more information: Operational
- API Requests: Operational, 99.92% uptime
- Issues: Operational, 99.66% uptime
- Pull Requests: Operational, 99.73% uptime
- Actions: Operational, 99.35% uptime
- Packages: Operational, 99.97% uptime
- Pages: Operational, 99.92% uptime
- Codespaces: Operational, 99.61% uptime
- Copilot: Operational, 99.62% uptime
Mar 18, 2026

No incidents reported today.

Mar 17, 2026

No incidents reported.

Mar 16, 2026
Resolved - On March 16, 2026, between 14:16 UTC and 15:18 UTC, Codespaces users encountered a download failure error message when starting newly created or resumed codespaces. At peak, 96% of created or resumed codespaces were impacted. Active codespaces with a running VS Code environment were not affected.

The error was the result of an API deployment issue in our VS Code remote experience dependency and was resolved by rolling back that deployment. We are working with our partners to reduce our incident engagement time, improve early detection of such issues before they impact customers, and ensure safe rollout of similar changes in the future.

Mar 16, 15:28 UTC
Update - Errors starting or resuming Codespaces have resolved.
Mar 16, 15:27 UTC
Update - We are investigating reports of users experiencing errors when starting or connecting to Codespaces. Some users may be unable to access their development environments during this time. We are working to identify the root cause and will implement a fix as soon as possible.
Mar 16, 15:06 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 16, 15:01 UTC
Mar 15, 2026

No incidents reported.

Mar 14, 2026

No incidents reported.

Mar 13, 2026
Resolved - On March 13, 2026, between 13:35 UTC and 16:02 UTC, a configuration change to an internal authorization service reduced its processing capacity below what was needed during peak traffic. This caused intermittent timeouts when other GitHub services checked user permissions, resulting in four to five waves of errors over roughly two hours and forty minutes. In total, 0.4% of users were denied access to actions they were authorized to perform.

The root cause was a resource right-sizing change deployed to the authorization service the previous day. It reduced CPU allocation below what was required at peak, causing the service's network gateway to throttle under load. Because the change was deployed after peak traffic on March 12, the reduced capacity wasn't surfaced until the next day's peak.

The incident was mitigated by manually scaling up the authorization service and reverting the configuration change.


To prevent recurrence, we are adding further resource utilization monitors across our entire stack to detect throttling and improving error handling so transient infrastructure timeouts are distinguished from authorization failures, enabling quicker detection of the root issue.
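As an illustration of the error-handling distinction described above, a caller of an authorization service can treat a transient infrastructure timeout and an explicit denial as different outcomes. The following is a minimal sketch in Python; the client object, exception types, and retry budget are illustrative assumptions, not GitHub's implementation.

import time


class AuthzTimeout(Exception):
    """Raised when the authorization backend does not answer in time."""


def check_permission(client, actor, action, resource, retries=2, backoff=0.1):
    """Return the backend's explicit allow/deny decision; raise AuthzTimeout otherwise.

    Keeping the two outcomes separate prevents a throttled or timed-out backend
    from being reported to users as "not authorized".
    """
    for attempt in range(retries + 1):
        try:
            # `client` is a hypothetical internal authorization client that
            # returns True/False for an explicit allow/deny decision.
            return client.authorize(actor=actor, action=action, resource=resource)
        except TimeoutError:
            if attempt == retries:
                # Surface as an infrastructure failure (retry / HTTP 503),
                # never as an authorization failure (HTTP 403).
                raise AuthzTimeout(f"authorization check timed out for {action!r}")
            time.sleep(backoff * (2 ** attempt))  # brief exponential backoff before retrying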

Mar 13, 16:15 UTC
Update - We have deployed mitigations and are actively monitoring for recovery. We'll post another update by 17:00 UTC.
Mar 13, 16:02 UTC
Update - We are investigating intermittent performance degradation affecting Actions, Feeds, Issues, Package Registry, Profiles, Registry Metadata, Star, and User Dashboard. Users may experience elevated error rates and slower response times when accessing these services. We have identified a potential cause and are implementing mitigations to restore normal service. We'll post another update by 16:15 UTC.
Mar 13, 15:47 UTC
Update - Packages is experiencing degraded performance. We are continuing to investigate.
Mar 13, 15:20 UTC
Update - We are investigating reports of issues with service(s): Actions, Feeds, Issues, Profiles, Registry Metadata, Star, User Dashboard. We will continue to keep users updated on progress towards mitigation.
Mar 13, 15:14 UTC
Investigating - We are investigating reports of degraded performance for Actions and Issues
Mar 13, 15:12 UTC
Mar 12, 2026
Resolved - On March 12, 2026, between 01:00 UTC and 18:53 UTC, users saw failures downloading extensions within newly created or resumed codespaces. Users would see an error when attempting to use an extension within VS Code. Active codespaces with extensions already downloaded were not impacted.

The extension download failures were the result of a change introduced in our extension dependency and were resolved by updating the configuration governing how those changes apply to requests from Codespaces. We are enhancing observability and alerting for critical issues within regular codespace operations to better detect and mitigate similar issues in the future.

Mar 12, 18:53 UTC
Update - Codespaces IPs are no longer being blocked from Visual Studio Marketplace operations and we are monitoring for full recovery
Mar 12, 17:59 UTC
Update - We're seeing intermittent failures downloading from the extension marketplace from codespaces, caused by IP blocks for some codespaces. We're working to remove those blocks.
Mar 12, 17:20 UTC
Update - We're seeing intermittent failures downloading from the extension marketplace from codespaces and are investigating.
Mar 12, 16:09 UTC
Update - We're seeing partial recovery for the issue affecting extension installation in newly created Codespaces. Some users may still experience degraded functionality where extensions hit errors. The team continues to investigate the root cause while monitoring the recovery.
Mar 12, 15:08 UTC
Update - We have deployed a fix for the issue affecting extension installation in newly created Codespaces. New Codespaces are now being created with working extensions. We'll post another update by 15:30 UTC.
Mar 12, 14:29 UTC
Update - We are continuing to investigate an issue where extensions fail to install in newly created Codespaces. Users can create and access Codespaces, but extensions will not be operational, resulting in a degraded experience. The team is working on a fix. All newly created Codespaces are affected. We'll post another update by 15:00 UTC.
Mar 12, 13:50 UTC
Update - We're investigating an issue where extensions fail to install in newly created Codespaces. Users can still create and access Codespaces, but extensions will not be operational, resulting in a degraded development experience. Our team is actively working to identify and resolve the root cause. We'll post another update by 14:00 UTC.
Mar 12, 13:07 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Mar 12, 13:06 UTC
Resolved - On March 12, 2026, between 02:30 and 06:02 UTC, some GitHub Apps were unable to mint server-to-server tokens, resulting in 401 Unauthorized errors. During the outage window, approximately 1.3% of requests incorrectly returned 401 errors. This manifested in GitHub Actions jobs failing to download tarballs, as well as failures to mint fine-grained tokens. During this period, approximately 5% of Actions jobs were impacted.

The root cause was a failure in the authentication service's token cache layer, a newly created secondary cache layer backed by Redis, caused by Kubernetes control plane instability; this left the service unable to read certain tokens, which resulted in 401 errors. The mitigation was to fall back reads to the primary cache layer, backed by MySQL. As permanent mitigations, we have changed how we deploy Redis so it does not rely on the Kubernetes control plane and maintains service availability during similar failure modes. We have also improved alerting to reduce overall impact time from similar failures.
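For context, a read path along the lines of the mitigation described above treats the Redis-backed secondary cache as an optimization and falls back to the primary store when it is unreachable. The sketch below is illustrative only; the client objects, key scheme, and exception types are assumptions, not GitHub's code.

import logging

log = logging.getLogger("token-cache")


def read_token(token_id, redis_client, primary_store):
    """Prefer the secondary (Redis) cache, but never let cache trouble become a 401."""
    try:
        cached = redis_client.get(f"token:{token_id}")
        if cached is not None:
            return cached
    except (ConnectionError, TimeoutError) as exc:
        # An unhealthy cache layer should mean slower reads, not auth failures.
        log.warning("secondary token cache unavailable, falling back: %s", exc)

    token = primary_store.fetch_token(token_id)  # hypothetical MySQL-backed lookup
    if token is None:
        # Only an authoritative miss should surface as 401 Unauthorized.
        raise KeyError(f"unknown or expired token {token_id}")
    return token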

Mar 12, 06:02 UTC
Monitoring - Actions is operating normally.
Mar 12, 06:02 UTC
Update - We are continuing investigation of reports of degraded performance for Actions and GitHub Apps
Mar 12, 05:40 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 12, 04:46 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 12, 02:45 UTC
Update - We've identified the root cause and are working on resolving the underlying issue. Some users may have encountered intermittent failures and errors. We're continuing to see reduced error rates.
Mar 12, 02:44 UTC
Update - We are investigating elevated error rates. Error rates are now decreasing and we're continuing to monitor the situation.
Mar 12, 02:13 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 12, 01:54 UTC
Mar 11, 2026
Resolved - On March 11, 2026, between 13:00 UTC and 15:23 UTC the Copilot Code Review service was degraded and experienced longer than average review times. On average, Copilot Code Review requests took 4 minutes and peaked at just under 8 minutes. This was due to hitting worker capacity limits and CPU throttling. We mitigated the incident by increasing partitions, and we are improving our resource monitoring to identify potential issues sooner.
Mar 11, 15:53 UTC
Update - Copilot Code Review queue processing has returned to normal levels.
Mar 11, 15:53 UTC
Update - We experienced degraded performance with Copilot Code Review starting at 14:01 UTC. Customers experienced extended review times and occasional failures. Some extended processing times may continue briefly. We are monitoring for full recovery. We'll post another update by 16:30 UTC.
Mar 11, 15:31 UTC
Monitoring - We are investigating degraded performance with Copilot Code Review. Customers may experience extended review times or occasional failures. We are seeing signs of improvement as our team works to restore normal service. We'll post another update by 15:30 UTC.
Mar 11, 14:28 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 11, 14:25 UTC
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 11, 15:02 UTC
Update - We are investigating elevated timeouts that affected GitHub API requests. The incident began at 14:37 UTC. Some users experienced slower response times and request failures. System metrics have returned to normal levels, and we are now investigating the root cause to prevent recurrence.
Mar 11, 15:02 UTC
Investigating - We are investigating reports of degraded performance for API Requests
Mar 11, 14:37 UTC
Mar 10, 2026

No incidents reported.

Mar 9, 2026
Resolved - On March 9, 2026, between 15:03 and 20:52 UTC, the Webhooks API was degraded, resulting in higher average request latency and, in some cases, error responses. Approximately 0.6% of total requests exceeded the normal latency threshold of 3 seconds, while 0.4% of requests resulted in 500 errors. At peak, 2.0% of requests experienced latency greater than 3 seconds and 2.8% returned 500 errors.

The issue was caused by a noisy actor that led to resource contention on the Webhooks API service. We mitigated the issue initially by increasing CPU resources for the Webhooks API service, and ultimately applied lower rate limiting thresholds to the noisy actor to prevent further impact to other users.

We are working to improve monitoring to more quickly identify noisy traffic and will continue to improve our rate-limiting mechanisms to help prevent similar issues in the future.
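As a rough illustration of per-actor rate limiting of the kind mentioned above, the token-bucket sketch below gives each actor its own budget so a single noisy actor exhausts only its own allowance; the rates, burst size, and identifiers are made up, and this is not GitHub's rate limiter.

import time
from collections import defaultdict


class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.burst = burst
        self.tokens = float(burst)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# One bucket per actor, so a noisy actor cannot starve everyone else.
buckets = defaultdict(lambda: TokenBucket(rate_per_sec=10, burst=50))


def handle_request(actor_id: str) -> int:
    if not buckets[actor_id].allow():
        return 429  # tell the noisy actor to back off
    return 200      # process the request normally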

Mar 9, 17:03 UTC
Update - Webhooks is operating normally.
Mar 9, 17:03 UTC
Update - We are experiencing latency on the API and UI endpoints. We are working to resolve the issue.
Mar 9, 15:56 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Mar 9, 15:50 UTC
Resolved - On March 9, 2026, between 01:23 UTC and 03:25 UTC, users attempting to create or resume codespaces in the Australia East region experienced elevated failures, peaking at a 100% failure rate for this region. Codespaces in other regions were not affected.

The create and resume failures were caused by degraded network connectivity between our control plane services and the VMs hosting the codespaces. This was resolved by redirecting traffic to an alternate site within the region. While we address the core network infrastructure issue, we have also improved our observability of components in this area to improve detection. This will also enable our existing automated failovers to cover this failure mode. Together, these changes will prevent similar incidents or significantly reduce the time they cause user impact.

Mar 9, 03:51 UTC
Update - This incident has been resolved. New Codespace creation requests are now completing successfully.
Mar 9, 03:51 UTC
Update - We are seeing recovery, with the failure rate for new Codespace creation requests dropping from 5% to about 3%.
Mar 9, 03:32 UTC
Update - We are seeing about 5% of new Codespace creation requests failing. We are investigating the root cause and identifying the impacted regions.
Mar 9, 03:04 UTC
Investigating - We are investigating reports of degraded performance for Codespaces
Mar 9, 03:04 UTC
Mar 8, 2026

No incidents reported.

Mar 7, 2026

No incidents reported.

Mar 6, 2026
Resolved - On March 6, 2026, between 16:16 UTC and 23:28 UTC the Webhooks service was degraded and some users experienced intermittent errors when accessing webhook delivery histories, retrying webhook deliveries, and listing webhooks via the UI and API. On average, the error rate was 0.57% and peaked at approximately 2.73% of requests to the service. This was due to unhealthy infrastructure affecting a portion of webhook API traffic.

We mitigated the incident by redeploying affected services, after which service health returned to normal.

We are working to improve detection of unhealthy infrastructure and strengthen service safeguards to reduce time to detection and mitigation of issues like this one in the future.

Mar 6, 23:28 UTC
Update - Webhooks is operating normally.
Mar 6, 23:28 UTC
Update - We have deployed a fix and are observing a full recovery. The affected endpoint was the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. We will continue monitoring to confirm stability.
Mar 6, 23:26 UTC
Update - We are preparing a new mitigation for the issue affecting the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Mar 6, 22:35 UTC
Update - The previous mitigation did not resolve the issue. We are investigating further. The affected endpoint is the webhook deliveries API (https://docs.github.com/en/rest/repos/webhooks?apiVersion=2022-11-28#list-deliveries-for-a-repository-webhook) and its organization and integration variants. Overall impact remains low, with under 1% of requests failing for a subset of customers.
Mar 6, 21:34 UTC
Update - We have deployed a fix for the issue causing some users to experience intermittent failures when accessing the Webhooks API and configuration pages. We are monitoring to confirm full recovery.
Mar 6, 20:18 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 19:39 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 19:07 UTC
Update - We continue working on mitigations to restore service.
Mar 6, 18:39 UTC
Update - We continue working on mitigations to restore full service.
Mar 6, 18:07 UTC
Update - Our engineers have identified the root cause and are actively implementing mitigations to restore full service.
Mar 6, 17:43 UTC
Update - This problem is impacting less than 1% of UI and webhook API calls.
Mar 6, 17:19 UTC
Update - We are investigating an issue affecting a subset of customers experiencing errors when viewing webhook delivery histories and retrying webhook deliveries. The UI and webhook API are impacted. Engineers have identified the cause and are actively working on mitigation.
Mar 6, 17:12 UTC
Investigating - We are investigating reports of degraded performance for Webhooks
Mar 6, 16:58 UTC
Mar 5, 2026
Resolved - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.
Mar 5, 23:55 UTC
Update - We are close to full recovery. Actions and dependent services should be functioning normally now.
Mar 5, 23:40 UTC
Update - Actions is experiencing degraded performance. We are continuing to investigate.
Mar 5, 23:37 UTC
Update - Actions and dependent services, including Pages, are recovering.
Mar 5, 23:15 UTC
Update - We applied a mitigation and we should see a recovery soon.
Mar 5, 23:00 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Mar 5, 22:54 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 5, 22:53 UTC
Resolved - On March 5, 2026, between 16:24 UTC and 19:30 UTC, Actions was degraded. During this time, 95% of workflow runs failed to start within 5 minutes, with an average delay of 30 minutes, and 10% of workflow runs failed with an infrastructure error. This was due to Redis infrastructure updates that were being rolled out to production to improve our resiliency. These changes introduced an incorrect configuration change into our Redis load balancer, causing internal traffic to be routed to an incorrect host and leading to two incidents.

We mitigated this incident by correcting the misconfigured load balancer. Actions jobs were running successfully starting at 17:24 UTC. The remaining time until we closed the incident was spent burning through the queue of jobs.

We immediately rolled back the updates that were a contributing factor and have frozen all changes in this area until the follow-up work from this incident is complete. We are working to improve our automation to ensure incorrect configuration changes cannot propagate through our infrastructure. We are also working on improved alerting to catch misconfigured load balancers before they become an incident. Additionally, we are updating the Redis client configuration in Actions to improve resiliency to brief cache interruptions.
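A client-side resiliency change of the sort described in the last sentence could look roughly like the following, shown here with the open-source redis-py client; the endpoint, timeouts, and retry budget are assumptions, not GitHub's actual configuration.

import redis
from redis.backoff import ExponentialBackoff
from redis.exceptions import ConnectionError as RedisConnectionError
from redis.exceptions import TimeoutError as RedisTimeoutError
from redis.retry import Retry

client = redis.Redis(
    host="cache.internal.example",  # hypothetical cache endpoint
    port=6379,
    socket_timeout=0.25,            # fail fast rather than hang on a bad host
    socket_connect_timeout=0.25,
    retry=Retry(ExponentialBackoff(cap=0.5, base=0.05), retries=3),
    retry_on_error=[RedisConnectionError, RedisTimeoutError],
)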

Mar 5, 19:30 UTC
Update - Webhooks is operating normally.
Mar 5, 19:17 UTC
Update - Actions is operating normally.
Mar 5, 19:05 UTC
Update - Actions is now fully recovered.
Mar 5, 18:59 UTC
Update - The queue of requested Actions jobs continues to make progress. Job delays are now approximately 6 minutes and continuing to decrease.
Mar 5, 18:15 UTC
Update - We are back to queueing Actions workflow runs at nominal rates and we are monitoring the clearing of queued runs during the incident.
Mar 5, 17:48 UTC
Update - We have applied mitigations for connection failures across backend resources and we are observing a recovery in queueing Actions workflow runs.
Mar 5, 17:25 UTC
Update - We are observing delays in queuing Actions workflow runs. We’re still investigating the causes of these delays.
Mar 5, 16:52 UTC
Update - Webhooks is experiencing degraded availability. We are continuing to investigate.
Mar 5, 16:47 UTC
Update - Actions is experiencing degraded availability. We are continuing to investigate.
Mar 5, 16:41 UTC
Investigating - We are investigating reports of degraded performance for Actions
Mar 5, 16:35 UTC
Resolved - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded, returning empty responses for users' agent session lists. Impacted users were unable to see their lists of current and previous agent sessions across GitHub web surfaces. This was caused by an incorrect database query that falsely excluded records with an absent field.

We mitigated the incident by rolling back the database query change. There were no data alterations or deletions during the incident.

To prevent similar issues in the future, we're improving our monitoring depth to more easily detect degradation before changes are fully rolled out.
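For illustration, the snippet below reproduces the general class of bug described above, where a "not equal" filter silently drops rows whose column is NULL (absent); the table and column names are invented for the demo and do not reflect GitHub's schema.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, archived_at TEXT)")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?)",
    [(1, None), (2, "2026-03-01"), (3, None)],  # NULL means "never archived"
)

# Buggy filter: NULL never compares equal or unequal, so rows 1 and 3 vanish.
buggy = conn.execute(
    "SELECT id FROM sessions WHERE archived_at != '2026-03-01'"
).fetchall()
print(buggy)  # [] -- the active (never archived) sessions are falsely excluded

# Fixed filter: handle the absent field explicitly.
fixed = conn.execute(
    "SELECT id FROM sessions WHERE archived_at IS NULL OR archived_at != '2026-03-01'"
).fetchall()
print(fixed)  # [(1,), (3,)]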

Mar 5, 01:30 UTC
Update - Copilot coding agent mission control is fully restored. Tasks are now listed as expected.
Mar 5, 01:30 UTC
Update - Users were temporarily unable to see tasks listed in mission control surfaces. The ability to submit new tasks, view existing tasks via direct link, or manage tasks was unaffected throughout. A revert is currently being deployed and we are seeing recovery.
Mar 5, 01:21 UTC
Investigating - We are investigating reports of impacted performance for some GitHub services.
Mar 5, 01:13 UTC
Resolved - On March 5, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT-5.3 Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT-5.3 Codex, impacting approximately 30% of requests. No other models were impacted.

The issue was resolved by a mitigation put in place by our provider.

Mar 5, 01:13 UTC
Update - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.
Mar 5, 01:13 UTC
Update - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.

Mar 5, 00:53 UTC
Investigating - We are investigating reports of degraded performance for Copilot
Mar 5, 00:47 UTC
Mar 4, 2026

No incidents reported.