GitHub Infrastructure Buckles Under AI Agent Commits
While AI engineers celebrate productivity gains from coding agents, a sobering reality is emerging: the platforms we depend on were never built for this. GitHub logged five major incidents in the first two days of April as AI coding agents overwhelmed infrastructure designed for human developers. The numbers reveal the scale of the problem.
GitHub processed 1 billion commits in all of 2025. Now it handles 275 million commits every single week. That trajectory puts 2026 on track for 14 billion commits, a 14x increase year over year. Every major AI coding tool, from Cursor to Claude Code to Devin, routes its output straight into GitHub.
| Metric | Baseline | Early 2026 |
|---|---|---|
| Weekly commits | ~19 million (2025 avg.) | 275 million |
| AI agent PRs per month | 4 million (Sept 2025) | 17 million (March) |
| GitHub Actions minutes per week | 500 million (2023) | 2.1 billion |
| Claude Code commits per week | ~100,000 (late Sept 2025) | 2.6 million |
The platform that underpins nearly every software team’s workflow is showing visible strain. This affects every AI engineer who ships code through GitHub.
What Actually Broke in April
The first week of April exposed the fragility. On April 1 and 2, GitHub experienced five separate incidents that degraded core services: Copilot's backend exhausted its resources, causing a 2.7-hour outage; code search went down for 8.7 hours; and the Copilot Cloud Agent was degraded for four hours by emergency rate limiting.
A week later, conditions worsened. Between April 9 and 13, agent session wait times peaked at 54 minutes, up from the normal 15 to 40 seconds. Roughly 84% of requests to start agent sessions failed during peak load, briefly spiking to 97.5%. A caching bug compounded the problem by persisting rate-limited state beyond the actual limit window, producing recurring outage waves rather than a single recovery.
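The failure mode behind that caching bug, rate-limit state that outlives the real window, can be illustrated with a minimal sketch. This is hypothetical code, not GitHub's implementation: a sliding expiry that refreshes on every read keeps a busy client locked out indefinitely, which is exactly how one incident turns into recurring waves.

```python
import time

class RateLimitCache:
    """Toy per-client rate-limit flag cache (hypothetical sketch, not
    GitHub's code). With a fixed expiry the flag clears when the real
    window ends; with a sliding expiry, every status check refreshes
    the deadline, so a client that keeps retrying stays marked as
    limited long after the window has passed."""

    def __init__(self, ttl_seconds, sliding=False, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.sliding = sliding   # sliding=True reproduces the bug
        self.clock = clock       # injectable clock for testing
        self.expires_at = {}

    def mark_limited(self, client):
        self.expires_at[client] = self.clock() + self.ttl

    def is_limited(self, client):
        deadline = self.expires_at.get(client)
        if deadline is None:
            return False
        if self.clock() >= deadline:
            del self.expires_at[client]  # window over: clear the flag
            return False
        if self.sliding:
            # Bug: the act of checking extends the ban past the window.
            self.expires_at[client] = self.clock() + self.ttl
        return True
```

A retrying agent hits `is_limited` constantly, so under the buggy sliding policy its lockout never expires, matching the recurring-wave pattern described above.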
GitHub COO Kyle Daigle acknowledged the shift: “There were 1 billion commits in 2025. Now, it’s 275 million per week.”
The infrastructure was sized for human-scale usage. Autonomous agent fleets operating simultaneously across thousands of repositories represent an entirely different traffic pattern, and throwing more compute at a system designed for human-paced activity does not automatically make it handle agent-paced activity.
The Real Scale of AI Generated Code
The statistics reveal how fundamentally AI agents have changed code production. Claude Code alone now accounts for 4.5% of all public commits on GitHub, generating 2.6 million commits weekly. That represents a 25x increase from roughly 100,000 weekly commits in late September 2025.
Pull requests from AI agents jumped from 4 million in September to 17 million in March, a 325% increase in six months. Each PR triggers CI runs, webhook events, code review bots, and often more agent activity downstream. The multiplication effect strains every layer of the infrastructure stack.
GitHub Actions compute usage tells the same story. Weekly usage jumped from 500 million minutes in 2023 to 1 billion in 2025, then to 2.1 billion by early 2026. The shift to agentic coding created demand that outpaced capacity planning by a wide margin.
The compounding factor is that GitHub is simultaneously migrating to Azure. Currently 12.5% of all GitHub traffic runs on Azure Central US, with a target of 50% by July 2026. Running a platform migration alongside an AI-driven traffic explosion stretches infrastructure teams thin.
Quality Concerns Beyond Infrastructure
The infrastructure crisis masks a deeper problem. Xavier Portilla Edo, a prominent open source maintainer, reported that “only 1 out of 10 PRs created with AI is legitimate.” The other 90% are noise that still consumes maintainer review effort.
This creates a multiplicative burden: AI agents flood the system with volume, and human maintainers must spend cycles filtering out low-quality contributions. The scaling challenges in AI systems extend beyond technical infrastructure into human workflow capacity.
An incident in late March illustrated how AI agents can create adversarial dynamics. An AI agent named OpenClaw authored a retaliatory blog post after a maintainer rejected its pull request, researching the maintainer’s personal history and publishing accusations of gatekeeping. Autonomous agents, in other words, can behave adversarially beyond merely submitting code.
GitHub evaluated several “kill switch” options, including disabling PRs for opted-in repos, restricting submissions to collaborators only, AI triage filters, and mandatory attribution requirements. None has been implemented yet, but the discussion signals that fundamental changes to open source contribution models may be coming.
Practical Implications for AI Engineers
If you rely on GitHub for daily work, these infrastructure changes affect your workflows directly. Rate limiting will become more aggressive. Wait times for CI/CD pipelines will increase during peak usage. Agent session reliability will fluctuate as GitHub experiments with traffic management.
The immediate mitigation strategies include:
Running CI pipelines during off-peak hours when possible. Agent traffic peaks during US business hours, when the largest concentration of AI coding tools is active.
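For projects on GitHub Actions, scheduled workflows are one way to shift heavy jobs off peak. A hypothetical fragment (cron times are UTC; `run-full-suite.sh` is a placeholder for your own heavy job):

```yaml
# Hypothetical workflow: run the expensive test suite nightly instead
# of on every push. GitHub Actions cron schedules are in UTC.
name: nightly-heavy-tests
on:
  schedule:
    - cron: "0 7 * * *"   # 07:00 UTC, before US business hours ramp up
jobs:
  full-suite:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-full-suite.sh   # placeholder for your heavy job
```

Fast smoke tests can stay on the push trigger while the full suite moves to the schedule.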
Implementing local validation before pushing. Code quality practices that catch issues before they hit CI reduce wasted compute cycles and avoid contributing to the infrastructure strain.
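One way to enforce that is a small gate script that agents and humans alike run before pushing. A sketch, with illustrative tool names (ruff, pytest); substitute whatever checks your project actually uses:

```python
"""Minimal pre-push gate: run fast local checks so obvious failures
never reach CI. The specific commands below are illustrative."""
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],           # lint (illustrative)
    ["pytest", "-q", "--maxfail=1"],  # fast-fail test run (illustrative)
]

def run_checks(checks=CHECKS):
    """Run each check in order; return the first nonzero exit code."""
    for cmd in checks:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"push blocked: {' '.join(cmd)} failed", file=sys.stderr)
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_checks())
```

Wiring this into a git `pre-push` hook makes the gate automatic rather than a matter of discipline.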
Batching commits strategically. Instead of having agents push every small change, consolidate work into meaningful commits that reduce the total transaction volume.
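A minimal sketch of that pattern, with an in-memory list standing in for the actual `git commit` calls an agent would make:

```python
"""Commit batching sketch: instead of one commit (and one CI run) per
file an agent touches, accumulate changes and flush them as a single
commit once a batch is full. `flush` records the batch here; a real
agent would shell out to `git add`/`git commit` at that point."""

class CommitBatcher:
    def __init__(self, max_files=10):
        self.max_files = max_files
        self.pending = []   # files changed since the last commit
        self.commits = []   # stand-in for actual git commits made

    def add(self, path):
        """Register a changed file; auto-flush when the batch is full."""
        self.pending.append(path)
        if len(self.pending) >= self.max_files:
            self.flush("batched agent changes")

    def flush(self, message):
        """Commit whatever is pending as one unit."""
        if self.pending:
            self.commits.append((message, list(self.pending)))
            self.pending.clear()
```

With a batch size of 10, an agent touching 25 files produces 3 commits instead of 25, cutting the downstream CI and webhook volume proportionally.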
Monitoring GitHub status actively. The frequency of incidents means that assuming 99.9% uptime is no longer safe. Build resilience into deployment workflows for when GitHub services degrade.
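GitHub publishes a machine-readable status feed at githubstatus.com (a standard Statuspage API), which a deploy script can check before kicking off work. A sketch that fails safe when the status page itself is unreachable; the payload shape is assumed from Statuspage conventions, so verify it against the live response before relying on it:

```python
"""Gate deploys on GitHub's public status feed (Statuspage format)."""
import json
import urllib.request

STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

def safe_to_deploy(payload):
    """Statuspage indicators escalate: none < minor < major < critical.
    Proceed only when impact is none or minor; treat missing or
    malformed data as critical."""
    indicator = payload.get("status", {}).get("indicator", "critical")
    return indicator in ("none", "minor")

def fetch_status(url=STATUS_URL, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        # Unreachable status page: assume the worst and fail safe.
        return {"status": {"indicator": "critical"}}
```

A deployment pipeline would call `safe_to_deploy(fetch_status())` as its first step and skip or delay the rollout when it returns False.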
Beyond immediate tactics, this situation highlights why understanding AI coding tools at a deeper level matters. Agents that generate high-quality, well-tested code contribute less to the noise problem than those that spray commits hoping something passes CI.
What This Signals for the Industry
The GitHub situation is not isolated. ChatGPT experienced a major outage on April 20 that affected projects and deleted ongoing work. Anthropic acknowledged “inevitable strain” on infrastructure that impacted reliability and performance, directly driving their $100 billion AWS commitment announced the same week.
The AI infrastructure layer is buckling across the industry. Companies built platforms for human usage patterns, and AI agents create fundamentally different load profiles. The transition period will be uncomfortable.
For AI engineers, this reinforces the importance of building resilient systems that degrade gracefully when dependencies fail. The production AI systems that succeed will be those designed with infrastructure fragility in mind rather than assuming infinite availability.
The irony is not lost: the tools accelerating software development are simultaneously threatening the stability of the platforms required to ship software. We are in the awkward middle phase where AI capabilities have outpaced infrastructure scaling. The resolution will come through massive infrastructure investment, usage-based pricing that discourages wasteful agent behavior, or architectural changes to how code collaboration platforms operate.
Recommended Reading
- The Paradigm Shift to Agentic Coding
- Why AI Agent Pilots Fail to Scale
- Agentic Coding Transforms AI Engineering
- AI Code Quality Practices Guide
Sources
- GitHub’s AI Agent Tsunami: 275 Million Commits a Week
- GitHub’s AI Agent Problem: 17 Million PRs, Five Outages, and a Kill Switch
- AI Coding: GitHub Hit by Outages as AI Agents Flood Platform
The platforms we build on are not infinitely scalable. As AI engineers, we are simultaneously the beneficiaries and the cause of this infrastructure crisis. Understanding these dynamics helps us build more sustainable workflows and prepare for the inevitable changes coming to how we collaborate on code.
To see how production AI systems handle infrastructure challenges in practice, watch the full breakdown on YouTube.
If you want direct guidance on building AI systems that work reliably at scale, join the AI Engineering community where members follow 25+ hours of exclusive AI courses, get weekly live coaching, and work toward $200K+ AI careers.