Mar 23, 2026 · 7 min read · insight, ai, coding-agents

The Coding Agent Wars: Who's Actually Winning (And It's Not Who You Think)

We analyzed 14 coding agents across 600K+ GitHub events to find out who's really winning the coding agent wars. Stars tell one story — contributor data tells a very different one.

OSSInsight

There are moments in tech when an entire category appears overnight. Search engines in 1998. Mobile apps in 2008. And now, in 2025–2026: coding agents.

In the past 12 months, we've gone from "AI can autocomplete a line of code" to "AI can build your entire project from a single prompt." But with 14+ serious contenders now in the ring, the obvious question is: who's winning?

Most people look at star counts and call it a day. That's a mistake. Stars measure hype. What matters is what happens after the star — do people actually contribute? Do maintainers ship? Does the community stick around?

I pulled data on every major coding agent using OSSInsight's GitHub analytics — stars, forks, contributors, commit velocity, issue activity. Here's what the numbers actually say.
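As a rough illustration of the raw signals involved (this is not OSSInsight's actual pipeline — the field names follow GitHub's public REST API response for `GET /repos/{owner}/{repo}`, and the sample payload is hypothetical), extracting the per-repo metrics might look like:

```python
def extract_metrics(repo: dict) -> dict:
    """Pull the star/fork/issue counts used in this analysis
    from a GitHub REST API repository payload.

    Note: GitHub's `open_issues_count` includes open pull
    requests, so it slightly overstates pure issue volume.
    """
    return {
        "stars": repo["stargazers_count"],
        "forks": repo["forks_count"],
        "open_issues": repo["open_issues_count"],
        "language": repo.get("language"),
        "created": repo["created_at"],
    }

# Hypothetical payload shaped like GitHub's /repos response:
sample = {
    "stargazers_count": 128_277,
    "forks_count": 13_569,
    "open_issues_count": 7_324,
    "language": "TypeScript",
    "created_at": "2025-04-01T00:00:00Z",
}
metrics = extract_metrics(sample)
```

Contributor counts need a separate (paginated) call to the `/contributors` endpoint, which is part of why they're a costlier — and rarer — metric than stars.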

The Leaderboard Nobody Expected

Let's start with the raw numbers. Here are the top 10 coding agents by GitHub stars as of March 2026:

| Rank | Agent | Stars | Forks | Contributors | Language | Created |
|---|---|---|---|---|---|---|
| 1 | OpenCode | 128,277 | 13,569 | 828 | TypeScript | Apr 2025 |
| 2 | Gemini CLI | 98,735 | 12,538 | 590 | TypeScript | Apr 2025 |
| 3 | Claude Code | 81,437 | 6,777 | 49 | Shell | Feb 2025 |
| 4 | OpenHands | 69,576 | 8,730 | 460 | Python | Mar 2024 |
| 5 | Codex | 66,969 | 8,953 | 383 | Rust | Apr 2025 |
| 6 | Cline | 59,252 | 6,014 | 283 | TypeScript | Jul 2024 |
| 7 | Aider | 42,264 | 4,063 | 180 | Python | May 2023 |
| 8 | Goose | 33,453 | 3,109 | 402 | Rust | Aug 2024 |
| 9 | Cursor* | 32,494 | 2,215 | 32 | — | Mar 2023 |
| 10 | Continue | 31,997 | 4,288 | 501 | TypeScript | May 2023 |

*Cursor's GitHub repo is primarily an issue tracker — the actual source code is proprietary. Its contributor/commit data is not directly comparable to the others.

OpenCode leads. Gemini CLI is second. Claude Code third. The usual suspects.

But here's where it gets interesting.

Stars Lie. Contributors Don't.

Star count is a vanity metric. It tells you how many people clicked a button. What actually matters is: how many people care enough to contribute code?

Look at the contributor-to-star ratio:

| Agent | Stars | Contributors | Ratio (contributors per 1K stars) |
|---|---|---|---|
| Continue | 31,997 | 501 | 15.7 |
| Goose | 33,453 | 402 | 12.0 |
| OpenHands | 69,576 | 460 | 6.6 |
| OpenCode | 128,277 | 828 | 6.5 |
| Gemini CLI | 98,735 | 590 | 6.0 |
| Codex | 66,969 | 383 | 5.7 |
| Cline | 59,252 | 283 | 4.8 |
| Aider | 42,264 | 180 | 4.3 |
| Claude Code | 81,437 | 49 | 0.6 |

(Cursor excluded — GitHub repo is an issue tracker, not source code)
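The ratio itself is trivial to reproduce from the table's numbers — a minimal sketch (the figures are the ones above; nothing else is assumed):

```python
# (stars, contributors) for a few agents from the table above
agents = {
    "Continue": (31_997, 501),
    "Goose": (33_453, 402),
    "Claude Code": (81_437, 49),
}

def contributors_per_1k_stars(stars: int, contributors: int) -> float:
    # Contributors per 1,000 stars, rounded to one decimal place.
    return round(contributors / stars * 1000, 1)

ratios = {
    name: contributors_per_1k_stars(stars, contributors)
    for name, (stars, contributors) in agents.items()
}
```

Running this reproduces the spread the table shows: Continue at 15.7, Claude Code at 0.6.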

Continue has 26x the contributor density of Claude Code. Let that sink in.

Claude Code has a massive star count but only 49 contributors. It's essentially a closed-source product with a public GitHub presence for issue tracking and community discussion. Nothing wrong with that — Anthropic ships a great product. But it tells you something about the community dynamics.

Meanwhile, Continue, Goose, and OpenHands have thriving contributor ecosystems. These are genuine open-source communities where external developers are shaping the product.

The Open Issues Signal

Here's a dimension most people miss — open issue count:

| Agent | Open Issues | Stars | Issues per 1K Stars |
|---|---|---|---|
| Claude Code | 7,409 | 81K | 91.0 |
| OpenCode | 7,324 | 128K | 57.1 |
| Gemini CLI | 3,129 | 99K | 31.7 |
| Codex | 2,183 | 67K | 32.6 |
| Aider | 1,449 | 42K | 34.3 |
| Continue | 934 | 32K | 29.2 |
| Cline | 715 | 59K | 12.1 |
| OpenHands | 336 | 70K | 4.8 |
| Goose | 318 | 33K | 9.5 |

Claude Code has 91 open issues per 1K stars — nearly 2x the next closest. This suggests massive user demand outpacing the team's capacity to respond. OpenHands, by contrast, has just 4.8 — their community is efficiently triaging and resolving issues.

The Velocity Test: Who's Shipping Fastest?

Stars and contributors are historical. What about right now? Let's look at commits in the last 30 days:

| Agent | Commits (Last 30 Days) | Contributors | Commits/Contributor |
|---|---|---|---|
| OpenCode | 823 | 828 | 1.0 |
| Codex | 754 | 383 | 2.0 |
| Gemini CLI | 603 | 590 | 1.0 |
| Goose | 259 | 402 | 0.6 |
| OpenHands | 247 | 460 | 0.5 |
| Continue | 130 | 501 | 0.3 |
| Cline | 117 | 283 | 0.4 |
| Claude Code | 43 | 49 | 0.9 |
| Aider | 25 | 180 | 0.1 |
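The velocity column is just as easy to sanity-check. A sketch using the table's numbers — note the 0.2 "slowing down" cutoff is my own illustrative threshold, not a standard:

```python
# (commits in last 30 days, contributors) from the table above
velocity = {
    "OpenCode": (823, 828),
    "Codex": (754, 383),
    "Aider": (25, 180),
}

def commits_per_contributor(commits: int, contributors: int) -> float:
    # Recent commits normalized by lifetime contributor count.
    return round(commits / contributors, 1)

# Flag projects whose recent output is tiny relative to their
# historical contributor base (0.2 is an arbitrary cutoff).
slowing = [
    name
    for name, (commits, contributors) in velocity.items()
    if commits_per_contributor(commits, contributors) < 0.2
]
```

By this (admittedly crude) measure, only Aider trips the flag — which matches the narrative below.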

OpenCode, Codex, and Gemini CLI are shipping at breakneck speed — 600+ commits a month. They're in a full sprint.

Aider, once the pioneer of terminal-based coding agents, has slowed dramatically. 25 commits in a month for a project with 42K stars suggests it may be entering maintenance mode. Or perhaps the solo maintainer is just taking a breath. Either way, the data is the data.

The Three Archetypes

Looking at all this data, I see three distinct models emerging:

1. The Corporate Rockets 🚀

OpenCode, Gemini CLI, Codex, Claude Code

Backed by major companies (or well-funded startups). Massive star counts driven by brand awareness. High commit velocity from internal teams. Low external contributor ratios.

These win on polish, integration, and marketing. But they're not truly community-driven — they're products with a GitHub repo.

2. The Community Champions 🤝

Continue, Goose, OpenHands, Cline

Lower star counts, but dramatically higher contributor engagement. These projects are shaped by their users. They tend to be more extensible, more configurable, and more opinionated about workflow.

If you want a coding agent that adapts to your workflow, this is where to look.

3. The Pioneer Veterans 🏔️

Aider, Cursor, Plandex

These were here first. Aider defined the "AI pair programming in your terminal" category. Cursor pioneered the AI-native IDE. They have loyal user bases but face increasing pressure from the corporate rockets.

The question for these projects: can they evolve fast enough, or will they become the WordPerfect of coding agents?

What This Means For You

If you're choosing a coding agent today, here's my framework:

Pick a Corporate Rocket if you want the most polished experience, don't mind vendor lock-in, and value "it just works" over customization. Start with Codex or Claude Code.

Pick a Community Champion if you want to shape the tool you use, need deep customization, or care about open-source values. Start with Continue or OpenHands.

Pick a Pioneer if you want battle-tested reliability and don't need the latest features. Aider is still excellent at what it does.

The Prediction

Here's where I stick my neck out:

In 12 months, the winner won't be the agent with the most stars. It'll be the one with the best ecosystem.

We're entering the "app store" phase of coding agents. MCP servers are the new plugins. Skills and extensions are the new integrations. The agent that builds the best third-party ecosystem — not just the best core product — will win.

That's why I'm watching Continue and Goose more closely than their star counts suggest I should. Community-driven projects have a historical advantage in building ecosystems. Linux beat commercial Unix. Kubernetes beat Docker Swarm. Android beat Windows Phone.

The coding agent wars are far from over. But the data tells us where to look.


All data in this article was sourced from OSSInsight, which analyzes 10B+ GitHub events in real time. You can explore any of these projects yourself — just search for a repo name and dive in.

Compare any two agents head-to-head: OpenCode vs Claude Code | Codex vs Gemini CLI | Aider vs Continue