
Rethinking open source mentorship in the AI era


Let me paint a picture for you.

A polished pull request lands in your inbox. It looks amazing at first glance, but as you dig in, a few things seem off. Forty-five minutes later, you’ve crafted a thoughtful, encouraging response with a few clarifying questions. Who knows? Maybe this person could become a great mentee, so it’s worth your time if they put in theirs.

And then…nothing. Or the follow-up makes it clear the contributor doesn’t have the context needed to explain the change, often because AI made it easy to submit something plausible before they were ready to maintain it. Or you realize you’ve just spent your afternoon debugging someone’s LLM chat session.

This is becoming more common. Not because contributors are acting in bad faith, but because it’s never been easier to generate something that looks plausible. The cost to create has dropped. The cost to review hasn’t.

Open source is experiencing its own “Eternal September”: a constant influx of contributions that strains the social systems we rely on to build trust and mentor newcomers.

The signals have changed

Projects across the ecosystem are seeing the same pattern. tldraw closed their pull requests. Fastify shut down their HackerOne program after inbound reports became unmanageable at scale.

The overall volume keeps climbing. The Octoverse 2025 report notes that developers merged nearly 45 million pull requests per month in 2025 (up 23% year over year). More pull requests, same maintainer hours.

The old signals, like clean code, fast turnaround, and handling complexity, used to mean someone had invested time in understanding the codebase. Now AI can help anyone generate all of that in seconds, so those signals aren’t as telling.

To reduce noise and bring more trust back into open source contributions, platforms, including GitHub, are building longer-term solutions. In fact, our product team just published an RFC for community feedback. If you have thoughts on what we can do, we’d love to hear from you.

But platform changes take time. And even when they arrive, you’ll still need strategies for figuring out what mentorship looks like today, when signals aren’t as easy to read. Here’s what’s working.

Why this is urgent

Mentorship is how open source communities scale.

If I asked a room of open source contributors how they got started, nearly all of them would point to a good mentor.

When you mentor someone well, you’re not just adding one contributor. You’re multiplying yourself. They learn to onboard others who do the same. That’s the multiplier effect.

| Year | Broadcast (1,000/year) | Mentorship (2 every 6 months, they do the same) |
|------|------------------------|-------------------------------------------------|
| 1    | 1,000                  | 9                                               |
| 3    | 3,000                  | 729                                             |
| 5    | 5,000                  | 59,049                                          |
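The table above is just compounding. If each person mentors two newcomers every six months and every newcomer does the same, the community triples each round, while broadcasting grows linearly. A quick sketch (the function names and default parameters are illustrative, chosen to match the table):

```python
def broadcast_reach(year, per_year=1000):
    # Broadcasting reaches a fixed number of people per year: linear growth.
    return per_year * year

def mentorship_reach(year, mentees=2, rounds_per_year=2):
    # Each person mentors `mentees` newcomers every round, and every
    # newcomer does the same, so the community (originals included)
    # multiplies by (1 + mentees) each round: exponential growth.
    return (1 + mentees) ** (rounds_per_year * year)

for year in (1, 3, 5):
    print(f"Year {year}: broadcast={broadcast_reach(year):,}, "
          f"mentorship={mentorship_reach(year):,}")
```

Running this reproduces the table: 9, 729, and 59,049 mentored contributors in years 1, 3, and 5.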

But maintainers are burning out trying to mentor everyone who sends a pull request. If we stop mentoring newcomers, we lose the multiplier entirely.

We can’t abandon mentorship, especially as many long-time maintainers step back from active contribution. (I wrote more about this generational challenge in Who will maintain the future?) So, we need to be strategic about who we invest in.

The 3 Cs: A framework for strategic mentorship at scale

So how do you decide where to invest your mentorship energy when contribution signals are harder to read? Looking at what’s working across projects, I see three filters maintainers are using. I call them the 3 Cs: Comprehension, Context, and Continuity.

1. Comprehension

Do they understand the problem well enough to propose this change?

Some projects now test comprehension before code is submitted. Codex and Gemini CLI, for example, both recently added guidelines: contributors must open an issue and get approval before submitting a pull request. The comprehension check happens in that conversation.

I’m also seeing in-person code sprints and hackathons thriving in this area, where maintainers can have real-time conversations with potential contributors to check both interest and comprehension.

I’m not expecting contributors to understand the whole project. That’s unrealistic. But you want to make sure they’re not committing code above their own comprehension level. As they grow, they can always take on more.

2. Context

Do they give me what I need to review this well?

Comprehension is about their understanding. Context is about your ability to do your job as a reviewer.

Did they link to the issue? Explain trade-offs? Disclose AI use?

The last one is becoming more common. ROOST has a simple three-principle policy. The Processing Foundation added a checkbox. Fedora landed a lightweight disclosure policy after months of discussion.

Disclosing AI is about giving reviewers context. When I know a pull request was AI-assisted, I can calibrate my review. This might mean asking more clarifying questions or focusing on whether the contributor understands the trade-offs, not just whether the code runs.

There’s also AGENTS.md, which provides instructions for AI coding agents, like robots.txt for Copilot. Projects like scikit-learn, Goose, and Processing use AGENTS.md to give agents instructions: follow our guidelines, check whether an issue is already assigned, respect our norms. This shifts the burden of gathering the context needed for a review onto the contributor (or their tools).
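For a sense of shape, a minimal AGENTS.md might look like this. The sections and wording below are illustrative, not taken from any of the projects named above:

```markdown
# AGENTS.md — instructions for AI coding agents

## Before opening a pull request
- Read CONTRIBUTING.md and follow its style and testing guidelines.
- Check that a linked issue exists, is approved, and is not already assigned.

## In the pull request
- Link the issue and summarize the trade-offs you considered.
- Disclose that the change was AI-assisted, per our disclosure policy.

## Norms
- Do not open drive-by pull requests without prior discussion in an issue.
```

Because agents read this file automatically, the context-gathering work happens before a maintainer ever sees the pull request.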

3. Continuity

Do they keep coming back?

This is the mentorship filter.

Drive-by contributions can be helpful, but limit your mentorship investment to people who come back and engage thoughtfully.

Your mentorship can scale up over time:

  • Great first conversation in a pull request → make your review a teachable moment
  • They keep coming back → offer to pair on something, then start suggesting harder tasks
  • If they still keep coming back → invite them to an event, or consider commit access

The takeaway

Comprehension and Context get you reviewed. Continuity gets you mentored.

As a maintainer, this means: don’t invest deep mentorship energy until you see all three.

What this looks like:

PR Lands → Follows Guidelines?  
                NO  → Close. Guilt-free. 
                YES → Review → They Come Back?
                                    YES → Consider Mentorship 

Let’s compare this to our first example above. This time, a polished pull request lands without following the guidelines. Close it. Guilt-free. Protect your time for contributions that matter.

If someone comes back and engages in issues, submits a second pull request, and responds thoughtfully to feedback, now you pay attention. That’s when you invest.

This is how you protect the multiplier effect. You’re not abandoning newcomers. You’re being strategic.

There’s another benefit, too: clear criteria reduce bias. When you rely on vibes, you tend to mentor people who look like you or share your cultural context. The 3 Cs give you a rubric instead of gut feelings, and that makes your mentorship more equitable.

Getting started

Pick a C to implement:

| C             | Implementation                                                                                |
|---------------|-----------------------------------------------------------------------------------------------|
| Comprehension | Require an issue before the pull request; host an in-person code sprint for live discussions  |
| Context       | Add AI disclosure or AGENTS.md                                                                |
| Continuity    | Watch who comes back                                                                          |

Start with one but look for all three when deciding who to mentor.

This isn’t about restricting AI-assisted contributions. It’s about building guardrails that protect human mentorship and keep communities healthy.

AI tools are here to stay. The question is whether we adapt our practices to maintain what makes open source work: human relationships, knowledge transfer, and the multiplier effect.

The 3 Cs give us a framework for exactly that.

Resources

Adapted from my FOSDEM 2026 talk. Thanks to Anne Bertucio, Ashley Wolf, Daniel Stenberg, Tim Head, Bruno Borges, Emma Irwin, Helen Hou-Sandí, Hugo van Kemenade, Jamie Tanna, John McBride, Juan Luis Cano Rodríguez, Justin Wheeler, Matteo Collina, Camilla Moraes, Raphaël de Courville, Rizel Scarlett, and everyone who shared examples online.

The post Rethinking open source mentorship in the AI era appeared first on The GitHub Blog.
