Documentation in the era of MCP
"Invisible" docs are good docs
Documentation shares a few characteristics with design. None is more poignant to me than the principle, "design is successful when it is invisible - when it is not noticed." The premise is that the goal of design is to enable the user so seamlessly that they do not even notice how they are being enabled.
Although ease of use is self-reinforcing - we will choose whatever is easiest - it is rare that anyone who isn't a designer notices just how that ease was achieved: no friction, no investigation. Conversely, a hallmark of design failure is when it gets so immediately in the user's way that it becomes the only noticeable thing.
Technical documentation shares this quality: a frictionless, seamless experience is what we all want, in most cases - to be able to realize our goal without struggling to achieve it.
Good documentation achieves this level of "invisibility" in a few different ways, but the one I want to focus on right now is staying updated. It's a simple enough premise, but without sufficient automation pipelines in place, the goal is seldom achieved. Yet in the era of AI, falling short of it is inconceivable.
So how do we do it? Enter MCP.
From reactive to proactive documentation
The breakthrough of DevOps was to abandon waterfall-style development at scale and move toward a much more dynamic, adaptive development framework. We are seeing another such pivot today with the advent of generative and agentic AI.
Traditionally, documentation has followed a reactive development model: developers build features, then communicate the changes to writers either directly or via a pipeline of other people. Documentation gets updated, hopefully, but then inevitably drifts out of sync with reality, due to the constraints of this loop.
By the time someone notices the disconnect, though, end-users have already moved on to other solutions. The docs are no longer invisible; the friction is apparent.
Model Context Protocol (MCP) for proactive documentation
But what if the docs site could update itself, through a process augmented by human oversight? Model Context Protocol (MCP) is the magic that makes this dream a reality.
Docs as client/server
The gist of MCP is this: there are MCP servers - small programs that expose resources - and MCP clients - small programs that ingest those resources for processing.
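To make that concrete, here is a minimal sketch of the server side using the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, the schema:// URI, and the payload are all illustrative, not part of any real product:

```ts
// A minimal MCP server exposing a (hypothetical) API schema as a resource.
// Uses the official TypeScript SDK: npm install @modelcontextprotocol/sdk
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

const server = new McpServer({ name: "product-info", version: "1.0.0" });

// Expose the current API schema at a fixed URI so docs tooling can read it.
server.resource("api-schema", "schema://api", async (uri) => ({
  contents: [
    {
      uri: uri.href,
      mimeType: "application/json",
      text: JSON.stringify({ endpoints: ["/v1/users", "/v1/orders"] }),
    },
  ],
}));

// Serve over stdio; the SDK also ships HTTP-based transports.
await server.connect(new StdioServerTransport());
```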
Leveraging MCP for documentation involves:
- deploying sites with MCP clients,
- which continuously query MCP servers that host critical product information,
- which is then processed intelligently, in whatever way the Docs Engineer desires.
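The client side of that arrangement might look like the following sketch, again using the official TypeScript SDK; the server command path is hypothetical:

```ts
// A minimal MCP client, as a docs site's build or sync job might run it.
// It launches the server above and reads the exposed schema resource.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["product-info-server.js"], // hypothetical path to the server sketch above
});

const client = new Client({ name: "docs-site", version: "1.0.0" });
await client.connect(transport);

// Read the schema and hand it to whatever processing the Docs Engineer wants.
const result = await client.readResource({ uri: "schema://api" });
console.log(result.contents[0]);
```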
This loop enables things like:
- Continuous monitoring: Documentation sites either run scheduled jobs to query MCP servers for information like API schemas, configuration options, feature flags, or system states, or are triggered to run when a change is detected, depending on how the architecture is configured. These jobs compare the current product state against the documented content, and are then able to detect discrepancies, such as new endpoints, changed parameters, or deprecated features (see the drift-check sketch after this list).
- Intelligent change detection: Rather than generating noisy bulk updates, the system can be configured to create focused pull requests targeting specific APIs or components.
- Contextual updates: Each PR includes rich context about why changes occurred, sourced from commit messages or changelogs, and may include things like pre-populated validation results that can be run autonomously, and confidence scores.
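Here is what that drift check might look like at its core. Everything in this sketch is hypothetical - the Schema shape and function names - with the actual reads happening over MCP as in the client sketch above:

```ts
// A sketch of a scheduled drift check: compare the live API schema from an
// MCP server against what the docs currently describe.
type Schema = { endpoints: string[] };

async function detectDrift(
  readLiveSchema: () => Promise<Schema>,       // e.g. wraps client.readResource()
  readDocumentedSchema: () => Promise<Schema>, // e.g. parsed from the docs repo
): Promise<{ added: string[]; removed: string[] }> {
  const live = await readLiveSchema();
  const documented = await readDocumentedSchema();
  const documentedSet = new Set(documented.endpoints);
  const liveSet = new Set(live.endpoints);
  return {
    // New endpoints the docs don't mention yet
    added: live.endpoints.filter((e) => !documentedSet.has(e)),
    // Documented endpoints that no longer exist (possibly deprecated)
    removed: documented.endpoints.filter((e) => !liveSet.has(e)),
  };
}
```

A cron-triggered CI job could call detectDrift() and open a focused PR whenever either list is non-empty.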
Smart automation with good practice
It's not enough merely to automate and walk away, however. To ensure quality control, various checks can help auditors - whether human or machine. Here are just a few of the mechanisms that can be implemented for better authorship with MCP.
Note
In future posts, I'll delve into each of these in more detail.
Confidence scoring
Every automated change can include reliability indicators that help reviewers and stakeholders understand the quality of each suggestion. An example scale of confidence scores might look something like:
- High confidence (90-100%): Straightforward additions like new API endpoints with complete specs
- Medium confidence (60-89%): Parameter type changes or URL modifications that need validation
- Low confidence (30-59%): Complex changes requiring additional reviewer attention
- Very low confidence (<30%): Conflicting information that is converted into issues for human investigation
Confidence scoring changes the review paradigm: when accurate and well-implemented, scores can greatly reduce the time required for a given review through triage, direct SME reviewers to the specific changes where additional verification is needed, or even prompt another LLM agent to do some additional checking.
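That triage step can be as simple as a routing function. This sketch mirrors the example scale above; the thresholds and action names are illustrative:

```ts
// A sketch of routing automated changes by confidence score.
type Action =
  | "fast-track-pr"        // straightforward additions
  | "pr-needs-validation"  // e.g. parameter type changes
  | "pr-needs-sme-review"  // complex changes
  | "investigation-issue"; // conflicting information, hand off to a human

function triage(confidence: number): Action {
  if (confidence >= 90) return "fast-track-pr";
  if (confidence >= 60) return "pr-needs-validation";
  if (confidence >= 30) return "pr-needs-sme-review";
  return "investigation-issue";
}
```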
Circuit breakers
"Circuit breakers" in this system refers to mechanisms that prevent PR spam during major refactors by detecting unusual change patterns. For example, when PR activity exceeds normal thresholds, it creates summary issues instead of individual PRs, then resumes automation after activity normalizes.
Conflict resolution
Smart detection identifies when automated changes conflict with recent human edits, then flags these for manual review rather than creating problematic PRs.
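One simple form of that detection is checking who last touched a file. In this sketch, getLastCommit is a hypothetical helper over your git hosting API:

```ts
// A sketch of conflict detection: skip files a human has edited since the last
// automated run, and flag them for manual review instead.
interface Commit {
  author: string;
  timestamp: number;
}

async function hasHumanConflict(
  file: string,
  lastBotRunAt: number,
  getLastCommit: (file: string) => Promise<Commit>,
  botAuthor = "docs-bot",
): Promise<boolean> {
  const last = await getLastCommit(file);
  return last.author !== botAuthor && last.timestamp > lastBotRunAt;
}
```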
Enhanced review experience
Given the proper setup, SMEs can invoke MCP-powered tools during PR reviews through GitHub bot commands. For example:
- `/check-accuracy` triggers real-time validation against current APIs
- `/related-docs` surfaces documentation requiring similar updates, if not already included in the PR
- `/context [component]` retrieves current system state information
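A bot receiving these commands might dispatch them with something like the following sketch; the handler bodies are placeholders you would wire up to your MCP client:

```ts
// A sketch of dispatching the review commands above from a PR comment.
const handlers: Record<string, (arg?: string) => Promise<string>> = {
  "/check-accuracy": async () => "validation results...",
  "/related-docs": async () => "related pages needing similar updates...",
  "/context": async (component) => `current system state for ${component}...`,
};

async function handleComment(body: string): Promise<string | undefined> {
  const [command, arg] = body.trim().split(/\s+/, 2);
  const handler = handlers[command];
  return handler ? handler(arg) : undefined; // ignore ordinary comments
}
```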
MCP as a cornerstone of the ecosystem
An ideal MCP documentation ecosystem has the power to transform how developers everywhere learn and implement the protocol.
Living documentation as protocol authority
By operating documentation sites as MCP clients that continuously validate examples, specifications, and integration patterns against live MCP servers, we create self-healing documentation, where:
- Code examples are automatically tested against live implementations of complex systems, not just static models (i.e. "docs-as-tests" practices), alleviating work for the writer or QA engineer (see the sketch after this list)
- Deprecated patterns are flagged in real-time
- Security compliance is verified continuously
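The core of a docs-as-tests check can be quite small. In this sketch, the endpoint pattern and schema shape are hypothetical, and the live endpoint set would be read over MCP as in the client sketch earlier:

```ts
// A sketch of a "docs-as-tests" check: verify that every API endpoint a docs
// page mentions still exists in the live schema served over MCP.
function checkDocPage(markdown: string, liveEndpoints: Set<string>): string[] {
  // Pull anything that looks like a versioned API path out of the page.
  const mentioned = markdown.match(/\/v\d+\/[\w/-]+/g) ?? [];
  // Every mentioned endpoint that no longer exists is a failing "test".
  return mentioned.filter((e) => !liveEndpoints.has(e));
}
```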
How do we teach more people about MCP magic?
I'm a strong believer in "learn by doing." It's one of the reasons I made starlight-mcp: a very barebones, basic example with working tests that demonstrates how a mock API running its own MCP server can feed a docs site running an MCP client, which ingests changes to the API schema and automatically publishes them. (Plus I'm learning more about Starlight, so this seemed like a fun way to incorporate both.)
A note on starting with APIs
Although I do think API docs are one of the best places to start, they also risk being a slightly underwhelming first encounter with MCP for anyone who already has a really solid CI/CD pipeline for autogenerated API docs: going from programmatic API generation to MCP-augmented API generation is a smaller leap than leveraging MCP to ingest really complex, non-standardized data. But, again, the API demo is a really easy place to start. I'll return to this topic in another blog post.
We are already in a world where a plethora of MCP servers and clients are popping up all over the place, but I'd love to see more readily accessible, open-source demos for beginners that can be pulled down from a repo and that walk the user through the process.
Welcome to the future of invisible
MCP-powered approaches transform documentation development from bottlenecked, static waterfalls into dynamic, self-healing ecosystems. By creating self-maintaining documentation where accuracy improves continuously through automated monitoring, intelligent change detection, and enhanced human review processes, we enable documentation that scales.
With self-healing, self-updating documentation, the developer experience gets that much closer to being the good kind of "invisible."
References
- MCP: Model Context Protocol, an open protocol for enabling LLMs to interact with external systems in a structured, secure, and auditable way.
- SME: Subject Matter Expert, typically a technical reviewer or domain expert responsible for validating documentation changes.
- Docs-as-tests: A practice where documentation code examples are automatically executed as tests to ensure accuracy and prevent drift.
- For more on Starlight and MCP integration, see: starlight-mcp example repo.
- For curated lists of MCP servers and clients, see awesome-mcp-clients and awesome-mcp-servers.