Engineering Manager Playbook as a Living LLM Wiki

Onboarding a new engineering manager fails the same way every time: the playbook is out of date before the new hire finishes their second week. Rebuilding it as a living LLM wiki fixes that.


Why an Engineering Manager Playbook Exists

Onboarding engineering managers without a written playbook is expensive. They spend their first month asking the same questions every previous hire already asked. Tribal knowledge lives in Slack DMs, old wiki pages, and the heads of whichever senior engineers happen to be free that afternoon. A written engineering manager playbook compresses months of that discovery into a few days of guided reading. Across several hires, the document has done real work: it answers the standard questions about how teams are structured, who owns what, how decisions get made, and where to find the parts of the system that matter. The hard part was never whether to have a playbook. It was how to keep it honest.


What Goes Into an Engineering Manager Playbook

A useful playbook covers the structural layer of the job, not the personal-style layer. The topics worth writing down are the ones that change hands when a role changes hands:

  • Engineering metrics: the four DORA metrics (deployment frequency, lead time for changes, change failure rate, time to restore service), cycle time, delivery rate, tech-debt ratio.
  • Performance reviews: cadence, competency model, how feedback is aggregated, how recognition works.
  • Goals framework: how business and technical goals are defined, who owns which, how RACI is applied.
  • Incident management: severity definitions, alerting, on-call, the incident response loop.
  • Observability: how metrics, traces, and logs are split across the stack.
  • The 30/60/90 onboarding plan for the role itself.
  • Team topology, tools catalog, roles, meeting cadence, and the Slack channel map.

None of this is revolutionary. What makes it valuable is that it is written down in one place, cross-linked, and correct on the day the new manager reads it. That last condition is where static documents lose.


Why Engineering Manager Playbooks Go Stale

A hand-maintained playbook rots for structural reasons, not from laziness. Every time a team is renamed, a tool is replaced, a process is revised, or a channel is retired, somebody has to remember to update the doc. Nobody does, consistently. Ingesting a new source means opening the file, finding the right section, editing it, checking the cross-references, and hoping no other page now contradicts the change. The friction is high enough that updates get skipped. After a quarter or two, the playbook describes an organization that no longer exists, and new hires quietly learn to stop trusting it.

The underlying problem is that a playbook has been treated as a document. It should be treated as an index over raw material.


Karpathy’s LLM Wiki Pattern

The turning point came from Andrej Karpathy’s LLM wiki gist. The pattern is simple and load-bearing. Raw notes, articles, meeting transcripts, and ad-hoc documents go into a raw/ folder. An LLM ingests them and produces a structured wiki with three kinds of pages: concepts, entities, and source summaries. Every ingest updates an index file and appends to a log. Pages cross-link each other using plain markdown. When a new source contradicts an existing claim, the LLM flags the contradiction rather than silently overwriting.
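One way to lay that pattern out on disk looks like the tree below. The gist prescribes the roles (raw material, concept pages, entity pages, source summaries, index, log); the folder names beyond raw/ are assumptions here:

```
wiki/
├── raw/          # untouched source material: notes, transcripts, exports
├── concepts/     # one page per process or method
├── entities/     # one page per team, tool, or role
├── sources/      # one summary per ingested raw file
├── index.md      # routing layer: every page, one line each
└── log.md        # append-only ingest history
```

Keeping raw/ immutable matters: the wiki pages can always be regenerated or re-checked against the original material.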

No retrieval-augmented generation is needed. The index is the routing layer. When a question comes in, the model reads the index, pulls the relevant pages, and answers with citations to the pages it used. When new material arrives, the same index tells it what already exists and where to merge. The wiki compounds instead of bloating. It lints itself for contradictions, orphan pages, and missing cross-references on request.
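The structural half of that lint does not even need an LLM. Here is a minimal sketch of the orphan-page and broken-link checks, assuming a markdown wiki whose links are written relative to the repo root; the `lint` function and folder names are illustrative, not part of the gist:

```python
import re
from pathlib import Path

# Plain markdown links to other .md pages, e.g. [goals](concepts/goals.md)
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)]+\.md)\)")

def lint(wiki_root: str) -> dict:
    """Report orphan pages (no inbound links) and links to missing pages."""
    root = Path(wiki_root)
    pages = {p.relative_to(root).as_posix() for p in root.rglob("*.md")}
    linked, broken = set(), []
    for page in sorted(pages):
        text = (root / page).read_text(encoding="utf-8")
        for target in LINK_RE.findall(text):
            if target in pages:
                linked.add(target)
            else:
                broken.append((page, target))
    # The index itself needs no inbound link to justify its existence.
    orphans = sorted(pages - linked - {"index.md"})
    return {"orphans": orphans, "broken": broken}
```

The contradiction check stays with the model; this script only catches the mechanical failures that accumulate silently between ingests.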

Applying this to the engineering manager playbook turned a brittle document into a living one. Raw notes go in, the wiki absorbs them, and the next reader gets current information instead of a frozen snapshot.
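The index that does the routing stays small enough for the model to read whole on every query. A hypothetical index.md for this playbook (page names invented for illustration):

```markdown
# Index

## Concepts
- [Goals framework](concepts/goals-framework.md): business and technical goals, RACI
- [Incident management](concepts/incident-management.md): severities, on-call loop

## Entities
- [Platform team](entities/platform-team.md)
- [Grafana](entities/grafana.md)

## Sources
- [2026-01-12 staff meeting](sources/2026-01-12-staff-meeting.md)
```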

```mermaid
flowchart LR
    A["Raw notes<br/>(articles, meeting<br/>transcripts, PDFs)"] -->|ingest| B(("Claude Code"))
    B --> C["Concept pages<br/>(processes, methods)"]
    B --> D["Entity pages<br/>(teams, tools, roles)"]
    B --> E["Source summaries"]
    C --> F[["index.md<br/>+ activity log"]]
    D --> F
    E --> F
    F -->|query| G["Cited answer"]
    F -.->|next ingest| B
    classDef hub fill:#cc785c,stroke:#cc785c,color:#0d1117,font-weight:700
    classDef page fill:#1c2128,stroke:#30363d,color:#e6edf3
    classDef raw fill:#21262d,stroke:#30363d,color:#8b949e
    class B hub
    class C,D,E,F page
    class A,G raw
```

Claude Code as the Interface, Obsidian as the Map

The day-to-day interface is Claude Code. Open the wiki repository in it and the whole thing behaves like a person who has read every page. Ask about the goals framework and it cites the relevant page. Paste a meeting note and say “ingest this” and it rewrites the three or four pages that actually need to change. Run a lint pass and it returns a checklist of contradictions and gaps. That is the part that is difficult to explain to anyone who has not tried it: the wiki stops feeling like documentation and starts feeling like a colleague with perfect recall of their own notes. Claude Code is remarkably good at this kind of work, and it is the first tool where a wiki has actually felt maintained instead of merely stored.
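Much of that behavior can be pinned down in a CLAUDE.md at the repository root so every session follows the same rules. A hypothetical version, written under the pattern described above:

```markdown
# Wiki rules

## Ingest
- New material lands in raw/ and is never edited after the fact.
- On "ingest", read index.md first, then update only the concept,
  entity, and source pages the material actually touches.
- Append one line per ingest to log.md.
- When new material contradicts an existing claim, flag the
  contradiction on the affected page; never silently overwrite.

## Lint
- On "lint", report contradictions, orphan pages, and links that
  point at pages which do not exist.
```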

Obsidian sits alongside as a reading surface. It lacks the live interaction of Claude Code, but it is an excellent IDE for a markdown knowledge base. The graph view exposes link structure at a glance, backlinks make navigation instant, and keyboard-driven browsing is fast. Claude Code is how the wiki is maintained and queried. Obsidian is how it is read and explored.


Documentation that regenerates itself is not a gimmick. It is the only kind that survives contact with a fast-moving engineering organization.

