Context Engineering: Building a Knowledge OS with Claude
TL;DR
What happens when you stop prompting and start architecting context. A practitioner's account of building a git-versioned Knowledge OS — and what it taught me about working with LLMs.
Reasoning Seed
A Reasoning Seed is a structured prompt you can copy into your AI reasoning tool (Claude, ChatGPT) or note system (Obsidian, Notion). It contains the article's thesis, its core tension, and our lab context — ready for your own analysis.
More ways to interact with this content appear in the discussion questions below.
Tension: If context determines the quality of the answer — does knowledge architecture become the actual core competency?
Lab context: A practice report from the lab: how we achieve better AI results through structured context — and why the Knowledge OS has become our central tool for knowledge work.
Key Insights
1 — From Prompting to Context Architecture
The shift: prompting is writing instructions. Context engineering is designing the environment in which an LLM thinks. The difference matters — a good prompt gets a good answer; a good context system gets consistently good answers across sessions, tasks, and domains. My setup: a git-versioned Markdown repository (Obsidian vault) with a 3-layer context principle that feeds every LLM interaction.
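One plausible shape for such a vault, with the three layers marked; the directory and file names here are illustrative, not the author's actual structure:

```
vault/
├── CLAUDE.md              # Layer 1: global identity & conventions
├── tracks/
│   ├── research/
│   │   ├── README.md      # Layer 2: project scope & status
│   │   ├── status.md
│   │   └── notes/         # Layer 3: task-specific files
│   └── writing/
│       └── README.md
└── .git/                  # everything versioned, plain Markdown
```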
2 — The 3-Layer Context Principle
Layer 1 (Global): CLAUDE.md + system files — who I am, how I work, what conventions apply. Always present. Layer 2 (Project): Each track has its own README and status — scoping context to the active domain. Layer 3 (Task): Individual files, skills, commands — the specific material for the task at hand.
Why it works: the LLM gets progressively narrower context without losing the big picture. No redundant re-explaining. No context drift across sessions.
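The layering above can be sketched as a small context-assembly step: concatenate the layer files broad-to-narrow into one string that precedes the actual task. This is a minimal sketch, not the author's implementation, and the file paths are assumptions:

```python
from pathlib import Path

# Hypothetical layer files, ordered broad to narrow.
LAYERS = [
    Path("CLAUDE.md"),                      # Layer 1: global conventions
    Path("tracks/research/README.md"),      # Layer 2: active project
    Path("tracks/research/notes/task.md"),  # Layer 3: task material
]

def assemble_context(layers):
    """Concatenate layer files into one context string, broadest first.

    Missing files are skipped, so the same function works whichever
    layers a given task actually needs.
    """
    parts = []
    for path in layers:
        if path.exists():
            parts.append(f"<!-- {path} -->\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

context = assemble_context(LAYERS)
```

The ordering is the point: the global layer always comes first, so narrowing the scope never discards the big picture.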
3 — CLAUDE.md as a Living Document
Not a static config file — a continuously evolving document that encodes working style, conventions, project landscape, and decision frameworks. It’s the single most impactful file in the system. Every change to roles, tools, or projects propagates through a dependency matrix to keep the entire system consistent.
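The dependency-matrix idea can be made concrete with a small sketch: a mapping from each file to the files that need review when it changes, plus a transitive walk. The matrix entries below are hypothetical, and the article does not specify how its matrix is implemented:

```python
# When a file on the left changes, the files on the right need review.
# Paths are illustrative placeholders.
DEPENDS_ON = {
    "CLAUDE.md": ["tracks/research/README.md", "tracks/writing/README.md"],
    "tracks/research/README.md": ["tracks/research/status.md"],
}

def files_to_review(changed, matrix):
    """Return every file transitively downstream of the changed ones."""
    stale, queue = set(), list(changed)
    while queue:
        current = queue.pop()
        for dependent in matrix.get(current, []):
            if dependent not in stale:
                stale.add(dependent)
                queue.append(dependent)
    return stale
```

A change to CLAUDE.md would flag both project READMEs and, transitively, the research status file — which is exactly the propagation behavior the article describes.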
4 — LLM-Agnostic by Design
The Knowledge OS is tool-agnostic: plain Markdown, git-versioned, no proprietary formats. Claude Code is the primary interface today, but the architecture doesn’t depend on it. Skills and commands are the only Claude-specific layer — their logic is documented and transferable. This is a conscious design decision: the knowledge should outlast any single tool.
5 — What Actually Changed in My Practice
Concrete outcomes across six domains:
- Reflection: Structured dialogues that challenge my own thinking — not validation, but productive friction
- Research: Market analysis, decision frameworks, problem-space exploration — faster and more systematic
- Production: Concepts, emails, workshop prep, prototypes — from hours to minutes for first drafts
- Writing: A dialogue that stimulates writing and rigorously edits what’s written
- Development: Functional prototypes (HTML/CSS/JS/React/Tailwind) as a non-engineer
- Organization: Knowledge base maintenance, task structuring, learning paths
6 — Lessons Learned: What Doesn’t Work
- Context overload: more context isn’t always better — there’s a sweet spot between completeness and noise
- Maintenance debt: a living system needs gardening. Outdated context actively misleads
- The automation paradox: building the system to save time costs significant time upfront
- Permission friction: tool configurations that block flow are a real productivity killer
- Transfer limits: what works for one person’s brain doesn’t automatically work for another’s
What This Means for Design Leaders
Why Designers Should Care
- Context engineering is information architecture for AI — a design discipline, not an engineering problem
- Design leaders who understand context systems can shape how teams interact with AI — not just use it
- The skills transfer: user research, systems thinking, service design all apply directly
What’s Still Missing
- Team-scale patterns: this is a solo practitioner’s setup — collaborative context systems are unsolved
- Evaluation frameworks: how do you measure whether your context architecture actually works?
- Onboarding: the learning curve is real — this isn’t a weekend project
- Standardization: every practitioner builds their own system from scratch
Discussion Questions for the Next Lab
01 Context as Product: If context engineering is the new UX for AI — what would a “context design system” look like? Reusable patterns, shared conventions, composable layers?
02 Design Leadership: How do we help non-technical team members build effective context systems? Is there a low-barrier entry point, or does this require technical fluency?
03 Knowledge Debt: Every knowledge system accumulates debt. What’s a sustainable maintenance practice — and when does the cost of maintenance exceed the value of the system?
04 Beyond Solo: This is a personal knowledge OS. What changes when you scale context engineering to a team of 5, 15, 50? What stays, what breaks?
Sources
- Anthropic — Claude Code Documentation
- Simon Willison — Context Engineering
- Tobi Lütke — Context Engineering (X/Twitter, 2025)
- Wikipedia (DE) — Context Engineering
Glossary
Context Engineering: Designing the persistent information environment in which an LLM operates — beyond individual prompts. Includes system prompts, file structures, conventions, and dependency systems.
Knowledge OS: A structured, git-versioned knowledge repository designed to serve both human thinking and LLM context. Plain Markdown, tool-agnostic, with layered context architecture.
3-Layer Context: Architecture pattern of Global context (identity, conventions) → Project context (domain, status) → Task context (specific files, commands). Progressively narrows scope without losing coherence.
CLAUDE.md: A project-level instruction file read by Claude Code at session start. Functions as the “system prompt” for a repository — encoding identity, conventions, and navigation.
Curated by David Latz · Panoptia March 2026
Related Field Notes
LLM Knowledge Bases: Why Everyone Lands on the Same Stack
Apr 3, 2026 · Andrej Karpathy
Agent Memory: Why Your AI Has Amnesia and How to Fix It
Mar 27, 2026 · Casius Lee (Oracle)
Documentation Is the New Interface — and Design Systems Are the Testing Ground
Apr 3, 2026 · Figma (Developer Documentation)