✳︎ Panoptia Labs

The Agile Manifesto Needs an Update — for Working with AI Agents

March 19, 2026 · David Latz

TL;DR

Hundreds of individual AI setups, but no shared common ground. What's missing is a playbook for agentic collaboration in small teams.

Reasoning Seed

A Reasoning Seed is a structured prompt you can copy into an AI reasoning tool (Claude, ChatGPT) or a note-taking app (Obsidian, Notion). It contains the article's thesis, its core tension, and our lab context — ready for your own analysis.


Tension: If agents make their own decisions and carry their own context — who is responsible for the outcome: the human who orchestrates, or the team that trusted the agent?

Lab context: An observation from lab practice: our collaboration with AI agents has not replaced agile principles, but it has fundamentally changed how we interpret them.

Thesis

We have the Agile Manifesto. We have Scrum, Kanban, Shape Up. We have hundreds of AI evangelists with individual setups, prompting guides, and tool recommendations. What we don’t have: a shared framework for how humans and AI agents collaborate in teams.

Not for individuals. For teams of 2–5 people working with Claude Code, Cursor, or similar agents — who are realizing the old rituals no longer fit.

Most people think AI agents are faster tools. But agents carry their own context, their own session history, their own decisions. Working this way doesn’t call for better prompts — it calls for new forms of collaboration.

What has changed

Small teams adopt agentic workflows faster than organizations. Two people with structured context and capable agents deliver what used to require a team of five. That’s not an efficiency gain — it’s a different working model.

But the gap between “I use ChatGPT” and “we orchestrate agents as a team” grows wider every day. And nobody is documenting the collaboration patterns emerging in practice. Every team reinvents them. Every week.

The paradigm shift: partner, not tool

The most important shift isn’t technical. It’s mental: from “I prompt a tool” to “I work with a partner.”

It sounds like an overstatement until you do it. An agent that knows the project context, reads specs, and builds on previous sessions behaves differently from a search field. It makes decisions. It proposes structures. It pushes back when the spec doesn’t hold up.

What this means for teams: responsibility distributes differently. Review becomes more important than briefing. Trust models replace control loops — not because control is unnecessary, but because it works differently when the counterpart is neither an employee nor a passive instrument.
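The idea of graduated trust replacing binary control can be sketched in a few lines of Python. The tiers, inputs, and thresholds below are invented for illustration, not a framework the article prescribes:

```python
from enum import Enum

class Trust(Enum):
    AUTO_ACCEPT = "accept without review"
    SPOT_CHECK = "review a sample of the changes"
    FULL_REVIEW = "human reviews every change"

def trust_level(task_risk: float, agent_track_record: float) -> Trust:
    """Map a task's risk and the agent's past success rate to a review tier.

    Both inputs are in [0, 1]; the cutoffs are illustrative, not empirical.
    """
    if task_risk < 0.2 and agent_track_record > 0.9:
        return Trust.AUTO_ACCEPT
    if task_risk < 0.5 and agent_track_record > 0.7:
        return Trust.SPOT_CHECK
    return Trust.FULL_REVIEW
```

The point of such a model is not the numbers but the shape: review effort becomes a function of risk and history rather than a blanket policy.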

Three areas that need new answers

How do we decompose work? Agents can handle parallel workstreams — but only if issues have the granularity that enables autonomous work. A “human-readable” issue isn’t enough. It needs machine-readable context: clear acceptance criteria, scoped technical boundaries, named dependencies. This changes how we slice epics and write specs.
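As an illustration, an issue body with machine-readable context might look like the sketch below. The section names, file paths, and issue number are hypothetical, not a standard:

```markdown
## Goal
Add rate limiting to the public API.

## Acceptance criteria
- Requests above 100/min per key return HTTP 429
- Existing integration tests still pass

## Scope
- Touch only `api/middleware/`; do not change the auth layer

## Dependencies
- Builds on the config loader from #142  <!-- hypothetical issue number -->
```

Nothing here is exotic; what matters is that an agent can act on every line without asking a human what was meant.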

How do we manage dependencies? In traditional teams, dependencies exist between people. In agentic teams, new types emerge: agent waiting on human review, agent building on agent output, two agents working on the same codebase. The human becomes the reviewer bottleneck — not because they’re too slow, but because the orchestration is missing.
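A toy model makes the new dependency types concrete. The task names and the queue logic below are invented for illustration; the point is that a single human review gates several downstream agent tasks:

```python
# Each task lists what it waits on; the "human:" prefix marks a review gate.
tasks = {
    "agent-a:implement-api": [],
    "human:review-api": ["agent-a:implement-api"],
    "agent-b:build-client": ["human:review-api"],  # agent building on agent output
    "agent-c:write-docs": ["human:review-api"],
}

def ready(done: set) -> list:
    """Tasks whose dependencies are all finished."""
    return [t for t, deps in tasks.items()
            if t not in done and all(d in done for d in deps)]

done: set = set()
order = []
while len(done) < len(tasks):
    batch = ready(done)
    order.append(batch)  # everything in a batch could run in parallel
    done.update(batch)
```

Running this yields three batches, and the middle one contains only the human review: both agents idle until it clears, which is the bottleneck the paragraph describes.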

How do we hand off context? The hardest question. Agentic work is session-based. Knowledge trapped in chat history is lost the moment a session ends. Structured context — Markdown, specs, issue bodies, versioned configuration — becomes the actual communication medium. Not Slack, not meetings. Files.
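A minimal sketch of what "files as medium" can mean in practice. The layout is illustrative, though CLAUDE.md is the real convention Claude Code uses to load project context:

```
repo/
  CLAUDE.md          # project context the agent reads at session start
  docs/specs/        # one markdown spec per feature, written before work begins
  docs/decisions/    # short records of decisions made in past sessions
  .github/           # issue templates with acceptance criteria and scope
```

Everything an agent needs to resume work lives in version control, not in anyone's chat history.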

Not a playbook. Principles.

The temptation is to write an “Agile 2.0 Playbook” — with roles, ceremonies, and templates. But that would be the wrong approach. What works are principles and patterns — like the Agile Manifesto itself: brief, opinionated, adaptable.

What sets this apart: prompt engineering guides solve an individual problem. AI governance frameworks address organizations. What’s missing is the layer in between — collaboration patterns for small teams building with agents. Not regulated, not individual, but shared.

Critical Assessment

What Holds Up

  • The observation that individual AI setups don’t scale to teams matches practice: tool competence is personal, collaboration is structural
  • Structured context as communication medium isn’t wishful thinking — teams working with CLAUDE.md, specs, and issue bodies consistently report better agent outcomes
  • The Agile Manifesto analogy holds: there too, it was principles rather than processes that enabled the shift
  • The gap between prompt engineering guides (individual) and AI governance (organizational) is real and unoccupied

What Needs Context

  • Early practice, little evidence: The patterns described stem from personal work and a small circle of early adopters — no broad validation
  • Selection bias: Those working with AI agents in teams today are not a representative cross-section. The patterns may work for this audience and not for others
  • Agile analogy has limits: The Agile Manifesto addressed an industry with shared problem understanding. For agentic collaboration, that shared understanding doesn’t exist yet — a manifesto without community is just a blog post
  • Tool dependency: The experience base is heavily tied to Claude Code and Anthropic products. Whether the patterns transfer tool-agnostically remains open

Discussion Questions

01 Team Readiness: At what point does it make sense to formalize agentic collaboration — and when is it too early? Is there a minimum level of individual agent competence before team patterns take hold?

02 Context as Medium: If structured files replace conversations as the communication medium — what happens to the tacit knowledge that currently emerges in meetings and discussions? Are we losing something essential?

03 Orchestration: Who orchestrates in an agentic team? Does a new role emerge — the “agent coordinator” — or is it distributed across everyone? And how do you prevent the human from becoming the bottleneck?

04 Scaling: The thesis addresses teams of 2–5. What happens at 10, at 20? Does it then require roles and ceremonies after all — or do principles scale better than processes?

05 Culture vs. Structure: Is the real obstacle to agentic collaboration structural (missing patterns) or cultural (missing trust in AI as a partner)?

Glossary

Agentic Collaboration A working model where AI agents are integrated into team processes not as passive tools but as autonomous partners with their own context, session history, and decision-making capacity.

Collaboration Patterns Repeatable patterns for teamwork — here specifically for interaction between humans and AI agents. They address the layer between individual prompts and organizational governance.

Structured Context Machine-readable project information in versioned files (Markdown, YAML, JSON) — as an alternative to ephemeral chat history. Serves as the shared communication medium between human and agent.

Session-based Work A working mode where knowledge and progress are bound to individual chat sessions. Without explicit persistence (in files, issues, commits), context is lost when a session ends.

Trust Models Decision frameworks for when agent output is accepted without review and when human verification is needed. Replaces binary control (review everything / review nothing) with graduated trust levels.

Curated by David Latz · Panoptia March 2026