✳︎ Panoptia Labs

Glossary

Terms from the field.

98 terms · 15 articles

3

3-Layer Context

Architecture pattern: Global context (identity, conventions) → Project context (domain, status) → Task context (specific files, commands). Progressively narrows scope without losing coherence.

Source: Context Engineering: Building a Knowledge OS with Claude
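
The layered narrowing can be sketched in a few lines; the layer names and example strings below are illustrative, not from the article:

```python
# Hypothetical sketch of the 3-Layer Context pattern: each layer narrows
# scope, and the final context is their ordered concatenation.

def build_context(global_ctx: str, project_ctx: str, task_ctx: str) -> str:
    """Concatenate layers from broadest to narrowest scope."""
    layers = [
        ("GLOBAL", global_ctx),    # identity, conventions
        ("PROJECT", project_ctx),  # domain, status
        ("TASK", task_ctx),        # specific files, commands
    ]
    return "\n\n".join(f"## {name}\n{body}" for name, body in layers)

context = build_context(
    "Write concise, typed Python.",
    "Repo: billing service, status: refactoring.",
    "Fix the rounding bug in invoice.py, then run the tests.",
)
```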

A

Adoption Barrier

The effort a person or organization must invest to use a new tool productively. The higher the barrier, the slower the adoption.

Source: When Visualization Becomes Cheap, Clarity Becomes Expensive

Agent Compliance

The degree to which AI-generated output conforms to the constraints of a design system. A potential DesignOps metric for quality assurance of agent work.

Source: 2026-04-22-designops-ai-agents-en

Agent Memory

A persistent, evolving state that gives AI agents context across sessions. Not to be confused with the context window (volatile) or RAG (stateless).

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Agentic Collaboration

A working model where AI agents are integrated into team processes not as passive tools but as autonomous partners with their own context, session history, and decision-making capacity.

Source: The Agile Manifesto Needs an Update — for Working with AI Agents

Agents (as Consumer Class)

Software systems that act autonomously, navigate via LLM steering, and execute tasks. Differ from human users (GUIs) and traditional computer programs (APIs) through human-like behavior at computer-like execution speed.

Source: Software 3.0 — What Karpathy's Theses Mean for Interface Design

AI Agent

An AI agent is an artificial intelligence system that autonomously executes multi-step tasks — not just providing single answers, but planning, acting, evaluating results, and iterating. Unlike a chatbot, an agent can use tools, write and execute code, edit files, and interact with external systems.

Source: The Software Development Lifecycle Is Dead, Something Big Is Happening

AI Ethics

Research field addressing moral questions of AI system deployment. Encompasses technical approaches (algorithmic fairness, explainability), regulatory frameworks (EU AI Act), and philosophical groundwork. Internally heterogeneous, no unified methodological approach.

Source: Critical Theory as a Diagnostic Tool for AI Systems

AI Incident Database (AIID)

A public database of documented AI security incidents, curated via Stanford HAI. Foundation for the incident curve in the visualization. Licensed under CC BY.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

Alignment

The process of ensuring AI systems act in accordance with human values, intentions, and safety requirements. Goal: the system reliably does what humans want — even in unforeseen situations.

Source: AI 2027: A Scenario

Ambient AI

Hypothetical design pattern: AI systems that work in the background (updating context, checking consistency, preparing suggestions) and only move to the center of attention when something relevant occurs. A counter-proposal to the dominant chat paradigm.

Source: 2026-04-14-calm-technology-ai-age-en

APE (Automatic Prompt Engineer)

System that automatically generates a pool of prompt candidates, evaluates them via scoring function, and selects the best. Demonstrates that LLMs can develop prompts at or above human level.

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts
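
The generate–score–select loop can be sketched as follows; the candidate templates and the scoring function are stand-ins for LLM calls, invented for illustration:

```python
# Minimal APE-style loop: propose prompt candidates, score each,
# keep the best. A real system would score by accuracy on held-out
# examples; here length stands in for the scoring function.

def generate_candidates(task: str) -> list[str]:
    # Stand-in for "LLM, propose instructions for this task".
    templates = [
        "Solve the task: {t}",
        "Think step by step, then solve: {t}",
        "You are an expert. {t}",
        "{t} Answer concisely.",
    ]
    return [tpl.format(t=task) for tpl in templates]

def score(prompt: str) -> float:
    # Stand-in scoring function (real APE evaluates task performance).
    return float(len(prompt))

def ape_select(task: str) -> str:
    return max(generate_candidates(task), key=score)

best = ape_select("Add the two numbers.")
```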

ASI (Artificial Superintelligence)

A hypothetical AI that surpasses human intelligence across all domains — not just narrow tasks like chess or coding, but generally.

Source: AI 2027: A Scenario

Autonomy Slider

Design pattern where users can contextually control the automation level of an AI interaction — from minimal assistance to full delegation.

Source: Software 3.0 — What Karpathy's Theses Mean for Interface Design

C

Calm Technology

Design philosophy formulated in 1995 by Mark Weiser and John Seely Brown (Xerox PARC). Core idea: The best technologies require minimal attention and move fluidly between the periphery and center of perception.

Source: 2026-04-14-calm-technology-ai-age-en

Chain-of-Thought (CoT)

Technique where the model thinks step by step before responding. Improves accuracy on complex tasks — math, logical reasoning, analysis.

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts
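
The difference to a direct prompt is easiest to see side by side; the question and step list below are invented example text:

```python
# Direct prompt vs. Chain-of-Thought prompt for the same question.

question = "A train travels 60 km in 40 minutes. What is its speed in km/h?"

direct_prompt = f"{question}\nAnswer:"

cot_prompt = (
    f"{question}\n"
    "Let's think step by step:\n"
    "1. Restate the given quantities.\n"
    "2. Convert units where needed.\n"
    "3. Compute the result and state it clearly.\n"
    "Answer:"
)
```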

CLAUDE.md

A project-level instruction file read by Claude Code at session start. Functions as the “system prompt” for a repository — encoding identity, conventions, and navigation.

Source: Context Engineering: Building a Knowledge OS with Claude

CoALA (Cognitive Architectures for Language Agents)

Framework from Princeton (2023) defining four memory types for AI agents — Working, Procedural, Semantic, and Episodic Memory — derived from the SOAR cognitive architecture and continuing the cognitive-science tradition that began with Atkinson and Shiffrin. The field's lingua franca.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It, 2026-04-10-drei-speicher-modell-en


Collaboration Patterns

Repeatable patterns for teamwork — here specifically for interaction between humans and AI agents. They address the layer between individual prompts and organizational governance.

Source: The Agile Manifesto Needs an Update — for Working with AI Agents

Communicative Action

Jürgen Habermas’s term (1981) for mutual understanding under conditions that enable equal participation for all involved: comprehensibility, the possibility of objection, no structural asymmetry.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Compiled Wiki (Karpathy)

A Markdown wiki that an LLM generates from raw data and incrementally maintains — with summaries, categories, and backlinks. Read-heavy, with the LLM as maintainer.

Source: LLM Knowledge Bases: Why Everyone Lands on the Same Stack

Compute

The computational capacity required to train and run AI models. Typically measured in GPU-hours. Concentration of compute among a few actors is a central geopolitical issue.

Source: AI 2027: A Scenario

Context Architecture (Kontextarchitektur)

Design of the information environment in which AI agents operate — file structures, documentation formats, conventions, retrieval paths. In the DesignOps context: the infrastructure that enables agents to work with design systems.

Source: 2026-04-22-designops-ai-agents-en

Context Builder

Tools like Gitingest or DeepWiki that prepare information for efficient LLM and agent processing. Address the problem that existing web formats (HTML) are poorly suited for AI consumption.

Source: Software 3.0 — What Karpathy's Theses Mean for Interface Design

Context Drift

A phenomenon where an AI agent’s internal state representation increasingly diverges from reality over long sessions — similar to the “telephone game” effect in human communication.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

Context Engineering

Context Engineering refers to the ability to provide an AI agent with the right context — all the information, rules, and constraints it needs to solve a task well. The better the context, the better the result. In agent-driven workflows, this competency increasingly replaces manual coding as the engineer’s central skill.

Source: The Software Development Lifecycle Is Dead

Continuous Deployment

Continuous Deployment means that code changes are automatically delivered to the production environment without a manual release step — as soon as all automated tests pass. Combined with feature flags and observability, it enables fast iteration cycles with controllable risk.

Source: The Software Development Lifecycle Is Dead

Convergent Evolution (here: technological)

Independent actors develop the same solution to the same problem — not through coordination, but through identical selection pressures. In biology: wings in birds and bats. Here: Markdown + Git + LLM across different practitioners.

Source: LLM Knowledge Bases: Why Everyone Lands on the Same Stack

Critical Theory

Tradition of social philosophy, originating at the Institute for Social Research in Frankfurt in the 1920s. Combines social analysis with the normative goal of human emancipation. Works with immanent critique rather than external ideal standards.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Cross-Tenant Failure

A security vulnerability in multi-tenant systems where data or access leaks between different customers (tenants). New category since 2024 — arises from AI-specific architecture patterns like shared model instances or embedding stores.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

D

Daemon

A software process that runs in the background without direct user interaction. In the KAIROS context: an agent that autonomously executes tasks after a session ends.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

Data Design

The design of data visualizations and data-driven interfaces — from charts to dashboards to interactive exploration tools.

Source: When Visualization Becomes Cheap, Clarity Becomes Expensive

Decay Function

A mathematical function that reduces a memory’s relevance score over time — imitating biological forgetting. Semantic similarity × exponential time decay.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It
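
The formula quoted above (semantic similarity × exponential time decay) can be written directly as code; the half-life value is an arbitrary choice for illustration:

```python
import math

# Decay-weighted relevance: similarity multiplied by exponential
# time decay. After one half-life, a memory keeps half its score.

def relevance(similarity: float, age_days: float,
              half_life_days: float = 30.0) -> float:
    decay_rate = math.log(2) / half_life_days
    return similarity * math.exp(-decay_rate * age_days)

fresh = relevance(0.9, age_days=0)   # no decay yet
old = relevance(0.9, age_days=30)    # one half-life later
```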

DesignOps (Design Operations)

Optimization of processes, tools, and structures that design teams need to work effectively and at scale. Encompasses People, Process, and Craft/Tool Operations.

Source: 2026-04-22-designops-ai-agents-en

Dialectic of Enlightenment

Horkheimer and Adorno’s core thesis (1944): reason and rational control can turn into their opposite — into domination, normalization, oppression — when they become ends in themselves rather than means for human liberation.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Distribution

Access to users through existing channels and products. Google reaches billions via search, Meta via social media — OpenAI has no comparable channel and must acquire users directly.

Source: How Will OpenAI Compete?

Doubling Rate

The time interval at which measurable AI model capabilities double. According to METR data, currently at approximately 7 months, trending toward 4 months.

Source: Something Big Is Happening
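
A fixed doubling rate implies simple exponential growth; the starting horizon of one hour below is an invented example value, not a METR figure:

```python
# Back-of-the-envelope projection for a fixed doubling rate: with
# capability doubling every 7 months, a horizon grows by 2**(t/7).

def projected_horizon(start_hours: float, months: float,
                      doubling_months: float = 7.0) -> float:
    return start_hours * 2 ** (months / doubling_months)

after_14_months = projected_horizon(1.0, 14)  # two doublings
```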

DSPy (Declarative Self-improving Python)

Framework that treats prompt pipelines as declarative programs and automatically optimizes them at compile time. Replaces manual prompt tuning with systematic optimization.

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts

Dual Interface

The pattern of maintaining two documentation layers in parallel — one for human developers (readable, contextual), one for AI agents (parsable, imperative, granular).

Source: 2026-04-22-designops-ai-agents-en

E

Emancipation

In the context of Critical Theory: the expansion of human agency and self-determination — as a benchmark for social systems. In the AI context: the question of whether a system expands judgment or replaces it through dependency.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Engagement Curve

A graphical representation of how intensively and frequently users engage with a product. A “flat” curve means: many users, but shallow usage depth — most use the product rarely and superficially.

Source: How Will OpenAI Compete?

Episodic Memory

Logs of past experiences: conversation logs, action sequences, few-shot examples. The agent’s “autobiographical memory.”

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Exponential Curve

A growth pattern where capability doubles at regular intervals rather than increasing linearly. In the AI context: model capabilities grow faster than human planning typically anticipates.

Source: Product Management on the AI Exponential

F

Feature Flag

A feature flag is a toggle in the code that allows new features to be selectively enabled or disabled — without redeploying the code. Teams use feature flags to release new features to a small group of users first and instantly roll back if problems occur. In agent-driven workflows, they serve as a central safety net for continuous deployment.

Source: The Software Development Lifecycle Is Dead
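
The mechanism in miniature: a named toggle checked at runtime, so behavior changes without a redeploy. The flag name and in-memory store are invented for the example:

```python
# Minimal feature-flag sketch: flipping the flag switches code paths
# at runtime, without touching or redeploying the functions themselves.

FLAGS = {"new_checkout": False}

def is_enabled(name: str) -> bool:
    return FLAGS.get(name, False)

def checkout() -> str:
    if is_enabled("new_checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

before = checkout()            # flag off: legacy path
FLAGS["new_checkout"] = True   # "roll out" without a redeploy
after = checkout()
```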

Fractional (Engagement)

A fractional engagement means that an expert or team doesn’t work full-time for a client but contributes a defined share of their capacity — for example, 2 days per week. Common in consulting, design, and strategy. As AI agents take over expert tasks, the pricing and scoping of such engagements change fundamentally.

Source: Something Big Is Happening

Fractional Design Lab

An organizational model in which multiple Fractional Design Leaders form a network and share infrastructure — design systems, knowledge base, methods, AI tooling. Scales the model beyond the individual.

Source: 2026-04-24-fractional-design-leadership-en

Fractional Design Leadership

Strategic design guidance in the fractional model: design system governance, team building, DesignOps, product design strategy — on a part-time basis as an embedded leadership role.

Source: 2026-04-24-fractional-design-leadership-en

Fractional Executive

A leader who works on a part-time basis for multiple companies simultaneously. Takes on strategic responsibility but is not permanently employed. Typical: 1–3 days per week, self-employed.

Source: 2026-04-24-fractional-design-leadership-en

Frankfurt School / New School

Shorthand for the intellectual tradition of the Institute for Social Research — first in Frankfurt (1923), in exile at the New School for Social Research in New York (from 1933), returning to Frankfurt after 1945. Three generations: Horkheimer/Adorno, Habermas, current continuations (Lindgren, Feenberg).

Source: Critical Theory as a Diagnostic Tool for AI Systems

G

Greenfield

A greenfield project starts without existing code, without legacy systems, and without technical debt — on a clean slate. The opposite is brownfield, where you work with existing software. Many theses about agent-driven development primarily apply to greenfield scenarios; in brownfield environments with evolved codebases, the challenges are different.

Source: The Software Development Lifecycle Is Dead

I

Immanent Critique

Method of Critical Theory: a system is measured not against external norms but against its own promises and claims. Targets the tension between self-representation and reality.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Information Literacy

The ability to find, evaluate, and communicate information. Here: the ability to visually interpret and present data.

Source: When Visualization Becomes Cheap, Clarity Becomes Expensive

J

Jagged Intelligence

Phenomenon where LLMs exhibit unintuitive capability distributions: brilliant at complex tasks, unreliable at seemingly simple ones. Contrasts with human competence development, where capabilities grow in correlated fashion.

Source: Software 3.0 — What Karpathy's Theses Mean for Interface Design

K

KAIROS

Internal codename for an unreleased Claude Code feature: a background daemon that continues working autonomously after a session ends, including nightly memory consolidation.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

Knowledge OS

A structured, git-versioned knowledge repository designed to serve both human thinking and LLM context. Plain Markdown, tool-agnostic, with layered context architecture.

Source: Context Engineering: Building a Knowledge OS with Claude, LLM Knowledge Bases: Why Everyone Lands on the Same Stack

L

Levels of Processing

Alternative model arguing processing depth, not storage location, determines retention. Deep processing produces more stable memories than shallow approaches.

Source: 2026-04-10-drei-speicher-modell-en

M

Meta-Prompt

A prompt that does not directly solve a task but generates, evaluates, or optimizes other prompts. Shifts work one abstraction level up — from content to structure.

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts

METR (Model Evaluation and Threat Research)

An independent research organization that evaluates AI models for capabilities and risks. It systematically measures how long real-world tasks AI models can complete end-to-end without human help, providing standardized benchmarks for task complexity and agent performance. METR data shows an accelerating doubling rate of AI capabilities and is one of the few quantitative sources tracking the progress of AI agents.

Source: Something Big Is Happening, Product Management on the AI Exponential

Moat

A strategic competitive advantage that durably protects a company from competition — like a castle moat. Controversial for AI models: if models are interchangeable, the moat is absent.

Source: How Will OpenAI Compete?

Model Weights

The learned parameters of a neural network — the actual “knowledge” of the model. Whoever has the weights can operate the model. Weight theft is a central scenario in the text.

Source: AI 2027: A Scenario

O

Observability

Observability describes the ability to understand the internal state of a system from the outside — through logs, metrics, and traces. In agent-driven development, observability becomes the central feedback mechanism: instead of humans reviewing code, automated systems monitor whether the shipped software does what it’s supposed to.

Source: The Software Development Lifecycle Is Dead

P

PARA (Projects, Areas, Resources, Archives)

Organization method by Tiago Forte for personal knowledge management. Four categories based on actionability, not topic. Foundation of many knowledge management systems now being combined with LLM agents.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Periphery / Center

The two modes of attention in Weiser/Brown’s model. Periphery = unconscious perception (informs without cognitive cost). Center = conscious focus (requires active attention). Good technology shifts seamlessly between both.

Source: 2026-04-14-calm-technology-ai-age-en

Platform Play

A strategic approach where a company opens its technology as a platform for others to build on. Examples: App Store (Apple), GPT Store (OpenAI). Goal: value creation through an ecosystem rather than a single product.

Source: How Will OpenAI Compete?

Principles Paradox

The observation that the growing number of AI ethics guidelines does not lead to more ethical action. The existence of principles creates the impression of solved problems while the structural causes remain unchanged.

Source: Critical Theory as a Diagnostic Tool for AI Systems

Procedural Memory

Codified behavioral rules: system prompts, tool definitions, decision logic. An agent’s “muscle memory.”

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Product Thinker / Product Builder

Hypothetical role profiles that emerge when today's product roles (PM, PD, Dev) generalize further. Thinker = strategy, framing, decisions. Builder = implementation, prototyping, iteration.

Source: When Visualization Becomes Cheap, Clarity Becomes Expensive

Pull Request

A pull request (PR) is a proposal to merge code changes into a shared project. Other team members review the code, provide feedback, and approve the change. In traditional teams, PR review is a central quality assurance step — in agent-driven workflows, it’s increasingly discussed as a bottleneck when agents generate hundreds of PRs daily.

Source: The Software Development Lifecycle Is Dead

R

Rehearsal

Control process maintaining information in working memory. Maintenance rehearsal involves simple repetition; elaborative rehearsal links information to existing knowledge — only the latter reliably produces long-term storage.

Source: 2026-04-10-drei-speicher-modell-en

Release Hygiene

The set of practices ensuring software artifacts don’t contain unintended content when deployed (source maps, debug symbols, internal references). In context: the weakest link in Anthropic’s security architecture.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

RLHF (Reinforcement Learning from Human Feedback)

A training method that uses human evaluations to guide an AI model’s behavior. Goal: the model should give helpful, honest, and harmless responses.

Source: AI 2027: A Scenario

Runtime Safety vs. Release Safety

Two distinct security disciplines: runtime safety protects against attacks during program execution (prompt injection, code injection). Release safety prevents confidential artifacts from being published during deployment. Most AI safety investments flow into runtime — most incidents happen in release.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

S

SDLC (Software Development Lifecycle)

The Software Development Lifecycle describes the classic sequence of phases in software development: gather requirements, design the system, write code, test, review, deploy, and monitor. Traditionally, these steps run sequentially, with clear handoff points between each phase. In practice, many teams are seeing these phases dissolve — especially when AI agents execute large parts of the cycle simultaneously.

Source: The Software Development Lifecycle Is Dead

Self-Improvement

The ability of an AI system to contribute to its own improvement — such as debugging its own training or managing its own deployment. GPT-5.3 Codex is considered the first documented example.

Source: Something Big Is Happening

Self-Refine

Iterative process: the model generates a response, critiques it, and produces an improved version. Repeatable until quality threshold is met.

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts
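
The generate–critique–revise loop can be sketched as below; the critique and revise functions stand in for LLM calls and are invented for illustration:

```python
# Self-Refine in miniature: iterate until a quality threshold is met
# or the round budget runs out.

def critique(draft: str) -> float:
    # Stand-in quality score: here, longer drafts count as "better".
    return min(len(draft) / 50, 1.0)

def revise(draft: str) -> str:
    # Stand-in revision: a real system would have the LLM rewrite.
    return draft + " (refined)"

def self_refine(draft: str, threshold: float = 0.8,
                max_rounds: int = 5) -> str:
    for _ in range(max_rounds):
        if critique(draft) >= threshold:
            break
        draft = revise(draft)
    return draft

result = self_refine("Short answer.")
```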

Semantic Memory

Accumulated factual knowledge: user preferences, extracted facts, knowledge bases. Grows over time.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Sensory Register

Initial memory stage holding sensory impressions for fractions of a second. Unlimited capacity, extremely brief retention. Filters by relevance before information reaches working memory.

Source: 2026-04-10-drei-speicher-modell-en

Session-based Work

A working mode where knowledge and progress are bound to individual chat sessions. Without explicit persistence (in files, issues, commits), context is lost when a session ends.

Source: The Agile Manifesto Needs an Update — for Working with AI Agents

Side Quest

A self-directed experiment outside the official product roadmap. Serves exploratory innovation in environments with high uncertainty about future feasibility.

Source: Product Management on the AI Exponential

Software 3.0

Karpathy’s term for the third software paradigm: natural language as programming language. Prompts replace code (1.0) and trained weights (2.0) as the primary programming interface.

Source: Software 3.0 — What Karpathy's Theses Mean for Interface Design

Source Map (.map)

A debug file that maps compiled/minified JavaScript back to the original TypeScript source code. Not intended for production releases — its publication exposes the complete source code.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

Sprint Planning

Sprint Planning is a ritual from agile software development (Scrum) where a team plans which tasks to complete in the next one to four weeks. Tasks are estimated, prioritized, and assigned. In agent-driven workflows, Sprint Planning loses relevance because the cycle from idea to delivery is drastically shortened.

Source: The Software Development Lifecycle Is Dead

Structured Context

Machine-readable project information in versioned files (Markdown, YAML, JSON) — as an alternative to ephemeral chat history. Serves as the shared communication medium between human and agent.

Source: The Agile Manifesto Needs an Update — for Working with AI Agents

System Prompting

Instructions given to a language model before the actual user query to steer behavior, tone, and capabilities. That better models need less system prompting suggests they infer more context on their own.

Source: Product Management on the AI Exponential

T

TDD (Test-Driven Development)

Test-Driven Development is a development method where automated tests are written first, followed by the code that should pass those tests. The cycle: write test, write code, pass test, clean up code. With AI agents, TDD increasingly becomes default behavior — the agent generates tests and code simultaneously.

Source: The Software Development Lifecycle Is Dead
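
The cycle in miniature: the test exists first, then the implementation is written to make it pass. The function under test is an invented example:

```python
# TDD sketch: the test is written before the implementation and
# drives its design.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Implementation written after (and shaped by) the test above:
def slugify(text: str) -> str:
    return "-".join(text.lower().split())

test_slugify()  # passes once the implementation exists
```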

Technical Code

Concept by Andrew Feenberg (1991). The implicit value decisions inscribed in a technical system’s architecture — before any explicit ethics policy. Makes visible that “neutral” design decisions have political and social consequences.

Source: Critical Theory as a Diagnostic Tool for AI Systems

TextGrad

Method that treats prompt optimization like gradient descent — using natural language feedback instead of numeric scores. Published in Nature (2025).

Source: Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts

Three-Store Model (Atkinson-Shiffrin Model)

1968 model proposing three serial memory stages: sensory register, short-term memory, long-term memory. Historically influential; now considered outdated in psychology.

Source: 2026-04-10-drei-speicher-modell-en

Tiago Forte

Author of “Building a Second Brain” and developer of the PARA method. One of the most influential thinkers in personal knowledge management — his framework is increasingly being adapted as a basis for AI-augmented knowledge systems.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Tight Loop

A tight loop describes a very short, tightly coupled feedback cycle between action and feedback. In the context of agent-driven software development, it refers to the cycle: formulate intent, agent builds, observe result, adjust — without the long wait times of traditional development processes.

Source: The Software Development Lifecycle Is Dead

Trust Models

Decision frameworks for when agent output is accepted without review and when human verification is needed. Replaces binary control (review everything / review nothing) with graduated trust levels.

Source: The Agile Manifesto Needs an Update — for Working with AI Agents
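
Graduated trust can be sketched as a policy function over task risk and track record; the thresholds and categories below are invented for illustration:

```python
# Trust-model sketch: review requirement depends on task risk and the
# agent's track record, instead of a binary review-everything rule.

def review_policy(task_risk: str, agent_success_rate: float) -> str:
    if task_risk == "high":
        return "human review required"
    if task_risk == "medium" and agent_success_rate < 0.95:
        return "human review required"
    return "auto-accept with monitoring"

policy = review_policy("medium", agent_success_rate=0.90)
```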

U

Ubiquitous Computing

Term coined by Mark Weiser (1991) for pervasive, invisible computing technology. The theoretical framework from which Calm Technology emerged.

Source: 2026-04-14-calm-technology-ai-age-en

Undercover Mode

A Claude Code function that removes all references to Anthropic’s internal infrastructure from commits when contributing to external repositories.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents

W

Working Memory

The current conversation context — what the agent is actively “thinking” about. Corresponds to the context window. Fast but volatile.

Source: Agent Memory: Why Your AI Has Amnesia and How to Fix It

Working Memory Model

1974 advancement replacing passive short-term memory with an active system featuring phonological loop, visuospatial sketchpad, central executive, and episodic buffer.

Source: 2026-04-10-drei-speicher-modell-en

Y

YoY Growth (Year-over-Year)

Percentage change of a value compared to the previous year. For AI incidents: 149 (2023) → 233 (2024) = +56.4% YoY. Shows acceleration independent of absolute numbers.

Source: Claude Code's Source Code Leaked — What the Architecture Reveals About the Future of AI Agents
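
The quoted figure checks out arithmetically:

```python
# Verifying the YoY figure above: (233 - 149) / 149 is roughly +56.4 %.

incidents_2023 = 149
incidents_2024 = 233
yoy_pct = (incidents_2024 - incidents_2023) / incidents_2023 * 100
rounded = round(yoy_pct, 1)
```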
