Critical Theory as a Diagnostic Tool for AI Systems
AI ethics keeps producing guidelines — and changing little. Critical Theory from the Frankfurt School offers an analytical toolkit that explains why: four modes of analysis the field is missing.
Field Notes
Figma publishes guidelines for AI-compatible design system documentation. The principles — atomic files, imperative over descriptive, structure as routing — redefine what documentation even is: no longer a reference, but a control layer.
Andrej Karpathy describes his setup for LLM-powered knowledge work — and it sounds familiar. Markdown, Git, Obsidian, an LLM as operator. Practitioners independently discover the same architecture. That's not coincidence — it's convergent evolution.
Anthropic accidentally published the entire source code of Claude Code — 512,000 lines of TypeScript. The more interesting question isn't the security failure, but what the architecture reveals about the future of human-agent collaboration: a three-layer memory system that distrusts itself, an autonomy daemon for overnight work, and an undercover mode that conceals its own existence.
AI agents forget everything between conversations. This article shows why larger context windows don't solve the problem — and how four memory types from cognitive science form the foundation for persistent agent memory.
Anthropic's Head of Product for Claude Code describes how exponentially improving models break the traditional PM playbook — and the four shifts teams need to stay on the curve instead of behind it.
Hundreds of individual AI setups, but no shared common sense. What's missing: a playbook for agentic collaboration in small teams.
Andrej Karpathy describes the shift from code to prompts as a programming paradigm. Sounds like a backend concern — but it has massive consequences for anyone designing interfaces. Autonomy sliders, a third consumer class, and the most honest reality check on vibe coding yet.
Claude now generates interactive charts and diagrams in chat. Sounds like a feature — it's a paradigm shift. Not just for designers: data-driven communication becomes accessible to every knowledge worker. What this changes, who it overwhelms, and why design matters more now, not less.
What happens when you stop prompting and start architecting context. A practitioner's account of building a git-versioned Knowledge OS — and what it taught me about working with LLMs.
Not writing better prompts — but automating the prompting itself. Systematic overview of meta-prompting: from Chain-of-Thought to DSPy, from Self-Critique to Multi-Agent orchestration. With concrete benchmarks and practical recommendations.
Detailed scenario by ex-OpenAI researchers and forecasting experts: month by month from 2025 to late 2027, from reliable coding agents to superintelligence. Alignment fails progressively, geopolitical tensions escalate. Two endings: slowdown or arms race.
OpenAI has no unique technology, no moat, and a user base with a flat engagement curve. Benedict Evans poses four fundamental strategic questions — and draws the Netscape comparison: the early mover in browsers lost because value was created elsewhere.
AI agents haven't accelerated the traditional SDLC — they've dissolved it. Sequential phases collapse into a tight loop: Intent + Context → Agent builds → Observe → Repeat. What remains: Context Engineering and Observability.
AI agents now autonomously complete multi-hour expert tasks, and the capability curve doubles every 4–7 months. Shumer compares this moment to the 'this seems overblown' phase of Covid — but with far greater implications.