✳︎ Panoptia Labs

Critical Theory as a Diagnostic Tool for AI Systems

April 6, 2026 · David Latz

TL;DR

AI ethics keeps producing guidelines — and changing little. Critical Theory from the Frankfurt School offers an analytical toolkit that explains why. This piece sketches four modes of analysis the field is missing.

Reasoning Seed

A Reasoning Seed is a structured prompt you can copy into your AI reasoning tool (Claude, ChatGPT, Obsidian, Notion). It contains the article's thesis, its core tension, and our lab context — ready for your own analysis.


Tension: If AI ethics structurally resembles a critical theory but doesn't use its methods — does it produce solutions, or just the appearance of solutions?

Lab context: Critical Theory as an analytical tool for product development and AI deployment — relevant for the question of how to operationalize ethical positions without getting stuck in lists of principles.

Key Insights

1 — The Promise and the Gap

AI ethics as a field has come remarkably far in a short time: principles have been formulated, regulatory frameworks established, checklists developed. What the field struggles with is the question of why: why do well-formulated principles fail in practice, and why do well-intentioned systems reproduce the very power structures they were supposed to dismantle?

Philosopher Rosalie Waelen argued in “Why AI Ethics Is a Critical Theory” (Philosophy & Technology, 2022) that AI ethics resembles a critical theory in structure and ambition — it seeks not merely to describe but to change, and is fundamentally tied to the concept of human empowerment. The field does this, Waelen argues, without being aware of it — and without drawing on the methodological resources that the tradition of Critical Theory has developed. What is missing is not another set of principles but an analytical method that explains why organizational incentives systematically undermine ethical guidelines and why AI systems that promise fairness can reproduce inequalities.

Giuseppe De Ruvo takes this a step further in “Algorithmic Objectivity as Ideology” (Topoi, 2025): the claim of algorithmic objectivity itself functions ideologically — it conceals the normative decisions embedded in every system by framing them as technical necessity. The more principles are formulated, the more it appears the problem is solved — while the structural causes remain unchanged.

A tradition that has dedicated itself to precisely this question has existed for nearly a century: the Critical Theory of the Frankfurt School. This text attempts to draw that connection — not as a claim to completeness, but as a working thesis.

2 — What the Frankfurt School Was After

In the early 1930s, a group of philosophers and sociologists at the Institute for Social Research in Frankfurt were preoccupied with a question that seemed more urgent than any other: Why had reason failed? The Enlightenment had promised science, technology, and rational planning as instruments of human liberation. What Europe experienced was the opposite — fascism, mass manipulation through the culture industry, the total mobilization of society through industrial logic. Max Horkheimer, Theodor Adorno, Herbert Marcuse, Walter Benjamin: they all tried to explain this contradiction.

Their answer was not a simple one. Reason itself, according to the central thesis of Horkheimer and Adorno in the “Dialectic of Enlightenment” (1944), can turn into its opposite — not through malice, but through a specific form of rationalization: when the purpose of a system — mastering nature, increasing efficiency, controlling risk — becomes the sole measure of value, it displaces all others. The tool begins to determine its context. What started as a means becomes the logic.

From this diagnosis they developed a method: immanent critique. It measures a system not against external, idealized standards but against its own promises. What does this system claim to be? What does it actually deliver? And whose interests remain invisible? This method is not an ideology — it is an analytical tool applicable to any social context in which rationality exercises power.

The Institute was closed by the National Socialists in 1933. Most of its members emigrated to the United States, where Horkheimer re-established the Institute in affiliation with Columbia University in New York, as part of a broader transatlantic network of émigré scholars. Jürgen Habermas developed the tradition in the second generation: in his “Theory of Communicative Action” (1981), he shifted the focus from systemic critique to the question of under what conditions mutual understanding is possible at all — and when structural asymmetries prevent it.

3 — Four Modes of Analysis

The following modes are not a finished method — more a first attempt to describe what Critical Theory could offer as a diagnostic tool for AI systems.

Immanent critique: measuring systems against their own promises. The most productive question one can ask of an AI system is not “Is this ethical?” but rather: what does this system promise to be — explicitly in its documentation, implicitly in its design and behavior — and what does it actually deliver? This form of critique has an important property: it requires no external moral authority. It works with the material of the system itself. The method traces back to Horkheimer’s programmatic essay “Traditional and Critical Theory” (1937) and is universally applicable — and uncomfortable, because it allows no excuse through complexity.

Dialectical analysis: where does rationalization tip into control? Horkheimer and Adorno’s warning was precise: every form of rationalization — the systematic capturing, controlling, and optimizing of a domain — carries the potential to displace the very values it was originally deployed to serve. In the context of AI systems, this means: the question is not only whether a system works well, but what it does to its environment when it works well. Personalization algorithms that promise diversity and produce homogeneity; recommendation systems that suggest autonomy and generate dependency; autonomous agents that produce transparency retrospectively rather than simultaneously — whether these are neutral technical properties or structural power shifts can be worked out through this mode of analysis.

Communicative legitimacy: under what conditions is a decision contestable? Habermas articulated what legitimacy requires: comprehensibility, the possibility of objection, no structural asymmetry between participants. An AI-driven decision that fails to meet these conditions is not legitimate — even if it is efficient. This applies to automated credit decisions as much as to content moderation or recruiting filters. The question is not whether an algorithm decides “fairly,” but whether those affected can understand, contest, and correct the decision.

Technical code: reading values in architectures. Andrew Feenberg showed in “Critical Theory of Technology” (1991) that architectural decisions carry implicit value decisions: who controls, who participates, which options are even visible. Feenberg calls these embedded values “technical code” — design decisions that appear as technical necessity and become politically effective precisely because of this. In AI systems, this concerns every default setting, every taxonomy, every decision about which data is collected and which is ignored.
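The four modes above are analytical rather than computational, but a team could encode them as a structured review checklist. The following Python sketch is purely illustrative (every name in it, such as `SystemUnderReview` and `run_diagnostic`, is invented for this example); the dialectical mode in particular is deliberately flagged for human judgment rather than computed, since it resists automation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    mode: str      # which of the four modes produced this finding
    question: str  # the guiding question asked of the system
    note: str      # the observed gap or embedded value decision

@dataclass
class SystemUnderReview:
    # Minimal, hypothetical description of an AI system for a structured review.
    name: str
    promises: list           # explicit and implicit claims (immanent critique)
    observed_behavior: list  # what the system demonstrably does
    contestable: bool        # can affected people object and correct? (legitimacy)
    defaults: dict           # default settings as carriers of technical code

def run_diagnostic(system: SystemUnderReview) -> list:
    """Apply the four modes as review prompts; returns findings for discussion."""
    findings = []

    # Mode 1, immanent critique: measure promises against observed behavior.
    for promise in system.promises:
        if promise not in system.observed_behavior:
            findings.append(Finding(
                "immanent critique",
                f"Does {system.name} deliver on its promise of '{promise}'?",
                "promise not reflected in observed behavior"))

    # Mode 2, dialectical analysis: cannot be computed; always flag for
    # qualitative review of what the system does to its environment.
    findings.append(Finding(
        "dialectical analysis",
        f"What does {system.name} do to its environment when it works well?",
        "requires qualitative review"))

    # Mode 3, communicative legitimacy: is there a contestation path?
    if not system.contestable:
        findings.append(Finding(
            "communicative legitimacy",
            "Can affected people understand, contest, and correct decisions?",
            "no contestation path exists"))

    # Mode 4, technical code: every default is an implicit value decision.
    for key, value in system.defaults.items():
        findings.append(Finding(
            "technical code",
            f"Who chose the default {key}={value!r}, and whose interests does it serve?",
            "default setting encodes a value decision"))

    return findings

# Example: a personalization system that promises diversity and transparency
# but demonstrably produces homogeneity.
feed = SystemUnderReview(
    name="feed-ranker",
    promises=["diversity", "transparency"],
    observed_behavior=["homogeneity"],
    contestable=False,
    defaults={"personalization": "on"},
)
report = run_diagnostic(feed)
```

The point of such a sketch is not automation: the output is a list of questions for a review conversation, not a verdict.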

4 — Emancipation as the Touchstone

What holds the four modes together is a normative standard that Critical Theory has carried from the beginning: emancipation as the goal. In Horkheimer’s original formulation, Critical Theory pursues the goal of human agency — the expansion of self-determination, the reduction of structural dependency. For AI systems, this means: efficiency gains are not a sufficient criterion for ethical deployment. The more decisive question is whose agency is expanded by a system — and whether that expansion comes at the cost of others. A system that makes knowledge work faster but systematically replaces its users’ judgment rather than augmenting it has not emancipated — it has replaced one dependency with another.

This touchstone applies to each of the four modes: immanent critique asks whether the promise holds; dialectical analysis, whether rationalization tips into control; communicative legitimacy, whether those affected can object; technical code, whose values are inscribed. The answer to all these questions is measured by whether there is more agency at the end — or less.

5 — What This Framework Can Do — and What It Cannot

Critical Theory is no substitute for technical diligence, legal compliance, or the detailed work of algorithmic fairness. It offers a different level: the ability to describe structural patterns that only become visible when one asks how a system undermines its own promises, whose interests are inscribed in its architecture, and whether its deployment expands or constrains agency.

Gerard Delanty and Neal Harris show in “Critical Theory and the Question of Technology” (Thesis Eleven, 2021) that the Frankfurt School developed technology critique across three generations — from Adorno’s culture industry through Habermas’s colonization of the lifeworld to Feenberg’s democratic technology design. The tradition is not closed. Simon Lindgren’s “Critical Theory of AI” (2024) formulates the current state of this work: an analysis that does not consider technical systems in isolation but as part of societal structures of power and communication.

Whether this is sufficient to close the methodological gap Waelen describes remains open. Critical Theory itself has a weakness: it is better at diagnosing than constructing. It explains what goes wrong but rarely offers concrete design guidance. Feenberg’s extension — the question of technical code as a design problem — attempts to close this gap, and whether it can be operationalized for AI development practice remains an open question. This text is a first attempt at framing; grounding one’s own practice in it — in product decisions, in evaluating tools, in the question of when AI deployment makes sense and when it does not — still lies ahead.

Positioning

This text is written from the perspective of a product designer who uses Critical Theory as an analytical tool — not as a philosopher who knows the tradition from the inside. The selection of concepts is selective: it follows the question of what is useful for the practice of AI-augmented product work. Aesthetic theory (Adorno), psychoanalysis (Fromm, Marcuse), state theory (Neumann) — large parts of the Frankfurt School remain unaddressed here because they are less immediately applicable to the purpose of this text.

What this perspective can see: the gap between ethical aspirations and operational practice in product development. What it cannot see: the philosophical-historical subtleties of the tradition, the controversies between generations, the limits of transferability to non-Western contexts. Someone from philosophy would weight the concepts differently. Someone from the Global South would ask whether the Frankfurt School, as a European tradition, is even the right starting point — or whether postcolonial critique poses the more urgent questions.

Critical Assessment

What Holds Up

  • Waelen’s central thesis — AI ethics has a method problem — is empirically well supported: the number of ethics guidelines grows, as do the documented cases of harm
  • The four modes of analysis (immanent critique, dialectical analysis, communicative legitimacy, technical code) are methodologically coherent and applicable to concrete systems
  • Feenberg’s concept of technical code has proven its value as an analytical tool — the idea that architecture is politics can now be found in Science and Technology Studies (STS), Critical Design, and policy work
  • The historical continuity across three generations shows the tradition is capable of learning — not a closed system but an open framework

What Needs Qualification

  • The direct transfer of concepts developed for industrial-capitalist societies of the 1930s–1940s to AI systems requires mediating steps that are not fully accomplished here
  • Between Habermas’s ideal speech situation and the reality of algorithmic decision systems lies a translation problem that this text names but does not solve
  • The Frankfurt School is a European, predominantly male tradition — its universalist claims stand in tension with its particular context of origin
  • “Emancipation” remains normatively vague — what counts as expansion of agency is context-dependent and open to interpretation
  • The EU AI Act, standardization initiatives, and corporate governance frameworks are already attempting to address the method problem through regulation — whether Critical Theory adds value there is an open question

Discussion Questions

01 Method transfer: How far does immanent critique carry as a method for teams building AI systems — and where does it need supplementation from other analytical tools?

02 Emancipation and context: Whose definition of agency applies when an AI system can simultaneously be “emancipatory” for some user groups and constraining for others?

03 Technical code in practice: What implicit value decisions are embedded in the default settings of the AI tools used in one’s own daily work — and how consciously were they made?

04 Limits of the tradition: What can the Frankfurt School, as a European post-war tradition, contribute to global AI ethics — and where are other theoretical starting points needed?

05 Constructive turn: How could the diagnostic strength of Critical Theory be combined with the constructive orientation of design methods — without losing analytical rigor?


Glossary

Critical Theory Tradition of social philosophy, originating at the Institute for Social Research in Frankfurt in the 1920s. Combines social analysis with the normative goal of human emancipation. Works with immanent critique rather than external ideal standards.

Immanent Critique Method of Critical Theory: a system is measured not against external norms but against its own promises and claims. Targets the tension between self-representation and reality.

Dialectic of Enlightenment Horkheimer and Adorno’s core thesis (1944): reason and rational control can turn into their opposite — into domination, normalization, oppression — when they become ends in themselves rather than means for human liberation.

Communicative Action Jürgen Habermas’s term (1981) for mutual understanding under conditions that enable equal participation for all involved: comprehensibility, the possibility of objection, no structural asymmetry.

Technical Code Concept by Andrew Feenberg (1991). The implicit value decisions inscribed in a technical system’s architecture — before any explicit ethics policy. Makes visible that “neutral” design decisions have political and social consequences.

Emancipation In the context of Critical Theory: the expansion of human agency and self-determination — as a benchmark for social systems. In the AI context: the question of whether a system expands judgment or replaces it through dependency.

Principles Paradox The observation that the growing number of AI ethics guidelines does not lead to more ethical action. The existence of principles creates the impression of solved problems while the structural causes remain unchanged.

Frankfurt School Shorthand for the intellectual tradition of the Institute for Social Research — first in Frankfurt (1923), in exile in New York in affiliation with Columbia University (from 1934), returning to Frankfurt after 1945. Three generations: Horkheimer/Adorno, Habermas, current continuations (Lindgren, Feenberg).

AI Ethics Research field addressing moral questions of AI system deployment. Encompasses technical approaches (algorithmic fairness, explainability), regulatory frameworks (EU AI Act), and philosophical groundwork. Internally heterogeneous, no unified methodological approach.

Curated by David Latz · Panoptia April 2026