Deflating “Hype” Won’t Save Us
The problem with AI isn’t hype. The problem is who and what it’s useful for.

We find ourselves in a moment where big tech has allied with the far right, where AI mass produces fascist slop aesthetics, where visas are getting revoked because AI tools took issue with someone’s speech, and where an AI job crisis is unfolding. Meanwhile, many criticisms of AI latch on to the lies and exaggerations used in marketing chatbots and other technologies. They focus on identifying and deflating hype (or on calling AI snake oil, or on suggesting that it’s all a bubble, a con, a scam, a hoax, etc.). This kind of critical frame is dangerously inadequate for understanding what is actually going on, let alone for doing something about it. Even if the AI peddlers listened and the hype disappeared, the problems would remain. So the problem with AI isn’t hype. The problem is who and what it’s useful for.
Before we get into the weeds of what is and isn’t the problem, some important caveats: First, we’re not denying that there is plenty of hype. Marketing departments and even journalists spread constant misinformation. There is ubiquitous talk of language models thinking, holding attitudes, having feelings and desires, or even plotting and scheming, as if they could do any of that. Clearly, pushing back against false claims is and remains important. But it is rarely sufficient for addressing the most pressing AI-related issues.
Second, this is a critique of a particular way of framing the issue. The deflationary frames home in on marketing tricks, and thereby implicitly or explicitly conceptualize hype as the root of the problem. In our experience, most people who use these frames are actually working hard to overcome the frames’ many pitfalls. They don’t actually think that lie-free advertising would magically fix things; they are well aware of the connections between the far right and big tech; they worry about political, social, and environmental impacts; and so on. In fact, we admire the work of many people who use these frames. Nonetheless, we think that calling things hype is increasingly getting in the way. It conflates technical issues with social, political, and economic ones, and risks obscuring things rather than clarifying them.
With those caveats out of the way, let’s start with a very quick definition of AI hype: It’s a promise that some AI tool can (or will soon be able to) do something it really can’t (or won’t). It’s a lie of exaggeration. In that sense, calling something hype is, at its heart, a technical claim about feasibility or factuality. But as computer scientist Ali Alkhatib has laid out clearly, AI is not (and arguably never has been) merely a technical tool that we could evaluate solely on technical grounds. It’s always been an unfolding series of intertwined technical and political projects: Technologies, after all, have affordances and uses, and are developed in conjunction with evolving needs and goals. So, let’s lay out a few issues that AI interfaces with in our current society, and explore why the hype frame is an obstacle to clarifying what we’re facing.
First—AI, bullshit, and fascist propaganda
Without a doubt, AI has already automated the production of bullshit—that is, text and images that do not commit the producers of that content to any kind of truth. Where these are intended to act as political speech, we can see the resonance between AI and far right politics, where texts and images are often used to manipulate the emotional valence of a particular topic and amplify its affective resonances. Take images: AI-generated political bullshit is everywhere, from the ghiblification of ICE deportations to cult-of-personality glorifications of Trump and Trumpist policies (Trump as a golden statue in Gaza, Trump as the next pope, Trump with deepfakes of Black voters, etc.). AI plays a major role in the propaganda side of what far right political strategist Steve Bannon calls “flooding the zone”. The ultimate aim is to foreclose any possibility for good faith deliberation.

Donald Trump’s Truth Social post
AI is used to flood the zone with more than propaganda. There’s DOGE using AI for anything from intimidating federal workers to rewriting federal housing regulations. Tariff rates, too, are supposedly being calculated by language models. Clearly, AI is crucial to the daily execution of Bannon’s strategy, aimed at relentlessly stoking scandal and outrage to the point where nobody can keep track anymore of the attacks on democracy, science, immigrants, Black people, queer people, women, veterans, or workers. The aim: Make people lose their sense of what is even going on, of what is real altogether, so they get politically paralyzed and feel unable to act.
Is any of this an issue of hype, of some discrepancy between promised capabilities and real capabilities? Or is the root issue that the very real affordances of very real AI potentiate fascist political aims and methods? Is the problem that some model output is untrue, or is it that AI models operate akin to the online troll, who just asks questions and just says (bullshit) things for the lulz? Of course, AI tools have no intentions. They cannot do anything for the lulz. But they are, by design, incapable of not bullshitting. They cannot recognize that “their reality” (bytes, letters, pixels, etc.) is distinct from reality as such.
Yet truth is not a property of the world, nor simply a property of statements. Truth pertains to a relation between the two: a statement can only be true about something. But models lack that duality, that distinction. There is no difference, here, between measurement and the measured, between a representation of the world and the world, between the map and the land that it describes, between the text and the con-text—there is only ever the measurement, the representation-no-longer-representing, the map, the pixel, the text. And hence the concept of “truth” (even as a desideratum) disappears. AI is not merely a machine for the production of bullshit, but a machine inherently unsuited to ever distinguishing bullshit from non-bullshit.
Fascism, meanwhile, is committed to a play of power and aesthetics that regards a desire for truthfulness as an admission of weakness. It loves a bullshit generator, because it cannot conceive of a debate as anything but a fight for power, a means to win an audience and a following, and never as a social process aimed at deliberation, emancipation, or progress towards truth. Fascists do, of course, try to exploit the very prerequisites for discourse (a willingness to assume good faith, to treat equality if not as a condition, then at least as a laudable aim of social progress, etc.). Take, for example, the free speech debates, waged as an attempt to blindside the enemy (that is, us). Fascists are continually proclaiming their defense of and love for free speech.

A day-one executive order from Donald Trump
They are also arresting people for speech, banning books, and attacking drag story hours. To take this as an inconsistency, or as an intellectual mistake, is to misunderstand the very project—fascists are not in it for consistency, nor for making a rational, reasonable world with rules that free and equal people give themselves. The apparent contradiction is, instead, part of the same family of strategies as flooding the zone—use whatever tool is necessary in order to accrue power, strengthen hierarchies, and entrench privilege.
Is this political attack on the possibility of good faith discourse enabled by hype? Would things be better if the models were improved, or the advertisement toned down? Or is it rather the case that the models’ inability to distinguish words from the world is precisely what makes them so useful to fascist aims? If that is the case, calling AI hype may be like thinking the fascists are simply stupid or making a mistake by being inconsistent. But the fascists are confused neither about the nature of free speech, nor the nature of the “speech” that is produced by an LLM. They know what they want, they know what they’re doing, and they know where it leads.
Second—AI as a tool for political terror
Let’s consider a related but distinct use: AI as a tool for surveillance, control, and even terror. Right now, the US government is using AI to scan social media posts and gather “intelligence” at scale. The State Department’s “Catch and Revoke” initiative, for example, uses AI to cancel the visas of non-citizens whose speech is insufficiently aligned with the government’s foreign policy goals. Is this a problem of hype?
Calling AI “hype” highlights a gap between a sales pitch and the model’s real performance. When a hyped model is put to real world use, it will make “mistakes” (or make them at an unreasonable frequency, etc.). What about the State Department’s AI tools? Their student surveillance AI will certainly make “mistakes”—it will flag people who aren’t holders of student visas, or it will flag people for unrelated speech. Hype? Perhaps. It’s entirely possible that someone at the State Department fell for it, and really believes that these models are better at surveillance than they actually are. We don’t really know. And we don’t think it matters—because whether or not hype is involved seems radically beside the point.
Journalist and researcher Sophia Goodfriend calls this whole thing an AI dragnet and observes quite acutely: “Where AI falters technically, it delivers ideologically”. Indubitably, people are getting falsely classified as having engaged in speech unsanctioned by the self-proclaimed free speech absolutists. But those misclassifications are mistakes only in a narrow technical sense. After all, we can actually be quite certain about the real political aims of Marco Rubio’s State Department. They’re invested in a) an increase in deportations, and b) the suppression of particular kinds of speech—and neither of those depends on amazing accuracy (hitting the “right” people). They depend mostly on scale (hitting a lot of people). For the first goal, more false positives mean more cancelled visas, and thus more deportations. Check. As for the second aim, the suppression of speech depends on a sufficient reach to make everyone feel like “it could be me next (unless I censor myself, make myself quiet and small, self-deport, etc.)”. And here, too, the tool certainly delivers. In our book, Why We Fear AI, we argue that it is in fact precisely the error-prone and brittle nature of these systems that makes them work so well for political repression. It’s the unpredictability of who gets caught in the dragnet, and the unfathomability of the AI black box, that make such tools so effective at producing anxiety and so useful for suppressing speech.
We can make the same point with respect to, say, the police’s use of facial recognition technologies. Certainly, the harm done by AI algorithms misidentifying people is very real. Let’s acknowledge first that even if the algorithms were perfect and never misidentified anyone, they would still aid a racist system that is frequently aimed at violent dehumanization, at supplying humans as billables and forced labor for the prison-industrial complex. But today, the algorithms certainly do misidentify people; there certainly is a claim that models can do X (identify faces accurately) when they really can’t. Does calling this hype help us understand the problem?
We certainly know that the “mistakes”, the misidentifications, aren’t randomly distributed. In fact, we’ve known at least since Joy Buolamwini and Timnit Gebru’s 2018 Gender Shades paper that facial recognition algorithms are, among other things, more likely to misidentify Black people than white people. When the AI errors are so clearly distributed unequally, and the errors are a source of harm—of false arrests and possible police violence—then it is obviously unhelpful to simply call them “errors”, or to theorize this through a lens of “hype”. What this system produces isn’t errors, it’s terror. And this terror has a history and clear throughlines of racist pseudoscience and racist political structures: from phrenology to facial recognition, and from slavery to Jim Crow, to today’s New Jim Crow (Michelle Alexander) and New Jim Code (Ruha Benjamin). Once we draw those throughlines, the semi-randomness, the “errors”, the “false” arrests appear not as accidents, but as part of AI as a political project.
These not-quite-random errors live precisely in the gap between the sales pitch and reality. The gap provides plausible deniability: “It’s the algorithm that messed up”, they will surely tell us, and therefore no person is really responsible for racist false arrests. But the very fact that the misidentifications are predictable at the level of populations (we know which groups will be misidentified more often), and unpredictable at the level of individuals (nobody knows in advance who in particular will be misidentified), also enhances the technology’s usefulness to the political project of producing terror: Again, it’s about producing the widespread feeling that “it could happen to any of us, it could happen to me”. This is as true of AI as it is of other, older tools of political terror and police intimidation. And yet, nobody ever suggested that the sales pitches for “non-lethal” weapons like rubber bullets were hype. A rubber bullet will sometimes blind people or even kill them, and that, too, is a semi-randomly distributed “error”, a gap between the sales pitch and reality. Rubber bullets, like the surveillance AI and the facial recognition system, function as tools of political control precisely because that gap does things; it is functional, not incidental, to the system. In the words of cybernetician Stafford Beer, there is “no point in claiming that the purpose of a system is to do what it constantly fails to do”. And focusing primarily on what the system can’t actually do (as the hype frame does) risks distracting from what it is actually doing.
Third—AI as a tool for crushing wages
Last, but certainly not least, let’s talk economics: From Microsoft to universities, from OpenAI to the government, we are told all kinds of lies of exaggeration by all kinds of powerful institutions. Take whatever task you like: As long as it has something to do with intellectual activity (and is at least somewhat reasonably paid), you will find someone claiming that the task is about to be automated, that it will soon be done by an AI “employee”, or at the very least that AI-assisted labor will soon outcompete anyone not using AI.
Now, critics like Emily Bender and Alex Hanna have, quite correctly, pointed out that such claims (and their incessant repetition by uncritical media) are essentially forms of advertising. But we think it’s crucial to foreground who exactly the sales pitch is directed at: Investors and corporate customers, for whom replacing workers with tools might be good for the bottom line. To workers, these words certainly don’t sound like advertising; they sound like a threat. The pitch looks less like glossy pages full of exaggerated promises, and more like the slides from a mandatory workplace meeting on the supposed “dangers” of collective bargaining: If you don’t do this (refrain from unionizing/start using AI), then the competition will crush us, and you’ll lose your job. “Shut up and lower your expectations, for you are replaceable!” is the constant refrain of union busters and AI peddlers alike.
It’s that -able suffix in “replace-able” that is crucial here. For the threat to work, it’s the mere possibility of replacement that counts. This is what the laws of supply and demand imply when it comes to the supply of people with particular skills: an increased supply of a possible substitute (AI) lowers the price of the original commodity (workers’ wages).
The union-busting pitch gives the real economic purpose of AI away: It’s a tool to depress wages. And for that goal, whether the tool can actually replace the work done by people at the same level of quality is often largely irrelevant. After all, replaceability is not about simple equivalence, but more often than not about price-quality tradeoffs. People like to buy whatever they think is a good bang for the buck. Companies do too, often substituting cheaper inputs (skills, stuff, etc.) to drive down their costs even if it reduces quality. That’s the obvious horizon for AI—not full automation, but the model of fast fashion or IKEA: Offer somewhat lower quality at significantly lower prices and capture the market from the bottom up.
Again, note that this is perfectly compatible with there being hype, with the models remaining at a merely so-so level of quality, with sales pitches exaggerating the quality of model outputs and behaviors. Lots of IKEA furniture doesn’t survive more than one move (if that), and plenty of fast fashion items dissolve after a few months. That hasn’t stopped their spread—and there’s no reason to believe that it will be any different with sub-par AI.
From coders to paralegals, from tutors to therapists, investments to produce cheap AI-assisted versions of these services are well underway, and they will likely depress wages. The particular strategies will surely vary (perhaps an actual licensed therapist will have to oversee 10 different chat windows simultaneously, or perhaps the spread of “vibe coding” will mean that junior programmers are increasingly people with a few weeks’ training in a prompting academy who can thus be paid as unskilled labor). As happened in fashion and furniture, the middle market—mid-level both in terms of the quality of the goods and services produced, and in terms of the jobs available for making them—will increasingly be crowded out.
Particular instances of hype are perhaps a problem for particular investors (without a doubt, plenty of AI companies will fail and go bankrupt along the way). But the real economic problem isn’t hype. The attack on workers, on the quality of jobs, and on the quality of the things we make and consume, is the problem—and that problem exists quite regardless of the hype. Unless you’re a venture capitalist, you aren’t the target of the AI advertisement—you’re the target of the threat. We have no use for terms that warn investors that they might be making bad investments; we need terms that are useful for fighting back.
Admittedly, we personally don’t know what terms, frames, and phrases will be most useful either. But we do know that hype, and similar deflationary frames, aren’t it. They don’t capture anything about the political and economic AI projects we just described: they do not clarify what’s happening when AI bullshit is flooding the zone, when errors are used to spread political terror, or why even a low-quality LLM could still be used to depress wages. Ultimately, the hype frame is too stuck in a narrowly technical perspective, too focused on identifying lies rather than identifying political projects. We should not make the mistake of thinking that just because a statement is a lie, it can be disregarded. Contrary to what the hype frame may suggest, once you figure out that a lie is a lie, the work is only just beginning. Let’s take the lies and the ads seriously—as things that can be telling even if they aren’t true.
So let’s focus on the political projects, and look beyond deflating their blown-up lies and egos. Let’s call AI what it is: a weapon in the hands of the powerful. Take the wage-depression project—let’s call it class war through enshittification, automated union busting, a bullshit machine for bullshit jobs, or Techno-Taylorism. Let’s take some inspiration from the Luddites, who called the big tech innovation of their day, the steam engine, “a demon god of factory and loom”, or “a tyrant power and a curse to those who work in conjunction with it.” Let’s make up better words, better phrases, and better frames that clarify the political stakes. Let’s de-shittify the world!
Featured image is a fake image generated by AI during Trump’s 2024 campaign