Persuasion is a mode of influence over others’ attitudes and behaviours that has two interesting features. First, when someone tries to persuade you of something, they want you to do or think something in particular—they want to influence your behaviour or attitudes in a particular direction. This is how persuasion differs from simple informing. When I inform you of the evidence about the efficacy of a vaccination, I am not necessarily trying to get you to get vaccinated, whereas when I try to persuade you of its efficacy I am doing so with an end in mind, most likely to get you to take the vaccine.
Second, persuasion often takes the form of offering arguments. You try to persuade someone to do something by highlighting the reasons to do it. You try to persuade someone to adopt a view because of the reasons for thinking it is true. This is how persuasion differs from obviously malign forms of influence like brainwashing, indoctrination, and manipulation.
Recently, there has been a lot of interest in the persuasive powers of generative AI, in particular large language models (LLMs). But LLMs (ChatGPT, Claude, etc.) are different from human persuaders in that they can argue convincingly for any side of any given topic. Ask ChatGPT to explain why eating meat is ethically wrong, and it will craft a cogent argument about animal suffering and environmental impact. Ask it to defend meat consumption, and it will eloquently discuss cultural traditions and nutritional benefits. Unlike humans, who generally argue for what they believe, or want someone to think they believe, “AI persuaders” have no skin in the game – they are "arguers for hire," willing to craft persuasive content for any position.
Many are worried about AIs becoming super intelligent persuaders that can manipulate humans at will. I'll argue that the real concern lies elsewhere: in what happens when a handful of powerful companies can deploy armies of AI persuaders at scale.
The Reality of AI Persuasion
Let's start with what we know about AI's current persuasive capabilities. A recent study from Stanford found that the LLM GPT-3 can create persuasive messages that are as effective as human-written ones across various political and policy issues, from gun control to carbon taxes. Interestingly, according to the study, people rated AI-generated arguments as "more evidence-based and well-reasoned," while human arguments were seen as relying more on "experiences, stories, and vivid imagery." While this may be more of an artefact of the design of the current generation of LLMs than an intrinsic feature of AI persuasion (an LLM could be trained to craft arguments that rely on experiences, stories, and vivid imagery), it does speak to the capacity for LLMs to construct rational arguments.
More strikingly, researchers in Switzerland discovered that, when AI chatbots were given basic demographic information about their debate opponents—things like age, education level, and political affiliation—they became significantly (in the statistical sense!) more persuasive than humans. The chatbots could effectively tailor their arguments to resonate with specific individuals in ways that human debaters couldn't match.
But we need to be careful. There is—at least right now—absolutely no reason to panic about super intelligent AI manipulators. It’s crucial to understand that persuasion itself is inherently difficult. Political scientists have consistently found that persuasion efforts typically produce small effects, if any effect at all. Humans are naturally skeptical and hard to convince, especially on politically charged issues. We've evolved to be epistemically vigilant—primed to look for signs that someone isn't trustworthy or doesn't have our best interests at heart.
What we're seeing with AI isn't a revolutionary new form of mind control. It’s just some evidence that LLMs are as good, maybe slightly better (or maybe not), at doing something that humans already do. But humans aren’t terribly good at persuading other humans, and—at least right now—AI isn’t either.
Understanding AI Persuasion
But what exactly do we mean by "AI persuasion"? There's an important conceptual challenge here: persuasion is traditionally understood as an intentional act—if I influence your attitudes by accident (maybe you copy what I do, without me knowing it), I haven't really persuaded you to do anything. I’ve just influenced you, in ways that I did not intend. This raises an interesting question: can AI systems, like LLMs, which (many think) lack intentions or any mental states at all, actually engage in persuasion?
I don’t have strong views about whether LLMs have intentions or mental states. I want to stay as far away from debates about “AI consciousness” as I possibly can. Happily, we don’t need to get bogged down in these debates here. We can helpfully think about AI as a "persuasive technology"—a tool for creating and transmitting persuasive messages. Just as earlier technologies like radio, TV, and the internet transformed mass communication by creating new media for persuasion, AI represents a new medium for persuasion. The key difference is that, while those earlier technologies simply transmitted human-created persuasive content, AI can generate persuasive content itself.
Quite apart from the philosophical difficulties, this distinction is important because it places the focus on the humans and companies that might deploy AI systems to persuade, rather than on the systems themselves. When we talk about "AI persuasion," we might want to talk about how humans use AI tools to influence others’ beliefs and behaviours, rather than about the particular means by which AI systems persuade.
The Rationality Question
When I first encountered the studies on the persuasive power of AI (via the excellent Nonzero Newsletter), I was a bit alarmed. But was I right to be? What, exactly, is worrying about AI persuasion?
One natural worry is that it might undermine the rationality of our beliefs. If we're being persuaded by machines rather than humans, perhaps this makes our resulting attitudes and behaviour less rational. But this concern doesn't hold up to scrutiny. If an AI system presents valid arguments based on true premises, there's nothing irrational about being convinced by them. The source of an argument doesn't determine its validity—good reasoning is good reasoning, whether it comes from a human or a machine.
You might worry about source credibility—after all, we often evaluate arguments partly by assessing the trustworthiness of who's making them. But there's no clear reason to think AI systems are systematically less reliable than humans. Famously, AI systems make mistakes and have biases. They can be engineered for certain kinds of dishonesty and deception. But—equally famously—humans lie, make mistakes, and have biases too. There is absolutely no reason to think that persuasive AI systems will be more likely to mislead or deceive than human persuaders already are.
A more sophisticated version of this argument might claim that, while it's not irrational to be persuaded by AI, it's somehow less rational than being persuaded by other means. But what would these other means be? Thinking everything through completely on our own isn't realistic—we inevitably rely on others for information and arguments. (Thinking everything through on our own doesn't seem terribly reliable, either.) While some human sources might be better than AIs (like trusted experts in their fields), others would likely be worse. Maybe it is better to form your attitudes about complicated geopolitical issues via detailed engagement with genuine experts on the relevant issues than by talking to ChatGPT. But it’s probably better to form these attitudes by talking to ChatGPT than by scrolling X, Bluesky, or any other social media feed for a few hours.
The Millian Problem
A more compelling worry comes from John Stuart Mill's insight that truly understanding an issue requires engaging with it from multiple perspectives. As Mill put it in On Liberty,
"He who knows only his own side of the case, knows little of that."
Even if you aren’t persuaded that there is anything wrong with an individual not knowing the other side’s case, there clearly is something wrong with a situation where nobody in a community knows what the other side thinks, or why. More generally, the concern is that widespread reliance on AI-generated arguments might lead to a kind of collective superficiality. Imagine a future where we could parrot the arguments on our side of a debate—this is why I think what I think—but were at a loss to really explain what the other side thinks, or to respond to their arguments if they were put to us.
Think about what happens when we use AI to summarize complex topics. Unlike a human expert who has deeply engaged with a range of primary sources and competing viewpoints, AI systems—when prompted—essentially provide sophisticated summaries of existing knowledge. While these summaries might be accurate (at least, as accurate as the summaries produced by humans), relying on them means you lack the deep understanding that comes from wrestling with difficult ideas firsthand.
It also threatens the possibility of diversity of interpretation. Two people can read a difficult text—choose your favourite work of philosophy, or just your favourite novel—and come away with different interpretations. Relying on AI to do the hard work of reading for us may well lead to everyone having more or less the same interpretation. This means that important insights may well be lost.
There is something to all these worries. Those of a pessimistic bent (this includes me) might imagine a future where humans are pure consumers of AI-curated information and never develop the intellectual muscles that come from genuine engagement with challenging ideas. But we need to acknowledge that this is just a possible imagined future. What reasons do we have for thinking that we are heading towards it?
If the AI evangelists are to be believed, we are at the start of a transformation in human culture that rivals other great transformations, like the industrial revolution, the invention of the printing press, or the invention of writing itself. Just as the invention of writing transformed human culture and cognition in ways that would have been hard—if not impossible—to accurately predict, the shift to AI-mediated discourse might have profound and unexpected effects on how we think and reason collectively. The problem, though, is that predictions about the results of a new transformative technology are very unlikely to be accurate. The AI evangelists may be right that a lot will change, but simply trusting their predictions of what will change is foolish.
Sophistry at Scale
The most serious concern about AI persuasion isn't about individual rationality or even collective understanding—it's about power. Concerns about the ethics of persuasion, and about the various rhetorical tools that can be used in the service of persuasion, are not new. Concerns about the use of persuasive technologies—newspapers, radio, TV—by powerful commercial actors and states are also not new. But AI persuasion adds a new dimension to these concerns.
In ancient Greece, the Sophists were teachers who, for a fee, would teach you how to argue for any position, regardless of its truth or their personal beliefs. They were "arguers for hire," if you like, much like today's AI systems. But AI takes sophistry to a whole new level. While human sophists were limited by time and energy, AI systems can generate thousands of persuasive arguments instantly. More importantly, the ability to create and deploy these AI systems is concentrated in the hands of a tiny number of powerful companies (Anthropic, OpenAI, Alphabet, Microsoft, Meta).
Why should we be troubled by this? Because it creates a troubling dynamic in what we might call the "marketplace of arguments": when a small number of actors can flood this marketplace with AI-generated arguments, they gain unprecedented power to shape public discourse. Let me explain what I mean by this in more detail.
When you have a view about some issue, be it political, philosophical, or whatever, you often go looking for reasons to support it. Sometimes, you can come up with good enough reasons on your own. But, a lot of the time, you go looking for reasons and arguments that others have produced. This means there is a demand for arguments, and one of the jobs of “knowledge professionals” (people like me) is to produce these arguments. But it is hard for knowledge professionals to get their arguments out there, and so within the marketplace of arguments there is a role for intermediaries—the institutions by which knowledge professionals put their arguments “out into the world” (media, both traditional and social, publishing, etc.).
In a healthy discourse, a representative sample of arguments on different sides of important issues is available. It isn’t possible for all the arguments to be represented, but you want a good balance of the important ones. Marketplaces of arguments are unhealthy when you don’t have a representative sample of arguments on all sides. Perhaps arguments for one side dominate, or you have all the good ones for one side but only the weaker ones for another side.
There are lots of reasons why one of these marketplaces might be unhealthy. But a common one is that a small number of actors have disproportionate power over which arguments get represented. This is, at least to my mind, the best reason to be worried about monopolies in the media—they give a small number of individuals disproportionate power over the marketplace of arguments. And it is precisely the reason why we should be worried about persuasive AI.
It's worth being precise about the two key factors that make this particularly concerning:
First, AI systems can generate vast amounts of persuasive content with minimal marginal cost. Even if the quality isn't perfect, the sheer quantity can dominate discourse.
Second, the barriers to entry for creating sophisticated AI systems are enormous. The computational resources, technical expertise, and data required mean that only a handful of companies can operate at the cutting edge.
This combination—low marginal costs for content generation and high barriers to entry—creates the potential for a few actors to have unprecedented influence over public discourse via control over the marketplace of arguments. This is particularly worrying when you might already think, as I do, that the actors who are likely to have this power (that is: large tech companies) already have far too much power to shape public discourse.
The Political Challenge
Some might argue we could solve these problems with ethical guidelines—rules ensuring AI systems present balanced viewpoints and true information. But the problem with this approach is that we need to actually agree on a set of guidelines, and that will require agreeing on what counts as balanced and on what information is true.
Take public health communication as an example: imagine AI companies use persuasive AI to encourage vaccine uptake during a pandemic. Would this be acceptable under a good set of guidelines? Well, whether it is acceptable to any political actor is going to depend on whether they think the vaccine is safe, and on their views about the legitimacy of public health persuasion, both in general and during this pandemic. Recent history suggests that there is likely to be a lot of disagreement on both scores. Those who trust the vaccine's safety would see this as a legitimate use of persuasive technology, while vaccine skeptics would view it as dangerous propaganda. Because of this disagreement, regulation will become a site of political contestation.
This is the deep political problem: on many important issues where AI persuasion might be deployed, there's fundamental disagreement about what constitutes truth and balance. Any attempt to regulate these systems will inevitably become entangled in the very political and cultural battles they're meant to help resolve. This problem would likely be exacerbated by the fact that the job of implementing any set of ethical guidelines would likely fall to the same small group of companies that control these technologies, adding to worries about a troubling concentration of power.
The Future of Persuasion
The worry that AI will become super persuasive is almost comforting. It suggests that, if we can just keep AI’s persuasive powers in check, whether by slowing the pace of technological development or by stringent regulation, everything will be fine. It also suggests that what we really have to fear are future—even if near future—technological developments, not what is already possible with existing technology.
The fundamental challenge is not controlling AI’s persuasive abilities. It may well be that these are limited not by the power of AI technologies but by human persuadability—it is just hard to persuade humans to do or think new things. The danger isn't that AI will become too persuasive. It is rather that we'll end up with a public sphere dominated by sophisticated but ultimately empty argumentation, controlled by a handful of powerful companies. This challenge is doubly hard because, as I have argued, the usual response to this kind of challenge—regulation, regulation, regulation—is liable to run into political problems similar to attempts to regulate misinformation.