7 Comments
Peter McLaughlin

I think there are two questions here which, if not quite orthogonal to each other, are certainly not perfectly aligned. Both questions involve social and epistemic expertise coming apart, but in different ways, and I think that your concluding thoughts - 'What forms of independent research generate value? Under what conditions?' - are well-served by thinking about the differences.

Near the start of the post, you write:

> Critics of independent research often focus on cases where distrust in experts seems clearly unjustified—climate change denial, flat Earthers, and so on. But what about cases where distrust in recognised expert authority might be justified?

This is a question about the epistemic expertise of social experts: what happens / ought to happen when the group of people we _do_ accord social expertise turns out _not_ to have epistemic expertise?

But then, for the rest of the post, you analyse patient activists, and the question becomes: what happens / ought to happen when the group of people we _don't_ accord social expertise turns out to _actually have_ epistemic expertise?

I think the implied link between these two questions (correct me if you had something else in mind!) is something like 'in a disagreement, at most one group is correct'. If the socially-recognised experts say P, and the patient activists say not-P, then the former group is incorrect if and only if the latter group is correct; insofar as being wrong is evidence against expertise and being right is evidence for it, evidence that the patient activists are epistemic experts just is evidence that the socially-recognised experts aren't.

But there are cases where the patient activists can be demonstrating genuine epistemic expertise, without this affecting the claim to epistemic expertise from the researchers who disagree with them. I'm thinking especially of inductive risk considerations: two groups are equally intimately familiar with the subject-matter and data, but on the basis of differing understandings of the inductive risks, group A generalises to P while group B generalises to Q, where ~(P&Q).

So, for example, you highlight one particular aspect of AIDS patient activism - claims about compliance rates in double-blind trials. Frankly, this doesn't impress me all that much (although I admit this is a point where people can reasonably disagree): researchers are typically very aware of compliance issues, and there are often good reasons to insist on double-blinding regardless. I don't think this is a case where social experts were straightforwardly making an explicit claim about the quality of their data while patient activists were straightforwardly committed to the negation of that exact claim. I think rather that it's a matter of what inductive risks the two groups were willing to take. Patient activists were willing to generalise from lower-quality studies not only because they thought the benefits of double-blinding were smaller than might naively be assumed (on which point socially-recognised experts sometimes agreed!), but also because they thought the costs associated with double-blinding - not just in terms of impact on those given placebos, but also the resources and (especially) time it took to set up 'high-quality' trials - weren't worth it. Patients needed drugs more quickly than existing processes were set up to allow for, and even if that meant they got lower-quality or even ineffective treatments, that couldn't possibly be worse than 'no drugs, just die'.

In this case, I think you're completely right that many AIDS patient activists achieved genuine 'lay [epistemic] expertise'. But recognising this does not necessarily mean denying epistemic expertise to the socially-recognised experts. I think the AIDS patient activists seem commendable because we today can look back in hindsight and see that their urgency and their understanding of the risks were mostly justified, while the mainstream scientific community and associated bureaucracies had a view of the risks that we don't share (in many cases because of homophobia). Of course, understanding the risks can itself be a kind of expertise, but it's a different kind of expertise: I think we'd generally want to say that a person could be an expert in infectious diseases even if their understanding of the infectious disease burden and the stakes associated with their research was completely off. And this type of expertise also doesn't come along 'for free' with lived experience; indeed, if anything it can be the opposite, as people tend to overgeneralise from the situations of those socially close to them, which can be systematically misleading.

So I guess one consideration for 'What forms of independent research generate value? Under what conditions?' is 'Are the factors relevant to inductive risk themselves subject to expertise, or are they "purely" value-driven? If the former, what communities are most likely to be able to develop expertise about the risks - and are those the same communities that we currently socially recognise as object-level experts?' I think this is tricky precisely because it's very convenient to reach for 'the right evaluation of risk will come from people with lived experience', and this was pretty much correct in the case of AIDS patient activism, but it is wrong in general (and not just because 'the right evaluation of risk' is obviously value-laden).

Robin McKenna

Thanks! Just to clarify: the right way to think about what Epstein is doing in his book is descriptive. He is describing the process by which the then-current level of scientific knowledge was created and highlighting the role that patient activists played in that. He isn’t making claims about how terribly impressive all this was; rather, he is highlighting how patients made the sorts of fairly mundane small contributions to knowledge that researchers make.

One thing I like about his book is that it says very little about “lived experience”. Yes, having first-hand experience of (and intense interest in) an issue might motivate you to think a lot about it, and you might have useful experience to draw on, but no amount of lived experience gives you interactional or contributory expertise. I think the book is a nice counterpoint to the unfortunate current tendency to focus on lived experience as the source of a kind of lay expertise. (Even if it is, it’s a very narrow form of expertise—knowledge of “what it’s like”).

One reason why I included the other examples is that it is natural, especially if you’re coming at this from philosophy of science, to view Epstein’s book as a story about inductive risk. That’s obviously part of it, but when you read the whole thing it becomes clear that it isn’t the whole story. One of the problems is that, at least as I understand it, the change in trial methodology was the most tangible impact these groups had, and that does look like a case of inductive risk. But there are all sorts of contributions mentioned in the book that are very small in the grand scheme of things—but then working scientists typically make very small contributions.

Editing to say: I used the RAGE example precisely because it’s not really about inductive risk. It’s about noticing a problem and doing research into the causes of that problem—causes which were overlooked for the standard reasons why big institutions (here the NHS) tend to overlook problems. It’s not that conventional experts couldn’t have uncovered these problems; they didn’t want to, or didn’t think to. But, again, doing the work to uncover these problems requires real expertise.

Peter McLaughlin

Thanks, this is a helpful clarification that makes what the post is doing a lot more transparent to me – my apologies that I clearly partly missed your intentions first time around.

I confess (if it weren't already apparent) to not having read Epstein's book, but the question I'd want to ask is: were the narrow 'descriptive' contributions made by lay experts the kinds of things scientists wouldn't have discovered themselves?

Not 'couldn't have', just – were they discoveries in areas the scientists weren't looking at? Did they involve thinking about a certain problem in a new way that probably wouldn't have come up otherwise? The RAGE example tells me that the answer is sometimes 'yes'. But in cases where the answer is 'no', that might suggest that the contributions of lay experts in that case were essentially fungible with those of scientists: the impact of lay experts was essentially just to pile more resources into an area. That's not necessarily a bad thing! But the ratio between the number of cases where the answer is 'yes' and where the answer is 'no' is not irrelevant.

This is because there's always the possibility of pseudo-expertise. I mean that in a rather strict sense: not, say, someone declaring they're an expert on climate change because they've spent hours a day for the last two months on climate sceptic forums; but actual epistemic communities with all the trappings of epistemic expertise, that look very much like other lay expert communities, but which are actually systematically oriented away from the truth. I'm thinking, for example, about 'chronic Lyme disease' here – an epistemic community that for all intents and purposes is indistinguishable from the outside from cases of genuine lay expertise, but whose 'expertise' pertains to a disease that does not exist. They are proceeding from a false premise, and (as parapsychology, plausibly recent Alzheimer's research, and {all religions you don't believe in} show us) it is remarkable how possible it is to build up a structure of pseudo-expertise on top of an unambiguously false base. And the harms that can result are serious: not just encouraging distrust of genuine expertise, but more directly, harmful 'treatment' ('chronic Lyme' sufferers often take huge doses of antibiotics for extremely long and continuous periods of time).

In many ways, this is my biggest worry about lay expert communities: you have to choose your fundamental premises at the outset, before you develop any expertise, and thus at the point in time when your judgment is least reliable. People usually decide 'yeah, my symptoms could be the result of having got Lyme disease' on the basis of having seen the suggestion at an emotional low point. And from this point, they erect a structure of expertise that increasingly makes the whole thing self-supporting. If pseudo-expertise in this strict sense were less prevalent or less easy to develop (if there were something inherent to false propositions that made it extremely difficult to develop sophisticated epistemic communities around them), I'd be less worried. But I think it's actually quite common.

I worry that there's a dilemma here. If, in the majority of cases, lay expertise is ~fungible with socially-recognised expertise, then it's all to the good, but it's also not _particularly_ valuable: efforts taken to encourage the development of lay expertise might more effectively be put into trying to progress scientific knowledge in more 'leveraged' ways. But if lay experts bring something qualitatively different to the table, if their own perspective is very important, then the risk of that perspective being systematically false could well outweigh any potential benefits.

Maybe this is a false dilemma because, as my mention of Alzheimer's research above suggests, institutional communities of scientists are hardly immune to this kind of pseudo-expertise. I would hope that our structures of education would mean that institutional scientists are starting from premises taken from a solid understanding of well-supported biological and chemical theory. But then, I've met enough doctors that I should perhaps be a bit more realistic.

All of this is why I think framing things in terms of inductive risk is helpful. Your use of the word 'descriptive' is, I think, misleading: since Heather Douglas, philosophers have been trained to look at the degree to which inductive risk is a value-laden consideration; but it's not just a matter of values. It's also a matter of 'sociological knowledge',* and can require expertise to reliably judge. I think emphasising the degree to which lay experts might have unproblematically descriptive knowledge of this kind makes it easier to tell a story about why we would specifically want lay experts – a story which could, but doesn't have to, take the form of 'academics cooped up in the ivory tower don't understand the impact of their decisions' or 'elite scientists reproduce elite biases'.

*By which I mean, the kind of knowledge of society that philosophers in our ignorance often call 'sociological' despite the fact that this is not what the discipline of sociology is about in general.

Bryan Frances

All this sounds right to me. I'm glad to have learned from it.

Even so, I think your emphasis is a bit skewed. There are zillions of people out there who won't trust what their doctors, for instance, say about minor matters like their sore throat, their diarrhea, a nasty scratch on their kid's arm, and so on. They "do their own research" on Reddit or TikTok, and are led wildly astray. That's the worry I have about DYOR.

This is not the worry about "the wackiest bunch of YouTube conspiracy theorists you can think of", as you put it. This isn't conspiracy nonsense. It's more tame but significantly more prevalent than that.

Robin McKenna

Thanks! I don’t know if this is too high-level a response, but here’s how I’m approaching this. It’s not really a mystery why people end up with false beliefs, whether about health or anything else—the world is complicated, it’s hard to figure things out, and there are so many ways of going wrong. Let’s set aside cases where people go wrong as the “base case” and ask instead: what can we learn from cases where something goes right? Contemporary philosophy of science asks this question about contemporary science: it’s doing something right, but what? The answer (at the highest level) is that it has to do with the social organization of science. My interest in things like patient activism is that they provide examples where you can find the kind of organization that produces knowledge outside of institutional science and the academy. (The organization might not be the same; the equivalence is functional). Once you approach things this way, you can think of what these successful groups are doing as very different from what communities of conspiracy theorists are doing: it might superficially look like the same kind of thing, but it isn’t, because you’ve got very different structures in place.

Maybe that doesn’t answer your basic Q about individuals going out and getting totally wrong answers to their questions on Reddit. I guess I’d push back a little and say it isn’t that hard to get good information as an individual doing some searching online—we manage it all the time, as witnessed by the fact that some people (not me) do amazing DIY projects, some parents (hopefully me) do a reasonable job of looking after their kids’ health, some (definitely not me) make lots of money in investments, and so on. But I just agree with the standard line in social epistemology that group inquiry can do a lot more than individual inquiry. In that sense “do your own research” is a bit of a red herring: what I’m really defending is doing your own research as part of a collective that has the right sort of organization to facilitate successful research. I don’t think that slogan will catch on though.

Bryan Frances

"My interest in things like patient activism is that they provide examples where you can find the kind of organization that produces knowledge outside of institutional science and the academy." For me, that was the main insight of your post--and I really didn't know it before at all. For real: I was totally unaware of it. If you like, this is a shining example of the good side of DYOR.

Another good side is the one you talk about at the end of your note: it's not hard to figure out tons of cool stuff now that was hard to do before, such as how to dress a nasty wound when you're hiking, how to hang a bear bag, what the history of the US Supreme Court was like, and so on. That's another big point in favor of DYOR.

My only issue is that there's a huge negative side to DYOR, which has this pattern: I go online to do my own research and as a consequence I go explicitly against legit experts who really do know what they're doing (e.g., vaccines, assessments of the extent of fraudulent voting, and so on). I think that's a huge problem today. Just look at RFK being the Secretary of Health and Human Services, or that guy who went to that pizza joint thinking there was some child sex trafficking thing there.

So, I guess that there are huge pros and cons to DYOR as it's done today.

Robin McKenna

For sure. I guess I don’t really know what you can do about people believing crazy things. The best I can say is that, if you want people to trust a system, that system needs to be trustworthy, and at least from the outside US healthcare doesn’t meet this criterion. Then you have people like RFK who make a career out of taking advantage of well-deserved distrust in the system, and a whole movement to erect a parallel system that would be even worse. I don’t know what you can do about all this, but it’s a symptom of a fractured society.
