Discussion about this post

Daniel Greco

Fascinating stuff!

One quick thought on incentives and social science. The idea that people only respond to incentives in familiar textbook ways (eg, buying less of something when the price increases) when they recognize the incentives as such strikes me as a mistake.

There's a fruitful tradition in economics of evolutionary game theory, where the basic insight is that economic actors do *not* have to decide what to do by reference to concepts from economics in order for those concepts to play a role in predicting and explaining market outcomes.

If firms that make profits tend to grow, and firms that experience losses tend to shrink and go out of business, then even if managers are making decisions randomly, without anything like a conscious attempt to maximize profits, you should expect that over time the firms that remain in the market will be pursuing (roughly) game theoretically optimal strategies. This is analogous to the idea that animals will tend to behave in "rational" ways (eg, not leaving calories or mating opportunities on the table) even though they're usually not consciously optimizing; conspecifics in earlier generations who pursued less effective strategies failed to reproduce.
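The selection mechanism described above can be sketched in a toy simulation. This is purely illustrative and not from the comment: the profit function, population size, and "growth" rule are all assumptions chosen for simplicity. Firms pick output levels at random and never optimize; selection alone pushes the surviving population toward the profit-maximizing strategy.

```python
import random

# Toy selection model: each "firm" is just a randomly chosen output level q.
# No firm ever computes an optimum. Profit(q) = 10*q - q**2 is an assumed
# toy profit function, maximized at q = 5.

def profit(q):
    return 10 * q - q ** 2  # peaks at q = 5

random.seed(0)
firms = [random.uniform(0, 10) for _ in range(1000)]  # random strategies

for generation in range(30):
    firms.sort(key=profit, reverse=True)
    survivors = firms[: len(firms) // 2]   # less profitable firms exit
    # profitable firms "grow": each is copied, with slight variation
    firms = survivors + [q + random.gauss(0, 0.1) for q in survivors]

avg = sum(firms) / len(firms)
# After selection, the average surviving strategy sits near the
# profit-maximizing output of 5, despite no conscious optimization.
```

The point of the sketch is that `avg` ends up close to 5 even though every individual decision was random, mirroring the evolutionary-game-theory insight in the comment.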

Andries

Thank you for this very interesting summary and partial critique of Friedman's work. But I find it puzzling that Friedman (and McKenna) discuss these issues without referring to the unmissable work by Philip Tetlock and his collaborators on expert forecasters, who generally think in a totally different way from the overconfident 'experts' who serve as talking heads in the media. These expert forecasters are foxes rather than hedgehogs, reason something like Bayesians, constantly update their predictions, and make probabilistic predictions. Laypeople who think this way can predict geopolitical events better than CIA analysts with access to classified information unavailable to those laypeople. But the predictions of even the best forecasters are not spectacularly better than chance, though they would do well in constantly repeated bets. And, if I remember correctly, when they have to predict events further in the future, at about six months out even the best forecasters cease doing any better than chance.

The other thing I miss in Friedman's account is the possibility of running smaller-scale experiments before introducing new policies, and more generally the Popperian notion of piecemeal engineering: making policy changes in small increments and seeing what happens after one smallish step before taking the next.

I've just started reading Nate Silver's On the Edge, and the River mindset (Silicon Valley, poker players) may be related to that of Tetlock's superforecasters. Successful venture capitalists must be able to predict the future well enough for their hits to outweigh their misses; the reason for their success is not that they are good at avoiding misses. I don't know whether rethinking in probabilistic terms all the issues Friedman discusses could lead to a less dire view than the one Friedman presents.
But meanwhile kudos to him and McKenna for developing the view that we normally utterly overestimate what technocracy can achieve.
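As an aside on how probabilistic forecasts like those discussed above are evaluated: Tetlock's tournaments scored forecasters with the Brier score, where lower is better and an always-50% forecaster scores 0.25 on binary events. The sketch below is a minimal illustration with made-up forecasts and outcomes, not data from any tournament; it shows why "not spectacularly better than chance" can still pay off over constantly repeated bets.

```python
# Brier score: mean squared distance between probability forecasts and
# binary outcomes (1 = event happened, 0 = it didn't). Lower is better.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example data: five binary events.
outcomes = [1, 0, 1, 1, 0]
always_half = brier([0.5] * 5, outcomes)             # pure chance: 0.25
calibrated = brier([0.8, 0.2, 0.7, 0.9, 0.3], outcomes)
# A modestly calibrated forecaster beats chance; repeated over many bets,
# that small per-event edge compounds.
```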
