Largely agree with Ben's points here; some other thoughts...

In physics, mathematical models are either basically correct or not. Newton's laws either hold in all cases or they are a wrong (or at least incomplete and approximate) view of the world https://twitter.com/ben_golub/status/1338175642932715520
We're taught modern physics through famous experiments that show "edge cases" (double slit experiment, gravity bending light, etc.) which falsify classical physics
The implicit philosophy of science here is that any evidence demonstrating a case where the theory doesn't hold implies that:

- The old theory is wrong
- A new/broader theory is needed
I think this is the implicit thinking behind the ergodicity critique: if we can find _one_ fundamental internal inconsistency in econ, the whole thing needs to be replaced with a more correct theory, maybe with the old theory as an approximation/edge case
IMO, this is not how econ works. "All models are wrong, but some are useful" really applies much more to econ than physics. We very often use "hacks" in modelling: Calvo fairies, Epstein-Zin, habit formation preferences, quasilinear utility...
These are quite obviously wrong, but they allow our models to speak to reality at a certain level of abstraction (for better or worse!). Debugging the abstractions is very important! E.g. research looks into micro-evidence on price setting, risk aversion, income effects... etc.
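To make the "obviously wrong but useful" point concrete, here is a minimal sketch (my own illustration, not from the thread) of one of the hacks mentioned above: quasilinear utility. With u(x, m) = v(x) + m, demand for x depends only on its price, not on income, so income effects are assumed away entirely; that's clearly false in general, but it makes welfare analysis tractable. The specific v(x) below is just a placeholder.

```python
# Sketch (assumed example, not from the thread): quasilinear utility kills income effects.
import sympy as sp

x, p, w = sp.symbols("x p w", positive=True)

v = sp.log(1 + x)          # placeholder concave sub-utility v(x)
m = w - p * x              # spending on the numeraire, from the budget constraint
u = v + m                  # quasilinear utility u(x, m) = v(x) + m

# First-order condition v'(x) = p pins down demand.
x_star = sp.solve(sp.Eq(sp.diff(u, x), 0), x)[0]
print(x_star)              # demand depends on p only; income w has dropped out
```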
But the exercise of trying to describe very complex systems in a way that is totally true to microfoundations just isn't realistic (not even in physics! e.g. Newtonian mechanics, "orbitals" as an approximate solution to QM... etc.)
In this kind of setting, debugging "approximations" is useful, but really has to be done in a very context-specific way. Modelling hacks are OK in some cases, not in others, in a way that's very dependent on the particular application and counterfactuals in mind
e.g. some version of the ergodicity critique is important in finance. The past may not look like the future, and over the time horizons that matter they may be very different. Using past time-averages to approximate future behavior can blow up badly
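A minimal sketch (my own illustration, not from the thread) of the standard non-ergodic example behind this critique: a multiplicative gamble whose ensemble average grows while the time-average growth rate is negative, so the "average" outcome tells you almost nothing about a typical trajectory. The 1.5/0.6 payoffs and the simulation sizes are just illustrative choices.

```python
# Sketch (assumed example): ensemble average vs. time average in a multiplicative gamble.
# Each period wealth is multiplied by 1.5 or 0.6 with equal probability.
# E[factor] = 1.05 > 1, but the per-period geometric growth is sqrt(1.5 * 0.6) ~ 0.95 < 1,
# so the mean across paths grows while almost every individual path decays.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_periods = 100_000, 50

factors = rng.choice([1.5, 0.6], size=(n_paths, n_periods))
wealth = factors.prod(axis=1)                       # terminal wealth on each path

ensemble_avg = wealth.mean()                        # > 1: pulled up by a few huge paths
typical_path = np.median(wealth)                    # < 1: the typical trajectory shrinks
time_avg_growth = np.exp(np.log(factors).mean())    # ~ 0.95 per period

print(f"ensemble average wealth:    {ensemble_avg:.3f}")
print(f"median (typical) wealth:    {typical_path:.3f}")
print(f"time-average growth factor: {time_avg_growth:.4f}")
```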
But it's more useful to discuss these kinds of bugs in the context of a particular problem/application (is the possibility of a big US inflation event priced into the markets? How do we evaluate its probability given it hasn't happened in 40 years?)
Our models are wrong -- they're wrong by design! Asking how the models are wrong, and when that matters, is very important. Pointing out yet another case where our models are clearly wrong: somewhat less useful