This whole debacle illustrates a major issue in how many psychologists (me included) approach data. https://twitter.com/mindismoving/status/1274073023793037312
Instead of thinking about the data-generating mechanism and the parameters we’re *actually* interested in, we jump straight to “how can I manipulate the numbers to answer my question?”
So we end up taking ratios of measures, standardizing variables and correlating them, subtracting one score from another, and so on.
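A minimal sketch of what goes wrong (my illustration, not from the thread): take the classic difference-score trap. Assume a stable latent trait measured twice with independent noise, so the true change is zero for everyone. Correlating the change score with baseline still yields a large negative correlation, produced entirely by measurement error rather than by anything in the data-generating mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed generative model: one stable latent trait,
# measured twice with independent unit-variance noise.
# By construction, no one truly changes.
trait = rng.normal(0, 1, n)
pre = trait + rng.normal(0, 1, n)
post = trait + rng.normal(0, 1, n)

# The kind of "manipulated number" criticized above.
change = post - pre

# Correlation of change with baseline. Analytically this is
# -Var(noise) / sqrt(Var(pre) * Var(change)) = -1/2 here,
# despite zero true change in the generative model.
r = np.corrcoef(pre, change)[0, 1]
print(round(r, 2))
```

Writing the generative model down first makes the artifact obvious; starting from the transformed numbers, it looks like a publishable effect.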
On the upside, it sure gives the impression that we’re doing sciencey science stuff here, and there are things we can teach students. On the downside, I don’t think there’s any reason to believe this approach leads to valid conclusions.
So we get things wrong frequently, we lack the training to even notice the discrepancy between what we’re doing and what we’re aiming to do, and in some cases this will have catastrophic consequences.
Of course, this leads back to calls for more formal modeling, simulations, *proper* theory. Some parts of psych are doing very well on that front, but the parts of social, personality, and well-being psych I’ve seen show zero awareness.>
Theory is just nice plausible verbalizations and plots with boxes and arrows, nothing that really guides analyses. Stats may look super fancy but they’re “not even wrong” because they don’t correspond to the question the researchers want to ask.