"The inherent demands of computational modeling guide us towards better science by forcing us to conceptually analyze, specify, and formalise intuitions which otherwise remain unexamined — what we dub “open theory”." 2/
"If psychology continues to eschew computational modeling, we predict more replicability “crises” and persistent failure at coherent theory-building. This is because without formal modeling we lack open and transparent theorising." 3/
"The scientific inference process is a function from theory to data—but this function must be more than a state function to have explanatory force—it is a path function which must step through theory, specification, and implementation before [it] can have explanatory force..." 4/
"Almost every paper in psychological science can be boiled down to introduction, methods, analysis, results, and discussion. The way we approach science is near identical: we ask nature questions by collecting data & report p-values, more rarely Bayes-factors or ... " 5/
"Computational models do not feature in the majority of psychology’s scientific endeavours. Most psychological researchers are not trained in modeling beyond constructing statistical models of their data, which are typically applicable of-the-shelf." 6/
"In contrast, a subset of researchers—formal, mathematical, or computational modelers—take a different route (...) They construct models of something other than the data directly; they create semiformalised or formalised versions of scientific theories ..." 7/
"In the best of possible worlds, modeling makes us think deeply about what we are going to model, (e.g., which phenomenon or capacity), in addition to any data, both before and during the creation of the model, and both before and during data collection." 8/
“One of the core properties of models is allowing us to “safely remove a theory from the brain of its author” (A. Wills, personal communication, May 19, 2020 ...). Thus allowing the ideas in one’s head to run on other computers.” 9/ https://twitter.com/o_guest/status/1220786717399113728
"If we don't make explicit our thinking through formal modeling & if we do not bother to execute, i.e., implement & run our specification through comp modeling, we can have massive inconsistencies in our understanding of our own model(s). We call this issue the pizza problem" 10/
"Computational modeling—when done the way we describe, since it requires the creation of specifications and implementations — affords open theorising to go along
with open data, open source code, etc." 11/

On specifications vs implementations, see also: https://twitter.com/IrisVanRooij/status/1223668740853837830?s=20
“This tendency to ignore these levels [theory, specification, implementation] is a result of the same process by which theory and hypothesis are conflated (...), and by which models of the data are taken to be models of the theory ...” /12
"A specification is a formal(isable) description of a system to be implemented based on a theory (...) It provides a means of discriminating between theory-relevant, closer to the core claims of the theory, and theory-irrelevant, auxiliary assumptions." 13/
"A comp implementation is a codebase written in one or more programming languages (...) it might appear to be the hardest step. This is arguably not the case (...) large proportion of the heavy lifting is done by all the previous steps [framework, theory, specification]" 14/
"implementations are the most disposable (...) This is not entirely damaging to our enterprise since [what] we want to evaluate are the theory & specification. If the comp model is not re-implementable given the specification, it poses serious questions for the theory" 15/
"Hypothesis testing is unbounded without iterating theory, specification, implementation... [These] levels constrain the space of possible hypotheses to-be-tested. Testing hypotheses in an ad hoc way—hypo-hacking—is to the hypothesis layer what p-hacking is to the data layer" 16/
"Researchers can concoct any hypothesis and given big enough data a significant result is likely to be found when comparing, e.g., two theoretically-baseless groupings. Another way to hypo-hack is to atheoretically run pilot studies until something “works”." 17/
"Any theories based on hypo-hacking will crumble if no bidirectional transitions in [theory, specification, implementation] were carried out (...) Having built a computational account researchers can avoid the confirmation bias of hypo-hacking, which cheats & skips levels." 18/
"Arguably—and this is one of the core points of this article— had we not ignored the steps in red and created a theory, specification, and implementation explicitly, we would have been on better footing from the start." 19/
"As shown using the pizza example, non-modelers remain unaware of pizza problems and may not realise they are implicitly running a different model (in their head) to what they specify." /20
"We imagine a “best of all possible” massively collaborative future where scientists allow machines to carry out the least creative steps [hypothesis testing & data analysis] & set themselves free to focus wholly on computational modeling, theory generation, and explanation." /21
Those were my highlights from this thought-provoking paper by @o_guest and @andrea_e_martin. I much enjoyed (re)reading it, and I recommend you do the same. Find the link to the full paper here: 👇 https://twitter.com/IrisVanRooij/status/1344050638200647682?s=20 /fin