As a quantitative researcher in education, I have a lot of thoughts about “objectivity” and “conflicts of interest”. To be clear, I LOVE education research. I think it provides value. But I wouldn’t say it’s objective…
1. Conflicts of interest are a big problem. When the people promoting an intervention are involved in the study, you see upwardly biased results. I’ve also found evidence of bias even when developers hired external evaluators but funded the study. https://sree.memberclicks.net/index.php?option=com_dailyplanetblog&view=entry&year=2020&month=10&day=09&id=10:effect-sizes-larger-in-developer-commissioned-studies-than-in-independent-studies
Why? A good bit of this bias comes from lackluster reports being buried. As an external evaluator, I wrote 26 reports. Only 10 of those are publicly available. Me when I see someone promoting an intervention I’ve secretly evaluated 👇.
Tip: If you’re talking to a developer about a product, ask them to provide you with ALL of the research studies on their products conducted within the past few years. If you only get glowing results, it’s not great.
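To make the burial problem concrete, here’s a quick simulation sketch. It’s my own illustration with made-up numbers (a small true effect, an arbitrary release cutoff), not data from any real evaluation, but it shows how releasing only the rosier reports inflates the average published effect size.

```python
# A minimal sketch (assumptions mine, not from the thread) of how burying weak
# reports inflates the average *published* effect size. Assumes a true effect
# of 0.05 SD and that only studies estimating above a cutoff get released.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.05      # assumed true effect size (in SD units)
se = 0.10               # assumed standard error of each study's estimate
n_studies = 10_000

estimates = rng.normal(true_effect, se, n_studies)

# Suppose only "promising" reports get released (arbitrary cutoff of +0.10 SD).
published = estimates[estimates > 0.10]

print(f"True effect:                 {true_effect:.2f}")
print(f"Mean of all estimates:       {estimates.mean():.2f}")
print(f"Mean of published estimates: {published.mean():.2f}")
print(f"Share of reports released:   {len(published) / n_studies:.0%}")
```

The exact numbers don’t matter; the point is that selective release alone can roughly triple the apparent effect even when every individual study is honest.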
2. There is no one right way to analyze data. In fact, researchers make *hundreds* of decisions in data cleaning and analysis. Most of those decisions are never disclosed because no one cares.
Tip: Look for consistency in research across studies and different research teams. That gets you closer to the truth.
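To illustrate those hidden analysis decisions, here’s a toy sketch (fake data, my own arbitrary cutoffs, not anyone’s actual analysis) of how a single defensible choice, whether to trim outliers, can move the estimated effect:

```python
# A toy sketch of analyst degrees of freedom: two defensible cleaning choices,
# two different effect estimates. Data and cutoffs are made up.
import numpy as np

rng = np.random.default_rng(1)

# Fake test-score gains for a treatment and a control group.
treat = rng.normal(0.08, 1.0, 500)
control = rng.normal(0.00, 1.0, 500)
# Add a few implausible values that a researcher might (or might not) drop.
treat[:5] += 6.0

def effect(trim_outliers: bool) -> float:
    """Difference in mean gains, with or without trimming extreme values."""
    t, c = treat, control
    if trim_outliers:
        t = t[np.abs(t) < 3]
        c = c[np.abs(c) < 3]
    return t.mean() - c.mean()

print(f"Keep outliers: {effect(False):.3f}")
print(f"Trim outliers: {effect(True):.3f}")
```

Neither choice is obviously wrong, which is exactly why consistency across independent teams tells you more than any single estimate.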
3. Funding for research from groups with ideological stances is really interesting, and I wish that there was more work on this (which group is gonna fund that?!). Here’s just one example that I found interesting, but it’s nothing more than an anecdote...
@seanfreardon investigated whether income-based "achievement gaps" have increased over time in the US, and found that yes, yes they have. His org has a hodgepodge of funding sources listed below.
Another study funded by Walton found exactly the opposite. Which do you trust more? (don’t answer that) In fact, Walton has a whole webpage devoted to how the "achievement gap" has decreased over time.
I wouldn’t throw out all research funded by a big name. Rather, the funding is a data point to consider in deciding whether you trust the study and how to interpret the results. Funding is a part of the story. Disclose it.
Tip: Ask who funded the study and then decide for yourself how much you trust the results.
4. Quantitative research has traditionally focused on the average, which is not super informative in some cases. For example, “DC has the highest student growth in the nation” doesn’t sound as impressive once you realize that the high growth is largely driven by affluent students.
We need more work on distributions, or on what worked for different groups of people. The problem is that the sample is often not large enough to do this well. Researchers conclude that there are no differences among groups, but it could be that there are differences and the study just can’t detect them.
Tip: Ask for more than one number. Ask for additional analyses, or graphs with a bunch of dots on them to get a sense of what’s going on.
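Here’s a rough power sketch on that last point (the subgroup gap, sample sizes, and significance threshold are all assumptions of mine, not from any particular study): with a real subgroup difference of 0.10 SD, small subgroups almost never produce a statistically significant difference, so “no significant difference” gets misread as “no difference.”

```python
# A minimal power sketch: how often a t-test detects a real 0.10 SD subgroup
# gap at p < .05, for various subgroup sizes. All numbers are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def detection_rate(n_per_group: int, gap: float = 0.10, reps: int = 2000) -> float:
    """Share of simulated studies where the subgroup gap comes out significant."""
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n_per_group)
        b = rng.normal(gap, 1.0, n_per_group)
        _, p = stats.ttest_ind(a, b)
        hits += p < 0.05
    return hits / reps

for n in (50, 200, 1000, 5000):
    print(f"n per subgroup = {n:>5}: power = {detection_rate(n):.0%}")
```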
There are other things not mentioned in this thread, but it's a start to understanding how to make sense of research.
You can follow @betsyjwolf.