Alright, I'm reading this rubbish Yale study with y'all, but we're gonna focus on metrics and ✨jargon✨ bc that's my favourite way to demystify and dismantle research conclusions (I'm not a killjoy, what are you talking about)
There were 42 autistic participants and 22 allistic participants. We don't know much about demographics, but the autistic participants had been referred to Yale by their parents or doctors for diagnostic assessment.
This is the US, so having enough access to healthcare by 22 months old that someone would notice and suggest evaluation of autistic traits implies above-median socio-economic status, and I'd guess the sample skews white (POC tend to get diagnosed with conduct disorders)
Meanwhile, the neurotypical participants were recruited through "advertisements". We don't get any more specificity than that 🤷
Ok now we're getting to the "fear-inducing probes". The authors state that the probes were "adapted with minor adjustment from the Lab-TAB - locomotor version". Let's have a wee look at that, shall we?
The original, published in Explorations In Temperament: International Perspectives on Theory and Measurement (1991), page 264, is basically an infant personality test (psychologists don't like to use the term "personality" for young children, so they call it ✨temperament✨)
The dimensions it measures are activity level, fearfulness, anger proneness, interest/persistence, and joy/pleasure, and it details specific stimuli to be used to elicit responses. Because this is about terrifying autistic children though, they binned everything except fear.
You see, in its original form, the protocol included a 10-minute warm-up with a "familiar experimenter", and it warns experimenters to "avoid consecutive, potentially stressful episodes in the same room" due to "carryover effects" (and probably also ethics, but dunno)
The original form also started with a "nonstressful episode drawn from the Pleasure or Interest domains." Fear and Anger episodes are interspersed, and the Free Play episode sits in the middle to give the child a long break (?) The primary caretaker is also asked prediction/follow-up Q's
about how the child would/did respond. The reactions themselves are scored based on latency (how long before it starts), duration, intensity ratings, quality of vocalisations, and interpretations of "motoric acts".
The "minor adjustments" that the Yale investigators made? They included NO. POSITIVE. STIMULI. Oh, and the "fear" stimuli from the 1991 metric? They were "large, novel, remotely controlled toy enters room; mechanical toy dog races across table towards child; male stranger
approaches and picks up child; and plastic masks of human-like faces". That wasn't good enough for the Yale researchers though, so they used a "large mechanical spider crawling towards the child", "mechanical dinosaur with red light-up eyes approaching the child", and "a female
stranger dressed in dark clothes and wearing three grotesque masks in succession (e.g. vampire, Star Wars character)". Why the fuck would you up the ante on the stimuli while also getting rid of the positive ones and minimising breaks???
Further, the 1991 metric was not validated in any way with autistic children. The only other study I'm finding that explores its use in autistic toddlers has the same first author as this one, and it compares intensity of displays of emotion.
This 2018 paper found "a muted response to threat and an accentuated response to goal blockage, whereas the ability to express positive emotions remains intact". All of this is about outward expression though, not internal experience.
The problem ends up being that we KNOW there's a difference in outward emotional response between the groups, but there's no longitudinal data (which is what would allow meaningful conclusions about what the between-group differences actually mean).
This is an IMPORTANT QUESTION, because these authors are trying to posit things about the emotional stability of the subjects over time. It would be a different story if it was about how loved ones reacted to them, but no, this is about autistic mental health.
Moving along, the authors are coding direction of gaze, and have decided that this indicates what the toddlers are attending to. This 👏 is 👏 not 👏 necessarily 👏 true. Many autistic people don't look directly at things they're paying attention to.
I'd at this point like to rewind to the selection criteria for the autistic participants. They were evaluated using the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2) Toddler Module, which I'd initially glanced at and decided not to write about, BUT THEN I REALISED.
There are 2 scoring algorithms for the ADOS-2 Toddler Module (one for 12-20 months, or 21-30 months if nonverbal; one for verbal 21-30-month-olds). The 12-20/21-30NV version is mostly (you guessed it) about eye contact, gaze, "facial expressions directed to others", "intonation of vocalizations or verbalizations"
Basically, the exact things that qualified the autistic toddlers to be in the study in the first place (!) are being measured by the study (as against a control group). And we're supposed to be surprised by the results?
The researchers then coded emotional regulation (ER) strategies by category, and scored each for presence or absence in each trial. Mind, there wasn't any distinction between using an ER strategy for 1 second versus 15. For some (unexplained) reason, we're measuring *variety*.
Funny, because I feel like duration and intensity measures of things like thumb-sucking and hand flapping might...tell you more about autistic emotional states than the other stuff, but psh, what do I know?
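To make the coding-scheme complaint concrete, here's a wee toy sketch (entirely hypothetical data and made-up function names, not the study's actual pipeline) of what binary presence/absence coding throws away compared with duration coding:

```python
# Toy illustration (hypothetical data): why binary presence/absence coding
# of emotion-regulation (ER) strategies loses information that duration
# coding would keep.

# Each trial log: list of (strategy, start_sec, end_sec) events.
toddler_a = [("thumb_sucking", 2, 3)]                             # 1 second total
toddler_b = [("thumb_sucking", 1, 9), ("thumb_sucking", 12, 19)]  # 15 seconds total

def binary_coding(trial):
    """Presence/absence scoring: was each strategy used at all?"""
    return {strategy for strategy, _, _ in trial}

def duration_coding(trial):
    """Alternative scoring: total seconds each strategy was used."""
    totals = {}
    for strategy, start, end in trial:
        totals[strategy] = totals.get(strategy, 0) + (end - start)
    return totals

# Under binary coding the two toddlers are indistinguishable...
assert binary_coding(toddler_a) == binary_coding(toddler_b)
# ...even though their actual strategy use differs 15-fold.
print(duration_coding(toddler_a))  # {'thumb_sucking': 1}
print(duration_coding(toddler_b))  # {'thumb_sucking': 15}
```

Two toddlers with wildly different self-soothing behaviour come out identical under presence/absence scoring, which is exactly the information the *variety* measure discards.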
Here's the thing about psych research: it relies on what are called operationalizations, which is a fancy way of saying that you generally can't directly measure the thing you want to know about, so you have to figure out something you *can* measure that correlates with it
But you have to be careful how many assumptions you make about how well that proxy correlates, and whether it's reliable with all populations (spoiler alert: nothing is). A proactive researcher looks for potential issues with their tools.
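Here's a toy sketch of that population-reliability problem, with entirely made-up numbers (none of this is real data): a proxy like "looked at the probe" standing in for "attended to the probe" can work fine in one group and quietly fall apart in another.

```python
# Toy sketch (made-up numbers): an operationalization is a measurable
# proxy for the thing you actually care about. Here "looked at the probe"
# stands in for "attended to the probe" -- a proxy that may hold for one
# population and fail for another.

# (attended, looked) pairs per observation, all hypothetical:
neurotypical = [(True, True)] * 18 + [(True, False)] * 2
autistic     = [(True, True)] * 8 + [(True, False)] * 12  # attends without looking

def proxy_hit_rate(observations):
    """Fraction of genuinely-attending observations the proxy detects."""
    attending = [looked for attended, looked in observations if attended]
    return sum(attending) / len(attending)

print(proxy_hit_rate(neurotypical))  # 0.9
print(proxy_hit_rate(autistic))      # 0.4
```

Same instrument, same coding rules, and the proxy silently undercounts attention in one group, which is exactly the gaze-direction problem above.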
Further, when you get into too many layers of abstraction or you start to involve a lot of predeveloped tools, you can lose sight of the fact that your independent variable (the thing that differs between your experimental group and the control) and dependent variable (what
you're measuring) are the same fucking thing. Also, I have a feeling I know how they got this past the IRB, and it's partly due to the description of the "fear-inducing probes" as "adapted with minor adjustments" from something that seems fairly benign when used in full.
When you adapt a tool, you need to be thoughtful about how you're adapting that tool, and what the changes mean for the functionality and ethics of the tool. Eliminating components explicitly intended to soothe children is BAD. DON'T DO THAT.
You can follow @LpsdSolipsist.