1/ Someone replied, "Say more please" so here's a quick thread on the thesis:

"Most multiple choice exams are trash: a play in 6 acts"

*Note: Education researchers have been doing this work a LONG TIME, pls give them a follow for more details & informed opinions. https://twitter.com/UMassWalker/status/1323231373176561668
2/ Act 1: Most m.c. exams are scored 1 correct answer = 1 pt & all items are weighted equally. This scoring assumes classical measurement theory, wherein all items are of equal "difficulty" & the exam as a whole is comprehensive & representative of the knowledge domain. Rarely is this true.
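The equal-weight assumption is easy to see in miniature. A minimal sketch with made-up response data (rows = students, columns = items): the 1-point-per-item total treats every item as interchangeable, even when observed difficulty varies wildly across items.

```python
# Hypothetical response matrix: 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 0],
]

# Classical 1-pt-per-item scoring: a very easy item and a very
# hard item move the total score by exactly the same amount.
totals = [sum(student) for student in responses]
print(totals)  # [2, 1, 3, 1]

# Observed item difficulty (proportion correct) tells another story:
n = len(responses)
difficulty = [sum(row[i] for row in responses) / n for i in range(3)]
print(difficulty)  # [1.0, 0.5, 0.25] -- items are NOT equally difficult
```

Everyone got item 1 right and only a quarter got item 3, yet both are worth the same point, which is exactly the assumption the scoring model never checks.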
3/ Act 2: CMT assumptions are rarely met because most m.c. exams used in classrooms lack a test blueprint or underlying theoretical framework. These are usually *implied* at best, not cited & wholly reliant on the faculty who make subjective guesses about what info "counts".
4/ Act 3: Standardized tests like m.c. exams in classrooms are rarely if ever 'standardized' or validated except in post-hoc analyses, opening door WIDE to embedded & unchecked ableism, racism & other -isms especially when these exams are timed, proctored & faculty-generated.
5/ Act 4: Most multiple choice exams test some element of recall/memorization, & are not open book. In this digital age, this plays right into the hands of #EdTech/ #SurveillanceTech purveyors who benefit from stoking adversarial relationships between students/faculty re: cheating.
6/ Act 5: To be of high quality, item stems & response sets in multiple choice exams should be "well-written, unambiguous & fair" HOWEVER unlike other instruments in nursing/health care these assumptions are never checked in peer-reviewed research or piloting with diverse users.
8/ Bonus encore: In Nursing education, the retort "But they need to practice for the NCLEX!" is a tired one, rarely if ever made explicit in course objectives, & NOT a replacement for sound pedagogy. Provide formative feedback on NCLEX-taking skill if you must but-
9/ But if summative evaluation of ACTUAL ability to apply knowledge/skills mapped to course objectives is required, make your theoretical assumptions & evaluation methods explicit and tailor them to those objectives NOT a deeply flawed entrance exam that uses Item Response Theory
10/ I continue to be astounded by the number of nurse educators who SWEAR BY their own faculty-generated multiple choice exams in the name of NCLEX prep but cannot name the measurement theory or statistics underlying NCLEX, those assumptions, limitations & implications. STOP IT.
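For readers who want a concrete picture of the gap being described: NCLEX-style computerized adaptive tests are built on item response theory (reportedly the Rasch, or one-parameter logistic, model), under which the probability of a correct answer is a function of person ability and item difficulty on a shared latent scale. A hedged sketch with illustrative numbers only:

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch model: probability a test-taker with ability theta
    answers an item of difficulty b correctly (logistic curve)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Same item (b = 0), three abilities: credit is probabilistic and
# ability-referenced, not a flat 1 pt per item.
for theta in (-1.0, 0.0, 1.0):
    print(round(p_correct(theta, b=0.0), 2))  # 0.27, 0.5, 0.73
```

That is a fundamentally different measurement model, with its own calibration requirements, than summing points on a faculty-written exam, which is the thread's point about practicing for one with the other.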
11/ Finally, there ARE lots of educators who use auto-generated item stats on m.c. exams to assess exam quality, but with no more critical thought than arbitrarily chosen cut-off scores for items of concern. This lends a false air of scientific weight to a deeply flawed practice.
12/ Just because the scanning software popped out a number (usually some sort of internal consistency stat) does not mean you understand what that number means, nor does it make your exam scientific or valid. Again, stop it. Let's stop using numbers to do violence to students.
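For context, the number the scanning software "pops out" for dichotomous items is typically KR-20, an internal-consistency statistic. A minimal sketch with made-up data showing what it actually measures, and what it doesn't:

```python
# Hypothetical response matrix: rows = students, cols = items.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]

k = len(responses[0])                  # number of items
n = len(responses)                     # number of students
totals = [sum(row) for row in responses]

# Population variance of total scores.
mean_t = sum(totals) / n
var_t = sum((t - mean_t) ** 2 for t in totals) / n

# Sum of p_i * q_i, where p_i = proportion correct on item i.
pq = 0.0
for i in range(k):
    p = sum(row[i] for row in responses) / n
    pq += p * (1 - p)

# KR-20: says items covary (students rank consistently across items).
# It says NOTHING about validity, fairness, or the knowledge domain.
kr20 = (k / (k - 1)) * (1 - pq / var_t)
print(round(kr20, 2))  # 0.8
```

A "good" KR-20 can coexist with ambiguous stems, untested assumptions, and embedded bias; consistency and validity are different properties, which is the thread's point.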