1/ Below, I'm compiling thoughts on writing up qual results, sparked by a recently completed referee report. There aren't always many codified norms, so some reflect my understanding of solid qualitative (here, largely interview and observation) data collection and analysis…

2/ as well as some strong personal (but, I think, reasonable) preferences. Of course, I welcome others to contribute & to refine my thoughts (incl. my future self). By commenting on what should be in the write-up, I'm obliquely making comments on collection & analysis processes too.
3/ For example, happy to hear from @PierottiRachael, @RachelStrohm, @Urmy_Shukla, @Qual_Analytics, and many others.
4/ I am largely speaking about qual in the context of public health and global development and am making no claims about the approach of historians and other scholars that work primarily with texts. I focus mostly on interview data, with some wider points.
5/ Remember that the goal of your write-up is to give your reader the tools they need to understand and believe your data and analysis, then to consider your interpretations in light of strengths and gaps in data collection and analysis. Work to convince!
6/ This point is wider than qual (though qual should be A1 on bringing in context) but: *please* help your reader situate your study in time with a timeline figure showing your data collection, the policy or phenom of interest, and other relevant contextual events.
7/ Next, make your research questions clear. If you have specific hypotheses or priors, even if you aren't formally testing or updating them, make them clear. If you were guided by a theory or framework, make that clear. If you went in tabula rasa and totally exploratory, make that clear too.
8/ Qual data comes in four key flavors: text (including interview notes & transcripts), talk/speech, observations, and images. Make clear what type(s) you have and why that was the appropriate type or combo given your research questions.
9/ Site & sample selection are important parts of credible qual work: you need people to believe you went to the right places, talked to the right people, read the right things, and observed the right things to make claims about them, with ample opportunity to be contradicted.
10/ Explain and justify how you chose the sites you are investigating. It is insufficient to say they were chosen "purposively" and leave it at that. What was the purpose? How do the selected sites fit into the universe of possible sites?
11/ Why should we think that your sites are sufficient, and what might have been missed? How do the ways in which your sites are similar to or different from one another help us build the understanding we want?
12/ Similar points on sample selection once you have your sites selected. Yes, theoretical and/or purposive sampling may be the most apt strategy to build your sample but you need to say why and you need to make some justification…
13/ Of the sufficiency of your sample composition and size. Ideally with reference to your research questions and hypotheses.
14/ What heterogeneous views or experiences were necessary to understand your phenom of interest? How well did you capture them? If there is a relatively small # of relevant stakeholders, did you speak with or observe them all? If not, how did you fill in the gaps?
15/ If the sample frame is much wider (e.g., not a small # of stakeholders involved in a particular decision), situate your sample in this wider context. Are they meant to be typical? Exceptional? Are they? (See: @evlieb's nested analysis.)
16/ For all primary data collection, explain what consent you received and what compensation you administered.
17/ Not all qualitative researchers think thematic saturation is an appropriate goal, but some reference to saturation, redundancy, or another way of showing that you have captured the relevant breadth and depth of views and experiences is important.
18/ If you do something with intention, explain it. For example, if you segregate focus groups by men and women, explain why this was important given the context and/or topic in order to get high-quality data. No points for no-reason sex segregation.
19/ Especially for interview and obs data, you need to discuss positionality vis-à-vis the respondent. Who was in the room (interviewer and notetaker)? What kind of benefits and drawbacks exist given interviewer characteristics…
20/ In terms of helping people open up and be honest, or inducing one or more forms of bias? These could be inherent qualities (sex, accent) and could be intentional (clothing and transportation choices, say).
21/ If positionality could have induced bias, what steps were taken to mitigate this? What threats remain of which the reader should be aware?
22/ How were interviews recorded? Audio? Video? Was there a notetaker there? How were observations of context and non-verbal cues incorporated into the interview notes or transcripts for analysis?
23/ For interviews & obs, how un/semi/fully structured were your guides, & why? Why was that approach right given the skill of the interviewer, the # of interviewers, and the research set-up (e.g., one interview vs. repeated interviews)? See, e.g., Bernard's anthro methods text.
24/ What were the interviewersâ opinions on the truthfulness of respondents? How is this accounted for in the analysis?
25/ Speaking of analysis... uploading data into qualitative software is not analysis. Deploying quotes is not analysis. Coding is not complete analysis: it means you have tagged and categorized your data to begin to make sense of it. Keep going.
26/ You do not need to use quotes in full, offset as big blocks, to show that you did qualitative work. A quote is a data point. If you want to show us one to illustrate a larger or unique point, make it clear that is what you are doing.
27/ You can also use smaller segments of quotes, integrated into the text, so that you use the respondents' words without disrupting the flow of your results narrative. A parade of quotes will rarely stick with the reader.
28/ Help the reader understand what you are trying to say. Is a code or a quote reflective of what many people said or did? Just one? What analytic value is it bringing?
29/ How do you build faith in your analysis? For example, did you use independent coders, with discussion and consensus-building? Did you engage in any member checking or validation, such as taking preliminary analysis back to respondents to see if they thought you were on the right track?
30/ Fin for now. I hope this is useful!