Lesson 17 · Transdisciplinary Research

17. Qualitative Analysis as Meaning-Making

24 min

Before you start

  • Lesson 12: qualitative approaches
  • Familiarity with one analysis approach (thematic, narrative, grounded)
  • Comfort using qualitative software (NVivo, Dedoose, MAXQDA, or manual)

By the end you'll be able to

  • Identify emergent themes that honor participant voices
  • Use software as a tool, not a substitute for thinking
  • Establish rigor through transparency in meaning-making
  • Distinguish summary from interpretation
  • Document the analytic path for audit and dependability

Coding is interpretive, not mechanical

Qualitative coding looks procedural — read transcript, apply codes, organize by code — but every coding choice expresses a stance. The choice of what to mark as a code, what to leave unmarked, and what to call the code is interpretive. Pretending coding is mechanical hides the interpretive moves rather than disciplining them.

The transdisciplinary discipline is to own the interpretive stance and document it. Reflexivity is the rigor.

Immersion before coding

Premature coding produces codes that reflect the codebook, not the data. The codebook becomes a filter that decides what's visible.

The practical move: read the full corpus more than once before any structured coding. The first read is for the whole — themes, surprises, your own reactions. The second is to identify candidate codes that emerge from the corpus rather than ones imposed on it.

For grounded-theory and other inductive approaches, this immersion is the foundation of the analysis. For thematic-analysis and content-analysis approaches, it prevents codebook-driven thinness.

Codebook development

A codebook is the analytic instrument. A useful codebook contains:

  • Code name — short and stable
  • Definition — what does this code mark?
  • Inclusion criteria — what fits?
  • Exclusion criteria — what looks similar but doesn't fit?
  • Exemplar quote — what does it look like in the data?
  • Notes — analytic considerations, edge cases

Codebooks evolve. The version at the start of analysis differs from the version at the end. A versioned codebook is part of the audit trail.
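The fields above amount to a small data structure, and sketching it as one can make the versioning concrete. This is an illustrative sketch only — the field names and example entry are invented, not tied to NVivo, Dedoose, or any other package:

```python
from dataclasses import dataclass, field

@dataclass
class Code:
    """One entry in a codebook (illustrative structure, not a tool's schema)."""
    name: str            # short and stable
    definition: str      # what this code marks
    include: list        # inclusion criteria: what fits?
    exclude: list        # what looks similar but doesn't fit?
    exemplar: str        # verbatim quote showing the code in the data
    notes: str = ""      # analytic considerations, edge cases

@dataclass
class Codebook:
    version: str         # a versioned codebook is part of the audit trail
    codes: list = field(default_factory=list)

# Hypothetical entry for illustration
cb = Codebook(version="v3")
cb.codes.append(Code(
    name="avoidance",
    definition="Speaker deflects when a sensitive topic arises",
    include=["topic shift at a sensitive moment", "non-answer"],
    exclude=["genuine topic change unrelated to sensitivity"],
    exemplar="I don't know... anyway, what was the other question?",
))
```

Keeping each dated version of such a structure (rather than overwriting it) is what turns the codebook into audit-trail evidence rather than just an organizing device.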

Software supports; the researcher thinks

NVivo, Dedoose, MAXQDA, ATLAS.ti — each handles organization, retrieval, and basic visualization. None handles interpretation. The codes and themes the software stores are the output of the researcher's interpretive work, not a substitute for it.

Three software practices that align with rigorous analysis:

  • Memo every significant coding decision. Software supports memo-attachment to codes, segments, and documents. Use it. A coding choice without a memo is opaque to later audit.
  • Don't let the codebook structure substitute for analysis. A codebook organizes; it doesn't analyze. The analysis happens in the memos and themes you build from the coded data.
  • Use queries to test, not confirm. A query that returns what you expected is less informative than one that returns surprises. Run queries that could disconfirm.
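The test-versus-confirm distinction can be sketched independently of any particular package. A toy example, assuming coded segments are stored as segment-to-codes mappings (the segment IDs and code names are invented for illustration):

```python
# Toy coded corpus: each segment carries the set of codes applied to it.
segments = {
    "s1": {"fatigue", "protest"},
    "s2": {"fatigue"},
    "s3": {"connection", "performance"},
    "s4": {"fatigue", "connection"},
}

def query(codes_all=(), codes_none=()):
    """Return segment IDs coded with all of codes_all and none of codes_none."""
    return [
        sid for sid, codes in segments.items()
        if set(codes_all) <= codes and not (set(codes_none) & codes)
    ]

# Confirming query: fatigue together with protest (the expected pattern).
print(query(codes_all=["fatigue", "protest"]))               # ['s1']

# Disconfirming query: fatigue WITHOUT protest — the surprises live here.
print(query(codes_all=["fatigue"], codes_none=["protest"]))  # ['s2', 's4']
```

The second query is the analytically interesting one: if it returns many segments, the "fatigue as protest" reading needs revisiting.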

A study that says "we used [software], so the analysis is rigorous" is making a category error.

Themes are claims, not categories

A theme is a claim about a pattern in the data. It is not a category label.

A useful theme:

  • Names a pattern, not a topic ("medication as relational object," not "medications")
  • Is grounded in multiple data sources or instances
  • Has at least one exemplar quote with surrounding context
  • Has analytic memos explaining why this pattern matters
  • Could be falsified by an alternative reading of the data

A thin theme reads as a topic heading. A robust theme reads as a finding.

Member checking: resonance, not approval

Member checking returns interpretations to participants for feedback. Two common misunderstandings:

  1. Member checking is not a vote. Participants don't decide whether findings are correct; they offer their reactions, which become additional data.
  2. Disagreement is data, not veto. When participants disagree with a finding, the disagreement tells you something — about the finding, about participants' positioning, about the framing.

A useful member-check report includes the participant reactions, the analytic team's responses, and the resulting changes (if any) — and explains the reasoning when no change was made.

Reflexivity as ongoing practice

Reflexivity is the practice of examining how the researcher shapes the research. Done well, it's an analytic move, not a confessional aside.

Practical reflexivity:

  • A reflexive memo after each interview: what surprised me? what am I tempted to discount? whose voice am I privileging?
  • A team-level reflexivity practice: bring reflexive memos to analysis meetings and use them to interrogate emerging interpretations
  • A documented positionality statement that names training, history, and stake — and that analyzes how each could shape what you see
  • An explicit alternative-explanations memo for each major theme: what else could this be?

Reflexivity without artifacts is hard to audit. Reflexivity with artifacts is part of dependability.

Establishing rigor

Trustworthiness criteria (from Module 1) translate to specific analytic practices:

  • Credibility — triangulation across data sources, prolonged engagement, member checking with grounded interpretation of the feedback
  • Transferability — thick description that lets readers judge applicability
  • Dependability — audit trail of coding decisions, codebook versions, memos
  • Confirmability — reflexivity statement and alternative-explanations memos

Each criterion has its own evidence. A study that claims trustworthiness without showing the evidence is making a rhetorical claim.

Common failure modes

  • Reporting themes without quotes. A theme without exemplar quotes is uninterpretable.
  • Quotes without context. A quote stripped of context can support multiple interpretations; include enough surrounding material for readers to judge.
  • Single-coder analysis with no transparency about positionality. The coder's stance shaped the codes; without documentation, readers can't account for it.
  • Inter-rater reliability as the rigor. Cohen's kappa is appropriate for some kinds of qualitative work (especially content analysis), not others. For interpretive work, negotiated coding or audit trails matter more.
  • Saturation as sample-size justification. Saturation is a claim about the data, not the sample. Justify with evidence (e.g., no new codes in the last three transcripts), not assertion.
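The saturation evidence mentioned above ("no new codes in the last three transcripts") is checkable mechanically. A minimal sketch, assuming codes are recorded per transcript in collection order (the code names are toy data):

```python
# Codes observed in each transcript, in collection order (toy data).
transcripts = [
    {"escapism", "connection"},
    {"comparison", "connection"},
    {"fatigue"},
    {"connection", "fatigue"},
    {"comparison"},
    {"fatigue", "escapism"},
]

def new_codes_in_tail(per_transcript, tail=3):
    """Codes appearing in the last `tail` transcripts but nowhere earlier."""
    seen_before = set().union(*per_transcript[:-tail])
    tail_codes = set().union(*per_transcript[-tail:])
    return tail_codes - seen_before

# Empty set = evidence consistent with saturation; non-empty = keep collecting.
print(new_codes_in_tail(transcripts, tail=3))  # set()
```

An empty result is evidence, not proof: saturation remains a claim about the data, and the check only documents that claim rather than settling it.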

A worked vignette

A team coding interviews with adolescents about social-media use settles on five candidate themes after first-pass coding. The themes are: escapism, connection, comparison, creativity, fatigue.

Quick check: are these themes or topics? Mostly topics. Reframe as claims:

  • Escapism as boundary-making (social media used to demarcate space from family demands)
  • Connection as performance (interactions oriented toward audience visibility rather than dyadic intimacy)
  • Comparison as identity work (comparison not as harm but as ongoing positioning)
  • Creativity as labor (content creation experienced as work, not play)
  • Fatigue as protest (disengagement as deliberate refusal, not passive exhaustion)

The reframed themes are claims grounded in the data. Each requires an exemplar quote with context, a memo explaining why the team reads the pattern this way, and an alternative explanation considered and rejected. The result is a finding rather than a topic map.

Closing

Coding is interpretive; immerse before coding. The codebook organizes; the researcher analyzes. Themes are claims, not categories. Member checking surfaces resonance; disagreement is data. Reflexivity through artifacts is auditable rigor. Trustworthiness criteria translate to specific practices with evidence.

Next: integrative analysis across data streams — convergence, divergence, complementarity, and the DIKW hierarchy in research write-up.

Common mistakes

These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.

  • Coding without a stance

    Every coding choice expresses a stance about what matters. Pretending the coding is neutral hides the analytic move. Better: name your stance and let readers judge it.

  • Letting software organize you out of thinking

    Software is good at organization, fast retrieval, and provenance. It is not good at interpretation. Keep an analytic memo trail that captures why you coded as you did.

  • Reporting themes without examples

    A theme without grounded examples reads as a category label. Each theme needs at least one verbatim quote and a contextual note so readers can evaluate the interpretation.

Practice problems

Try each on paper first. Click Show solution only after you've made a real attempt.

  1. Problem 1
    Take a short transcript excerpt and code it. Write a memo on why you coded what you coded.
    Show solution

    The memo is the rigor. A code with a memo is interpretable; a code without one is opaque. A good memo names the analytic move ('I coded this as Avoidance because the speaker shifts subject when the topic of cost arises, even though they don't explicitly say so').

  2. Problem 2
    Identify one bias your analytic position introduces and how you'll address it.
    Show solution

    Bias mitigation in qualitative work is rarely about elimination — biases shape what we see — but about visibility. Documenting the position, the risk, and the mitigation turns implicit bias into auditable rigor.

Practice quiz

  1. Question 1
    What is the role of qualitative analysis software at its best?
  2. Reflection 2
    Name three forms of evidence that establish dependability in qualitative analysis.

Lesson 17 recap

  • Coding is interpretive; name your stance
  • Software is a tool; the researcher does the thinking
  • Every theme needs grounded examples
  • Documentation of the analytic path is the rigor

Coming next: Lesson 18 — Integrative Analysis & Interpretation

  • Next: integrative analysis across data streams
  • Convergence, divergence, complementarity
  • From raw findings to actionable insight
