11. Reframing Quantitative Inquiry
Before you start
- Comfort with basic descriptive statistics and survey research
- Familiarity with the idea of bias in measurement
- Willingness to treat quantitative findings as situated, not absolute
By the end you'll be able to
- Reframe numbers as one way of knowing, not the gold standard
- Name the specific strengths and limits of survey and correlational research
- Apply cultural validity criteria to measurement
- Recognize epistemic injustice in instrument design
- Ask 'whose reality do these numbers represent?'
Numbers are one way of knowing
Quantitative methods have an undeniable record of producing reliable, generalizable knowledge in well-defined domains. The transdisciplinary frame doesn't reject quantitative inquiry; it locates it. Numbers are one way of knowing, with specific strengths and specific limits.
The strengths: measurement of scale and prevalence, detection of patterns invisible to single observers, comparison across populations, evaluation of intervention effect, prediction under stable conditions.
The limits: meaning is not measurable directly; context strips quickly under quantification; aggregation can hide variation that matters; the constructs being measured carry assumptions that may not transfer.
A useful starting move: every time you write a quantitative finding, write one sentence about what the number can claim and one about what it can't.
Surveys and correlations: specific strengths, specific limits
The two workhorse quantitative methods for non-experimental research are surveys and correlational analyses.
Survey research is strong for prevalence estimation, attitude measurement at scale, and comparison across known populations. It is weak when:
- The construct doesn't fit a Likert scale (lived experience, contested meanings)
- Respondents are answering in social-desirability mode rather than reflective mode
- The population is hard to reach with conventional sampling
- The phenomenon has rapid temporal dynamics that cross-sectional measurement can't capture
Correlational analysis is strong for hypothesis generation, pattern detection, and identifying candidates for intervention. It is weak when:
- Causal inference is required and the design doesn't support it
- Confounds are not measured or measured imprecisely
- The relationship is non-linear and the model assumes linearity
- The unit of analysis doesn't match the phenomenon (ecological fallacy lurks)
Naming the limits in your write-up is part of the rigor.
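The non-linearity limit is concrete enough to demonstrate. In the sketch below (synthetic data, standard library only), one variable perfectly determines another, yet Pearson's r reports essentially no relationship, because the dependence is quadratic rather than linear:

```python
# Pearson correlation misses a perfect non-linear (quadratic) relationship.
# Data are synthetic, for illustration only.
from statistics import mean

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

xs = [n / 10 for n in range(-50, 51)]   # symmetric around 0
ys = [x ** 2 for x in xs]               # y is fully determined by x

print(f"r = {pearson_r(xs, ys):+.6f}")  # vanishingly small despite the
                                        # deterministic x -> y relationship
```

Plotting the data, or fitting a model flexible enough to bend, is the usual guard against reading a near-zero r as "no relationship."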
Wicked problems and statistical leverage
For wicked problems, statistical methods have specific points of leverage:
- Estimating prevalence of conditions or attitudes where the community needs the number for advocacy or planning
- Estimating disparities across subgroups, which requires careful subgroup analysis with attention to interactions
- Building predictive models that surface high-risk groups for prioritization
- Estimating intervention effects in conditions where the intervention is manipulable
For each leverage point, ask: what does the number do for the community? A statistical estimate that doesn't move a decision is decorative. One that does — by giving advocates ammunition, by triggering a screening protocol, by directing resources — earns its place.
Cultural validity: meaning equivalence, not translation
A scale validated in one population may not measure the same construct in another. Cultural validity asks whether items mean the same thing to participants as they do to the scale developers.
The standard fix is cognitive interviewing in the target population, where:
- Each item is read by participants
- Participants paraphrase it
- The paraphrase is compared to the developer's intent
- Items that don't survive are revised or replaced
- The full revised scale is psychometrically re-validated
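The re-validation step can be sketched numerically. Below is a minimal Cronbach's alpha check on invented response data, comparing internal consistency with and without an item that cognitive interviewing flagged; the data, the scale, and the "suspect item" are all hypothetical:

```python
# After cognitive interviewing flags a suspect item, re-check internal
# consistency with and without it. Minimal Cronbach's alpha on made-up
# response data (rows = respondents, columns = items).
from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                         # number of items
    cols = list(zip(*rows))                  # per-item response vectors
    item_var = sum(pvariance(c) for c in cols)
    total_var = pvariance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [
    [4, 4, 5, 2], [3, 3, 4, 5], [5, 4, 5, 1],
    [2, 2, 3, 4], [4, 5, 4, 2], [3, 3, 3, 5],
]

full = cronbach_alpha(responses)
dropped = cronbach_alpha([r[:3] for r in responses])  # drop suspect item 4
# A near-zero or negative alpha signals an item working against the rest
# of the scale; here alpha rises sharply once item 4 is removed.
print(f"alpha, all items: {full:.2f}; without item 4: {dropped:.2f}")
```

In practice the full psychometric workup (factor structure, test-retest) goes well beyond alpha; this only illustrates the with/without comparison.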
Cultural validity is not the same as translation, even careful bilingual translation. An item perfectly translated word-for-word can still measure a different construct. The PHQ-9 item about "feeling bad about yourself" registers differently in a guilt-individualist culture than in an honor-shame one — same words, different construct.
Epistemic justice in measurement
Miranda Fricker's concept of epistemic injustice applies sharply to quantitative measurement. Epistemic injustice happens when a person or group is treated as a less-than-credible knower, or when the conceptual resources to express their knowledge are unavailable to them.
In measurement, this shows up as:
- Testimonial injustice — participants' self-reports are systematically discounted (e.g., women's pain reports dismissed as exaggerated)
- Hermeneutical injustice — the instrument doesn't have items for phenomena participants experience (e.g., a depression scale with no items for food-insecurity-driven affect)
The practical move is to audit instruments for what they can capture and what their structure makes invisible. Then either supplement the instrument or, when stakes are high enough, replace it.
Reductionism and its costs
Stripping context from a measurement is a method choice with consequences. A pain score of 7 out of 10 means something. It is not the same thing as "pain that wakes the patient every night, prevents work, and is judged by their family as transformative." The number compresses; the compression has costs.
For some uses, the compression is appropriate. For others, the lost context is the analysis you actually needed. Recognizing which is which is methodological maturity.
A useful heuristic: if a clinical or policy decision hinges on the number alone — independent of what's compressed — you may have stripped too much. Adding back even minimal context (one open-ended item, one stratifier) often changes the inference.
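The stratifier point can be made vivid with a classic pattern, Simpson's paradox, here with invented counts: the aggregate comparison and the within-stratum comparisons point in opposite directions, so a decision made on the unstratified number alone would go the wrong way.

```python
# One added stratifier can reverse an aggregate comparison (Simpson's
# paradox). Counts are invented: (successes, trials) per group and stratum.
data = {
    "A": {"mild": (81, 87),   "severe": (192, 263)},
    "B": {"mild": (234, 270), "severe": (55, 80)},
}

overall, within = {}, {}
for grp, strata in data.items():
    s_all = sum(s for s, _ in strata.values())
    n_all = sum(n for _, n in strata.values())
    overall[grp] = s_all / n_all
    within[grp] = {stratum: s / n for stratum, (s, n) in strata.items()}

print("overall:", {g: round(r, 2) for g, r in overall.items()})
print("within: ", {g: {k: round(r, 2) for k, r in d.items()}
                   for g, d in within.items()})
# Group A does better in BOTH strata, yet group B looks better overall:
# the compressed number alone points the decision the wrong way.
```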
A worked vignette
A team is evaluating a community-based blood-pressure intervention. They have systolic BP measurements at baseline and 6 months and a 12-item self-report adherence scale.
Mono-disciplinary frame: report mean change in BP, mean adherence, and the correlation between them.
Transdisciplinary frame: do all of the above, then:
- Disaggregate by subgroups whose pre-intervention contexts differ (housing-secure vs. housing-insecure participants)
- Cognitively interview the adherence scale in this population, finding that two items don't transfer well; analyze with and without those items
- Pair the BP outcome with a qualitative substudy on what participants did and didn't change in their daily routines
- Report effect sizes in both clinical and patient-meaningful terms
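The last bullet, dual effect-size reporting, can be sketched with invented measurements: the same outcome expressed as mean systolic change (clinical terms) and as the share of participants reaching blood-pressure control below 140 mmHg (one plausible patient-meaningful framing; the threshold and data are illustrative, not from the vignette's study).

```python
# Report one outcome two ways: mean systolic change (clinical) and the
# share of participants crossing a meaningful threshold (<140 mmHg).
# All numbers are invented for illustration.
baseline = [152, 148, 160, 155, 143, 149, 158, 151]
sixmonth = [141, 139, 150, 138, 136, 144, 147, 139]

deltas = [a - b for b, a in zip(baseline, sixmonth)]  # negative = improvement
mean_change = sum(deltas) / len(deltas)

controlled_before = sum(bp < 140 for bp in baseline) / len(baseline)
controlled_after = sum(bp < 140 for bp in sixmonth) / len(sixmonth)

print(f"mean change: {mean_change:.1f} mmHg")
print(f"below 140 mmHg: {controlled_before:.0%} -> {controlled_after:.0%}")
```

The two framings can diverge: a modest mean shift can move many participants across a clinically meaningful line, or almost none.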
The transdisciplinary version produces more findings, takes more time, and is more publishable in journals that value translational impact.
Closing
Numbers describe; they don't escape context. Surveys and correlations have specific leverage points and specific limits. Cultural validity is meaning equivalence, not translation. Epistemic justice asks whose knowledge our instruments make invisible. Reductionism is a method choice with costs.
Next: qualitative approaches as deep understanding — phenomenology, grounded theory, ethnography, and the trustworthiness criteria that govern rigor in interpretive work.
Common mistakes
These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.
Treating a validated instrument as culturally validated
Psychometric validity (reliable, factor-stable) is not cultural validity. An instrument validated on undergraduate psychology students in one country may misrepresent meaning when used elsewhere. Both validities are required.
Equating large N with generalizability
A large sample drawn from a single demographic generalizes to that demographic. The jump to 'humans in general' requires sampling logic, not sample size.
Naturalizing the absence of context
Stripping social context from data isn't neutral — it's a method choice that makes structural causes invisible. Naming the choice is part of the report.
Practice problems
Try each on paper first. Click Show solution only after you've made a real attempt.
- Problem 1: Take a well-known scale (PHQ-9, GAD-7, or another). Identify one item that might mean something different in two cultural contexts.
Show solution
PHQ-9 'feeling bad about yourself' lands differently in an honor-shame culture than in a guilt-individualist culture. Cognitive interviewing in the target population — not just translation — is the standard fix. The original item may need replacement, not just re-wording.
- Problem 2: Identify one quantitative finding from your field that depends on a hidden contextual assumption.
Show solution
The exercise surfaces the dependence between methodological choices and findings. A finding that 'breaks' under one contextual change is not necessarily wrong, but its generalizability is narrower than its publication usually implies.
Practice quiz
- Question 1: What does cultural validity ask of an instrument?
- Reflection 2: Define 'epistemic justice' in one or two sentences and give one example from measurement.
Lesson 11 recap
- Numbers describe; they don't escape context
- Cultural validity is meaning equivalence, not translation
- Large N ≠ generalizable beyond the sampled population
- Epistemic justice asks whose knowledge our measurement makes invisible
Coming next: Lesson 12 — Qualitative Approaches as Deep Understanding
- Phenomenology, grounded theory, ethnography
- From validity to trustworthiness