8. Designing for Complexity
Before you start
- Lessons 6–7: ecosystem search and boundary-work synthesis
- Familiarity with at least one research design (e.g., RCT, case study)
- Comfort with the idea that designs can adapt mid-study
By the end you'll be able to
- Distinguish linear from adaptive design
- Match design to problem complexity
- Identify when complexity forces methodological integration
- Compare validity concepts across paradigms
- Build coherence in multi-method studies
Matching design to problem
Research design is supposed to be a function of the question. In practice, design often follows training: a researcher trained in RCTs runs RCTs, a researcher trained in ethnography runs ethnographies. This produces work that fits the researcher rather than the problem.
The transdisciplinary discipline is matching design to the problem's complexity. Some problems are complicated (many parts, knowable answer) and call for linear designs. Some are complex (interacting actors, emergent behavior) and call for adaptive, multi-method, or systems-aware designs. Mixing them up is a methods error.
Linear vs. adaptive design
A linear design locks the protocol before data collection and runs it through. Strengths: reproducibility, clarity, sample-size planning. Weaknesses: when the problem turns out to be different from what was expected, the protocol can't accommodate the difference.
An adaptive design specifies in advance what would trigger a design change and what the change would be. Pre-specified flexibility is the key phrase. Strengths: responsiveness to emerging information without losing rigor. Weaknesses: takes more design work upfront and requires governance for change decisions.
Adaptive designs are not "we'll figure it out as we go." That's drift, not adaptation. The rigor of an adaptive design is in the pre-specification:
- What triggers a change? (Defined criteria — recruitment lag, emerging theme, drift in baseline characteristics.)
- What change is licensed? (Specified, not open-ended.)
- Who decides? (Pre-named decision-makers and process.)
- How is the change documented? (Audit trail.)
Without these four, "adaptive" is just opportunism.
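To make the pre-specification concrete, here is a minimal sketch of the four elements as a data structure. It's written in Python; all names and the logging format are illustrative, not drawn from any standard protocol schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AdaptiveTrigger:
    """One pre-specified trigger: criterion, licensed change, deciders, audit trail."""
    criterion: str               # what fires the trigger (defined, measurable)
    licensed_change: str         # the one change this trigger permits
    decision_makers: list[str]   # pre-named people who may approve the change
    audit_log: list[str] = field(default_factory=list)

    def fire(self, evidence: str, decided_by: str) -> None:
        """Record that the trigger fired and the licensed change was approved."""
        if decided_by not in self.decision_makers:
            raise PermissionError(f"{decided_by} is not a pre-named decision-maker")
        self.audit_log.append(
            f"{date.today().isoformat()}: {evidence} -> {self.licensed_change} "
            f"(approved by {decided_by})"
        )

# Hypothetical trigger, filled in before data collection begins.
recruitment_lag = AdaptiveTrigger(
    criterion="enrollment below 80% of plan at week 8",
    licensed_change="expand recruitment to a second clinic site",
    decision_makers=["PI", "study coordinator"],
)
recruitment_lag.fire("week 8 enrollment at 72% of plan", decided_by="PI")
```

Note how the structure enforces the four questions: an undefined criterion, an open-ended change, or an unnamed decider simply can't be expressed.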
Validity across paradigms
Mixed-method studies need validity concepts from each paradigm in play. Importing one paradigm's validity concepts wholesale produces an unfair appraisal of the other strand.
- Internal validity (positivist) — does the design support a causal claim?
- Statistical conclusion validity (positivist) — is the statistical test appropriate, powered, and correctly interpreted?
- Construct validity (post-positivist) — do the measures capture the constructs they claim to?
- External validity / generalizability (post-positivist) — to what populations and contexts do findings extend?
- Credibility (interpretivist) — do findings ring true to the phenomenon and participants?
- Transferability (interpretivist) — can readers judge whether findings apply to their context, based on thick description?
- Dependability (interpretivist) — is the analytic process documented for audit?
- Confirmability (interpretivist) — are findings grounded in data rather than researcher preconceptions?
- Catalytic validity (critical/participatory) — does the study advance change for the community studied?
A mixed-method study addresses the validity concepts of each strand. You don't average them; you honor each.
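One way to honor each strand is to plan validity checks per paradigm before the study starts. A minimal sketch of such a checklist follows; the concepts come from the list above, but the study-specific checks are invented for illustration:

```python
# Hypothetical per-strand validity plan for a mixed-method study.
validity_plan = {
    "quant strand (post-positivist)": {
        "internal validity": "randomized assignment; pre-registered analysis",
        "construct validity": "validated scale for the primary outcome",
    },
    "qual strand (interpretivist)": {
        "credibility": "member-checking of themes with participants",
        "dependability": "documented codebook revisions and audit trail",
    },
}

# Each strand is appraised on its own terms; nothing is averaged across strands.
for strand, checks in validity_plan.items():
    for concept, plan in checks.items():
        print(f"{strand}: {concept} -> {plan}")
```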
When complexity requires methodological integration
Some problems can be coherently addressed with a single method. Others can't: for them, the integration of multiple methods is constitutive of the question.
Signs that integration is required:
- The question explicitly asks about both magnitude and meaning ("how widespread is X, and what does it mean to those experiencing it?")
- Mechanism is unclear — quant findings need qual to explain heterogeneity
- The phenomenon is contested — different stakeholders define it differently, and you need each definition's evidence
- Generalizability claims depend on understanding context
- Action is the goal, and the action requires both magnitude (for scale) and meaning (for buy-in)
When these signs are absent, a single-method study may be the cleaner choice. Mixed isn't always better — it's better when the question demands it.
Mixed in name only
A common failure mode: a study that calls itself mixed-methods because it includes both a survey and interviews, but reports them in separate sections that never inform each other.
Test: can you delete one strand and still reach the same conclusions? If yes, the strands weren't integrated. The point of mixed methods is that the integrated conclusion is unavailable from either strand alone.
The fix is at the design stage — specifying integration points (sampling, analysis, interpretation) and integration products (joint displays, meta-inferences) before data collection. We'll cover these in Module 3.
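Ahead of Module 3's full treatment, a skeleton helps show what an integration product looks like. This sketch builds a joint display with pandas; every cell is placeholder text, and the column names are this lesson's invention:

```python
import pandas as pd

# Minimal joint display: quant result and qual finding side by side per facet,
# with a meta-inference column where the two strands are reconciled.
joint_display = pd.DataFrame({
    "facet": ["uptake", "barriers"],
    "quant_result": [
        "62% adoption (n=410)",
        "dropout higher in rural stratum",
    ],
    "qual_finding": [
        "adopters cite peer endorsement",
        "travel time dominates interviews",
    ],
    "meta_inference": [
        "adoption spreads through peer networks",
        "rural dropout reflects access, not motivation",
    ],
})
print(joint_display.to_string(index=False))
```

The test from above applies directly: if the meta-inference column could be written from one strand alone, the display is decoration, not integration.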
Adaptive design in practice
A practical example. A team is studying a community health worker intervention. The pre-specified adaptive triggers:
- Recruitment lag: if enrollment is below 80% of plan at 8 weeks, the team will expand to a second clinic site (specified in protocol).
- Emergent theme: if interviewers report a recurrent participant concern not in the guide, the guide will be updated and prior transcripts re-coded with the new probe.
- Differential dropout: if dropout exceeds 20% in any demographic stratum, the team will pause new enrollment and convene the community advisory board to diagnose the cause.
Each trigger is documented, each change is logged, and the final report describes which triggers fired. The study is more rigorous for having the triggers, not less.
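As a sketch, the first and third triggers could be checked mechanically against monitoring data. The thresholds come from the example above; the function names and numbers are hypothetical:

```python
def recruitment_lag_fired(enrolled: int, planned: int, week: int) -> bool:
    """Pre-specified trigger: enrollment below 80% of plan at week 8."""
    return week >= 8 and enrolled < 0.8 * planned

def dropout_strata_breached(dropout_by_stratum: dict[str, float]) -> list[str]:
    """Pre-specified trigger: dropout above 20% in any demographic stratum.
    Returns the strata that breach the threshold."""
    return [s for s, rate in dropout_by_stratum.items() if rate > 0.20]

# Illustrative week-8 monitoring check.
assert recruitment_lag_fired(enrolled=72, planned=100, week=8)
assert dropout_strata_breached({"urban": 0.12, "rural": 0.27}) == ["rural"]
```

The emergent-theme trigger resists this treatment: it needs human judgment, which is exactly why the protocol pre-names who exercises it.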
Coherence in multi-method studies
A multi-method study is coherent when its strands inform each other and reach a defensible integrated interpretation. Coherence has three layers:
- Design coherence — the methods are chosen because they answer different facets of the same question
- Sampling coherence — strands sample purposefully relative to each other (e.g., qual sample drawn from extreme-scoring quartiles of quant strand; sketched below)
- Analytic coherence — analysis includes integration points where strands speak to each other
Without coherence, multi-method is just parallel mono-method studies bundled together. With it, the integration is the contribution.
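The quartile example under sampling coherence can be made concrete. A sketch with pandas; the scores and column names are made up for illustration:

```python
import pandas as pd

# Illustrative quant-strand scores; in a real study these come from the survey.
scores = pd.DataFrame({
    "participant_id": range(1, 13),
    "score": [12, 45, 33, 78, 91, 22, 67, 5, 88, 54, 40, 73],
})

# Extreme-case sampling: invite the bottom and top quartiles to interviews,
# so the qual strand is sampled relative to the quant strand.
q1, q3 = scores["score"].quantile([0.25, 0.75])
qual_sample = scores[(scores["score"] <= q1) | (scores["score"] >= q3)]
print(qual_sample["participant_id"].tolist())
```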
Closing
Match design to problem complexity. Linear works for complicated; adaptive is for complex problems where pre-specified flexibility is rigorous. Validity concepts come from each paradigm in play and aren't interchangeable. Mixed-method studies must integrate, not just co-occur. Coherence shows up in design, sampling, and analysis.
Next: experimental and interventional research in a transdisciplinary frame — applying design thinking to study design.
Common mistakes
These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.
Confusing adaptive with sloppy
Adaptive design is pre-specified flexibility — you decide in advance what triggers a change and how. It is not 'we'll figure it out as we go.' Without pre-specification, adaptive becomes opportunistic.
Calling a study 'mixed methods' because it has a survey + interviews
Methodological integration requires that quantitative and qualitative findings inform each other — in design, analysis, and interpretation. Running them in parallel and reporting them in separate sections is co-occurrence, not integration.
Importing one paradigm's validity concepts wholesale
Internal validity (causal claims) is a positivist concept. Trustworthiness (credibility and transferability) is its interpretivist parallel. A mixed study needs to honor both, not declare one supreme.
Practice problems
Try each on paper first. Read the solution only after you've made a real attempt.
- Problem 1: Take a study idea and write one paragraph specifying when the design would adapt and what trigger you'd use.
Solution: The pre-specification is the rigor. Example: 'If after 12 interviews fewer than 4 themes appear, expand sampling to two new sites. If a theme appears that the original guide does not address, add probes and update the guide; flag and re-analyze prior transcripts.'
- Problem 2: Identify one validity concept your study needs from each paradigm in play.
Solution: Example: positivist (internal validity for the experimental component) + interpretivist (credibility, member-checking for the qualitative component). The study must address both; one cannot trade for the other.
Practice quiz
- Question 1: In one sentence, how is adaptive design best described?
- Reflection 2: Give one validity concept from each paradigm: positivist, post-positivist, interpretivist, critical.
Lesson 8 recap
- Match design to problem complexity; complicated and complex aren't the same
- Adaptive design = pre-specified flexibility, not improvisation
- Mixed methods integrate; parallel reporting is not integration
- Each paradigm has its own validity concepts; honor them rather than rank them
Coming next: Lesson 9 — Experimental & Interventional Research
- Design thinking applied to study design itself
- Co-design with participants, not on them