CHES Area IV: Evaluation and Research in Health Education
If there is one Area of Responsibility that consistently challenges CHES and MCHES candidates, it is Area IV: Conducting Evaluation and Research Related to Health Education and Promotion. This Area is one of the most heavily tested sections on both exams, and for good reason. Evaluation and research are what distinguish professional health education from well-intentioned guesswork. They provide the evidence that programs are working, the data that justifies continued funding, and the insights that drive program improvement.
Area IV builds on the work done in all preceding Areas. The needs assessment from Area I generates baseline data. The objectives from Area II provide measurable targets. The implementation monitoring from Area III produces process data. Area IV pulls these threads together into a systematic evaluation of whether a program achieved what it set out to do. For a broader perspective on how evaluation fits within the full scope of practice, see our NCHEC Areas of Responsibility overview.
What Area IV Covers
Area IV addresses the competencies and sub-competencies related to designing evaluation plans, developing data collection instruments, analyzing and interpreting data, and applying findings to improve programs. Health education specialists working within this Area are expected to develop evaluation plans aligned with program objectives, select appropriate research designs, collect and manage data, apply both quantitative and qualitative analysis methods, and communicate findings to stakeholders.
The breadth of this Area is significant. It spans evaluation planning, instrument development, data collection, statistical analysis, qualitative analysis, reporting, and the application of findings. Each of these topics has multiple sub-competencies, which is why our preparation course dedicates 16 videos specifically to Area IV, more than any other Area of Responsibility.
Key Concepts to Master
Types of Evaluation
Understanding the different types of evaluation is fundamental to Area IV. Each type serves a distinct purpose and is conducted at a different stage of a program's life cycle.
Process evaluation examines how a program is being implemented. It asks whether the program is reaching the intended audience, whether activities are being delivered as planned, and whether the logistics and operations are functioning effectively. Process evaluation happens during implementation and provides real-time feedback that can be used to make adjustments.
Outcome evaluation measures the short-term and intermediate effects of a program. It assesses whether participants experienced changes in knowledge, attitudes, skills, or behaviors as a result of the intervention. Outcome evaluation is directly tied to the SMART objectives written during planning.
Impact evaluation looks at the long-term, broad effects of a program on the population or community. This might include changes in disease rates, hospitalization rates, or other population-level health indicators. Impact evaluation requires more time and more sophisticated research designs than outcome evaluation.
Formative evaluation is conducted during program development or early implementation to improve the program before it is fully delivered. Pilot testing, which was discussed in Area III, is a form of formative evaluation.
Summative evaluation is conducted at the end of a program to determine its overall effectiveness and inform decisions about continuation, expansion, or termination.
Exam questions frequently ask you to distinguish between these evaluation types. A common question format presents a scenario and asks you to identify which type of evaluation is being described or which type should be conducted next.
Pro Tip: To distinguish process from outcome evaluation quickly, remember that process evaluation asks "Did we do what we said we would do?" while outcome evaluation asks "Did what we did make a difference?" This distinction appears frequently on the exam.
Research Design Basics
Health education specialists are expected to understand the fundamentals of research design, even if they are not conducting original research. The three broad categories tested on the exam are experimental, quasi-experimental, and observational designs.
Experimental designs involve random assignment of participants to intervention and control groups. The randomized controlled trial (RCT) is the gold standard for establishing causation, but it is not always feasible or ethical in health education settings.
Quasi-experimental designs lack random assignment but still include a comparison group or use other strategies to strengthen causal inference. Common examples include the non-equivalent control group design and the interrupted time series design. These designs are widely used in health education because they can be implemented in real-world settings where randomization is not possible.
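If you want to see the logic of an interrupted time series design made concrete, here is a minimal sketch of a segmented regression in Python (using statsmodels). The monthly visit counts, the intervention month, and the variable names are all hypothetical, invented purely for illustration:

```python
# A minimal sketch of an interrupted time series analysis via segmented
# regression. All data below are hypothetical, for illustration only.
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly clinic-visit counts: 12 months before and
# 12 months after a health education campaign begins at month 12.
visits = np.array([40, 42, 41, 43, 44, 43, 45, 44, 46, 45, 47, 46,
                   52, 54, 55, 57, 58, 60, 61, 63, 64, 66, 67, 69])
months = np.arange(len(visits))
post = (months >= 12).astype(int)                 # 1 after the intervention starts
time_since = np.where(months >= 12, months - 12, 0)  # months since intervention

# Segmented regression: baseline trend, level change, and slope change.
X = sm.add_constant(np.column_stack([months, post, time_since]))
model = sm.OLS(visits, X).fit()
print(model.params)  # intercept, pre-trend, level shift, trend change
```

The three coefficients after the intercept separate the pre-existing trend from the level shift and slope change that follow the intervention, which is precisely how this design strengthens causal inference without randomization.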
Observational designs do not involve an intervention. They include cross-sectional studies, cohort studies, and case-control studies. These designs are used to describe populations, identify associations, and generate hypotheses, but they cannot establish causation on their own.
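As one concrete example of how an observational design quantifies an association without establishing causation, here is a minimal sketch of an odds ratio from a case-control 2x2 table. All counts are hypothetical, invented for illustration:

```python
# A minimal sketch of the odds ratio a case-control study reports.
# All counts are hypothetical.
exposed_cases, unexposed_cases = 40, 60        # cases with / without the exposure
exposed_controls, unexposed_controls = 20, 80  # controls with / without it

odds_ratio = (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)
print(f"odds ratio = {odds_ratio:.2f}")  # OR > 1 suggests an association, not causation
```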
The exam tests your ability to identify research designs by their characteristics and to select the most appropriate design for a given evaluation scenario.
Data Collection Instruments and Methods
Area IV expects you to understand how to develop and select data collection instruments. Common instruments include surveys, questionnaires, interview guides, observation checklists, and existing records or databases.
When developing a new instrument, health education specialists must consider the constructs being measured, the response format, the reading level of the target population, and the need for pretesting the instrument before full deployment. When selecting an existing instrument, you should evaluate whether it has been validated for use with a similar population and whether it measures the specific outcomes your program targets.
The exam may ask about the advantages and disadvantages of different data collection methods, the steps involved in instrument development, or the criteria for selecting an existing instrument.
Quantitative vs. Qualitative Analysis
Quantitative analysis involves numerical data and statistical methods. Health education specialists should be familiar with basic descriptive statistics such as means, medians, modes, and standard deviations, as well as inferential statistics such as t-tests, chi-square tests, and correlation analyses. You should understand what each test is used for and when it is appropriate, though you will not need to perform calculations on the exam.
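You will not run code on the exam, but seeing these statistics executed once can make the terminology concrete. Here is a minimal sketch using scipy, with hypothetical pre/post knowledge scores invented for illustration:

```python
# A minimal sketch of the descriptive and inferential statistics named above,
# run on hypothetical pre/post knowledge scores (all numbers invented).
import numpy as np
from scipy import stats

pre = np.array([62, 70, 65, 58, 74, 69, 61, 66])    # hypothetical pretest scores
post = np.array([75, 82, 78, 70, 85, 80, 73, 79])   # hypothetical posttest scores

# Descriptive statistics: mean, median, standard deviation
print("mean:", post.mean(), "median:", np.median(post), "sd:", post.std(ddof=1))

# Paired t-test: did scores change significantly from pre to post?
t_stat, p_value = stats.ttest_rel(pre, post)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Correlation between pretest and posttest scores
r, p = stats.pearsonr(pre, post)
print(f"r = {r:.2f}")
```

A paired t-test fits here because the same participants are measured twice; an independent-samples t-test (stats.ttest_ind) would compare two different groups, and a chi-square test would compare categorical proportions. Knowing which situation calls for which test is exactly what the exam probes.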
Qualitative analysis involves non-numerical data such as interview transcripts, open-ended survey responses, and focus group recordings. Common qualitative analysis methods include thematic analysis, coding, and content analysis. These methods identify patterns, themes, and meanings within the data.
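Thematic analysis itself is interpretive work, but the bookkeeping side, tallying how often each code appears across excerpts, is easy to automate. Here is a minimal sketch with hypothetical theme codes invented for illustration:

```python
# A minimal sketch of a content-analysis tally: counting how often each
# theme code appears across hand-coded interview excerpts (hypothetical data).
from collections import Counter

coded_excerpts = [
    ["access_barriers", "cost"],
    ["cost"],
    ["peer_support", "access_barriers"],
    ["peer_support"],
    ["access_barriers"],
]

code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```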
Many health education evaluations use mixed methods, combining quantitative and qualitative approaches to provide a more complete picture of program outcomes. The exam may test your understanding of when and why mixed methods are appropriate.
Validity, Reliability, and Bias
These three concepts are among the most frequently tested topics in Area IV.
Validity refers to whether an instrument or study measures what it is intended to measure. There are several types of validity, including content validity, construct validity, criterion validity, internal validity, and external validity. Internal validity refers to the confidence that observed effects are due to the intervention rather than confounding factors. External validity refers to the extent to which findings can be generalized to other populations or settings.
Reliability refers to the consistency of a measurement. A reliable instrument produces similar results under consistent conditions. Types of reliability include test-retest reliability, inter-rater reliability, and internal consistency.
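Of these, internal consistency has the most compact formula: Cronbach's alpha. Below is a minimal sketch of that calculation on a hypothetical item-score matrix, where rows are respondents and columns are survey items (all values invented for illustration):

```python
# A minimal sketch of Cronbach's alpha (internal consistency) on
# hypothetical survey data: rows = respondents, columns = items.
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = scores.shape[1]                                # number of items
item_variances = scores.var(axis=0, ddof=1)        # variance of each item
total_variance = scores.sum(axis=1).var(ddof=1)    # variance of total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```

A common rule of thumb treats an alpha of roughly 0.70 or higher as acceptable, though the appropriate threshold depends on the stakes of the measurement.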
Bias is any systematic error that distorts study findings. Common forms of bias in health education research include selection bias, response bias, social desirability bias, and recall bias. Understanding how to minimize bias through study design and data collection procedures is a core competency in Area IV.
Pro Tip: Create a two-column table with validity types on one side and reliability types on the other. For each type, write a one-sentence definition and a brief example. This reference tool is invaluable for quick review before the exam.
Using Evaluation Findings to Improve Programs
Evaluation is not an end in itself. The purpose of collecting and analyzing data is to generate findings that can be used to improve program quality, demonstrate accountability to funders, and contribute to the evidence base for health education practice.
Health education specialists must be able to interpret evaluation results, draw appropriate conclusions, communicate findings to diverse audiences, and make recommendations for program modification. The exam may present evaluation results and ask you to determine what action should be taken based on the findings.
Why This Area Challenges Many Candidates
Area IV is challenging for several reasons. It covers a wide range of topics, from evaluation planning to statistical analysis to research ethics. Many health education students have limited coursework in research methods or statistics, and the terminology can be intimidating. Additionally, the exam tests application rather than memorization, requiring you to analyze scenarios and make judgments about appropriate evaluation approaches.
The volume of content in Area IV is part of what makes it difficult. There are more sub-competencies to master, more vocabulary terms to understand, and more conceptual distinctions to keep straight than in most other Areas.
Study Strategies for Area IV
Given the breadth and depth of this Area, a structured approach is essential. Begin by reviewing the NCHEC competency framework for Area IV and mapping out all sub-competencies. Then organize your study around the major topic clusters: evaluation types, research design, data collection, analysis methods, and validity and reliability.
Use flashcards for vocabulary-heavy content like validity types, reliability types, and statistical tests. Practice with scenario-based questions that ask you to select the appropriate evaluation type or research design for a given situation.
If research methods are not your strength, consider supplementing your study with introductory resources on quantitative and qualitative methods. Understanding the logic behind research designs will make the exam questions more intuitive.
Connect your Area IV study to Area VII: Leadership and Management, where evaluation findings inform organizational decision-making and resource allocation. A well-constructed study plan should allocate extra time to Area IV given its weight on the exam.
Prepare for Your CHES or MCHES Exam — For Free
Area IV is one of the most heavily tested sections on the exam, which is why our free course dedicates 16 videos specifically to evaluation and research. Our 89-video preparation course covers all 8 Areas of Responsibility with scenario-based practice questions in every lesson. Created by an MCHES-certified health education specialist.
View the Free CHES & MCHES Prep Course →
Building Your Research Skill Set
The competencies in Area IV are not just exam topics. They are professional skills that you will use throughout your career. Every grant proposal you write will require an evaluation plan. Every program report you produce will rely on data analysis. Every time you advocate for continued funding, you will point to evaluation results that demonstrate your program's value.
Candidates who invest the time to truly understand evaluation and research, rather than merely memorize terms, find that these competencies serve them far beyond the testing center. They become the health education specialists who can not only deliver effective programs but prove that those programs work. And in a field that depends on evidence to justify its existence, that ability is indispensable.
This content is not affiliated with or endorsed by NCHEC. CHES and MCHES are registered trademarks of NCHEC.