administrative data: information routinely collected by agencies, organizations, or institutions that might be appropriate sources for analysis in evaluation research.
case study: detailed, qualitative description of a single case (or very small number of cases), including initial assessment, description of the intervention(s) applied, and observed outcomes.
clinically significant: in contrast to statistical significance, refers to the practical importance of a finding or observed difference in the data.
community-based participatory action research (CBPR): an approach to research formed out of a collaborative partnership between community-based members and individuals with research expertise to meet information needs of the various partners.
control group: a comparison condition in experimental study designs where participants do not receive the novel or experimental conditions delivered to the experimental group.
cost-related evaluation: research designed to answer questions concerning an intervention’s (or program’s) efficiency, particularly in relation to the extent of impact it has on the target problem.
cross-sectional study: data collection at one time point with a single unit of study (individuals, couples, families, or other elements); does not require participant retention over time.
DALYs (disability-adjusted life years): units designed to indicate “disease burden,” calculated to represent the number of years lost to illness, disability, or premature death (morbidity and mortality), often used as an outcome indicator in evaluating medical interventions.
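As a rough illustration, the standard formulation computes DALYs as years of life lost to premature death (YLL) plus years lived with disability (YLD). The sketch below uses hypothetical population figures and omits the discounting and age-weighting applied in some burden-of-disease studies:

```python
def dalys(deaths, years_lost_per_death, cases, disability_weight, avg_duration):
    """Simplified DALY = YLL + YLD.

    YLL = deaths * average years of life lost per death
    YLD = cases * disability weight (0 = full health, 1 = death) * average duration
    """
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * avg_duration
    return yll + yld

# Hypothetical figures for illustration only:
# 10 deaths losing 30 years each, 200 cases at weight 0.2 lasting 5 years
burden = dalys(deaths=10, years_lost_per_death=30,
               cases=200, disability_weight=0.2, avg_duration=5)
print(burden)  # 300 YLL + 200 YLD = 500 DALYs
```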
dependent variables: the variable(s) presumed to vary as a function of changes in the independent variable; sometimes called the outcome variable(s).
double-blind study: an intervention study approach where both the study participants and those delivering the intervention remain unaware of the group to which any participant has been assigned until the study’s conclusion (the “unblinding” phase).
effect size: a statistical means of quantifying the magnitude or size of a difference between groups or time points (in a longitudinal study) being compared.
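One common effect-size statistic is Cohen's d, the difference between group means divided by the pooled standard deviation. The sketch below uses made-up scores purely for illustration:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    # Pool the two sample variances, weighting each by its degrees of freedom
    pooled_var = ((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical outcome scores for a treated and a comparison group
treated = [12, 14, 15, 13, 16]
control = [10, 11, 12, 10, 12]
print(round(cohens_d(treated, control), 2))  # 2.27 (a very large effect)
```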
exclusion criteria: standards applied in screening potential study participants where meeting any of the criteria leads to an “ineligible to participate” decision.
experimental designs: a research study approach in which at least one variable is intentionally varied or manipulated (independent variables) and the influence of other variables is controlled to maximize the investigator’s ability to identify the effects of the manipulated variable on one or more outcome variables (dependent variables).
external validity: the extent to which conclusions based on observations about a sample can appropriately be generalized to a population or to other situations.
follow-up: collecting data to answer questions about the durability of intervention effects over time following completion of the intervention (as compared to immediate post-intervention data collection).
formative evaluation: evaluation designed to address intervention planning questions, such as feasibility or needs assessment (as opposed to process or summative evaluation).
homophily: the tendency for similar individuals to aggregate or associate together, separately from individuals who are different.
inclusion criteria: standards applied in screening potential study participants where good fit with the sample criteria leads to an “eligible for participation” decision.
independent variables: the variable(s) intentionally varied or manipulated in an experimental design to determine their effect on an outcome (dependent variable).
internal validity: the extent to which potentially confounding factors are controlled in an experimental study, enhancing confidence in the main study results concerning the impact of the studied variables.
intervention fidelity: the degree to which implementation of an intervention replicates the original, studied intervention protocol with integrity.
longitudinal study: data collection at two (or more) time points with the same units of study (individuals, couples, families, or other elements); requires participant retention over time.
manualized intervention: one strategy to enhance fidelity and integrity in intervention implementation, involving the development and dissemination of a detailed logic model and implementation guidelines.
maturation: when change occurs as a result of naturally occurring developmental processes rather than resulting from application of an intervention intended to produce change.
measurement inconsistency: the degree to which a measurement tool fails to measure a construct consistently, introducing measurement error through individual differences in interpretation of the tool rather than actual differences in the events or experiences being measured.
measurement reliability: the degree of consistency or precision with which a tool measures the construct or variable of interest.
measurement sensitivity: the accuracy with which a measurement tool detects the problem of interest; the tool correctly identifies individuals meeting the criteria (“positives”) and misses few who should be classified as “positive” (few false negatives).
measurement specificity: the accuracy with which a measurement tool avoids erroneously identifying individuals as “positives” when they should not be (a low false-positive rate) and correctly identifies individuals as “negatives” when they should be.
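Sensitivity and specificity reduce to simple ratios over a screening tool's confusion matrix: sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP). The counts below are hypothetical, for illustration only:

```python
def sensitivity(tp, fn):
    """Proportion of true cases the tool correctly flags (true positive rate)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-cases the tool correctly clears (true negative rate)."""
    return tn / (tn + fp)

# Hypothetical screening results: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives
print(sensitivity(tp=90, fn=10))  # 0.9 -> the tool catches 90% of true cases
print(specificity(tn=80, fp=20))  # 0.8 -> the tool clears 80% of non-cases
```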
measurement validity: the extent to which a measurement tool or instrument adequately measures the concept, construct, or variable of interest; comprises several types of validity (construct, concurrent, predictive, and others).
naturalistic observation: collecting data about behavior occurring in its natural environment and context, where the observer is non-participant (compared to laboratory, contrived, or controlled circumstances and to participatory action research).
non-treatment control: a study design where the comparison group receives no form of intervention in contrast to the experimental group.
outcome or impact evaluation: evaluation designed to answer questions about the effects of an intervention (see summative evaluation).
participant recruitment: the process of engaging participants in a study.
participant retention: process of keeping participants engaged in a study.
participatory action research (PAR): an approach to intervention or evaluation research where investigators are involved in both observational aspects of research and as agents of change (action oriented); understanding comes from changing the situation and observing the impact of the change efforts.
participatory observation: a form of naturalistic or semi-naturalistic observation whereby the investigator is or becomes an integrated member of the group being studied.
participatory research: a set of approaches to research where participants direct the activities for change and for investigation in collaboration with investigators.
placebo effect: an effect produced by exposure to a neutral “treatment,” where the effects (positive/beneficial or negative/harmful) cannot reasonably be attributed to the treatment’s characteristics but rather to the experience of having some intervention delivered rather than none.
post-only design: data collection at one point in time after the intervention.
pre-experimental designs: study designs that lack control groups, suffering reduced internal validity as a result.
pre-/post-design: data collection at two comparison time points: before the intervention and immediately after the intervention.
pre-/post-/follow-up design: data collection at three comparison time points: before the intervention, immediately after the intervention, and at a significant point following the intervention.
primary data: information collected for the specific purposes for which it is used in a research study, tailored to the study’s selected aims, design, and variables.
process evaluation: evaluation research designed to answer questions about how an intervention was implemented (as differentiated from intervention outcomes, see summative evaluation).
proxy variables: variables that serve in place of other variables that could not be or were not directly measured; a proxy variable should have a close association (correlation) with the variable it represents.
quasi-experimental designs: research designs that include comparison (control) groups but lack random assignment to those groups.
random assignment: elements in an experimental study are assigned to study conditions in such a manner (randomly) that potential sources of group membership bias are minimized or eliminated; also called randomization.
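A minimal sketch of random assignment: shuffle the participant pool, then deal participants to conditions in turn, so that group membership is determined by chance rather than by any participant characteristic. The participant IDs and condition names are hypothetical:

```python
import random

def randomly_assign(participants, conditions=("experimental", "control"), seed=None):
    """Shuffle participants, then deal them round-robin into the conditions."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    pool = list(participants)
    rng.shuffle(pool)
    assignment = {condition: [] for condition in conditions}
    for i, participant in enumerate(pool):
        assignment[conditions[i % len(conditions)]].append(participant)
    return assignment

# Hypothetical study with 20 participants, IDs 1-20
groups = randomly_assign(range(1, 21), seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```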
randomized controlled trial (RCT): an experimental study design where participants are randomly assigned to different conditions, such as the novel experimental condition or a control group (which may be non-treatment, placebo, TAU, or a different intervention).
randomization: see random assignment.
random selection: elements (participants) in a study are selected from the population in a manner (randomly) that maximizes the sample’s representativeness of (generalizability to) the study’s target population and minimizes bias in the final selected sample.
screening and assessment tools: instruments used in practice either to identify persons who are at risk of or possibly experiencing the problem of interest (screening) or to determine the extent/severity of an identified problem (assessment).
secondary data: research data originally gathered to meet one study’s aims, design, and demands that can be re-analyzed in a separate study to meet a different study’s aims without collecting new primary data.
selection bias: the risk of non-representative results due to a non-representative sample.
spontaneous or natural change: changes that occur without or outside of intentional intervention efforts.
statistically significant: determination that an observed relationship between variables exists beyond what might be expected by chance.
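The “beyond chance” idea can be made concrete with a permutation test: shuffle the group labels many times and count how often a mean difference at least as large as the observed one arises by chance alone. A small resulting p-value (conventionally below .05) is reported as statistically significant. The scores below are hypothetical:

```python
import random
from statistics import mean

def permutation_p_value(group_a, group_b, n_permutations=5000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    combined = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(combined)  # reassign "labels" purely at random
        diff = abs(mean(combined[:len(group_a)]) - mean(combined[len(group_a):]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations

# Hypothetical outcome scores for two groups
p = permutation_p_value([12, 14, 15, 13, 16], [10, 11, 12, 10, 12])
print(p)  # a value well below .05, so the difference is unlikely due to chance
```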
summative evaluation: evaluation research designed to answer questions about the effects, impact, or outcomes of an intervention (as differentiated from intervention implementation, see process evaluation).
treatment as usual condition (TAU): an experimental design in which the control group receives the usual and customary form of intervention while the experimental group receives the novel intervention being tested.