Module 2 Chapter 5: Integrating, Implementing, and Evaluating

By this point in the process, a social worker will have devoted a great deal of energy and effort toward generating a pool of evidence-supported materials which can help inform practice decisions. In reviewing and evaluating these materials, it is important to keep in mind the lessons learned in our earlier course concerning the problem with pseudoscience. As a reminder, the risk of relying on pseudoscience increases with authors making assumptions not based on evidence, failing to acknowledge contradictory evidence, relying on “shaky” evidence (poorly designed studies or purely anecdotal evidence), obscuring facts with artificially constructed lingo, over-interpreting the implications of study results, and/or circumventing the peer review process (Lilienfeld, Lynn, & Lohr, 2015; Thyer & Pignotti, 2015). At this point in the evidence-based practice process, social workers are challenged with the need to integrate what they learned from the literature to make evidence-informed practice decisions. Then they will implement the intervention decision and evaluate the process and outcomes.
In this chapter you:
  • learn practices for reviewing evidence presented in the empirical literature,
  • learn practices for critically appraising evidence presented in the literature,
  • are introduced to steps of integrating evidence, monitoring, and evaluating in practice.

Reviewing and Critiquing the Located Evidence

The same principles about how to review an empirical article or report that you learned in our earlier course (Module 2, Chapter 4) apply here. Practicing social work in the “Information Age” has both advantages and disadvantages. On one hand, we have access to vast amounts of information (only some of which is evidence), much of it available at low cost and with great speed. Unfortunately:

“the fabulous Age of Information we’re living in doesn’t guarantee that we can make the most informed choices…the information may be out there, but it still takes work to find it, and think about it” (Ropeik, n.d.).

This section is about appraising the located evidence to improve practice decisions. We begin with a brief reminder about how to review empirical articles (from our prior course) and then expand on these topics, showing how the evidence can be applied to inform practice decisions.

Review titles and keywords.

The title of an empirical article should help a reviewer determine the article’s relevance to addressing the intervention question at hand. The title may or may not refer to the specific intervention, social work problem or phenomenon, population, or outcomes studied. This initial review is not going to be conclusive regarding an article’s relevance, but it can help weed out some irrelevant pieces (improve precision).

In addition to the title, many published works also have a list of 3 to 5 keywords that the authors selected to help individuals search for their work electronically. Keywords might or might not appear in the title but can help inform a reviewer about the elements that the authors felt were most relevant to describe the work.


Review the abstract.

The abstract provides a reviewer with a summary of the study: aims, approach/design, methodology (participants, measures, and procedures), main results/findings, and conclusions/implications of the findings. Ideally, the abstract provides enough information for you to determine whether it is sufficiently relevant to pursue the full article. The abstract alone does not provide sufficient information for a reviewer to evaluate the evidence—that evaluation requires reviewing the full article.


Review the article.

If an empirical report survives the screening applied in reviewing the title, keywords, and abstract, it is time to acquire the full report and review its contents. When reviewing articles for their relevance to understanding interventions, here is a summary of what to look for in each section of an article.

Introduction: The introduction should inform you about the background and significance of what was studied, the state of knowledge and gaps in what is known, and the rationale for engaging in the study. You should come away with an understanding of the research questions and, if there are hypotheses, what they are. After reviewing the introduction, you should have a better idea of whether the remainder of the article is relevant for your purposes.

Methodology: The methodology should describe the research approach adopted by the investigators—this should follow logically from the research questions presented in the introduction. If the study is qualitative, you should know what tradition was followed. If the study is quantitative, you should know what design was applied. If the study is mixed-methods, you should understand the approach adopted. Then, you should be able to develop a clear understanding of what was done at each step of the study—how participants were recruited and, if random assignment to conditions was involved, how this was accomplished; what variables were studied and how each was measured; and the data collection procedures employed. Note that intervention or evaluation research reports need to provide sufficient detail about the intervention so that you can evaluate its characteristics, replicate its delivery (if you so choose), and understand how fidelity/integrity was addressed in the study.

Results/Findings: This section should explain how data were analyzed and what findings resulted from the analyses. 

Discussion/Conclusions/Implications: This is the place where authors are expected to tie together the results with the introduction, showing what the study contributed to knowledge and what answers emerged to the research questions. Authors should address any limitations of the study and show implications of the study results for practice and/or future research on the topic.

References: An article’s reference list contributes two important things to the analysis. First, it allows a reader to determine whether the literature behind the study was well-covered and up-to-date. Second, it offers potentially relevant articles to seek in conducting one’s own review of the literature.

Analyzing what occurred—Critiquing study methods.

Once the review of contents is completed, it is time to critically analyze what was presented in an article. This is where the reviewer makes decisions about the strength of the evidence presented based on the research methods applied by the investigators. In critiquing the methods, here are some points to consider.

Participants. You should consider the appropriateness and adequacy of the study “sample” in terms of the research approach, study design, and strength of evidence arguments. Authors should present descriptive information about the “sample” in terms of numbers and proportions reflecting categorical variables (like gender or race/ethnicity) and distribution on scale/continuous variables (like age).

The participant response rate (in quantitative studies) might also be calculated—this is the number of participants enrolled in the study divided by the number of persons eligible to be enrolled (the “pool”), multiplied by 100%. Very low response rates make a study vulnerable to selection bias—the few persons who elected to participate might not represent the population. In a qualitative study, generalizability is not a goal, but information about the participants should provide an indication of how robust the results might be. Finally, regardless of study approach, authors typically indicate that the study was reviewed by an Institutional Review Board because it involved human participants.
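
As a quick worked illustration of the response rate calculation (the numbers here are hypothetical, used only for demonstration), suppose that 150 of the 600 persons eligible to be enrolled actually participated in a study:

\[ \text{response rate} = \frac{\text{participants enrolled}}{\text{persons eligible}} \times 100\% = \frac{150}{600} \times 100\% = 25\% \]

A rate this low should prompt you to ask whether the quarter who enrolled differ in important ways from the three-quarters who did not, since any such difference is a potential source of selection bias.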

Not only does information about a study’s participants help you evaluate the strength of the evidence, it helps you consider the relevance of the “sample” to the social work practice problem/decision you are facing. Consider, for example:

  • the nature of the pool or population represented
  • inclusion and exclusion criteria—what they were and their implications
  • adequacy of the “sample” to represent the “target” population (numbers and representativeness) as a generalizability/external validity issue (quantitative studies)
  • diversity in the “sample” and inclusiveness (qualitative studies)
  • whose presence might have been excluded (intentionally or unintentionally)
  • attrition/drop out from the study (if longitudinal) or from the intervention
  • relevance of the “sample” to the clients for whom you are seeking intervention information
  • potential ethical concerns should you wish to replicate the study or intervention.


Intervention details. Intervention and evaluation reports differ from other forms of empirical literature in that they need to describe key aspects/elements of the intervention being studied. These features (adapted and expanded from a list presented by Grinnell & Unrau, 2014) need to be appraised in the process of reviewing such an article:

  • intervention aim—what were interventionists attempting to change, what were they attempting to achieve, what were the measurable objectives of the intervention? Were these elements appropriate to the logic model and/or theory underlying the intervention?
  • intervention context—where, when, and under what conditions was the intervention delivered? How do these factors likely influence the intervention implementation and outcomes? Are these factors replicable?
  • change agent—who delivered the intervention (change agent) and what were the characteristics of the change agent(s) involved? How do these factors likely influence the intervention’s implementation and outcomes? Are these factors replicable?
  • intervention elements—what were the key elements of the intervention, and how do these relate to the theory, logic model, and empirical literature?
  • intervention fidelity—to what extent did intervention implementation actually reflect the intervention protocol? How did the investigators assess fidelity (or intervention integrity)?
  • inclusiveness—to what extent is the intervention culturally and otherwise inclusive, sensitive, appropriate? How did investigators ensure or assess inclusiveness (cultural competence) of the intervention and its delivery?

Measurement and data collection. In assessing strength of evidence, it is important to consider how data were collected and how data collection frames the evidence. A great deal depends on the study approach—qualitative, quantitative, or mixed methods. Qualitative data collection procedures should be clearly identified by the investigators, including a description of what was asked of participants or what was being observed, and how the data were handled and coded. Practices related to inter-observer or inter-coder reliability should be reported, as well. Your job is to assess whether the variables were measured in a reliable, valid, and unbiased manner—particularly the outcome variables in the case of intervention or evaluation research.

Quantitative approaches require clear descriptions of the variables and how each was measured. You learned about measurement principles in our earlier course (Module 3, Chapter 5)—validity and reliability, in particular. These psychometric properties of measures used help determine the quality and strength of the data collected. Authors typically report this information for quantitative data collection tools previously published in the research literature; they also may summarize literature concerning how their measures were known to perform with the specific type of study participants involved in the study—for example, different ages, diagnoses, races/ethnicities, or other characteristics. On the other hand, investigators may create or modify existing tools and instruments for their own study, and psychometric information may not be available. This is an important consideration in your analysis of a study’s methodology—it does not mean that the study is not valid, just that its strength may be unknown.

A note concerning administrative and secondary data analysis is warranted here: it is important to evaluate, for yourself, how adequately you believe the variables of interest were indicated by the data used. For example, in a study of who participated in prisoner visitation (parents, siblings, partners/spouses, children, and other family members), the administrative data lacked consistency regarding the variable for “parent.” Administrative records included various terms, such as parent, mother, father, mother-in-law, step-father, foster mother, and others. The investigators made decisions concerning how to manage inconsistencies in data recording, and readers need to decide for themselves if they agree with the decisions made (Begun, Hodge, & Early, 2017).


Study procedures. Sometimes study procedures are described in the participants and measures sub-sections, and sometimes there is a separate sub-section where they are discussed. Study procedures content describes activities in which the investigators and study participants engaged during the study. In a quantitative experimental study, the methods utilized to assign study participants to different experimental conditions might be described here (i.e., the randomization approach used). Additionally, procedures used in handling data are usually described. In a quantitative study, investigators may report how they scored certain measures and what evidence from the literature informs their scoring approach. Regardless of the study’s research approach or whether procedures are described in a separate sub-section, you should come away with a detailed understanding of how the study was executed. As a result, you should be sufficiently informed about the study’s execution to be able to critically analyze the strength of the evidence developed from the methods applied.

Analyzing what was found—Critiquing results.

The structure and format of the results section varies markedly for different research approaches (qualitative, quantitative, and mixed methods). Regardless, critical review of how results were determined from the data is an important step. In the prior “article review” step you would have noted the actual results. In this step you are assessing the appropriateness and adequacy of the data analysis procedures and report of findings. To some extent, you can rely on the peer reviewers to have addressed these issues. This is a major advantage of limiting your search to peer reviewed manuscripts. However, as your familiarity with qualitative and quantitative data analysis matures, you will be able to engage in this assessment more fully. At the very least, you can assess how the reported results relate to the original research aims, questions, and/or hypotheses. You can also assess descriptive analyses for what they tell you about the data and how those results relate to your practice questions. You can also become informed about any problems the investigators encountered (and reported) regarding data analysis, how these problems were addressed, and how they might influence the strength of evidence and conclusions drawn from the data.


Analyzing what was concluded—Critiquing the discussion.

The discussion section of an article (review or empirical) presents the authors’ interpretation of what was found. In your critique of the manuscript you need to assess this interpretation: determine the extent to which you agree with it, how the study fits with the other pieces of evidence you have assembled and reviewed, and how well it relates to the existing literature—did the results confirm or contradict the literature, or were they so ambiguous that no strong conclusions could be drawn? Here are several points to consider in your analysis:

  • the extent to which the conclusions are appropriate based on the study approach/design, participants, measures, procedures, data obtained, and data analyses performed—recommendations need to be supported by the evidence;
  • the extent to which alternative explanations (competing hypotheses) fit the evidence, rather than or in addition to the interpretation offered by the authors—assessing what else could explain the observed outcome and whether the study design was strong enough to conclusively determine that the observed intervention outcomes actually were due to the intervention and not due to other factors;
  • the extent to which the conclusions avoid over-reaching the evidence—conclusions should be based on the strength of the evidence developed;
  • the extent to which you believe the authors identified relevant methodological, analytic, or data quality limitations in the study, and how you believe the limitations affect the study’s strength and relevance;
  • implications of the study results for practice and/or future research that you believe the authors might have missed reporting.

Critiquing other relevant pieces.

As previously noted, you might review an article’s reference list to evaluate its adequacy, appropriateness, and strength. One feature of significance is the extent to which the references are up-to-date. This does not mean that older references are not important; it simply means paying attention to whether new evidence is integrated into the manuscript.

Another piece of evidence to seek concerns errata or other corrections to the article that may appear in the published literature after the article first appears. The word errata (Latin) refers to errors that appear in print, and journals sometimes publish corrections. Often the corrections are minor—perhaps a number was incorrectly reported. Sometimes, however, the errors discovered have major implications for interpreting the evidence. And, unfortunately, journals are sometimes faced with the necessity to retract an article because of research integrity/misconduct concerns. A 2012 review of over 2,000 retracted research articles in biomedical and life-sciences research indexed in PubMed reported that over 67% were retracted because of research misconduct—including fraud, plagiarism (almost 10%), and duplicate publication of work that had previously appeared elsewhere (about 14%)—while about 21% were retracted because of significant errors (Fang, Steen, & Casadevall, 2012). The journal itself published a correction in 2013 because a table depicting the most frequently cited retracted articles contained errors; the corrected article (Fang, Steen, & Casadevall, 2013) presented a revised table.


Assessing Appropriateness

As you complete this phase of the EBP process, you are taking into consideration all of the evidence that you were able to locate, assemble, review, and critique to make an informed practice decision. One dimension of analysis for social work professionals to consider is the degree to which a study is appropriate to include in the decision-making process—this is over-and-above the critique of its quality and strength of evidence. A study could be very strong on these dimensions but not have relevance to the practice question at hand. For example, there may exist a great deal of evidence concerning interventions to prevent unplanned pregnancies among older adolescents and young adults, but this evidence might not be relevant to preventing pregnancy among younger adolescents (aged 11-15 years). Thus, it is important for the social work practitioner to consider how well a study’s participants represented the clients for whom the studied intervention is being considered. Bronson and Davis (2012) explain:

“In other words, social workers have an obligation to know which interventions or programs are supported by rigorous research and to share that information with clients. However, research knowledge provides only the starting place for selecting the intervention of choice. The practitioner must also consider the similarities between the subjects of the research and the client seeking services, the acceptability and appropriateness of the intervention for the client, and the client’s ability to participate in the intervention” (p. 5).

Along these same lines, it is important to consider the outcomes specified in the reviewed studies and how well these outcomes align with the intervention goals prompting the search for evidence. For example, a systematic review of medication assisted treatment (MAT) identified and reviewed 40 studies—observing that they reported on a diverse set of outcome measures. Results were mixed in terms of recommending MAT over other treatments and the outcomes differed on the basis of which medications were administered in the treatment protocol (Maglione et al., 2018). In other words, some of the reviewed studies would be less relevant than others, depending on the practice concerns being addressed by the practitioner and client.

Another dimension of appropriateness that needs to be taken into consideration is feasibility. Feasibility is about the practical realities involved—the likelihood that the intervention can be implemented with fidelity in the context where the practice question arose. For example, a form of mental health or addiction treatment may not be feasible to implement with a high degree of fidelity to the intervention protocol among persons being released from incarceration or persons at risk of homelessness, because of the uncertainties and fluctuations in the life structures and contexts these individuals might experience. Or, it may not be feasible to implement these intervention protocols if there are too few trained professionals to deliver the intervention in a particular community. Political and cultural context are also important considerations, depending on characteristics of a particular intervention and how those features intersect with the target population or community—this issue relates to the acceptability aspect of an intervention’s appropriateness.

Working Example: Addressing High School Dropout

The team felt that the 12 intervention options identified in the previous step were too many. They re-engaged with the COPES framework, deciding to “favor the approach that best allowed them to change their focus and emphasis without having to totally revamp all the organizational structure or staffing” (Kelly & Franklin, 2011, p. 151). In other words, they sought the most effective option that could most practically be adopted. Based on this further analysis, the team decided on one intervention as being the most feasible for the district to implement: the Quantum Opportunities Program (QOP). They provided this information to the school board in their final report.


Implementing, Monitoring, and Evaluating

Let’s review the six steps of the evidence-based practice (EBP) process specified in the introduction to this module:

Step 1: Specifying an answerable practice question

Step 2: Identifying the best evidence for answering that question

Step 3: Critically appraising the evidence and its applicability to the question/problem

Step 4: Integrating the results from the critical appraisal with practice expertise and the client’s or client system’s unique circumstances

Step 5: Taking appropriate actions based on this critical appraisal of evidence

Step 6: Monitoring and evaluating outcomes of (a) the practice decision/intervention and (b) effectiveness and efficiency of the EBP process (steps 1-4).

The chapters that followed that introduction prepared you to develop answerable practice questions and to identify and critically appraise the relevant evidence. The next step is for the practitioner and client(s) together to determine the most appropriate evidence-based intervention for their situation—what is appropriate to THIS practice situation. The process recognizes that different tools (intervention options) often exist for promoting change and that the same tool is not always the best tool for different individuals or circumstances. Consider the saying (often attributed to Maslow, 1966), “if all you have is a hammer, everything looks like a nail,” and how inappropriate it might be to apply that hammer to open a piggy bank. Thus, client values, preferences, and attributes, along with practitioner experience and circumstances, are part of the social work EBP process—what works for some might not be the best choice for others under all circumstances.


For example, an intervention designed to address homelessness might not be the best choice for individuals who also experience co-occurring mental health or substance use problems. One-third of persons who are homeless experience a serious mental disorder (e.g., schizophrenia, bipolar disorder) and at least half of persons who are homeless have substance misuse problems or a substance use disorder (Padgett, Stanhope, & Henwood, 2011). Whether these mental health and substance-related problems came before or after the homeless situation, they tend to worsen and complicate (exacerbate) the problem of homelessness. “Usual” intervention strategies fall short in meeting the needs of this distinct population of individuals experiencing or at risk of homelessness (Padgett, Stanhope, & Henwood, 2011). Thus, a search for evidence about alternative best practices begins with a practice question that includes these features, if they are part of the practice concern encountered.

Once the prime-choice intervention has been selected, the next step is to implement that intervention with the greatest possible degree of fidelity and integrity. Implementation may require additional training or certification by the professional delivering the intervention. Alternatively, it may be necessary to collaborate with or make a referral to another service provider to ensure proper implementation.

Monitoring and evaluation are also critical to this implementation process. This means that the implementation process will be monitored for fidelity and outcomes will be evaluated along the way. Together with the client(s), measurable goals and objectives need to be identified. The evidence collected from monitoring and evaluation will form part of a feedback loop to inform modifications or changes in the intervention strategy. It may even become necessary to switch from the initially selected intervention plan (Plan A) to an alternative that is also supported by the search for evidence (Plan B or Plan C). These six steps apply to social work practice at any level—individuals, families, groups, communities, institutions, and policy. In future modules you will learn more about the evaluation process and what to consider when analyzing a study’s methods and results.


Stop and Think

Take a moment to complete the following activity.

1. Create a checklist of information you want to include in reviewing an empirical article about an intervention study.

2. Create a second checklist of points to analyze in your critique of the reviewed article, including points related to appropriateness.

3. Keep these tools handy as you begin to locate, review, and critically analyze intervention literature.

Chapter Summary

In this chapter you learned how the final steps in the evidence-based practice (EBP) process are fulfilled. You learned to review intervention and evaluation literature for strength of evidence and how empirical literature informs practice decisions. You learned steps and issues in critically analyzing the empirical literature, as well. Finally, you were introduced to issues related to implementing an evidence-informed practice decision and the importance of monitoring and evaluating the intervention implemented.

License


Social Work 3402 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, except where otherwise noted.
