Contents
- Introduction
- Key Terms and Definitions
- 1. What to Measure
- 2. Characteristics of a High-Quality Measurement Tool
- 3. Establishing a Fidelity Threshold
- 4. Summarizing Data for
Collecting high-quality data is essential for determining whether practitioners are implementing a practice as intended. This tip sheet provides guidance for assessing the quality of tools that measure practitioners’ implementation of evidence-based practices – the behaviors practitioners exhibit when working directly with children and families.
A tool is the instrument or protocol used to collect data on practice implementation (e.g., a checklist, rating scale, observation form, list of interview questions, or document review form). A tool contains multiple items (such as individual questions or topics). Fidelity may be measured with a single tool or a combination of tools (for example, an observation protocol and a log for documenting when practices were conducted with children and/or families). Also, a tool can measure several fidelity components (e.g., adherence, quality of delivery, dosage).
Some practice developers design a tool to measure implementation while they are conducting research to form an evidence base for the practice. However, an associated research-based tool does not exist for many practices. If no tool is available for a particular evidence-based practice, you can develop one with experts in the practice and in evaluation tool development. Use the information below to evaluate the quality of existing tools or as a checklist for developing your own.
Characteristics of a High-Quality Tool
A high-quality tool to assess the implementation of evidence-based practices:
- Is aligned with the selected evidence-based practices. For instance, if you are implementing specific evidence-based social-emotional practices, the tool should measure implementation of those practices.
- Provides valid information. The tool provides accurate information on implementation of the practices.
- Is reliable. The tool produces consistent information across users, settings, activities, and time points. Item wording and instructions must be clear and complete to achieve reliability.
- Captures variation across time points and practitioners with different levels of implementation skill. The tool must be sensitive enough to detect when practitioners have improved their implementation and how practitioners differ in how well they implement the practices.
- Provides a meaningful fidelity threshold score that indicates whether practitioners have reached a level of implementation that is sufficient for achieving targeted child or family outcomes. A typical practice implementation tool has multiple items that together produce a summary score. A threshold score is a predetermined score that indicates whether a practitioner has reached fidelity. (See Tip Sheet 3 for more information on fidelity thresholds.)
- Is practical. The tool can be used with the staffing and resources available. However, a practical tool that is not reliable and valid will not produce meaningful data.
- Provides information useful to practitioners, such as areas of strength and areas for improvement to move toward, reach, and maintain fidelity. An ideal tool provides information that practitioners, administrators, and others can use to improve practice.
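To make the scoring logic above concrete, here is a minimal sketch in Python of how a multi-item tool might produce a summary score, compare it to a fidelity threshold, and surface areas of strength and improvement. The item names, the 0-3 rating scale, and the 80% threshold are all hypothetical, not drawn from any specific validated tool.

```python
# Illustrative only: item names, rating scale (0-3), and the 80% threshold
# are hypothetical, not part of any specific validated tool.

ITEMS = ["responsive_interaction", "embedded_learning", "family_engagement"]
MAX_RATING = 3    # each item rated 0 (not observed) to 3 (fully implemented)
THRESHOLD = 0.80  # fidelity threshold as a fraction of the maximum score

def summarize(ratings: dict[str, int]) -> dict:
    """Compute a summary score and flag items for practitioner feedback."""
    total = sum(ratings[item] for item in ITEMS)
    max_total = MAX_RATING * len(ITEMS)
    score = total / max_total
    return {
        "score": round(score, 2),
        "meets_fidelity": score >= THRESHOLD,
        "strengths": [i for i in ITEMS if ratings[i] == MAX_RATING],
        "growth_areas": [i for i in ITEMS if ratings[i] < MAX_RATING],
    }

result = summarize({"responsive_interaction": 3,
                    "embedded_learning": 2,
                    "family_engagement": 3})
print(result)
```

In practice the items, scale, and threshold would come from the tool itself and its evidence base; the point is only that a predetermined threshold lets the same data answer both "has this practitioner reached fidelity?" and "where should supports focus next?"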
High-Quality Administration of a Tool
Additional considerations for improving data quality and usefulness include:
- Timing/frequency of administration. Use the tool often enough to measure incremental improvement (practice change) and the maintenance of fidelity over time. Consider more frequent assessments while practitioners are working to achieve fidelity, so that supports can be increased or changed for practitioners who are not making adequate progress. Once practitioners reach fidelity, the frequency can be reduced to periodic checks that fidelity is being maintained.
- Instructions and training for raters. Provide training and supporting resources (e.g., written protocols and guidance) to all those who are collecting data on practice implementation and ensure they have the knowledge, skills, and resources to produce accurate data. For self-assessments, provide clear instructions on how the assessment should be conducted. Clear instructions and thorough training will help improve the tool’s reliability.
- Communicating the purpose of data collection. Clearly conveying the purpose of data collection and the usefulness of data for practitioners and programs can increase motivation and commitment to high-quality data.
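The timing/frequency guidance above can also be sketched as a simple decision rule. This is an illustrative example only; the threshold and the rules for "improving" versus "not progressing" are hypothetical placeholders a program would define for itself.

```python
# Hypothetical decision rule for adjusting assessment frequency and supports,
# based on a practitioner's sequence of summary scores over time.
THRESHOLD = 0.80  # illustrative fidelity threshold

def review_progress(scores: list[float]) -> str:
    """Suggest a next step from the most recent scores (illustrative rules)."""
    latest = scores[-1]
    if latest >= THRESHOLD:
        return "maintaining fidelity: reduce to periodic checks"
    if len(scores) >= 2 and scores[-1] > scores[-2]:
        return "improving: continue current supports and frequent assessment"
    return "not progressing: increase or change supports"

print(review_progress([0.55, 0.62, 0.71]))
```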
Resources
- Practice Improvement Tools: Using the DEC Recommended Practices (ECTA, n.d.)
- Fidelity Assessment (Module 7) (NIRN, n.d.)
The contents of this tool and guidance were developed under grants from the U.S. Department of Education, #H326P120002 and #H373Z120002. However, those contents do not necessarily represent the policy of the U.S. Department of Education, and you should not assume endorsement by the Federal Government. Project Officers: Meredith Miceli, Richelle Davis, and Julia Martin Eile.