This tip sheet series provides concise guidance for collecting and analyzing high-quality data on the implementation of evidence-based practices. The content was designed for staff of state and local early intervention (IDEA Part C) and preschool programs for children with disabilities (IDEA Part B 619), but it is relevant for anyone evaluating the implementation of evidence-based practices. The tip sheets address topics that state personnel identified in webinars and workshops the Center for IDEA Early Childhood Data Systems (DaSy) and the Early Childhood Technical Assistance Center (ECTA) offered in partnership with the National Center for Systemic Improvement and the IDEA Data Center. The tip sheets are not intended to be comprehensive; readers are encouraged to consult the resources listed in each tip sheet and to obtain support from federally funded technical assistance centers such as DaSy and ECTA, university partners, and others with evaluation expertise.
The long-term goal of the State Systemic Improvement Plan (SSIP) and other federal and state early intervention and early childhood education initiatives is improved child and family outcomes. States play a critical role in supporting practitioners' use of evidence-based practices to improve child and family outcomes. It is therefore essential for states and local programs to collect, analyze, and use data on the extent to which practitioners are implementing evidence-based practices as intended. With high-quality data on implementation, decision-makers can identify implementation successes and challenges and target valuable resources appropriately.
Each tip sheet is described below and linked at the top left of this page.
- Key Terms and Definitions defines key terms used in the tip sheets.
- Tip Sheet 1: What to Measure helps state and local programs develop a clear understanding of what to measure when evaluating practice implementation. It presents the key components of a practice implementation evaluation.
- Tip Sheet 2: Characteristics of a High-Quality Measurement Tool presents considerations for ensuring that the measurement tool and data collection approach provide relevant, useful data. Considerations include alignment with the evidence-based practice, reliability and validity, practicality, timing and frequency of administration, and training of raters.
- Tip Sheet 3: Establishing a Fidelity Threshold outlines considerations and example methods for determining a fidelity threshold.
- Tip Sheet 4: Summarizing Data for Decision-Making presents strategies for aggregating data on practice implementation and includes instructions and calculations for each strategy.
- Tip Sheet 5: Analyzing Data for Decision-Making explores the steps involved in determining analysis goals and questions and provides examples of analysis approaches for common questions related to the implementation of evidence-based practices.