The Science of Doing: Implementation to Excellence (Learning Entry 4: Measuring What Matters)
Mar 25, 2026
Measuring What Matters: Data, Fidelity, and Sustainability
By Janine Gacke and Dr. Morgan Goering
March Series: The Science of Doing: Implementation to Excellence
Estimated read time: ~9 minutes
Implementation is not what we intend; it is what people experience.
Grounding Wonder: How do leaders and teams align implementation actions with current reality without losing people, purpose, or progress along the way?
Before Judging Outcomes, We Ask How Well We Implemented
As I have worked alongside teams across classrooms, districts, and systems, I have noticed a consistent pattern in how data is approached. Outcome data tends to carry the most immediate weight. It is often the first place we look when we are trying to understand whether something is working. It tells us what is happening for students, whether progress is being made, and whether our efforts are producing the results we intended.
However, outcome data on its own cannot explain what produced those results. It tells us what is happening. It does not tell us why. In many systems, the sequence moves quickly from outcome to interpretation. When results do not align with expectations, urgency increases. Questions surface rapidly, and the conversation can shift toward explanation or correction. At times, this shift introduces blame, subtle or explicit, which narrows the space for learning. What I have learned, and continue to relearn, is that outcomes cannot be interpreted meaningfully without first understanding how well we implemented the practice in the first place. Implementation research reinforces this. Without attention to fidelity, we risk drawing conclusions that are disconnected from the conditions in which the work is occurring (Fixsen et al., 2005). Fidelity slows us down just enough to ask a different question. Not: Did it work? Instead: What did we actually put in place?
Implementation in Practice: What Data Reveals Beneath the Surface
In my work, data has rarely been just data. Behavior data, for example, is never only about behavior. It reflects clarity of expectations, consistency of adult response, predictability of routines, and the presence, or absence, of relational trust. It reflects systems.
The same is true for fidelity. Fidelity data does not simply tell us whether a practice was followed. It reflects the conditions surrounding that practice. It shows where expectations were clear or unclear, where supports were present or uneven, and where the system either made implementation possible or made it difficult to sustain.
When I began engaging more deeply with fidelity data, I initially saw it through a compliance lens. I understood its importance, but I also understood how it could be experienced by teams. It could feel evaluative. It could feel like a judgment. Over time, that understanding shifted. Fidelity began to feel less like a score and more like a mirror. When the questions and process are used intentionally, that mirror reflects:
- what we actually implemented
- where practice flexed across contexts
- which conditions supported consistency
- where strain or variability emerged
Research supports this complexity. Fidelity is not binary. It is multidimensional and requires interpretation across multiple components of practice (Century, Rudnick, & Freeman, 2010; Dane & Schneider, 1998). That means variation is not failure; it is information.
I have sat with teams who expected fidelity data to confirm whether they were “doing it right.” Instead, it surfaced patterns they had not yet named. Practices that felt clear in one setting appeared fragmented in another. Supports that were assumed to be in place were uneven. Expectations that felt shared were being interpreted differently across roles. In those moments, the conversation changed. It was no longer about whether individuals were implementing correctly. It became about what the system was, or was not, making possible.
Leadership & Coaching Practice: Holding Curiosity and Accountability Together
One of the most complex parts of this work is holding two truths at the same time: fidelity matters, and how we engage with fidelity matters just as much. In my experience, especially at the state and systems level, data carries weight. There are expectations, and people are held accountable. There are certainly real implications for students and for systems. Yet I have seen how quickly fidelity conversations can shift if they begin in a tone or stance of judgment rather than curiosity. When leaders open with questions like “Why didn’t this work?” or “Why aren’t we seeing results?”, the space for interpretation narrows.
Leaders have the opportunity to begin instead with open questions like:
- What does this data suggest about how we implemented the practice?
- Where are we seeing consistency?
- Where are we seeing variation?
- What conditions might be shaping those patterns?
This shift does not remove accountability. It strengthens the depth and breadth of the conversations between those at the table. It also allows us to hold accountability at the level of the system, not just the individual. Asking questions in this frame preserves psychological safety while maintaining clarity that implementation matters.
Coaching plays an important role here. Coaches help teams stay in inquiry. They help distinguish between thoughtful adaptation and unintended drift. They support conversations that remain grounded in understanding before moving to action. When this stance is consistent, teams stay engaged, even when the data is difficult. They remain willing to surface what is not yet working because they trust the system will respond with curiosity rather than consequence.
System & Structure Lens: Layering Fidelity to Understand the System
What has become increasingly clear in many teams is that fidelity cannot be understood in isolation. When we look at fidelity in a single classroom, team, or moment, we see only part of the picture. When we begin to layer fidelity across classrooms, teams, and buildings, patterns emerge. We begin to see where alignment holds and where the absence of system supports places added burden and strain on the people holding the system up. Layered fidelity allows systems to:
- identify patterns across tiers
- recognize where supports are inconsistent
- respond before outcomes begin to decline
- align resources in more intentional ways
Tools such as the Tiered Fidelity Inventory (TFI) support this work by offering a structured way to examine implementation across tiers (McIntosh et al., 2017; Horner et al., 2014). Classroom, team, and meeting-level fidelity checks, such as TIPS protocols, help ensure that practices are not only defined, but enacted in ways that are consistent enough to produce reliable outcomes. I have come to see these tools not as measures of performance, but as ways of seeing the system more clearly. They do not tell us what to think. They help us understand what is happening.
Fidelity as Curiosity Protocol
Structuring conversations to support learning rather than judgment is a difficult, time-consuming task for leaders to plan. Existing frameworks and inventories are often comprehensive protocols in their own right. Before reviewing outcome data, I have found it helpful to pause with fidelity using a consistent set of questions:
Core Questions
- What did we actually implement? What is our current reality?
- Where did practice flex, and why?
- What conditions supported consistency?
- What conditions created strain?
- What would make this practice easier to implement as intended?
Guardrails
- No ranking of individuals, teams, or sites
- No labeling of performance
- No consequences tied to the conversation
- A shared commitment to understanding before action
When these guardrails are present, the conversation shifts. Fidelity becomes a space for learning. Teams are more willing to engage honestly because the goal is not to evaluate, but to understand. Over time, this strengthens both coherence and trust. It allows systems to respond more effectively because they are working from a clearer picture of current reality. Check out the Fidelity as Curiosity Protocol, which can be used to process results from any fidelity tool; the example provided uses the newly released TFI 3.0.
Reflect · Connect · Grow
Reflect (individual sense-making)
- How is fidelity data currently experienced in my context? Where might it feel evaluative rather than supportive?
Connect (system patterning)
- Where do I notice alignment, or misalignment, across classrooms, teams, or tiers? What patterns begin to surface when I look across levels?
Grow (intentional direction without urgency)
- What shift in how we structure fidelity conversations could better support learning while maintaining accountability?
Closing and Looking Ahead
As I reflect on this series, I am reminded that implementation is not defined by speed or perfection. It is defined by alignment, presence, and care sustained over time. Across this work, we have explored how systems move from intention to practice through clarity, coaching, teaming, and now, through data that supports learning rather than judgment. The Science of Doing reminds us that implementation is not merely technical work. It is leadership work. It is human, relational, and deeply consequential for students. As always, implementation is not what we hope or intend; it is what people experience.
Next month, we will be joined by another amazing guest writer in the Leadership Knowledge Commons. The guest, who brings a wealth of knowledge in coaching, alternative learning environments, systems thinking, and secondary education, will join Dr. Morgan Goering in a series dedicated to a topic many know is important in this work but often struggle to prioritize: Adult Learning, or Andragogy. Check out Morgan’s Leader and Learner Profile here!
References
Century, J., Rudnick, M., & Freeman, C. (2010). A framework for measuring fidelity of implementation. Educational Evaluation and Policy Analysis, 32(2), 199–218.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention. Clinical Psychology Review, 18(1), 23–45.
Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. University of South Florida.
Horner, R. H., et al. (2014). School-wide positive behavior support. Journal of Positive Behavior Interventions, 16(2), 75–87.
McIntosh, K., et al. (2017). Sustained implementation of PBIS. Journal of Positive Behavior Interventions, 19(4), 213–224.