When Accountability Punishes Rational Behavior
When we stop experimenting, we stop learning.
Organizations rely on accountability systems to drive performance. Schools are no exception. Data meetings, improvement plans, evaluation frameworks, and progress monitoring are designed to clarify expectations and ensure results.
But many accountability systems fail for a simple reason that leaders often miss: They punish behavior that is still psychologically rational.
When this happens, the system does not produce improvement. It produces compliance theater, defensive routines, and subtle disengagement.
In other words, the accountability system starts working against the very outcomes it was designed to achieve.
The Rationality Problem
Most accountability systems are built on a straightforward assumption: if expectations are clear and consequences are strong enough, people will change their behavior.
But human behavior is not governed by incentives alone. It is governed by psychological safety, identity, and perceived risk.
When a system punishes the behavior that makes sense given the environment, people do not become more accountable. They become more self-protective.
For example, consider a teacher in a high-stakes environment where data meetings focus primarily on identifying deficits.
The rational behaviors in that environment may include:
Avoiding instructional experimentation
Minimizing discussion of mistakes
Teaching narrowly to tested content
Presenting partial data rather than complete data
None of these behaviors support deep learning. Yet each is psychologically rational when the cost of vulnerability is public scrutiny or professional risk.
Punishing these behaviors without addressing the conditions that produce them only intensifies the cycle.
Why Accountability Systems Misfire
Accountability systems often assume that undesirable behavior reflects low effort, poor commitment, or weak skill.
But in many cases the behavior is a predictable adaptation to the system itself.
Research on organizational behavior shows that when environments emphasize blame and public evaluation, individuals shift toward risk-avoidance strategies.¹ Instead of surfacing problems early, they conceal them. Instead of experimenting, they replicate familiar routines.²
In schools, this dynamic appears in subtle but pervasive ways:
Data conversations become performances rather than inquiry.
Improvement plans become compliance documents rather than learning tools.
Observations trigger impression management rather than reflection.
The system is designed to promote learning, but the psychological conditions promote self-protection. From the perspective of the individual operating within the system, the behavior still makes sense.
The Downstream Consequences
When accountability punishes psychologically rational behavior, three predictable outcomes follow.
First, information quality declines.
People share what is safe rather than what is true. Leaders lose visibility into real problems.
Second, innovation decreases.
Experimentation carries reputational risk, so individuals revert to familiar practices even when those practices are ineffective.
Third, trust erodes.
Employees experience accountability as surveillance rather than support, which reduces engagement and increases turnover.
Ironically, the more leaders intensify the accountability system in response to weak results, the stronger these defensive behaviors become.
The Data Meeting
Consider a common accountability ritual in schools: the data meeting.
A middle school ELA teacher walks into a meeting where student writing scores are projected on a screen next to those of other teachers on the team. The discussion quickly turns to why her students are underperforming.
She explains that students struggled with argumentative structure. A leader asks what she plans to do differently next week.
The next time the team meets, her scores look better. But something else has changed too.
Instead of trying a new writing routine she had been experimenting with, she shifts to test-style prompts and tightly structured outlines designed to produce faster gains on the rubric. Experimentation stops.
From the perspective of the accountability system, the behavior change looks like improvement. From the perspective of learning, it is retreat.
But psychologically, the teacher’s behavior is completely rational.
When performance comparisons are public and mistakes trigger scrutiny, the safest strategy is to reduce risk. Experimentation introduces uncertainty; familiar routines produce predictable outcomes.
This is the paradox at the heart of many accountability systems. Leaders assume that undesirable behavior reflects low effort or weak commitment. But in many cases, the behavior is a rational response to the environment the system itself has created.
Designing Accountability That Works
Effective accountability systems start from a different premise: behavior is often an adaptive response to the environment.
Instead of asking only, “Why aren’t people doing what we expect?” leaders must also ask, “What in the system makes the current behavior make sense?”
This shift changes the design of accountability in several important ways.
Separate behavior from identity.
Feedback should focus on specific actions and systems rather than character judgments.
Reward transparency.
If surfacing problems creates risk, problems will remain hidden.
Interrogate the system, not just the individual.
Repeated underperformance often signals a structural issue rather than a motivational one.
Protect learning while demanding results.
High expectations and psychological safety are not competing priorities; they are mutually reinforcing.
When accountability systems recognize the rationality behind behavior, they can target the conditions that produce it.
A Simple Test for Leaders
Before tightening an accountability system, leaders should ask a simple question: Does the behavior we are punishing still make sense given the environment people are working in?
If the answer is yes, the system may be reinforcing the very behavior it is trying to eliminate.
Accountability works best not when it assumes irrational actors, but when it recognizes that people are responding logically to the incentives, risks, and signals embedded in the system.
Design those conditions well, and behavior changes naturally.
Design them poorly, and no amount of pressure will produce the results leaders are hoping for.
1. Marylene Gagné and Rebecca Hewett, "Assumptions About Human Motivation Have Consequences for Practice," Journal of Management Studies 62, no. 5 (2024): 2098–2124.
2. Peter M. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization (New York: Doubleday, 1990).