The Fallacy of Performance-Based Funding
Performance Funding Seems to Make Sense
Behavioral psychologists call it informational social influence, or social proof. Just plain folks call it “monkey see, monkey do.” Developed through evolution, the “herd mentality” has conditioned us to believe that the right course of action is typically consistent with what everyone else does. Most of the time, this tendency has been beneficial, from surviving natural disasters to determining the best television to purchase. Unfortunately, social proof also makes us susceptible to pursuing the wrong path, especially in times of crisis or uncertainty.
As the growing movement behind performance funding illustrates, those of us in higher education are as prone to social proof as any other group. Everyone’s doing it; why shouldn’t we? I get it. Performance funding seems to make sense:
You invest your resources in whatever will give you the greatest return. Who in their right mind would continue to invest in ventures that are stagnant, or worse, losing value, when they could invest in one that is “performing”? The premise that performance funding is built upon (fund only performance outcomes, and performance will improve) unfortunately leads to one of two end results. The first is that eventually only those institutions that perform best will remain. If the growth in higher education expenditures is outpacing the ability of communities and legislatures to fund us, and the funding pie isn’t getting larger, then performance funding will direct more and more of the finite resources only to those institutions with the highest performance or outcomes. Survival of the fittest, right?
Wrong. Most of us would struggle to identify even a handful of public community or technical colleges that have closed because of financial constraints. The prospect of closure has been bandied about numerous times, but these institutions persist, in spite of financial woes.
That’s because they are community institutions employing and serving members in those communities. When their future is threatened, public outcry ensues and politicians respond. Few state legislators and governing boards have an appetite for shutting down their community’s college.
Thus, the end result of only the fit surviving never happens. What then? The other logical end result of performance funding is a new equilibrium after the redistribution of existing public funding. This hypothesis is more difficult to explain, but here goes.
Traditional community college funding models have been condemned for their focus on enrollments. Modern funding models based on performance focus on outcomes, primarily outcomes that are derivatives of enrollment. They measure the output of the enrollment spectrum: successful course completions, credit accumulation, persistence to another semester, transfer to another institution, and completion.
The goal is to shift funding toward outcomes and away from enrollment. Yet with most measures derived from enrollment, counting students at other points in the educational spectrum simply establishes a new equilibrium. Within most funding jurisdictions, the gaps between colleges on any handful of selected measures are simply not that large. This is especially true when you allow for differences between colleges (e.g., urban versus rural, affluent versus low-income, technical versus comprehensive). When performance funding is applied, then, some money changes hands, but after the initial implementation of the model, the amount shifting between institutions isn’t significant. Take the two neighboring states of Tennessee and Alabama. IPEDS data from 2010-11 and 2011-12 show that the actual movement of state appropriations between individual institutions within Tennessee (a 100 percent performance funding state) and Alabama (a state without performance funding) was surprisingly similar. After controlling for changes in overall state funding (it appears that during this period Tennessee saw an increase in state funding, whereas Alabama saw a decrease) and examining the change in funding as a percentage of annual state appropriations to institutions, the relative impact of the different funding models emerges.
In Tennessee, on average, a college’s state funding changed just 0.3 percent, with individual colleges ranging from a 7 percent decrease to a nearly 14 percent increase. In Alabama, without performance funding, the average change in a college’s state funding was (guess what!) also 0.3 percent. And the range within specific institutions was even greater than in Tennessee, with one college seeing a 20 percent decrease and another seeing nearly 15 percent more.
Some caveats: I don’t know these two states well. I don’t know the precise reasons for the funding shifts. I also recognize that a one-year analysis is limited. However, I believe that when all is said and done, the actual amount of state funding that shifts as a result of performance funding will be minimal, and not very different from non-performance models. We just reshuffle the funding distribution picture a bit and settle on a new equilibrium.
Here in Wyoming, a state that I do know well, an increasing percentage of our state appropriations is being allocated based on performance. Currently, performance is defined as successful course completions, calculated as one part pass rate and two parts volume of passing grades associated with class enrollments. In the current fiscal year, 15 percent of a portion of our state appropriations is allocated based on these outcomes. After calculations were conducted to compare differences among Wyoming’s community colleges on this measure, just 0.5 percent of all performance funding actually was redistributed. For next fiscal year, when we move to 20 percent being allocated based on performance, only 0.3 percent of the funding will change hands.
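To make the “one part percentage, two parts volume” weighting concrete, here is a minimal sketch of how such an allocation could work. The college names, enrollment figures, dollar pool, and the exact normalization are all hypothetical assumptions for illustration; the actual Wyoming formula is not specified here.

```python
# Hypothetical sketch of a "one part pass rate, two parts volume"
# performance allocation. All numbers and the normalization scheme
# are illustrative assumptions, not any state's actual formula.

colleges = {
    # name: (passing_grades, total_class_enrollments)
    "College A": (4500, 6000),
    "College B": (1800, 2200),
    "College C": (900, 1500),
}

pool = 1_000_000  # performance-funded dollars to distribute

total_volume = sum(passed for passed, _ in colleges.values())

scores = {}
for name, (passed, enrolled) in colleges.items():
    pass_rate = passed / enrolled          # the "percentage" part
    volume_share = passed / total_volume   # the "volume" part
    # one part percentage, two parts volume
    scores[name] = (1 * pass_rate + 2 * volume_share) / 3

total_score = sum(scores.values())
allocation = {name: pool * s / total_score for name, s in scores.items()}

for name, dollars in allocation.items():
    print(f"{name}: ${dollars:,.0f}")
```

Because volume carries twice the weight of the pass rate in this sketch, the largest college captures the biggest share of the pool even when a smaller college posts a higher pass rate, which is one way such a formula can end up redistributing very little money year to year.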
Are you seeing a trend? Let’s keep going. I looked into North Dakota as well, where 100 percent of state funding is allocated using a methodology similar to Wyoming’s. No surprises here: From 2011-12 to 2012-13, the average percent change in state funding among the five public two-year institutions was just 0.3 percent.
Which brings me to perhaps the greatest flaw in performance funding: It hasn’t been proven to work. Even though some policymakers and educational reform groups tout the improvements in educational outcomes that have coincided with the implementation of, or changes to, performance funding models, the literature hasn’t linked improved outcomes to performance funding. Yet we continue to invest time, energy, and precious resources into rationalizing why it hasn’t worked and spinning the rhetoric to justify our continued commitment to the concept.
Community colleges are making impressive strides in improving student outcomes in spite of performance funding. Recent data suggest the overall numbers of degree and credential attainment at community colleges have been increasing across the nation, even during a time of declining enrollment. Last year, my institution, Laramie County Community College, witnessed the largest graduating class in the college’s history and an annual increase in completions of 12 percent. This is even after we experienced a significant decrease in overall enrollment.
So if it isn’t performance funding, why is performance improving? First, colleges across the nation are implementing high-impact practices that, unlike performance funding, research actually shows improve student outcomes. Take, for example, the great work of the Center for Community College Student Engagement and the 13 proven high-impact practices delineated in A Matter of Degrees: Practices to Pathways. Or the Association of American Colleges & Universities’ ten tried-and-tested educational high-impact practices. Guess what? Performance funding isn’t on either list.
The second reason outcomes are improving is painfully simple; it’s because we’re talking about them. I opened this piece with the behavioral concept of social proof. As human beings, we tend to do what appears to be socially proven as correct. Since the turn of the century, our focus has shifted from enrollment to completion. Whether it’s the Spellings Commission, the president’s American Graduation Initiative or Complete College America, we are now focusing on the other end of the enrollment spectrum. With that focus comes attention to different outcomes, and from that attention comes human effort directed towards new goals.
Don’t get me wrong. I’m not suggesting that student outcomes in the community college will improve simply because we talk about them. Nor do I suggest abdicating our responsibility for openly sharing how well we are performing as institutions. Sooner or later we will all find ourselves standing before our governing boards and elected officials either seeking additional funding or providing justification for the funding we currently receive. Thus, we must not only act on what we are talking about, in this case improving student completion, we must also be honestly willing and able to demonstrate how our actions are, or aren’t, yielding the desired outcomes.
In these important conversations, performance funding is a distraction, consuming our time and efforts in an endeavor lacking any compelling demonstration it will help us reach our goals for student success. Let’s not waste our time when we know there are other actions we can take, practices we can implement, and new areas of promise we can explore. We need to focus our energy there.
For those of you who are already riding the performance funding pony in your states, I wish you the best of luck and sincerely hope you discover the formula that will unlock performance funding’s potential to actually help improve student outcomes. Perhaps the performance funding debate has been a good catalyst to get us all talking about completion versus enrollment.
However, for those of us just starting the journey, or for those of you who haven’t yet experienced performance funding, I highly recommend letting the early adopters lead the way. Let’s learn from their experiences before becoming deeply invested in a journey that probably won’t lead to our preferred destination.