We kicked off another year of the Dodge Board Leadership series last week with a workshop on assessment. I was struck, as always, by the challenge of nonprofit accountability in the age of “outcomes measurement.”
Everybody wants numbers, but the outcomes and impacts of our work that matter most to us do not always lend themselves to quantification. What do we do?
We need to think carefully about the purpose of measurement in the social sector and embrace qualitative as well as quantitative assessment.
When I was still at the Dodge Foundation, I remember interviewing the founder and executive director of an organization in Newark that offered an arts education program designed to be a deep, transformational experience.
Anecdotal evidence suggested that it was; the program seemed to be profoundly changing the lives of the young people who participated. Yet the E.D. was holding his head in his hands as he told me about a recent visit from another foundation officer, a key backer of this program: “All he said to me was ‘You have to get your numbers up. You have to get your numbers up.’”
Does the number of participants in a program trump the quality of the program when assessing whether or not it is successful? It can and often does if there is no qualitative assessment that defines a different vision of success.
For better or worse, measurement becomes a proxy for intention and values. It is hard to argue with the foundation officer’s intention to serve as many young people as possible.
But that was not this executive director’s intention, certainly not his highest aspiration. He did not have the capacity to do that. He did have the capacity, and it was his mission, to change some number of young lives through the arts. But there was no metric for what mattered most to him, nothing with which to make an alternative case.
This is a familiar story. In workshops with social sector leaders, I sometimes ask, “Do you feel you are measuring what matters?” and more often than not, I get a quick “no.” Frequently, they say they did not think they had the option to measure what matters to them, and besides, they say, it wouldn’t be a valid measure, would it? This is a critical question.
At least part of the moral of the previous story is that if you do not define and assess what matters to you, someone else will do the assessing of your work, based on what is important to them.
The other part involves an assumption and a pair of questions: What holds us back from thinking our internal, home-grown assessment would have legitimacy? Why are we so afraid of the word “soft” when applied to a measurement? It’s an adjective you want to avoid if you are a politician talking about crime or a Marine doing anything, but I don’t think it is necessarily a bad quality in a measurement.
Much that we care about — feelings of belonging, pride in citizenship, confidence in the future, a general sense of well-being — requires soft measurements. The real question is whether those measures can help us achieve what we care about.
We live in an age when measurement — and its uses in assessment and evaluation — has become a serious science. Indeed, when a measure itself has to meet standards — think of the SAT or the Richter scale — we quite properly train our attention on it. Is it accurate? Is it reliable, which is to say, does it give consistent results? Is it valid, which is to say, does it measure what it is supposed to measure?
These criteria are extremely important if a carpenter is measuring for a shelf, or a coach is weighing in wrestlers before a tournament, or a doctor is drawing blood to determine levels of uric acid.
But does a measure always have to meet strict criteria to be helpful? I think we get confused over whether it is the measure that matters or what is being measured. We get intimidated by the science of measurement, forgetting that, in the words of change expert Michael Fullan: “Statistics are a wonderful servant and an appalling master.”
We find ourselves arguing over whether a measure of levels of quality can ever be accurate or valid. But what if we were able to agree that a measure is accurate enough, or valid enough, for us to take sensible and appropriate action based on what we learn from it?
This question is important because we know any measure of social benefits will never have the consistency we seek in standardized measures, nor the precision. Faced with the realization that we will never find a common unit for the many and varied positive impacts of the social sector, we have two choices.
We can say such impacts and benefits cannot be measured, or we can measure them ourselves in the manner that social profit demands: a combination of pertinent metrics and a qualitative description of that social profit which can only be created by the people who are providing and receiving it.
That was the message of our opening workshop: Don’t assume that assessment is something others do to you, to judge the success of your work after it is over. Think of it as something you do with your colleagues and stakeholders, before the work has happened, to improve the work itself. If you can envision and describe your visions of success, and then reflect on and refine them, you can create a clear assessment process that leads to clear benefits. And it need not bother you a bit that some of the most important benefits are described in words instead of defined by numbers.
David Grant’s book, The Social Profit Handbook, will be published in March by Chelsea Green Publishing, White River Junction, Vermont. The book offers a tutorial in creating qualitative assessment rubrics for organizations that want to take assessment into their own hands. Look for an announcement of publication on the Dodge website.