Evaluation: From Proving to Improving

Important contributors to the field of evaluation, such as Michael Scriven, would rightly say that the purpose of evaluation is to determine the worth or merit of a program or solution. From my perspective, however, the ultimate purpose of any evaluation, and indeed of any measurement initiative, is to enhance decision-making that leads to improved performance. In other words, evaluations should move from merely proving to improving.

I recall a conversation I had years ago with the great Geary Rummler. I was going on and on about new approaches to designing performance measurement systems when he interrupted to remind me, “Well, Ingrid, performance measurement systems are actually performance management systems.” My immediate reaction, of course, was to quickly emphasize that I knew that. But just as I opened my mouth to get the first word out, I had an “aha” moment. I remained silent for a couple of seconds, which can seem like an eternity when you’re on the phone with someone. In those two seconds, several insights went through my head all at once. It was as if I had finally been given the right prescription glasses and could now see in detail what I had previously seen only as gross shapes. Some of my racing thoughts, which I have expanded on since, included:

Take your evaluator hat off and put on the management hat: what factors have to be managed in order to achieve your objectives? This is a different approach from asking what has to be measured, and consequently your performance management/measurement system might look different. If you only ask what has to be measured, you might end up in the purely ‘accountability’ box, using the data primarily for reporting and perhaps for reflecting back on what was accomplished. If you ask what has to be managed, you are more likely to adopt a “continuous improvement” framework, where data use is an integral mechanism for working, managing, and adjusting in the present in order to drive the results you want tomorrow. Of course, the converse is also true: if you’re not measuring, you’re probably not managing.

How do the various metrics relate to one another? How do we make sense of them within the organization? The ‘bucket’ approach, where we identify a collection of metrics for various levels or categories of results without bothering to understand how they affect one another, severely limits what we can do with the data. Instead, figure out how the metrics influence one another, so that you can interpret the data more accurately and make better decisions about how to improve results. What are the drivers and leading indicators? Which indicators are key for driving ultimate results (a.k.a. KPIs)? What are the lagging indicators, that is, those that illustrate the cumulative effect of our efforts? And which indicators mediate the two? For example, employee proficiency might be a leading indicator that drives customer satisfaction, which in turn drives a lagging indicator such as customer retention.

Follow through with actionable recommendations and specific responsibilities. I’m sorry, but I cringe every time I hear someone say that the data speak for themselves. The data do not speak for themselves; rather, those who look at the data speak to themselves about what they think the data mean, and in some cases they may even be clueless about what the data might mean. Don’t assume that people will take away a consistent message about how to interpret and use the data in their work and organizations. Remember, it’s a management system, so specify what the data seem to be saying, back it up with strong evidence, and then make actionable recommendations about what to do and, to the extent possible, who is responsible for doing it. If everyone is assumed to be responsible, then no one is.

Performance measurement and evaluation is not an end in and of itself; it is a means to an end. Roger Kaufman has always reminded us of the perils of confusing means with ends. Performance improvement is our goal, and everything we do is a means to that end. Whether you are evaluating performance, designing a dashboard, carrying out performance assessments or other front-end diagnostics, or working in areas more closely aligned with learning systems, be mindful not to fall in love with your tools. Rather, stay committed to measurably improving performance. This should be the guiding star that helps you make sure you not only do things right, but also do the right things.

So what does this mean for performance improvement professionals? When you design an evaluation, or any investigative initiative such as a needs assessment, a performance analysis, or a dashboard, the starting place must be identifying what types of decisions will have to be supported, who will make those decisions, and when. In turn, let that guide which questions should be asked and what data would allow those questions to be answered. That is the foundation of your evaluation design. Unfortunately, what I often see happening with clients is that their natural inclination is to start with a familiar data collection tool (“I want to do a survey of…”; “We want to do some focus groups with…”), which, while familiar or safe from their perspective, may not get them where they want to go. Of course, if you haven’t clarified your final destination, any road will take you there. This can lead you to spend money, time, energy, and, not least, emotional capital collecting data that will have very little return for you and the organization.

Of course, this notion of evaluation as a tool not only for proving but also for improving is not unique to the performance improvement field; in fact, it was also expressed by evaluation gurus like Egon Guba and Michael Patton. But who better to develop and use the most cutting-edge approaches to measurable performance improvement than those of us who are accountable for sustainable performance results?