Measuring The Impact Of Leadership Development: Getting Back To Basics

Written by Larry Clark for Harvard Business Publishing.

More than 60 years after Donald Kirkpatrick published his landmark dissertation on the four levels of learning evaluation, there is still great debate about our ability to measure the impact of leadership development. The debate is understandable: it’s much easier to measure things with a direct result, like reducing errors in a manufacturing plant or improving first-call resolution in a call center. Pick the right metric, assess before and after training, use a control group, minimize the impact of confounding variables … the process is fairly straightforward.
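That recipe can be sketched in a few lines of code. The numbers below are made up purely for illustration (they are not from any study mentioned here); the difference-in-differences framing is one simple way to net out improvement that would have happened without the training.

```python
# Hypothetical sketch: compare pre/post improvement for a trained group
# against a control group, using illustrative (made-up) error counts.
from statistics import mean

# Errors per team, before and after the training window
trained_before = [12, 15, 11, 14, 13]
trained_after = [8, 10, 7, 9, 9]
control_before = [13, 14, 12, 15, 12]
control_after = [12, 13, 12, 14, 11]

def improvement(before, after):
    """Average drop in errors from before to after."""
    return mean(b - a for b, a in zip(before, after))

trained_gain = improvement(trained_before, trained_after)
control_gain = improvement(control_before, control_after)

# Attribute to training only the improvement beyond the control group's
effect = trained_gain - control_gain
print(f"Trained improvement: {trained_gain:.1f} errors")
print(f"Control improvement: {control_gain:.1f} errors")
print(f"Estimated training effect: {effect:.1f} errors")
```

With these invented numbers, the trained teams improve by 4.4 errors and the control teams by 0.8, leaving an estimated effect of 3.6 errors attributable to the training.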

But leadership development feels like a different animal that shouldn’t be held to the same measurement standard. In some way, it feels like it’s “above the law.” As one learning leader once lamented to me, “They don’t ask IT to measure the ROI of email, but everybody knows we still need it.” And our 2018 State of Leadership Development report backs this up: Only 24 percent of organizations attempt some form of impact measurement. The most popular measurement tool? Satisfaction surveys.

ROI of email aside, we absolutely can – and should – measure the impact of leadership development. We just need to go back to the essentials of learning evaluation and perhaps think a bit differently about what we’re trying to accomplish with our efforts.

So, instead of unpacking Kirkpatrick and Phillips here, let’s just revisit five fundamental principles of learning evaluation through the lens of leadership development.

Start with Strategy

In a previous post, I talked about how all leadership development can line up under two strategies: driving performance or preparing for the future. If your stakeholders understand which strategy the solution supports, they’ll understand the type of metrics you choose. Performance-based leadership development should be measured against performance results, and pipeline-based leadership development against talent metrics.

Agree on Impact Metrics Before You Build

Too often in leadership development, we build the learning experience, roll it out, and then reactively attempt to show value through measurement. I’ve never seen this reactive approach succeed. Measurement needs to be part of the plan from the start, not an afterthought. I once conducted an impact study on a leadership development program I had inherited from a predecessor – a program that was rolled out to literally thousands of leaders. While the program was loved by the business, when the results came in, we could find no correlation between the training and the business performance of the learners’ teams. In a debrief, a member of the design team summed up the problem perfectly: “If we had known we were going to track these metrics, we would have built different training.” Which takes us to the next point…

Solve a Specific Problem

When we build functional training, we’re usually clear about what organizational issue we’re solving for: Are people using the new software? Are we improving close rates of our sales force? We should get that specific in leadership development with our stakeholders. Several years ago, one of our clients used coaching training for field supervisors and managers to measurably reduce repeat service calls to clients. The study showed that the supervisory coaching training did reduce repeat calls, and that training both the supervisors and their managers dropped the number even further. Don’t be afraid to go after something specific. At Harvard Business Publishing, our business impact projects ask participants to apply their learning to solving a real-world critical business challenge. The results demonstrate the tangible impact the learning has on the organization.

Shoot for “Impact” Instead of “Proof”

One of the fears I’ve heard over the years from learning leaders is that there are too many variables to definitively “prove” that a leadership development solution was the root cause of any measurable improvement. In practice, though, we’re usually much more concerned about “proof” than our stakeholders. A well-executed study that shows a strong correlation between the learning and the result is usually much more than stakeholders expect. Remember, it’s rare for a stakeholder to get an impact study on other initiatives that touch their work. And, if they were involved up front in picking the metric and designing the solution, they want the solution to work.

Choose Your Spots

Level 3 or 4 studies (behavior and results, in Kirkpatrick’s terms) take resources and time, especially if your team is still learning how to do evaluation. The good news is that learning teams that show impact on a few important projects often get a “halo effect” that builds confidence with stakeholders about the integrity of their other solutions, even where they don’t implement impact studies.

So if learning evaluation is not a strong muscle for your team today, start with a few key projects this year and see what your team and your stakeholders learn along the way. What you’ll probably find is that your stakeholders will see your solutions differently, and the solutions themselves will be laser-focused on business results.

How will you implement these measurement ideas?