Training Evaluation and ROI: Measuring What Actually Matters
You can tell a training program isn't worth the paper it's written on when its only real output is a set of polished PowerPoint decks explaining, after the fact, how no one could possibly have seen the failure coming. You can also tell its value by the headaches spreading across an organisation, by the grudging grin on a manager's face once productivity metrics finally move, or by the silence in a board meeting when someone raises their hand and asks, "So what did we learn from throwing $200k into that alley?"
It shouldn't be like that. Training evaluation and ROI shouldn't feel like a box-ticking exercise or an annual spreadsheet festival. It should be a deliberate, strategic habit. Why bother? Because money isn't the only currency here; time, attention and future potential also matter. And because, really, without a clear evaluation framework you're just throwing good will at bad problems.
A case for measuring training, loudly
First things first: measurement counts. Not because execs love numbers (myth), but because properly judged training leads to better decisions. You defund what doesn't work and double down on what does. You also get a credible story to tell, a board story, an HR story, a team leader's story, about why development matters and what it really delivers.
Frankly, not every workshop deserves ongoing funding. Some are genius; some just tick a box. The only way to tell them apart is evidence. And sometimes the evidence is inconvenient. Good. We need that.
A few hot takes that may make people mad:
- Brief, focused workshops generally beat big, bloated online courses. Bite-sized behavioural practice works; large modules tend to get abandoned.
- Senior leaders must go through the same training as their frontline teams. If your execs don't have to attend, you're signalling: this isn't business critical.
The reality is messy. Training has the potential to boost morale, speed up onboarding, decrease errors and drive quantifiable performance, but it can also be an expensive placebo if poorly executed or misaligned.
Going beyond gut feel
Too many companies rely on a single evaluation: "it just felt useful." Reaction surveys are a start, and Kirkpatrick's Level 1 (Reaction) still has value, but they're also the easiest measure to game and the least predictive of change. You need multiple lenses:
- Quantitative evidence: pre/post tests, productivity metrics, error rates, time to competency.
- Qualitative evidence: manager observations, participant stories, case studies.
- Organisational data: turnover, customer satisfaction levels, defect rates.
And a good, hard number to keep it real: 94% of employees would stay at an organisation longer if it invested in their career development (LinkedIn Workplace Learning Report, 2018). Some may say that stat is dated, fair enough, but the underlying truth holds: invest in learning and it changes behaviour and deepens loyalty.
Draw the line to business outcomes
The one non-negotiable in training is that learning objectives must map to business needs. Too often, L&D chases engagement scores and completion rates while the business cares about sales conversion rates, safety incidents or cycle time.
To start, ask yourself: what business metric will be different if people do something different after training? Pick one or two measurable KPIs and build your evaluation around them. Want reduced onboarding time? Measure time to proficiency. Want fewer customer complaints? Record complaint rates and tie them to trained cohorts.
KPI selection isn't sexy, but it's where ROI begins. Keep the indicators narrow; quality beats quantity. Chase too many KPIs and you'll be buried in noise.
Evaluation models that actually work
Kirkpatrick's Four Levels is the cornerstone: Reaction, Learning, Behaviour, Results. It's elegant and familiar. But it's not enough by itself when you want a financial rationale. This is where complementary models come in:
CIRO (Context, Input, Reaction, Outcome) makes you look at the environment and resources. The Phillips ROI Methodology extends Kirkpatrick to quantify effectiveness in dollars. Use them in tandem: Kirkpatrick to understand the change, Phillips to price it.
A practical approach:
- Level 1 (Reaction): rapid pulse check survey soon after training.
- Level 2 (Learning): applied assessments, role-play performance scored against a rubric.
- Level 3 (Behaviour): manager observations over 30 to 90 days; where possible, use a control group.
- Level 4 (Results): tie behaviours to business KPIs; trend the data for a minimum of three months.
- ROI (Phillips): convert improved results into dollars and compare to the cost of the program overall.
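To make the schedule concrete, here's a minimal sketch of how that plan might be written down as data. The level names, instruments and windows simply restate the list above; nothing here is a prescribed schema, it's just one illustrative way to keep the plan inspectable:

```python
# A minimal, illustrative evaluation plan: each entry pairs a level of the
# Kirkpatrick/Phillips stack with its instrument and measurement window.
EVALUATION_PLAN = [
    {"level": "1 Reaction",     "instrument": "pulse-check survey",      "when": "within 48 hours"},
    {"level": "2 Learning",     "instrument": "rubric-scored role play", "when": "end of session"},
    {"level": "3 Behaviour",    "instrument": "manager observation",     "when": "30-90 days"},
    {"level": "4 Results",      "instrument": "business KPI trend",      "when": "3+ months"},
    {"level": "ROI (Phillips)", "instrument": "net benefit vs cost",     "when": "6-12 months"},
]

for step in EVALUATION_PLAN:
    print(f"{step['level']:<15} {step['instrument']:<25} {step['when']}")
```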
Separating the effects of training
This is where we lose a lot of people. Organisations are complex. Sales cycles shift, markets twist and initiatives interfere with each other. Training seldom happens in a vacuum. Two pragmatic manoeuvres help:
- Control groups. Pilot the training in one region or team while a similar cohort in another region sits it out. Compare outcomes.
- Trend analysis. Plot the tracked metrics before and after training over time. Identify inflection points that correspond to the training intervention.
Neither is perfect. Human variation is real. Together, though, they make a stronger causal argument than simple before-and-after comparisons do.
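To illustrate the control-group move, here's a minimal difference-in-differences style sketch with made-up numbers: the trained cohort's shift is compared against the control cohort's shift over the same period, so background drift is netted out.

```python
# Illustrative only: compare the metric shift of a trained (pilot) cohort
# against an untrained (control) cohort over the same window, so that
# background drift is netted out (a difference-in-differences estimate).
def shift(before: float, after: float) -> float:
    return after - before

# Hypothetical error rates per 1,000 transactions, before and after training.
trained_shift = shift(before=42.0, after=29.0)   # -13.0
control_shift = shift(before=41.0, after=38.0)   # -3.0 (background drift)

training_effect = trained_shift - control_shift  # -10.0 attributable to training
print(f"Estimated training effect: {training_effect:+.1f} errors per 1,000 transactions")
```

The point of the subtraction is that whatever moved the control group (seasonality, a new system, a market shift) presumably moved the trained group too, so only the residual is credited to training.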
Translating impact into dollars
What executives really want to know: did the training pay for itself? To answer that, you need to place a monetary value on outcomes. Here are the typical chains:
- Fewer errors leads to less rework leads to cost savings.
- Faster ramp-up leads to billable hours sooner leads to increased revenue.
- Lower attrition leads to less time and money spent on recruitment and retraining.
You don't need magical precision. Make reasonable assumptions and run a sensitivity analysis. Present best-case, worst-case and most-likely scenarios. Decision makers want to see the range, not a single point estimate that carries misleading precision.
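Here's a minimal sketch of that scenario-range habit. The defect cost, baseline volume and reduction rates are all invented for illustration; the technique is simply running the same benefit formula under three sets of assumptions.

```python
# Illustrative scenario analysis: the same benefit formula under three sets
# of assumptions, so the output is a range rather than a point estimate.
COST_PER_DEFECT = 150.0     # assumed cost of one defect, in dollars
BASELINE_DEFECTS = 2_000    # assumed defects per year before training

scenarios = {               # assumed reduction in defect rate per scenario
    "worst case":  0.04,
    "most likely": 0.10,
    "best case":   0.18,
}

for name, reduction in scenarios.items():
    annual_benefit = BASELINE_DEFECTS * reduction * COST_PER_DEFECT
    print(f"{name:<11}: {reduction:.0%} fewer defects -> ${annual_benefit:,.0f}/year")
```

On those assumptions the benefit spans $12,000 to $54,000 a year, and that spread is exactly what a board should see.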
How do you calculate ROI? Practical steps:
1. Find the measurable result (e.g. a 10% decrease in defects).
2. Calculate the cost of that bad outcome (e.g. dollars per defect).
3. Multiply the anticipated shift by that cost to determine the dollar benefit.
4. Subtract total training costs (design, delivery, participant time) from that dollar benefit to get the net benefit.
5. ROI = (Net Benefit / Training Cost) x 100%.
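Putting the five steps together, here's a worked sketch with assumed figures: 2,000 baseline defects, a 10% reduction, $150 per defect and a $20k program cost. The formula is the one above; the numbers are purely illustrative.

```python
# Worked example of the five steps above (all figures are assumptions).
baseline_defects = 2_000      # defects per year before training
expected_shift   = 0.10       # step 1: measurable result, 10% fewer defects
cost_per_defect  = 150.0      # step 2: cost of the bad outcome, in dollars
training_cost    = 20_000.0   # design + delivery + participant time

dollar_benefit = baseline_defects * expected_shift * cost_per_defect  # step 3: $30,000
net_benefit    = dollar_benefit - training_cost                       # step 4: $10,000

roi = (net_benefit / training_cost) * 100                             # step 5
print(f"Benefit ${dollar_benefit:,.0f}, net ${net_benefit:,.0f}, ROI {roi:.0f}%")
# -> Benefit $30,000, net $10,000, ROI 50%
```

On those assumptions the program returns 50%: every dollar spent comes back, plus fifty cents.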
If you can demonstrate ROI within a 6-to-12-month time frame for a soft skills program, that's a genuinely strong result.
What to measure and when
What you collect depends on the goals, but a few staples:
- Time to proficiency (onboarding)
- Error/defect rates (quality)
- Sales conversion or revenue per rep (commercial teams)
- Customer satisfaction (NPS or CSAT)
- Safety incidents (industrial contexts)
Measurement timing:
- Baseline: pre-training.
- Follow-ups: post-training at 30, 90 and 180 days.
Combine methods: hard metrics with manager ratings and participant self-reports. The latter explain the "why" behind the numbers.
Challenges you'll face
Let's be real: evaluation isn't free. Senior stakeholders may push back on headcount for data collection. Systems can be fragmented. Managers may lack the motivation to follow through on observation. And some benefits are hard to quantify: enhanced team cohesion, better decision making, cultural resilience. These matter, massively, but they are harder to monetise.
A fair assessment acknowledges these intangible benefits, even if you can't directly convert them to dollars and cents.
Pragmatic compromises:
- Focus on high impact programmes for full ROI analysis
- Employ lighter touch evaluation for low cost interventions
- Build evaluation into program design from the outset (retrofitting is costly and less credible)
Some tough truths
- No model is a perfect measure. Expect ambiguity. Expect pushback. Get comfortable living with uncertainty while still pursuing rigour.
- Short-term business cycles can crowd out long-run development. Don't let short-term pressures kill worthwhile programs. Present the case with both qualitative and quantitative evidence.
- Senior leadership must be visible. If execs won't sit the same training or won't enforce behaviour change, expect very little ROI.
How we tackle it, and what works
We focus on practice-based workshops, manager engagement and follow-up coaching. Early on we also stress stakeholder mapping: who has to back this, and who will gauge success? Our 100% fulfilment rate promise matters, but so does designing evaluation that supports clients' KPIs.
A few things that always work:
- Co-design: make sure evaluation is owned by the business, not bolted on as an afterthought.
- Manager enablement: ensure managers are equipped to coach and observe in the weeks after training.
- Micro-learning follow-ups: short refreshers to embed the behaviour change soon after it's learnt.
- Evidence packs: dashboards and narratives senior leaders can use in governance meetings when performance comes up.
And two more just for grins:
- I favour live facilitated sessions over purely asynchronous programs when the goal is to change behaviour. Connection counts.
- Certificates are all very well, but badges and real-world evaluations don't lie.
Keeping it local: exploring the Australian landscape
Organisations working in Australia face some distinctive pressures: dispersed teams across a vast country, heavy compliance requirements in sectors like mining and healthcare, and an ever-growing need for digital skills. That's why evaluation frameworks need to be pragmatic and localised.
What works for a compact European HQ may well need adapting for a Brisbane field team or a Melbourne contact centre. Also: use local benchmarks. Industry associations and HR forums across Sydney, Melbourne and Brisbane can share comparator data to strengthen your case.
Final thought, and a reminder
Evaluation and ROI are not one-time activities. They're a discipline. A culture of evidence will shift how your organisation invests in people over time. You'll move away from guessing and start making decisions on evidence.
So: design interventions with measurable outcomes, choose KPIs wisely, gather mixed-methods data to evaluate progress, and have the courage to admit when training isn't working. Do that, and training stops being a cost centre and becomes a lever you can pull for growth.
We've seen it in practice: when L&D is strategic, business results follow. When it isn't, well, the training calendar fills up and nothing changes.
Sources & Notes
- LinkedIn Learning. "Workplace Learning Report." 2018. Stat cited: 94% of employees would stay at a company longer if it invested in their career development.
- Australian HR Institute. "Learning & Development Survey." 2021. (Referred to as a general reference for L&D trends in Australia.)