A Transfer of Learning Horror Story
I was chatting with a learning professional the other day, having recently collaborated with them on a successful transfer of learning pilot for a leadership program. They were exploring other areas where our approach could add value.
They shared details of a program for people new to management, which already had coaching follow-up designed in. Sounds good at a surface level, right?
I was thinking, “If that’s the case, why would they need additional learning transfer support?” … “Surely transfer must be under control if there’s professional follow-up?”
As I learnt more, some red flags started to pop up.
Firstly – they had a single one-hour face-to-face follow-up coaching session, delivered by the facilitator. Why is this a red flag? Well, in my best-practice learning transfer rule book you’re always going to get more value from working by phone than face-to-face. You can get people going deeper more quickly because it’s easier to be vulnerable over the phone, and it’s often more efficient and better value. I’d always suggest that 2 x 30-minute phone sessions will get far better results than a single one-hour face-to-face session, and for organisations today every dollar of the budget counts, so you may as well take the cost saving.
Secondly, the facilitator will be positioned in the learners’ minds as the expert (even though most great facilitators will have told them this isn’t the case!). Subconsciously, learners will look to the facilitator for answers as the expert in the course content, and will perhaps become lazy about noticing what is or isn’t working for them. Ownership of the change then stays with the facilitator.
Finally, a single session is never going to give you a good return. It’s the work in between sessions that creates the outcome. When actions are generated in support of a specific action plan, and you THEN follow up on those actions – that is when the magic starts to happen.
Those were my initial red flags.
The full horror story unfolded when I learnt that not everyone from the program even takes up the coaching session with the facilitator! And no metrics were collected to confirm who completed the follow-up, what they implemented, or the impact that had on the organisation.
As you can imagine I was hyperventilating at this situation.
Learning transfer being invested in – and not actually happening!!!
Of course I immediately wanted to help! It reminded me of a conversation with another learning professional last year who said they had tried to do learning transfer coaching follow up – and it hadn’t worked.
“Hadn’t worked.”
A strange turn of phrase.
I dug deeper to work out how they had measured whether it was “working” or not. The one and only metric they had was that fewer than 30% of people had engaged in the transfer of learning process.
No wonder it “didn’t work” for them. With our Turning Learning into Action™ learning transfer follow-up here at Lever, our minimum KPI is 80% engagement, and we frequently get above 90%.
How can other companies be doing something similar and only get 30% engagement? The mind boggles. Moreover, how can the only indication of transfer success be whether or not people engaged in the process?!
Evaluation of learning transfer is a thorny issue. It’s easy to confuse evaluation with learning transfer! They are of course inextricably linked, and creating transfer often makes evaluation much easier, because there is clearly identifiable change to evaluate. However, this is only possible if we are actually measuring against the right objectives, and not simply ticking the box by counting how many people showed up.
Jack and Patti Phillips are thought leaders in the measurement and evaluation area, and much of my thinking has been shaped by their work at the ROI Institute. For evaluation to be really effective, we must start with the end in mind. We need to decide what the course objectives are at the very start, before the program has even been designed, and then evaluate the outcomes in relation to those objectives. If the objective is to finish on time and collect some ‘happy sheets’ about what participants intend to do, then it is very easy to fudge success. Reaction evaluation is not enough.
The absolute minimum objective and evaluation that needs to happen after learning is application back in the workplace – behavioural change.
Using an effective transfer of learning tool, you can facilitate this application of learning to the workplace, using supportive techniques that hold participants accountable for their own change.
Learning transfer generates the return on investment, and by following these three simple steps you can measure and showcase your brilliant results:
1) Once a participant has completed their learning transfer journey, collect self-rated progress review data.
2) Collate this data into an impact dashboard.
3) Share the dashboard wins with your client, CEO or managers.
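To make the three steps concrete, here is a minimal sketch of what collating self-rated progress reviews into a simple dashboard might look like. The field names (`completed`, `self_rated_progress`) and the data are hypothetical illustrations, not Lever’s actual tooling; the 80% pass mark reflects the minimum engagement KPI mentioned above.

```python
def build_impact_dashboard(reviews, cohort_size):
    """Collate self-rated progress reviews into simple dashboard metrics.

    `reviews` is a list of dicts, one per participant, with hypothetical
    fields: `completed` (bool) and `self_rated_progress` (0-10 scale).
    """
    completed = [r for r in reviews if r.get("completed")]
    engagement = len(completed) / cohort_size * 100
    avg_progress = (
        sum(r["self_rated_progress"] for r in completed) / len(completed)
        if completed
        else 0.0
    )
    return {
        "engagement_pct": round(engagement, 1),
        "avg_self_rated_progress": round(avg_progress, 2),
        # 80% is the minimum engagement KPI cited earlier in the article
        "meets_80pct_kpi": engagement >= 80.0,
    }


# Made-up data for a cohort of 10 participants, 8 of whom completed
reviews = [
    {"completed": True, "self_rated_progress": 8},
    {"completed": True, "self_rated_progress": 7},
    {"completed": True, "self_rated_progress": 9},
    {"completed": True, "self_rated_progress": 6},
    {"completed": True, "self_rated_progress": 8},
    {"completed": True, "self_rated_progress": 7},
    {"completed": True, "self_rated_progress": 9},
    {"completed": True, "self_rated_progress": 8},
    {"completed": False},
    {"completed": False},
]

dashboard = build_impact_dashboard(reviews, cohort_size=10)
print(dashboard)
```

Even a summary this simple answers the questions the horror-story program could not: who completed the follow-up, and whether engagement met the KPI.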
Using the right questions on your progress review forms, you can collect data that enables stakeholders to assess reaction and learning evaluation, as well as application and first-stage impact evaluation.
Some effective questions that we use with clients can be found in our article An Easy Method to Showcase Training Results.
To go even further and learn how to measure full ROI, you might want to look into an ROI Certification program. I highly recommend the work of Jack and Patti Phillips from the ROI Institute – they are leaders in the field of evaluation, having written 50 books on the subject. Patti is visiting Auckland, New Zealand in May with their groundbreaking ROI Certification workshop – learn more and register HERE.
If you want to avoid a transfer of learning horror story of your own, please speak to us. We specialise in this area – it’s all we do, and it’s our life’s work. We create analytics at the end of every transfer cohort and will give you an honest appraisal of whether our approach will work for you or not.
We look forward to speaking with you!