Measurement is always a hot topic for learning and development professionals—and it’s a project senior training and development specialist Alyssa Willet tackled for her team at University of Colorado Boulder. The goal of the Department of Human Resources is to create positive behavior change among university employees—but how do you measure that?
“We redesigned the [post-course] survey and used a different model to assess learner confidence and actually talk about skills with someone else,” she said. “Are they actually using it, and if not, what barriers are in the way of doing that? If you’re scared to talk with a supervisor or manager, then there are other issues we need to address.”
An experienced facilitator, Willet spent four years teaching Crucial Conversations courses (Mastering Dialogue and the Accountability add-on) as well as Getting Things Done in open enrollment courses for the university’s faculty and staff. She and her team have trained more than 1,500 learners so far.
“In higher education, there’s a tendency to go toward silence to maintain the relationship,” Willet said. “Our goal in training was to show people that they can maintain the relationship and have these respectful conversations to build trust.”
To assess results in a way that would provide useful data for improving the learning experience, Willet and her colleague Perri Longley, a senior training coordinator, spent six months researching and designing two updated surveys: one administered immediately upon course completion and one sent six months later.
“Most of that time was deciding how we wanted to design the questions, what our philosophy and goals would be for assessment,” Willet said.
In higher education, where there’s no “product to get out the door or patient on the table,” measuring ROI was challenging. So the team focused instead on actual behavior change, drawing from the Kirkpatrick Model, specifically level 3 (behavior) and level 4 (results).
Willet and Longley leveraged online webinars and training industry magazines to research measurement in the learning and development space. This led them to a helpful resource from L&D consulting firm Phillips Associates: a survey example titled “Fifteen Research-Based Training Transfer Factors.” It explains how to evaluate learning program design factors, learner mindset factors, and learner work environment factors. This became their framework for the survey’s initial design.
At the same time, Willet and Longley turned to a unique resource at their fingertips: the university’s Office of Data Analytics Assessment group. This team “provides leadership and centralized support… to facilitate best practices that lead to data-informed decision making to promote student success at CU Boulder,” according to their website.
Working with the ODA group and the framework from Phillips Associates, Willet and Longley drafted an initial version of the survey that was then refined with a larger group, including assistant director Lauren Harris, senior training and development specialist Lisa Nelson, and senior diversity and inclusion training and development specialist Clara Smith.
The surveys launched in June 2022 and are administered via Qualtrics, a survey platform.
Where the initial survey is longer and focused on confidence coming out of the course, the second survey looks at results—if the learner held a Crucial Conversation since taking the course, and if so, did they get the desired result? If not, what impacted their ability to get the results they hoped for? If they didn’t hold a Crucial Conversation, what factors prevented that?
“In a lot of ways, the questions in the survey are trying to draw out what in the Six Sources of Influence is holding them back from having a Crucial Conversation,” Willet said. “Is it social with other people? Is it ability or motivation? Are they the only person in their organization who is wanting to have the conversation? Is it structural?”
Since ability gaps can hold people back from having Crucial Conversations, Willet and her team offer post-course support through what they call “crucial consultations.”
In these one-on-one sessions, course graduates can discuss specific questions, get coaching, or walk through the Conversation Planner to map out a Crucial Conversation. Willet said the consultations have been better attended than lunch-and-learn sessions, and they’ve provided her and her team with a closer look at what learning gaps exist.
“The follow-up survey is an extension of those sessions, seeking similar feedback on a broader scale,” she said.
When it comes to planning a survey, Willet offers this advice.
“Don’t ask a question if you’re not going to do anything with the data,” she said. “It’s a great way to cut out extra questions and fluff. Get creative and think about whether there are other ways to assess capabilities.”
The CU Boulder team is in the midst of administering the first round of follow-up assessments. One thing is clear so far: measuring learner confidence focuses attention on skill development, and the team believes it will lead to stronger data-driven decisions.