Data collection is an integral part of any learning measurement program, but it can also be the most problematic stage for teams implementing their first measurement initiatives.
Here are a few comments from respondents to our 2019 Measuring the Business Impact of Learning Survey on the issues they’re facing with data collection:
- “We’re struggling to turn data into useful information.”
- “Disparate data makes it hard for us to tie things together.”
- “Data mess: we have too many sources and none of them are joined up.”
Before we delve into solutions, it’s worth exploring the types of data many L&D teams will seek out to start gaining insights and demonstrating impact. This all relates back to building your chain of evidence.
Learning Data That Builds on Kirkpatrick’s Model
At LEO Learning, our chain of evidence approach builds on Kirkpatrick’s model of learning evaluation. Essentially, to build a picture of a learning program’s impact, you need to think about how to gather evidence in each of these four areas:
- Engagement
- Knowledge, skills, and attitude
- Performance and behaviors
- Business change
And there’s a wide range of data you can collect for each of these areas.
Your LMS may be a key data source for engagement data, i.e. whether learners have accessed and completed the learning.
For data on knowledge, skills, and attitudes, you may look at assessment scores stored in your LMS or gather feedback from managers. You may also be collecting more granular detail on learner activity using xAPI statements.
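To make this concrete, here's a minimal sketch of what a single xAPI statement might look like when assembled in Python. The learner, verb, and activity identifiers below are hypothetical placeholders rather than values from any particular LMS.

```python
import json
from datetime import datetime, timezone

# A minimal, hypothetical xAPI statement: "Alex completed Compliance Module 1".
# Field names follow the xAPI specification; the actor, verb, and activity
# values are placeholders, not identifiers from any real system.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Alex Example",
        "mbox": "mailto:alex.example@company.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://learning.example.com/activities/compliance-module-1",
        "definition": {"name": {"en-US": "Compliance Module 1"}},
    },
    "result": {"score": {"scaled": 0.85}, "completion": True, "success": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

print(json.dumps(statement, indent=2))
```

Because every statement follows the same actor–verb–object shape, activity data from different courses and platforms can be compared far more easily than ad hoc LMS reports.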
When it comes to performance and behavior change, it’s likely to get more complicated.
A Changing Picture of Employee Behavior
Here you need to gather data on whether the learning has prompted a change in employee performance or behavior. Building this picture can involve a range of different data sources. For example:
- Gathering data from the tools and platforms staff use in their daily roles, e.g. Salesforce.
- On-the-job observation from managers or colleagues.
- Data on key performance indicators relevant to the training, e.g. customer service feedback scores.
Finally, to make the link to business impact, you’ll need to gain access to operational and commercial business data on critical performance measures.
Understandably, collecting and evaluating this wide range of data sources can prove difficult for many L&D teams. You're also likely to encounter the following types of data quality issues:
- Junk or incomplete data: duplicate users, inactive users, or conflicting data on the same users.
- Conflicting methods of data reporting from different systems, which make it hard to draw correlations.
To solve this problem, you need to synchronize and combine the data you want to use in a standardized way. This means implementing processes and standards that ensure compatibility, completeness, and reliability throughout your data collection process.
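As an illustration of what that clean-up can involve, here's a hedged Python sketch that merges user records from two hypothetical exports, deduplicating on a normalized email address and setting aside incomplete records. The field names and sources are assumptions, not a prescription.

```python
# Hypothetical clean-up step: merge user records from two sources (e.g. an LMS
# export and an HR system export), deduplicating on a normalized email address
# and flagging records that are missing required fields.
def normalize_email(email):
    """Lowercase and strip whitespace so the same user matches across systems."""
    return email.strip().lower() if email else None

def merge_user_records(*sources):
    merged = {}
    incomplete = []
    for source in sources:
        for record in source:
            key = normalize_email(record.get("email"))
            if key is None or not record.get("name"):
                incomplete.append(record)  # junk/incomplete data, handle separately
                continue
            merged.setdefault(key, {}).update(record)  # later sources enrich earlier ones
    return list(merged.values()), incomplete

lms_users = [{"email": "Alex.Example@Company.com ", "name": "Alex Example", "completions": 3}]
hr_users = [{"email": "alex.example@company.com", "name": "Alex Example", "department": "Sales"}]

users, junk = merge_user_records(lms_users, hr_users)
print(users)  # one combined record for Alex, rather than two conflicting ones
print(junk)   # records that need investigation before they enter your reporting
```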
1) Breaking Down the Barrier: A Good Start
Dealing with a wide range of different and potentially conflicting data sources can be resolved with a technology-led approach, which we'll outline below. However, this may require more investment than your organization is currently prepared to make.
So, instead, one way to sidestep the ‘data tangle’, especially if you're just taking your first steps into measurement, is to take a ‘narrow but deep’ approach to creating your chain of evidence.
By this we mean scaling down your data sources to what you know is reliable. Many measurement programs start out using data from one or two sources, so don't be afraid to take a smaller-scale approach. Ultimately, if this results in more reliable insights, it's a far more valid approach, at least to begin with.
If the data sources you want to use are too messy to deliver value, you can also consider alternative data-gathering methods that give you control over quality, such as surveys.
2) Breaking Down the Barrier: Even Better
We'd always recommend that organizations move toward a ‘big data’ measurement approach that leverages multiple data sources. Solving the data quality issues that inevitably occur at this larger scale will require the following tasks:
- Establish mechanisms and guidelines to bring in clean and reliable data. Work with the management information teams within your business to vet how your data is collected in the first place, reconfiguring systems to capture more reliable data. This could be something as simple as using dropdown menus rather than open entry fields. Remember, these are the tasks that data analysts and platform administrators are employed to undertake.
- Implement xAPI statements that standardize the information captured from all the different learning activities involved in your program(s). This effort is complemented by using a Learning Record Store (LRS) to collect this data in a centralized location. An LRS also enables the standardized collection of business data using connectors and plug-ins (see the sketch below).
This approach to standardizing collection methods will remove many of the reliability and quality issues that come with a big data approach, enabling you to get better data to begin with.
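To illustrate, here's a minimal sketch of sending a statement to an LRS over the standard xAPI statements resource. The endpoint URL and credentials are placeholders; most LRS products also provide SDKs and connectors that handle this for you.

```python
import requests  # third-party HTTP library

# Hypothetical LRS endpoint and credentials; real values come from your LRS provider.
LRS_ENDPOINT = "https://lrs.example.com/xapi/statements"
LRS_AUTH = ("lrs_key", "lrs_secret")

def send_statement(statement):
    """POST a single xAPI statement to the LRS statements resource."""
    response = requests.post(
        LRS_ENDPOINT,
        json=statement,
        auth=LRS_AUTH,
        headers={"X-Experience-API-Version": "1.0.3"},  # version header required by the xAPI spec
    )
    response.raise_for_status()
    return response.json()  # the LRS returns the ID(s) of the stored statement(s)
```

With statements from all your learning activities and connected business systems flowing into one store, reporting and correlation happen against a single, consistently structured dataset.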
3) Breaking Down the Barrier: The Best Option
The ideal data collection position is a fully connected and integrated data ecosystem linking all of your learning tools, platforms, and sources of business and performance data. While the ‘Better’ approach above may still focus on a limited set of data sources, a fully integrated ecosystem connects the platforms and data sources across your enterprise, giving you access to data from every function in the business and from a wide range of learning activities.
The struggle to source quality learning data is just one barrier to measurement success. We’ve identified others, including:
- Stakeholder buy-in
- A lack of skills or capabilities
- Understanding what measurement tools are needed
- How to effectively use data insights once they’ve been collected