
Good Enough Is Good Enough for Learning Analytics

One of the most influential figures in the history of science was the philosopher Francis Bacon. Bacon was a fascinating man who held some of the highest offices in government, but he is best remembered as a philosopher of science and the author of foundational works on the subject.

His books described a new approach to scientific investigation (known as the Baconian method) which, along with René Descartes’ work, represented some of the first widely circulated thinking on the subject since Aristotle in the fourth century BC.

This thinking formed the technique we know today as the scientific method: the process for conducting scientific experiments. Today, an updated form of it is used for everything from laboratory experiments to testing new marketing materials.

Experimentation and L&D

So how does this relate to L&D (Learning and Development)? Any time we set out to measure the success of learning, we are essentially performing an experiment. To perform an experiment we need a hypothesis that the experiment will either verify or disprove; this is one of the core tenets of the scientific method.

You may not think of yourself as formulating a hypothesis when you create a training programme, but it’s happening nonetheless – your ‘hypothesis’ is almost certainly that your training programme will successfully change performance and capabilities as you have designed it to. You might ‘hypothesise’ that your programme will improve performance, expand capability, increase sales, or prevent some undesirable outcome such as a health and safety breach.

This is true for the traditional (for example, the Kirkpatrick or Phillips) style models of learning evaluation, where we:

  • make a hypothesis, for example that learning will improve performance
  • design an ‘experiment’, for instance a pilot training programme, using control and experimental groups (that is, a group of people who take the training and a similar group who don’t, whose performance can be meaningfully compared)
  • measure the change and find out whether the hypothesis held up (a minimal sketch of this step follows below)
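To make the third step concrete, here is a minimal sketch of what ‘measuring the change’ could look like. The scores are hypothetical, and the choice of a two-sample t-test (via SciPy) is just one common way to compare groups; your own data and statistical test may well differ:

```python
from scipy import stats

# Hypothetical post-training performance scores (e.g. assessment results)
# for the group that took the pilot programme and a comparable control group.
pilot = [78, 85, 81, 90, 74, 88, 83, 79]
control = [72, 75, 70, 80, 68, 77, 74, 71]

# A two-sample t-test asks: is the difference between the group means
# large enough that chance alone is an unlikely explanation?
t_stat, p_value = stats.ttest_ind(pilot, control)

print(f"pilot mean:   {sum(pilot) / len(pilot):.1f}")
print(f"control mean: {sum(control) / len(control):.1f}")
print(f"p-value:      {p_value:.3f}")  # a small p-value supports the hypothesis
```

A low p-value (conventionally below 0.05) suggests the difference is unlikely to be down to chance alone – though, as the next section shows, that conclusion still depends on the experiment being well designed.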

It’s also the case for the ‘big data’ style of evaluation, where we measure everything and then try to tease out trends from the data.

RECOMMENDED READING | 'Measuring the Business Impact of Learning: The Definitive Guide'

The challenges of using the experimental method

The problem is that both approaches tend to fall short of the strict requirements of the scientific method, so many academics would argue that any hypothesis ‘proved’ by these experiments likely won’t meet the scientific definition of ‘proved’. There are three main ways in which they can fall short:

1) Uncontrolled variables. Good experiments try to eliminate unexpected ‘variables’. These are things other than what we are actively doing (our training programme) that might impact the outcome. As an example, learners might opt to undertake independent study, which could make your training programme appear more effective than it really is.

2) Under-determination. When drawing conclusions from an experiment you need enough relevant data to support them. The textbook example of under-determination: if you know that I spent £10 on fruit, and that apples cost £1 and oranges £2, you can’t work out how many apples or oranges I bought. The only conclusion the information supports is that I can’t have bought six oranges.
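You can see the problem by enumerating every combination consistent with the evidence – a quick sketch in Python:

```python
# Every way to spend exactly £10 on apples (£1 each) and oranges (£2 each).
solutions = [
    (apples, oranges)
    for oranges in range(11)
    for apples in range(11)
    if apples * 1 + oranges * 2 == 10
]

for apples, oranges in solutions:
    print(f"{apples} apples and {oranges} oranges")

# Six different baskets fit the same evidence equally well, so the data
# under-determines the answer. Six oranges never appears: that alone
# would cost £12, more than the £10 spent.
print(len(solutions), "baskets are consistent with the evidence")
```

Six different purchases explain the data equally well; the only combination the evidence rules out is one that exceeds the budget.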

3) Confirmation bias. Anyone running an experiment will have a bias that can affect the outcome and the data collected. The psychologist Peter Cathcart Wason ran some groundbreaking experiments in the 1960s which showed that people tend to favour information that confirms any pre-existing hypotheses they have – to the point of discounting perfectly good information that their experiments have generated.

So am I saying that given these concerns, you shouldn’t bother trying to do any kind of learning analytics? Am I saying that any hypothesis you generate could be considered bogus? Far from it.

The counterargument

This post is a challenge to anyone who thinks that trying to measure learning impact is too difficult, too complicated, too costly or all of the above. At the recent Learning Technologies conference and its fringe events, I had many conversations with people from academia and business who viewed measuring learning impact as a waste of time and money.

My counterargument is simple yet powerful: look at the big data and analytics around you every day – Google’s predictive search, Amazon’s recommendations, Netflix’s seemingly endless ‘watch next’ suggestions, even your smartphone suggesting the app you might want to use next. None of these strictly adheres to the scientific method. Nonetheless, they make our experience of those products better and faster, and they often genuinely surprise us when they work well.

Getting to good enough

My claim is that our current learning analytics methods are good enough for our needs. Sir Isaac Newton developed his three laws of motion and theory of gravity using the Baconian method, and they still stand up to the rigour of modern methods. I want to leave you with three thoughts that I hope defend the role of learning analytics:

  1. It’s better than doing nothing… which is what most organisations are doing.
  2. Think about what level of evidence will be good enough for you. As the Kirkpatricks have pointed out, a chain of evidence that would not stand up to scientific rigour may be enough to convict someone in a court of law. If a level of evidence is good enough for a court then it’s probably good enough to convince your board to invest in L&D.
  3. Good enough develops into great. Marketing departments are the masters of this – 20 years ago they had very little data, but they started analysing what they had and now have robust methods that can empirically prove a campaign’s worth and target individual consumers.

Click here to download the full version of this insight and remember to check back next week when we officially launch our series on measuring learning impact at work.

Want to know more? Read LEO Learning’s free ebook, ‘The growing appetite for measuring the impact of learning at work’.

