We recently held a Discussion Dinner as part of the consultation process around our white paper, The New Learning Organisation. In the second of a three-part report from the event by John Helmer, we hear about the challenges of sharing and scaling good practice in elearning innovation.
A hand-picked group of learning innovators from world-leading organisations debated issues raised by this white paper under the Chatham House Rule. Organisations represented included BP, Credit Suisse, Defence Academy, Defence College of Communication and Information Systems, Home Office, HSBC, Jaguar Land Rover, KPMG, PwC, University of Birmingham, Volkswagen and Xerox.
Part Two: Where can we see best practice and how can we make it scale?
Following on from Part One of our debate, where we discussed why it is so difficult to create appropriate conditions for the type of learning that goes beyond ‘the course’ and addresses the full 70:20:10 vision, we asked our delegates to describe what they see happening on the ground in this process of transformation. Where can we see best practice?
This sparked an interesting discussion about:
• The challenges of spreading and sustaining successful innovations
• How transforming learning impacts L&D, in sometimes unsettling ways
• The changing dynamic of compliance learning in a regulated industry
Learnings lost and misapplied
One quick answer to the question of where we can see good practice – given by a delegate who spends a fair amount of time scouring learning blogs – was that there isn’t an awful lot around.
This could be partly the result of a reluctance to share among corporates, where learning innovation is part of a company’s competitive advantage – added to the fact that a certain amount of learning innovation happens outside the L&D remit, and so does not necessarily get shared with the community. But in the course of our discussion, a number of other factors emerged that pose challenges for successful learning innovation being leveraged and scaled.
One head of L&D for a government department described the fate of a successful programme with some unique features that was nevertheless not followed up on because of a structural reorganisation. The organisation had been launching a new service and needed an induction programme – something of a nightmare, since this involved not only 1,000 new starters unfamiliar with the organisation, but also new offices, new systems and new processes. Numerous stakeholders were involved, ‘all wanting a piece of the action’, but the content needed to be meaningful for the users. Adding to these difficulties, no clear outcomes for the training had been set, and these needed to be defined as part of the project, against a tight deadline.
Clear measures were put in place for what trainees were expected to be able to do as a result of the training, a blended solution was defined and implemented, and managers were given training too, so that they would know what their staff were going to get and could help to manage it. The programme was tracked through observation for the first three months – and also in terms of cost and perceived value added to performance in the organisation.
The programme proved very successful and won an award; but unfortunately that part of the business has since been contracted out, and the learnings from this innovative programme were, to a degree, lost.
Another example given showed that even where the learnings from innovation are not lost, they can be so specific to a particular organisation or situation that a simple copy-and-paste approach to emulating them will probably not work. This programme was for induction into a corporate sector organisation, and replaced eight hours’ worth of fairly ‘gruelling’ elearning modules – which did not provide a very good experience for new starters – with an innovative learning architecture.
The design process involved fairly radically redefining what the organisation was all about for new joiners. The team asked staff, with the benefit of hindsight, what they thought could have been done better at induction to help them: what, for instance, had they worried about when they joined, and what had they struggled with most? The design was built backwards from there, focusing on utility and providing useful infographics, checklists and so forth.
Without that initial period of enquiry – and a stealth approach to getting it through – the programme would not have assumed that particular final form, our delegate argued; which makes it less useful, surely, simply to copy and paste the unique architecture that resulted in this instance into a different organisation.
You can lead a horse to water …
Sometimes programmes are successful, in that they solve a particular business problem neatly and effectively, and yet for cultural and other reasons struggle to get user adoption.
Such was the case with an app store launched by a manufacturing company on behalf of its sales dealerships. Part of the problem had to do with the fact that the dealerships were franchises, and therefore the staff were not under the direct control of the manufacturer, but cultural factors no doubt played a part.
The app store contained thousands of short videos designed to head off end-user problems with operation of the extremely complex products this manufacturer shipped. These user problems, though they might not actually necessitate a repair, often came back to the dealerships as repairs – incurring unnecessary costs – if the sales staff on the spot were unable to set the customer right. The apps in the store gave them the knowledge to do this within a couple of finger-presses on a mobile device.
Nevertheless, despite the fact that this performance support resource was ‘completely open, completely free and completely available’, out of 76,000 registered users, less than 4% actually used it.
The culture that people work within, another delegate suggested, could be behind this lack of uptake. A member of staff whose job does not involve them sitting at a computer all day might not be so receptive to having apps and gadgets pushed at them as other, more deskbound staff. This could be true of the sales people in this example.
The same delegate shared a contrasting experience with a blended learning programme aimed at staff from different disciplines within his company. In that case, it was actually the technicians, who didn’t have access to computers on an hour-by-hour basis, who were most keen to do the blended learning pre-work and post-work. Here, however, there was something about those individuals that made them want to learn. They had to learn, in fact, in order to be able to do their day-to-day work in repairing machinery.
Motivation, he concluded, had to be looked at job by job, culture by culture. The trick is, in designing such a programme, to find a way of motivating the people at the other end of the computer (or device).
Theory into practice
Sometimes, again, potential programmes are just too leading-edge to get sponsorship from organisations.
One delegate, an organisational development specialist with an academic institution, outlined an idea they had been working on using social network theory. The basis of this is that staff know who the good managers are (these judgments tending almost always, interestingly, to coincide with those of top management) and can find the people they need to help them with what they have to learn to progress in their careers or solve a problem.
Staff would each be asked to identify five people they know who they respect at work, who have curiosity and want to develop things – and those five people would become a kind of hub; each of those five being asked to choose a further five, and so on.
The idea leverages friendship networks that exist at work, and is a way of harnessing the ‘informal’ system and paying some attention to the expertise that lurks within the middle of organisations, a tier which is too often overlooked.
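Purely as an illustration – not something described at the dinner – here is a minimal Python sketch of the nomination mechanism outlined above, under the assumption that each person names five colleagues, that the most-nominated people surface as informal ‘hubs’, and that the next ‘wave’ of the snowball is simply those nominees who have not yet been asked for their own five names. All names and data are made up.

    from collections import Counter

    # Each person nominates five colleagues they respect and would turn to
    # in order to learn or to solve a problem (illustrative data only).
    nominations = {
        "asha":  ["ben", "carla", "dev", "ed", "fay"],
        "ben":   ["carla", "dev", "gita", "hari", "ines"],
        "carla": ["dev", "ed", "gita", "jo", "kim"],
        "dev":   ["carla", "ed", "fay", "jo", "lee"],
        "ed":    ["carla", "dev", "gita", "kim", "lee"],
    }

    # Count how often each person is nominated; the most-nominated individuals
    # are the informal 'hubs' of the friendship network.
    in_degree = Counter(name for picks in nominations.values() for name in picks)

    # The next 'wave' of the snowball: nominees who have not yet been asked
    # for their own five names.
    next_wave = [name for name, _ in in_degree.most_common() if name not in nominations]

    print("Most-nominated colleagues:", in_degree.most_common(3))
    print("Next people to ask for nominations:", next_wave[:5])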
For the moment this particular project exists only in the realm of theory. However, another delegate, from a professional services firm, had a working example from his own organisation to talk about: in a small-scale graduate programme, something very similar is actually being done, and is showing successes.
Interestingly, the social element here has happened organically, rather than through deliberate design. The graduates have come in with a completely agnostic attitude towards the way they should learn and are introduced to a social network learning system that has a competitive element. In the course of carrying out their assignments they have naturally gravitated towards those individuals they see as good and from whom they can learn. These knowledgeable individuals have no official connection with the graduate scheme, it was noted, having been brought into the ‘learning sphere’ solely by the learners.
Now obviously, there is a little danger in this. The trainees could find the wrong people. However, in practice it has been found that the graduate trainees quickly learn that if a wrong answer from one of their mentors leads to a reprimand, or some other repercussion, they might have chosen wrongly: ‘if you put your hand in the fire and it gets burned, you’re not going to do that again’.
Where this delegate sees a lot of other organisations fall down is in trying to interfere too much with this type of organic process. And here we come to a point that was raised several times in this discussion by different delegates – a central paradox that seems to operate in our attempts to move towards the new learning organisation: the natural tendency, when success occurs, to want to formalise it and then try to expand it across the business, when actually it might work better if left alone.
Knowing how and when to be hands-off; letting things happen bottom-up; recognising and using the power of informal learning without attempting to formalise it and stifle it – this is perhaps the greatest challenge L&D faces in training transformation.
It goes against the way L&D is used to operating, and indeed against the way it is expected to operate by other parts of the business. It changes the conversation. But even if L&D is successful in changing the conversation (and several delegates gave testimony to the difficulty of doing that) there are dangers for L&D in doing so – perhaps even something of an identity crisis.
Changing the conversation
The change at issue was characterised by one delegate as moving from having ‘the efficiency conversation’ with stakeholders to having ‘the effectiveness conversation’.
The first of these, the efficiency conversation, is one that the people around the table have been having for a number of years, and has, in a sense, driven the adoption of learning technology within organisations: can you reduce the cost of this programme by making it blended? There is life in that conversation: ‘we can probably continue to live out our careers having that conversation about how I can save you some money on training’. However, it creates an anxiety for L&D that it is not really delivering change.
Going into the effectiveness conversation, however, the one we want to have – which is all about outcomes for the business – can get to be quite troubling. What is most effective might not be the stuff that L&D does, and might not in fact be learning at all.
‘You can talk about healthy eating and diet till you’re blue in the face, and put people on courses, and it has no effect. But if you give them one of these devices that tracks the number of steps they take every day you see significant changes in their behaviour.’ The pedometer, a type of performance support device, seems to have a greater effect on behaviour than a change in knowledge and understanding. There is no learning component here: it is not even just-in-time learning. So where does that leave L&D?
In the age of GPS, Jawbone and Google Glass there is a burgeoning range of technologies that can give real-time feedback on performance and which promise a similarly instantaneous effect on behaviour. ‘Personally, with every conversation I find myself thinking, am I in a conversation where I’m just going to push some learning objectives more cheaply using technology or is this really a conversation around effectiveness – and then it might not be learning at all’.
This prompted, from another delegate, an interesting (if contentious) answer to the initial question of where we can see best practice. Very often outside the influence of the L&D function, was his reply. The fastest route to ‘go the way of elearning’, he recommended in one company he worked with, was simply to collect everything that had been done on elearning outside the L&D function and make it publicly available.
The L&D function in some organisations is ‘the burier rather than the supporter of this kind of modernisation’. And perhaps with understandable reason: ‘the more you move into the 70 and 20, the less control and influence the L&D function actually has’.
Regulator interested in outcomes shock
Being the bulwark against change, however, is not a sustainable position when the rest of the world is moving on. One example of this was given by a delegate who works in the heavily compliance-driven financial sector.
Regulators are beginning to recognise the futility of tick-box compliance training that adds little value from a business perspective in demonstrating individuals’ knowledge, or their retention of it. Their stance on the organisations they regulate is changing: does an individual understand the context behind the information they are mandated to ingest and regurgitate?
Our delegate described how he was caught out by this shift. While the message he was getting from his organisation was to count ‘bums on seats’, and tick the metaphorical boxes, because that was what the regulator was felt to care about, when he actually spoke to the regulator, they said, ‘we’re really interested in how you add value to individuals’.
‘And I reacted like … sorry, say that again! He was interested in classroom induction – as well as elearning – and he said to me, what happens in terms of learning after the classroom induction? This completely caught me out because we hadn’t got that far yet.’
In starting to look at outcomes – in shifting, along with the more progressive parts of L&D represented by our delegates, from the efficiency conversation to the effectiveness conversation – regulators too will have to grapple with some knotty problems, such as how you measure effectiveness, and how you assess both learning outcomes and business outcomes.
The point is that learning transformation is not something to be either driven or ignored by L&D; it is the way the world is moving.
Obviously, this has a big effect on the L&D function, which, in the words of another of our delegates: ‘has to be redefined in terms of the purpose it has within the company: it is not enough just to modernise what we’ve been doing all the time’.
This thought led on naturally enough to the next part of the discussion, about governance – and whether good learning governance is an achievable or even desirable thing (given the evident popularity of stealth tactics among our delegates) – or as yet merely an aspiration.