Measuring Informal and Social Learning

By Clark Quinn

Many folks argue that you can’t measure informal and social learning. It would seem hard to put any particular metric on it, let alone find a way to collect the data. There are some simplistic measures, but I want to suggest that we may be able to do better at measuring informal and social learning.

So, the usual metric for social learning is ‘activity’. In fact, that’s how social media companies at least used to charge: by the amount of activity. In this case, if someone logged on or accessed the platform at any time during a month, that counted as activity. Not really a useful measure, however. Certainly not at that granularity!
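To make concrete just how coarse that is, here’s a minimal sketch of the metric: anyone who touches the platform at any point in a month counts as ‘active’. The event log format and names are invented for illustration.

```python
# A minimal sketch of the "activity" metric: anyone with any event in a
# month counts as "active". The (user, timestamp) log format is hypothetical.
from datetime import datetime

events = [
    ("ana", datetime(2024, 3, 2, 9, 15)),    # logged in once all month
    ("ben", datetime(2024, 3, 4, 14, 0)),
    ("ben", datetime(2024, 3, 18, 10, 30)),
]

def monthly_active_users(events, year, month):
    """Count distinct users with *any* event in the given month."""
    return len({user for user, ts in events
                if ts.year == year and ts.month == month})

print(monthly_active_users(events, 2024, 3))  # 2 -- and says nothing about learning
```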

So, we could expect that, if there were sufficient activity, there would likely be social learning going on. I actually support this, with some caveats. If people are posting, and others responding, that to me is liable to be social learning. So, it’s about the activity plus the triggered activity. The frequency matters, too. There should be engagement every day, or at least several times a week, by every person, to expect that there’s much going on. Too many days without activity, and we would expect that any learning would be extinguished.
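If we have access to the platform’s data, that richer signal might look something like the sketch below: posts that actually draw responses, plus how many distinct days each person engages. The post records here are made up; a real platform would have its own export or API.

```python
# A rough sketch of "activity plus triggered activity": posts that draw
# responses, and distinct active days per person. The post structure
# (author, day, replies) is invented for illustration.
from collections import defaultdict

posts = [
    {"author": "ana", "day": "2024-03-04", "replies": ["ben", "cruz"]},
    {"author": "ben", "day": "2024-03-05", "replies": []},
    {"author": "ana", "day": "2024-03-06", "replies": ["ben"]},
]

# Posts that triggered at least one response -- a better proxy than raw logins.
conversations = [p for p in posts if p["replies"]]
print(f"{len(conversations)} of {len(posts)} posts drew a response")

# Distinct active days per person (posting or replying) over the period.
active_days = defaultdict(set)
for p in posts:
    active_days[p["author"]].add(p["day"])
    for replier in p["replies"]:
        active_days[replier].add(p["day"])

for person, days in active_days.items():
    print(person, len(days), "active days this week")
```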

We can, of course, ask people. Surveys would potentially reveal at least whether people thought they were learning. We do know that people’s assessments of their own impact versus the actual impact are flawed, but it would be a better indicator than activity alone. If we pay attention to how we word the questions, à la Will Thalheimer’s Performance-Focused Smile Sheets, for instance, we could have some faith in the responses.

I’d argue we can do still better. Why would we enable social learning, and what outputs might indicate that it’s happening? We might expect that the time to accomplish something would go down, or likewise the time to find an answer. We might not actually know this, however, unless folks tell us. If it’s a particular group, say we enabled the sales team, we might expect some related metrics to improve: more proposals, shorter time to close, higher close rate, etc.
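As a rough illustration, a before-and-after comparison on the sales example might look like this sketch. The deal records and field names are hypothetical; the point is simply comparing the same metrics across the two periods.

```python
# A hedged sketch of comparing team metrics before and after enabling a
# social platform for, say, a sales team. The deal records are invented.
from statistics import mean

deals_before = [
    {"days_to_close": 42, "won": True},
    {"days_to_close": 55, "won": False},
    {"days_to_close": 38, "won": True},
]
deals_after = [
    {"days_to_close": 31, "won": True},
    {"days_to_close": 35, "won": True},
    {"days_to_close": 44, "won": False},
]

def summarize(deals):
    return {
        "avg_days_to_close": mean(d["days_to_close"] for d in deals),
        "close_rate": sum(d["won"] for d in deals) / len(deals),
    }

print("before:", summarize(deals_before))
print("after: ", summarize(deals_after))
```

Of course, other factors move these numbers too, so I’d treat any improvement as corroborating evidence rather than proof.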

This is all true for social learning, but what about informal learning? There’s individual informal learning, where someone’s learning on their own. We might see that people are hitting a portal at a sufficient rate, e.g. a Learning eXperience Platform (LXP, misnamed). We could see that after installing it, things move faster: less time for things to happen (getting written, solved, etc.), or more things happening per unit of time, e.g. more outputs. That is, general metrics.
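The portal-access signal could start as simply as this sketch: hits per person per week from a (hypothetical) LXP access log.

```python
# A small sketch of the portal-access signal: hits per person per week,
# from a hypothetical access log of (user, ISO week) pairs.
from collections import Counter

access_log = [
    ("ana", "2024-W10"), ("ana", "2024-W10"), ("ana", "2024-W11"),
    ("ben", "2024-W10"),
]

hits = Counter(access_log)  # (user, week) -> number of hits
for (user, week), count in sorted(hits.items()):
    print(user, week, count, "portal hits")
```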

Again, we can also ask people. With the same caveats as above, we could get some indication of people’s attitudes about the learning. Can we do better? As above, what improvements would we expect? If folks post about their learning, we could manually tag it, possibly even have some AI evaluate the posts. We would expect the individual to improve in areas of interest. However, it could be hard to track if people don’t register their learning themselves. Though, if they share, we could track it. Similarly, there are tools that trawl people’s outputs to estimate what they know; we could look for improvements there, if we feel comfortable using such tools and are mindful of privacy and respect.
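To illustrate the tagging idea, here’s a sketch in which a simple keyword map stands in for manual tagging; an AI classifier could slot into the same place. The topics, keywords, and post records are all invented.

```python
# A sketch of tagging shared-learning posts by topic, so growth in areas
# of interest can be tracked over time. Keyword matching stands in for
# manual tagging or an AI classifier; all names here are hypothetical.
from collections import defaultdict

topic_keywords = {
    "negotiation": ["discount", "objection", "pricing"],
    "product":     ["feature", "release", "integration"],
}

posts = [
    {"author": "ana", "month": "2024-03", "text": "Handled a pricing objection by..."},
    {"author": "ana", "month": "2024-04", "text": "Summarised the new integration release..."},
]

def tag(text):
    text = text.lower()
    return [topic for topic, words in topic_keywords.items()
            if any(w in text for w in words)]

# Which topics each person has posted about, and in which months.
topics_by_person = defaultdict(lambda: defaultdict(set))
for p in posts:
    for topic in tag(p["text"]):
        topics_by_person[p["author"]][topic].add(p["month"])

for person, topics in topics_by_person.items():
    for topic, months in topics.items():
        print(person, topic, sorted(months))
```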

Overall, we want the organization to learn. Thus, it’d be nice if we could measure the learnings. Actually, if someone in the organization (*cough* L&D *cough*) is doing what’s suggested in Garvin, Edmondson, & Gino’s Harvard Business Review article Is Yours a Learning Organization?, we might look to see if learnings are being captured and shared. We could quantify that, and look for improvement as we try different approaches and tools.
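Quantifying that could begin with something as basic as this sketch: counting learnings captured per quarter, and how many were shared beyond the originating team, then watching the trend as we change approaches. The records are invented for illustration.

```python
# A minimal sketch of quantifying captured-and-shared learnings per quarter,
# so different approaches or tools can be compared over time. The records
# and the "shared_beyond_team" field are hypothetical.
from collections import Counter

learnings = [
    {"quarter": "2024-Q1", "shared_beyond_team": False},
    {"quarter": "2024-Q1", "shared_beyond_team": True},
    {"quarter": "2024-Q2", "shared_beyond_team": True},
    {"quarter": "2024-Q2", "shared_beyond_team": True},
]

captured = Counter(l["quarter"] for l in learnings)
shared = Counter(l["quarter"] for l in learnings if l["shared_beyond_team"])

for quarter in sorted(captured):
    print(quarter, captured[quarter], "captured,", shared[quarter], "shared beyond the team")
```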

My point is that we shouldn’t abandon informal and social learning to chance. We should look for the signals we can track, such as access to tools and resources, posting and commenting, and see whether they give us some evidence. I think we can, and should, be measuring informal and social learning.