Experimenting with Experimentation

By Clark Quinn

I’ve previously talked about the value of showing your work, and of brainstorming. Another important set of skills centers on experimentation. There are specifics here that matter, and thus there’s a role for L&D. So, how should we be experimenting with experimentation?

Many years ago, Valerie Shute & Jeffrey Bonar, working at the Learning Research & Development Center at the University of Pittsburgh, proposed a unique approach. The core area of work was intelligent tutoring systems, which typically had three components: an expert model of the domain, a model of the student’s understanding, and a set of rules about how to use the delta between the expert approach and the student’s current level to intervene. Problems would be set for the learners, and their approach compared to the expert’s; that comparison would trigger rules to provide feedback if learners were on the wrong path, or to choose the next problem if they were successful.
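
As a rough illustration only, here’s a minimal sketch (in Python, with entirely hypothetical names and a toy arithmetic “domain”) of that classic tutoring loop: compare the learner’s answer to an expert model, and let the delta trigger either coaching or the next problem.

```python
# Purely illustrative: a toy version of the expert-model / student-delta loop described above.

def expert_solve(problem):
    """Stand-in expert model: the 'correct' answer for a simple addition problem."""
    a, b = problem
    return a + b

def tutor_step(problem, learner_answer):
    expert_answer = expert_solve(problem)
    if learner_answer != expert_answer:
        # The delta between learner and expert triggers a coaching rule.
        return f"Feedback: for {problem}, revisit how you combined the two values."
    return "Correct; choosing a harder problem next."

print(tutor_step((2, 3), 6))  # wrong path: feedback
print(tutor_step((2, 3), 5))  # success: move on
```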

Shute & Bonar took a different approach. Instead, they built expert models of how to experiment in a domain. They used three different domains – geometric optics, electrical circuits, and economics – and applied the same coaching about how to experiment intelligently in each. The expectation was that by doing smart experimentation, learners would discover the important relationships. Thus, the learners were watched and coached as they ran studies to explore how the respective worlds worked.

The rules they used are what’s interesting. Rules included things like varying only one variable at a time, and taking sample measurements at appropriate gaps across the whole range of a variable. You don’t want to sample with too few measurements and miss nuances, or focus on only a small range and miss part of the pattern. Done this way, you can explore systematically and map the outcomes, generating the necessary understanding.
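
To make those rules concrete, here’s a hedged sketch (Python, with a thin-lens relation from geometric optics standing in for the simulated world; the specific function and values are my own stand-ins, not Shute & Bonar’s): hold everything else fixed, and sample the one variable you’re varying at regular gaps across its full range.

```python
# Heuristics illustrated: vary ONE variable at a time, and sample it at evenly
# spaced points across its whole range, so neither density nor coverage is too small.

import numpy as np

def image_distance(object_distance, focal_length=10.0):
    """Thin-lens relation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# Hold the focal length fixed; sweep object distance across its full range at regular gaps.
object_distances = np.linspace(15.0, 60.0, num=10)
for d_o in object_distances:
    print(f"object distance {d_o:5.1f} -> image distance {image_distance(d_o):6.2f}")
```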

The point of the story is that such guidance is needed because we don’t naturally experiment smartly. Moreover, it’s not a skill specifically developed in school. Thus, assuming folks have good skills in testing hypotheses isn’t a good bet.

Yet, if you accept the dimensions of a learning organization as proposed by Garvin, Edmondson, & Gino*, you see that having concrete learning practices is an integral part. Given that experimentation shows up there, too, it becomes clear that effective experimentation is a critical component of learning.

Further, I argue that innovation is learning, and therefore, so too is experimentation. If innovation is a new idea that is better, the only way to determine whether it is better is to test it. That is, to experiment and find out if a new idea is also an improvement! And if you don’t want to delude yourself, your test should be effective.

Thus, the role for L&D is to make sure that the experimentation being practiced is effective. Similarly, I’ll suggest that L&D needs to be experimenting itself, as part of an overall strategy to ‘own’ innovation before taking it to the broader organization. Once L&D understands what makes experimentation succeed, those practices can be codified and developed throughout the rest of the organization. Quite simply, L&D needs to be experimenting with experimentation!


References

Garvin, D. A., Edmondson, A. C., & Gino, F. (2008). Is Yours a Learning Organization? Harvard Business Review, March 2008.

Shute, V. & Bonar, J. (1986). Intelligent tutoring systems for scientific inquiry skills. Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.


* There is, to date, no evidence that Gino tampered with the data behind this article.