What’s the Use of Evidence-based Instructional Methods?
by Ruth Clark
I was in a meeting recently in which someone suggested that there was little benefit from, or interest in, evidence-based instructional methods. Maybe she was right. Her comment made me think about the gaps between research evidence on what works and practitioners' design decisions.
First, we have the gap between the research and practitioner communities. Evidence producers work primarily in universities, publish in academic journals like the Review of Educational Research, and attend academic research conferences like the annual American Educational Research Association meeting. Practitioners, in contrast, work in organizations, engage in online communities of other practitioners, and attend conferences such as ATD or ISPI. What's missing here? A bridge between these two worlds. One reason I want to invest time with organizations like LDA is their commitment to, and success to date in, translating evidence into practitioner guidelines.
But there are other gaps. As a former science major, I focus mostly on experimental evidence: the kind that randomly assigns participants to two or more lesson versions and compares the learning outcomes with a valid and reliable test. While experimental results offer the most valid scientific evidence, they have some significant limits. For one, most experiments use lessons that last only a few minutes. In addition, the topics of these lessons are generally science and math. Science and math skills lend themselves to right-or-wrong answers on a follow-up assessment and also get priority funding as part of the government focus on STEM learning. What else? Learning is typically measured right after the lesson is completed, so we often don't know much about long-term effects. Finally, most research participants are college students, because they are readily available to academics. As a practitioner, you are likely responsible for lengthy courses that focus on soft skills, involve adult learners, and hopefully transfer over time to the workplace. In short, many research conclusions may have limited applicability to your context.
In spite of these limits, research findings have gradually become more robust. One reason is the large number of meta-analyses published over the past 10 years. A meta-analysis synthesizes the results of dozens of research studies on a specific topic and reports the outcome as an effect size. A method with an effect size below 0.4 is probably not worth implementing in practice, while one with an effect size of 0.5 or higher deserves our attention. A meta-analysis also lets us identify boundary conditions: the conditions under which a given instructional method works. Boundary conditions are identified by categorizing research studies into subgroups, such as the learner's prior knowledge (high versus low) or the content type (factual, conceptual, or procedural), and then comparing the effect sizes for each subgroup.
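To make the effect size metric concrete, here is a simplified illustration with hypothetical numbers (they are not drawn from any actual study). For a single experiment, a common effect size measure, Cohen's d, is the difference between the two group means divided by the pooled standard deviation:

d = (mean of version A - mean of version B) / pooled standard deviation

If learners who received one lesson version average 80 on the post-test, learners who received the comparison version average 75, and the pooled standard deviation is 10, then d = (80 - 75) / 10 = 0.5, which is large enough to merit attention by the rule of thumb above. A meta-analysis combines effect sizes like this one across many studies.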
Take animations, for example. Under what conditions would an animation be more effective for learning than a series of still graphics? Suppose you want to teach how a toilet tank flushes. Would it be more effective to use a series of still graphics or an animation? A number of experiments actually showed better learning from the stills than from the animated versions. The still visuals allowed learners to progress at their own rate, to review the sequences, and to mentally work out how one stage progresses to the next. In contrast, the animations presented a lot of visual information in a short time and may have led to passive viewing.
However, the benefits of an animation may depend on the relationships being represented. In their meta-analysis, Ploetzner, Berney, and Betrancourt (2019) divided 194 experiments that compared animations to static graphics into those involving simple versus complex movement. A complex movement involves two or more parts moving simultaneously (think multiple gears) or concepts such as velocity or acceleration that are difficult to portray with still graphics. They found that for content involving complex movement, animations were more effective than still visuals. Therefore, one boundary condition for animations is to use them when movement is complex and learning benefits from a kinematic (spatial) mental model. For simpler movements, such as how a toilet tank flushes, a series of stills would be as effective and perhaps more cost-effective to produce.
So what should you do when the evidence does not align with your instructional context? My suggestion: 1) be aware of research recommendations, including their boundary conditions, and 2) be aware of basic psychological principles of learning, such as cognitive load. Then, when faced with your own unique situations, use your best judgment to root decisions in the evidence we do have and in what we know about human learning.
WHAT DO YOU THINK? PLEASE ENTER YOUR COMMENTS SO WE CAN HAVE AN INTERACTIVE DISCUSSION
What benefits have you seen in evidence-based practice guidelines?
How could evidence be more impactful for practitioners?
Do clients care whether instruction is evidence based? What is your experience?
WANT TO READ MORE ABOUT EVIDENCE-BASED PRACTICE?
Mayer, R. E., & Fiorella, L. (Eds.). (2022). The Cambridge Handbook of Multimedia Learning (3rd ed.). Cambridge University Press.
This substantial book includes 46 chapters, most written by researchers. Each chapter addresses an important perspective on design elements in multimedia learning.
Ploetzner, R., Berney, S., & Betrancourt, M. (2019). A review of learning demands in instructional animations: The educational effectiveness of animations unfolds if the features of change need to be learned. Journal of Computer Assisted Learning, 36, 838-860.
This is the meta-analysis I mentioned regarding when animations are more effective.