Reflecting on Research and Practice

By Clark Quinn

Two separate, but recent, occurrences have prompted me to reflect on putting research into practice. If the Learning Development Accelerator (LDA) is to be, as we intend, evidence-based, we have to have a reliable way to do that. Yet it’s not just about us; what we need and want to develop is a culture of being evidence-based. Further, our intentions reach beyond the LDA: this is an exercise that should affect our entire industry. That means we need to assist people in grounding their practice in what’s known, without expecting them to have to read research in the original academese!

First, we have to articulate a process for ourselves before we can articulate it for others. While we (Matt, I, and at least most of our board of advisers) have been trained in the scientific method for learning science, we can’t expect all the society members to wade through the long history of the scientific literature, let alone be active researchers. We need to find a better approach.

It turns out that this isn’t a trivial exercise! Of course, research has shown that as expertise develops, we lose access to the underlying background information. Also, it can get complex: if you do venture into the world of journal articles, how should you evaluate what you read? I have not only been trained during my Ph.D. program, but have served as an academic, created and reviewed program and journal submissions, and internalized a lot of information. How do I resurrect all that, how much do I present, and how do I make it comprehensible?

There’s a lot to unpack. We’ve tried to address this in several ways, and here are some thoughts about where we’re at. Recognize that this is an ongoing process, so there’s always more.

The Status of Science

As preface, it’s worth looking at the status of science. Too often, people either say science is right, or that it must be wrong because what it says has changed. Neither is accurate. Science is all about the best explanation that we currently have. We switched from a Newtonian to an Einsteinian view of physics, for example. Similarly, we’ve gone from a formal logical to a contextually sensitive view of cognition (and learning).

This is not because the older models were wrong, but because the newer models better explain observed behaviors. Despite continuing advances, the robust empirical results from previous research must be accommodated. And, at times, the explanations and predictions from older models are useful enough. For instance, cognitive explanations are more useful for designing learning than neural ones, because they’re simpler.

Which isn’t to say that there aren’t flaws with the scientific process. For one, replication of results is important, so that what’s claimed is tested and shown to be valid. Yet there’s a replication crisis in the psychological sciences right now. This comes about because of pressure to publish, amongst other things, so folks have incentives not to adhere to the process.

Another problem is that, for practical yet problematic reasons, many research results can be characterized as WEIRD. Here we’re not talking about the derogatory political comment, but instead Western, Educated, Industrialized, Rich, and Democratic, which describes the typical subject pool of college students. Such results are easy to obtain, but for certain types of questions they won’t generalize to other populations!

Again, science isn’t wrong. As they say: it’s not perfect, but there’s no better approach. Instead, science is continually working towards better explanations, and better processes for accomplishing that as well. There are always new findings to be accommodated, which ultimately enrich our understanding and our practice. We just have to be mindful, and cautious about what we accept into our approach.

May Contain Lies

A recent book offers a useful approach to evaluating claims. Alex Edmans’ May Contain Lies describes a ‘ladder’ of checks on information. At each of four stages, he talks about how reaching one level doesn’t necessarily justify the next. For instance, just because there’s a statement doesn’t mean that the statement is factual. The ladder leads up from statement to proof.

To be clear, there really is no ‘proof’ in that sense. As stated above, we only ever achieve the best available explanation for the observations. Yet it’s important to understand that doing so in systematic and scrutable ways is what matters.

The book also talks about how people deliberately misconstrue the story to their advantage. Cherry-picking data, for instance, can be used to tell a story that’s not representative of the overall picture but makes the case for a particular approach. We see this when folks say their own studies demonstrate an outcome, yet don’t have data published in independent journals because it’s ‘proprietary’. This is misleading, because patents protect those rights if they’re real. Similarly, folks can cite a correlation and argue for causation, yet the two aren’t necessarily linked. For instance, a rooster reliably crowing before dawn doesn’t mean that a male chicken is the reason we have morning.

We held a Meet the Author with Alex (viewing available for members, including the free level), where he unpacked much of this thinking for us. The subtitle of his book, “How stories, statistics, and studies exploit our biases - and what we can do about it”, figured prominently in the discussion. For instance, he debunked some of the biggest business books for studying a select group of companies and for the factors they chose to highlight. One piece of evidence: those companies subsequently failed!

Checklist

We also developed a Research Checklist (again, free by registering for our LDA community). Matt Richter (co-director of the LDA with me) and I created it to evaluate claims. It emerged as a mashup of my approach to vetting research before going to the journal and Matt’s thoughts on analyzing a journal article. Fortunately, we improved each other’s contributions, and we believe (and hope) that the whole is greater than the sum of the parts.

The checklist has three sections. The first is a surface analysis, which doesn’t require going to the journal. If the claim passes that first test, you then may have to go to the original research. This involves, yes, reading the academese. Still, we’ve tried to provide some checks. Finally, if there’s a third party presenting the research, we ask a couple of questions to evaluate their motives and construal.

The goal is to give you a way to set aside research that doesn’t meet some sane requirements. We are erring on the side of caution, as there are a variety of ways to fault a claim. Yet this is grounded in best practice; conferences and journals, too, tout their rejection rates! We’re not looking to counter commerce, but we do want a solid evidence base before you invest your resources!

Translators

Finally, there are people who have developed a reputation for translating research into practice, and doing so across topics and issues. These translators have demonstrated the grounding needed to accurately assess the quality of claims, as well as the ability to present research in practical ways.

While some have expertise in specific areas, such as Christy Tucker with scenarios, others have demonstrated an ability to cover a broad swath of topics across time. They have earned the respect of their peers as a consequence, and they regularly write and speak to carry valuable messages. These include folks like Connie Malamed and Jane Bozarth.

We’re honored to have several of them on our advisory board. Dr. Will Thalheimer co-founded the LDA with Matt, and continues to serve as Chair. Dr. Ruth Clark, author of the classic eLearning & the Science of Instruction, also serves. So, too, does Dr. Patti Shank, whose series of books on learning practices such as multiple-choice questions is valuable and recommended. Julie Dirksen, author of Design for How People Learn, is another.

Being evidence-based is not easy, but it is a professional obligation. We’re not claiming these are the only resources for improving; I’m just documenting some. Still, it is doable, and we want everyone to get better at it. Our industry is getting better, we believe, but there’s still room to improve. We hope you’ll join the effort, regardless of how you do so.