Don’t Be Fooled by the 73%: Probability for L&D Realists
By Matt Richter
INTRODUCTION
I've been writing about statistical literacy for Learning & Development (L&D) lately. For example, if you roll out a leadership development course, and 60% of participants demonstrate measurable skill improvement afterward[1], what does that mean? Is it good? Compared to what? How likely is it that the improvement was due to the training and not just chance, timing, or organizational shifts?
Without basic statistical literacy—including an understanding of probability—it's easy to fall for seductive but meaningless metrics (e.g., "92% of participants enjoyed the session"). Worse, we risk making learning decisions based on anecdotes, outliers, or vendor spin rather than meaningful data.
In this article, I am interested in PROBABILITY. Specifically, how probability numbers get tossed at us as confident data we should embrace. For those of us working in L&D, probability isn't just for quants and analysts—it's central to what we do. We use probability to ask for resources, defend our existence, and make the case for using a particular tool. Colleagues and vendors do the same to us. When we assess whether a program "worked," we're often dealing with likelihoods, confidence intervals, and effect sizes[2]—even if those terms never make it into the slide deck or a report to the higher-ups. In other words, what is behind the forecast before an event, or the mean/median after an event, contains a lot of variables often ignored or unseen.
That's why we L&D pros must become more comfortable with statistical thinking. You don't need to run regressions. But you do need to ask: "What are the chances this outcome is repeatable? How likely is it that our intervention caused this effect?" Probability isn't about certainty—it's about degrees of belief tempered by evidence. It is one among many statistical tools used against us. Of course, with a bit of knowledge, we can use them for forces of good!
And if we're in the business of making people better at their jobs, we'd better be pretty good at judging what's probably working—and what probably isn't. Pun intended!
WHAT IS PROBABILITY?
It's the coin toss problems we used to get sitting in math class. What’s the probability that tails will come up during a coin toss? Probability is the language of uncertainty. It helps us express, with numbers, how likely something is to happen—or not. Whether you're flipping that coin, forecasting revenue, or deciding whether to carry an umbrella, probability is the framework behind the scenes, helping you estimate risk, reward, and everything in between.
It doesn't give you guarantees—it provides you with odds. It helps you hedge, plan, prepare, and sometimes, take calculated risks. In everyday life and business, probability is the difference between acting on gut instinct and acting with informed confidence.
HOW IS PROBABILITY CALCULATED?
Determining the probability of something happening isn’t always as clean as plugging numbers into a calculator. Sure, there are formulas—especially when you’re dealing with controlled environments like coin flips, dice, or card draws. In those cases, you’re often using classical probability, which is just:

P(event) = number of favorable outcomes / total number of possible outcomes
This is easy enough to apply when predicting the odds of drawing a red card from a standard deck. But real life—especially in business or learning environments—is messier.
That’s where empirical probability and subjective probability come into play. We can estimate the likelihood of an event based on historical data (e.g., “80% of learners complete this module within two days”) or use expert judgment when data are sparse.
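To make those two flavors concrete, here is a minimal Python sketch (using the red-card example and a hypothetical completion count) that treats classical probability as a ratio of outcomes and empirical probability as simple counting:

```python
from fractions import Fraction

# Classical probability: favorable outcomes / total possible outcomes.
# Drawing a red card from a standard 52-card deck (26 red cards).
p_red = Fraction(26, 52)
print(f"P(red card) = {p_red} = {float(p_red):.2f}")  # 1/2 = 0.50

# Empirical probability: just counting what actually happened.
# Hypothetical LMS history: 120 of 150 learners finished within two days.
completed, enrolled = 120, 150
print(f"P(completion within two days) = {completed / enrolled:.2f}")  # 0.80
```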
But here’s the kicker: context always matters. You can’t just drop a probability in without asking: What conditions were assumed? Are they still true? A probability based on past behavior assumes similar environments, incentives, and variables. Change those, and you’ve changed the odds.
Then, there are confounding factors—the hidden variables that can influence both cause and effect. You might see a presumed 70% success rate for a leadership program, but what if that’s really due to the quality of the manager, not the training?
That’s why even when you see a “clean” number, you need to ask:
How was it derived?
Was there a large enough sample?
Was the environment stable or shifting?
What else could explain the outcome?
Here's how statisticians and analysts actually determine probability in a credible way—not just pulled from thin air or a vendor slide. First, they start by getting laser-specific about what they’re calculating. You can’t just ask, “What’s the probability of success?” and expect a meaningful answer. Instead, they define the exact outcome: What’s the likelihood that 70% of learners will apply the new skill within 30 days? That clarity matters.
Vague inputs lead to vague outputs.
Then, they dig into historical data. If the event has occurred often enough, they can use empirical probability—literally just counting. If 120 out of 150 people completed the course in past cohorts, they can reasonably say there’s an 80% chance someone will complete it going forward. Simple but powerful—assuming the conditions haven’t changed.
But when data are limited—or when you’re predicting something new—they turn to statistical models. Logistic regression, Bayesian inference (more on Bayes later), and other tools help estimate probabilities based on patterns and prior assumptions. Bayesian methods are especially useful when you’re combining known base rates with fresh evidence and want to update beliefs over time.
Critically, they also account for confounding variables—those sneaky hidden factors that distort what you think you’re seeing. A 90% success rate might not necessarily mean the program is effective; it could simply indicate that only the most motivated individuals were invited. That’s why analysts use methods like randomization, regression analysis, and propensity matching to isolate the signal from the noise (Thank you, Nate Silver, and no… you don’t need to know what those things are at this time).
Finally, real professionals don’t present probability as certainty. They use confidence intervals or margins of error to show the range of likely outcomes. And they keep iterating. With every new data point, they refine their estimates.
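To illustrate that last point, here is a small sketch, assuming the hypothetical 120-of-150 completion figure from earlier and using the common normal-approximation method, that turns a single completion rate into a range of likely values rather than one tidy number:

```python
from math import sqrt

completed, n = 120, 150
p_hat = completed / n  # point estimate: 0.80

# 95% confidence interval via the normal approximation (a common rough method)
margin = 1.96 * sqrt(p_hat * (1 - p_hat) / n)
print(f"Completion rate: {p_hat:.2f}, "
      f"95% CI: {p_hat - margin:.2f} to {p_hat + margin:.2f}")  # roughly 0.74 to 0.86
```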
In short, good probability work isn’t about finding one magical number. It’s about defining the right question, accounting for the right context, and staying humble about what the data can—and can’t—tell you.
HOWEVER, THERE ARE ISSUES WITH PROBABILITY
The problem? Most people misunderstand it. Worse, they're often manipulated by how it's framed. Let's look at why that matters, especially if you're trying to make informed decisions or help others do the same.
If you have a 99% chance that no one will ever break into your house and harm you, you won't think twice about it. Right? 99% is pretty darn close to perfect.
But if I say there's a 1 in 100 chance you'll die a horrible, violent death from a home invasion, you're suddenly double-checking the locks. "1 in 100" sounds more ominous, especially when paired with "violent death!"
Here's the thing: the probabilities are the same. But how we frame the numbers can completely alter the way people perceive and react to risk.
Your goal—especially in business and specifically L&D—should be to mitigate those misunderstandings.
Numbers—particularly probabilities—can be slippery (see a very famous paper by the late Stephen Jay Gould listed in the REFERENCES).
Remember the 2016 U.S. presidential election? The day before the vote, Trump had a roughly 10% chance of winning. Many interpreted that as "Clinton's got this." However, 10% means that if we played out the election 10 different times, Trump wins in roughly one of them. For us, it happened to be that one. 🤷‍♂️ And elections are more complex than coin tosses. The coin toss mechanism is specific, clear, and mostly controls for any variables that might influence outcomes. An election forecast, by contrast, is built from simulations run over and over again, combined with polling results. The way those simulations are programmed and the way those polls are conducted (and then how both are combined into an algorithm) involve subjective choices that can undermine confidence in the forecast. That's why so many political forecasts exist and often differ from one another, while most coin toss predictions will be largely consistent.
It's like drawing one number out of ten and hoping for the four. You probably won't get it on any single draw—but it will come up about once in every ten, which means unlikely ≠ impossible.
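If you want to see that intuition play out, here is a tiny simulation sketch: an event with a 10% probability, run ten thousand times. The unlikely outcome still shows up about one run in ten:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

trials = 10_000
hits = sum(random.random() < 0.10 for _ in range(trials))
print(f"The 10% outcome still happened {hits} times in {trials} runs ({hits / trials:.1%}).")
```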
So again… why does understanding probability matter?
Because understanding likelihood allows you to prepare. It enables better risk management, more informed choices, and smarter tolerances for uncertainty. It also serves as a reminder that the unlikely can still happen.
What's the probability that participants will show up for my workshop?
What's the chance that bad weather will delay my flight?
Business relies on probability. Sales forecasts are based on probabilities. The same goes for demand curves. However, garbage in still results in garbage out. The better you understand what's in front of you, the better you'll interpret, communicate, and act on forecasts.
If you are like me, it is unlikely you are going to become an expert in probability. But you can become an expert in questioning how probabilities get presented to you. You can get better at challenging claims and thinking critically about what claimants say.
That means having:
1️⃣ A basic (and I do mean basic) understanding of how probability works
2️⃣ A sense of how data were sampled—quality, size, and bias risk
3️⃣ A general awareness of how the data were analyzed
All of this protects you from being manipulated by the appearance of numbers and helps you understand their meaning.
Unfortunately, in our business, most of the time, we don't have the tools, data, or capability to assess probability ACCURATELY. So, most of the time, it's self-defense. Many vendor claims are dressed up in the language of certainty: "This program increases retention by 73%" or "Participants are 4x more likely to apply skills on the job." Sounds impressive—until you ask, “Probability based on what?” Sample size? Random assignment? Compared to what baseline?[3]
Probability is notoriously difficult to isolate in L&D contexts. We're rarely dealing with controlled environments, clean variables, or repeatable trials. So when vendors toss out numbers, they often mask assumptions, cherry-picked case studies, or shaky causal links.
That's why L&D pros need a working grasp of probability—not to run the stats ourselves but to challenge the illusion of precision. If a vendor says there's a 70% probability of success, ask: How was that calculated? What's the confidence interval? What are the confounding factors?
Without those questions, we risk mistaking probability for promise—and that's where bad purchasing decisions are made. Probability, when misunderstood, doesn't inform better choices. It enables better manipulation. Let's not fall for it.
Oh—and I always lock the door. 😳
PROBABILITY FITS INTO A BIGGER PICTURE
I mentioned forecasting. Forecasting, prediction, and probability are related—but not interchangeable. Probability quantifies uncertainty: the likelihood of an event occurring. Prediction is a specific claim about an outcome (e.g., “This learner will succeed”), often without expressing confidence. Forecasting combines the two—it’s a prediction with probability built in (e.g., “There’s a 70% chance this program will increase engagement”). Think of forecasting as structured prediction, grounded in data and probability. In L&D, we often hear predictions—but without probability, it’s guesswork. Without forecasting discipline, it’s just optimism wrapped in numbers.
In business and L&D, these distinctions matter because they affect how decisions are made, how risks are assessed, and how accountability is managed. A prediction without probability is just a confident guess—often dressed up in PowerPoint. When someone says, “This training will boost retention,” you have to ask: With what likelihood? Under what conditions?
Forecasting brings discipline. It’s not just saying what might happen—it’s saying how likely it is based on data, trends, and past performance. This enables scenario planning, resource alignment, and contingency planning. It forces us to consider base rates and confounding factors.
Probability is the backbone. It helps us understand uncertainty, not eliminate it. Without it, we’re flying blind. With it, we’re managing risk.
For L&D pros especially, knowing the difference means we stop reacting to bold claims—and start asking better questions about what’s really likely to happen and why.
This is where Bayes’ Theorem earns its keep. Bayes sits right at the intersection of prediction, probability, and forecasting—because it gives us a disciplined way to update what we believe as new information rolls in. It’s not just about assigning a number to a guess; it’s about refining our probability estimates based on prior knowledge and fresh evidence. In other words, Bayes helps us forecast more intelligently and predict with greater humility. Rather than treating every data point as definitive, it reminds us that context and history matter and that every forecast is a moving target, not a fixed claim.
WHAT’S BAYES’ THEOREM?
Bayes' Theorem is a statistical formula that helps you update the probability of something being true based on new evidence. It's not magic—it's just a smart way of saying: given what I already know, how does this new information change the odds?
Think of it like this: You know that only 22% of people in a company are likely to be actively disengaged. But after a pulse survey, someone scores super low on every engagement metric. Bayes helps you ask: Given this data, what is the updated probability that they are disengaged, not just having a bad week?
In L&D, Bayes shows up when you're interpreting feedback, diagnostics, or assessment scores. It pushes you to consider base rates (how common is the issue in general?) before reacting to one data point. Without that, you risk overreacting to outliers or misclassifying people based on noise.
Mathematically, Bayes works like this…

P(A|B) = [P(B|A) * P(A)] / P(B)
Where:
P(A|B) = the posterior probability (the probability of A given B is true)
P(B|A) = the likelihood (the probability of seeing B if A is true)
P(A) = the prior probability (how likely A was before seeing B)
P(B) = the evidence (overall probability of B occurring)
However, Bayes is essentially a logical tool disguised as a mathematical formula. It helps you update your beliefs about a situation when new information becomes available. You start with a prior—your best guess based on what you already know. Then, when something happens (a test result, survey response, learner behavior), Bayes helps you ask: Given this new evidence, what are the updated odds? It flips the question from "What are the chances this happens if X is true?" to "Given that this happened, how likely is it that X is true?" It's how you move from instinct to evidence-informed judgment—especially when the data aren't perfect, but the decision still matters.
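If it helps to see that logic as a recipe, here is a minimal sketch of the update step (the function and parameter names are mine, purely illustrative): a prior and a piece of evidence go in, an updated probability comes out. The worked examples that follow boil down to exactly this arithmetic.

```python
def bayes_posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Update a prior belief after seeing one piece of evidence."""
    true_path = prior * p_evidence_if_true           # evidence appears AND claim is true
    false_path = (1 - prior) * p_evidence_if_false   # evidence appears AND claim is false
    return true_path / (true_path + false_path)      # P(claim | evidence)
```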
Bayesian thinking isn't common in L&D—but it should be. It teaches us to update carefully, question snap interpretations, and never take a single data point at face value.
Here's a quick example using the formula to work through a question.
Let's say you implement a post-training assessment to identify which employees have actually mastered a new compliance process. The vendor tells you that employees who pass the test are "very likely" to be competent.
Using Bayes.
Your baseline data:
You estimate that about 40% of learners actually master the material after training. That’s your prior: P(Mastery) = 0.40
The test has been validated to correctly identify mastery 85% of the time (true positive rate): P(Pass|Mastery) = 0.85
It also has a false positive rate of 25% (i.e., 25% of those who haven't mastered still pass): P(Pass|No Mastery) = 0.25
Now, someone passes the test. What's the probability they've actually mastered the material?
We apply Bayes' Theorem:

P(Mastery|Pass) = [P(Pass|Mastery) * P(Mastery)] / P(Pass)
But first, calculate P(Pass) (the total probability of passing across both groups):[i]
P(Pass) = (0.85 * 0.40) + (0.25 * 0.60) = 0.34 + 0.15 = 0.49
Now plug it into the formula:

P(Mastery|Pass) = (0.85 * 0.40) / 0.49 = 0.34 / 0.49 ≈ 0.694
So, even though the learner passed the test, there's only about a 69.4% chance they've actually mastered the content.
Without Bayes, you'd probably assume that a passing score means success. But Bayes says: Not so fast. The test's false positive rate and the prior base rate of mastery matter—a lot. So you might decide that a single test isn't enough and build in a skills demonstration.
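Here is the same mastery calculation as a few lines of Python, in case you want to experiment with different priors or error rates (the numbers are the hypothetical ones from the example above):

```python
prior_mastery = 0.40   # P(Mastery): 40% of learners actually master the material
true_positive = 0.85   # P(Pass | Mastery)
false_positive = 0.25  # P(Pass | No Mastery)

p_pass = true_positive * prior_mastery + false_positive * (1 - prior_mastery)  # 0.49
p_mastery_given_pass = (true_positive * prior_mastery) / p_pass                # ~0.694
print(f"P(Mastery | Pass) ≈ {p_mastery_given_pass:.1%}")
```

Drop the prior to 0.20, for instance, and the same passing score implies mastery less than half the time (about 46%).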
Let's take another quick example that doesn’t leverage the formula—one that just relies on thinking as a Bayesian…
Let's say your CHRO has asked you to pilot a new leadership potential assessment.
You are aware from historical data provided by Talent Management that only 10% of employees at your company become strong leaders (your base rate). Note, the reasons for that are not necessarily known. One team member scores high on the new assessment, which, according to the vendor, is 80% accurate, meaning it theoretically correctly identifies high-potential leaders 80% of the time. Therefore, it also gives false positives 20% of the time (flagging someone as high potential when they're not).
So… What's the actual probability that this person is a true high-potential leader?
Here's what Bayes tells us:
There's a 10% chance anyone is a true high-potential leader.
Of that 10%, 80% will score high, meaning 8 out of 100 people are true positives, according to the vendor's claims.
Of the other 90%, 20% will also score high, resulting in 18 false positives out of 100 people.
That means:
Of the 26 people who score high (8 true + 18 false), only eight are high-potential. So, 8 divided by 26 gives us:

8 / 26 ≈ 0.308
That's just 30.8%.
Given a high score, there’s only a 30.8% chance the person is truly high-potential, even though the test is 80% accurate.
In other words, even with a claimed strong test, if the thing you're measuring is rare, most positive results will be wrong—unless you consider the base rate. That's Bayes. And that's why you don't build your bench based on a single data point. Bayesian reasoning is built around the idea that context always matters—formally known as the prior. You start with what you already know (prior probability) and then update that belief when new evidence appears (using the likelihood of the evidence given the earlier information).
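The same counting logic translates directly into a few lines, again using the example's assumed numbers (a 10% base rate and the vendor's claimed 80% accuracy):

```python
people = 100
true_leaders = 0.10 * people                      # 10 genuinely high-potential people
true_positives = 0.80 * true_leaders              # 8 of them score high
false_positives = 0.20 * (people - true_leaders)  # 18 of the other 90 also score high

posterior = true_positives / (true_positives + false_positives)
print(f"P(high potential | high score) ≈ {posterior:.1%}")  # about 30.8%
```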
So yes, Bayes requires you to:
Include prior knowledge (e.g., how common is this outcome in general?)
Weigh new evidence against that prior (how strong is this new signal?)
Update your belief accordingly (posterior probability)
In L&D, that means never interpreting data—survey results, assessments, drop-off rates—in a vacuum. A 90% engagement score doesn't mean much if your base rate is always 90%. A red flag in an assessment is less meaningful if the behavior is actually common in your culture.
THE FAMOUS BAYESIAN EXAMPLE
Let’s look at one of the most famous examples of all time: the Monty Hall Problem![4] On the surface, it feels like a logic puzzle. But really, it's a lesson in just how bad our intuition can be when it comes to probability—especially when conditions change.
Here's the setup: You're on a game show. There are three doors. Behind one is a car; behind the other two are goats. You pick Door 1. The host—Monty—who knows what's behind each door, opens Door 3… and it's a goat. Now he gives you a choice: stick with Door 1 or switch to Door 2.
Most people assume it's now 50/50. But it's not.
When you first chose Door 1, you had a 1 in 3 chance of being right. That hasn't changed. What has changed is that Monty gave you new information—he eliminated one losing option on purpose. That pushes the remaining 2/3 probability to the one unopened door you didn't initially choose. So, what's your best move? Always switch.
Here's why it matters: This isn't about goats. It's about updating probabilities based on context and new evidence. It's what statisticians call conditional probability—and what Bayesians live for.
In business and L&D, we run into Monty Hall moments all the time. A new survey result, a learner dropout, or a mid-program pivot shows up, and we think, "Well, now it's a toss-up." But it's not. Our prior knowledge still matters. We have to recalculate, not start over.
The Monty Hall Problem isn't just a quirky thought experiment. It's a reminder that good decision-making in uncertain environments means fighting our instincts—and trusting the math.
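If the two-thirds result still feels wrong, a quick simulation is often more persuasive than the algebra. Here is a minimal sketch of the game under the standard rules (Monty always opens a goat door you didn't pick):

```python
import random

random.seed(7)

def monty_hall(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # the door hiding the car
        pick = random.randrange(3)   # the contestant's first pick
        # Monty opens a goat door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"Stick:  {monty_hall(switch=False):.3f}")  # about 0.333
print(f"Switch: {monty_hall(switch=True):.3f}")   # about 0.667
```

Sticking wins about a third of the time and switching about two-thirds, exactly the update the math predicts.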
The bottom line is that having a basic understanding of Bayes' Theorem enables you to increase your self-defense capabilities.
APPLICATIONS
Let's look at some possible self-defense areas in L&D.
🔁 Program Impact and Evaluation
A colleague claims that after a training session, 70% of participants showed skill improvement on an assessment. You ask, "What's the probability this result happened by chance?" This leads you to consider sample size, effect size, and statistical significance[5]. It helps you avoid overclaiming effectiveness from small or biased data.
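One rough way to pressure-test that kind of claim is a simple binomial check. The sketch below is purely illustrative: it assumes a hypothetical cohort of 30 participants, 21 of whom (70%) improved, and a hypothetical no-effect baseline in which anyone has a 50/50 chance of showing improvement anyway. It asks how often chance alone would produce a result at least that strong:

```python
from math import comb

# Hypothetical cohort: 30 participants, 21 (70%) improved.
# Hypothetical "no effect" baseline: any participant has a 50/50
# chance of showing improvement anyway.
n, improved, baseline = 30, 21, 0.5

# Binomial tail: P(at least `improved` successes out of n under the baseline rate)
p_by_chance = sum(comb(n, k) * baseline**k * (1 - baseline)**(n - k)
                  for k in range(improved, n + 1))
print(f"Probability of a result at least this strong by chance alone: {p_by_chance:.3f}")
```

A small number here only says that chance rarely produces a result that strong under those assumptions; it says nothing about whether the assumptions themselves hold.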
📊 Predicting Training Completion or Engagement
Based on historical LMS data, an analyst states there's a 90% probability that employees who start a self-paced module will finish it if it's under 15 minutes. Challenge the premise. Is speed of completion really your objective? Is duration actually what causes the dropout? Or is it irrelevant, boring, and useless content and activities? Completion is essential, but use the figure to flip the conversation and ask the more relevant questions. Don't jump to the conclusion that you need to redesign for speed.
🎯 Talent Prediction & Selection
Your CHRO wants to use a vendor assessment that predicts with an 80% probability that a participant will succeed in a leadership role. Use the other statistical tools mentioned in my other articles. Ask how that 80% was calculated. Ask about its CRITERION VALIDITY. Ask what model or theory the assessment is based on.
🌧 Risk Mitigation in Learning Logistics
You're scheduling a global virtual training. There's a 30% probability of regional internet outages during the scheduled time. Who cares whether the number is truly reliable! You build in contingency plans regardless of the number (e.g., recordings, backup facilitators) because the risk is real, even if it's not highly likely.
💸 Forecasting ROI or Training Uptake
A vendor forecasts a 60% probability of achieving the desired behavior change within three months post-training. As with the above examples, question how that number was calculated. For more reliable measures, use post-training calculations to determine the professed ROI. This claim is like the election claim that Trump had a 10% chance of winning. The variables are difficult, if not impossible, to control for. If the outcomes occur (again, assuming the vendor can accurately assess and evaluate them), the original forecast receives the credit (falsely). But if the outcomes fail to happen, the vendor simply claims that the other 40% occurred. See! Foolproof… for the vendor.
A CHECKLIST
Here is a checklist you can use. It is similar to the LDA Research Checklist I wrote with my friend and colleague, Clark Quinn, but applied to probability claims hurled at you.
1. What is the claim, exactly?
Is the outcome clearly stated (e.g., "improves retention," "boosts engagement")?
Is it a relative claim ("50% more effective") or an absolute one ("increased retention by 12 points")?
2. Is a probability or statistic mentioned?
What's the sample size?
Over what time period?
Is it a mean, a median, or an outlier?[6]
Is the effect size meaningful or just statistically significant?
3. Compared to what?
Was it compared to doing nothing or to another intervention?
Is there a control group or benchmark?
If "improvement" is claimed, what's the baseline? Does the improvement come from a benchmark or a control group?[7]
4. Was the study or data independently evaluated?
Is the data peer-reviewed, third-party verified, or vendor-generated?
Can they provide the raw data or a full report?
Have similar results been replicated elsewhere? Are the results valid? To what degree? To what level? (See my article WHY VALIDITY AND RELIABILITY MATTER IN L&D, 2025)
5. Is the claim generalizable to your context?
Was the study done in the same industry, region, or learner type?
Are there confounding variables (e.g., leadership support, tech platform differences)?
Does the claim hold up under your constraints?
6. Are they making causal claims from correlational data?
Does the claim say, "led to," "resulted in," or "caused"?
Are they implying causation from a simple correlation?
Were participants randomly assigned?
7. Does it pass the sniff test?
Does the number feel oddly precise (e.g., "62.4% boost in productivity")?
Worse, do all the numbers have a zero at the end?!?
Are words like "proven," "guaranteed," or "scientifically validated" used without sources?
Are testimonials used in place of evidence?
8. Final gut check: Would you bet your budget on it?
Would you stake your credibility on the claim?
Have you pressure-tested it with peers or stakeholders?
If it fails, can you defend why you chose it?
CONCLUSION
Probability isn't just a math thing—it's a thinking thing. In L&D, we deal with uncertainty all the time: Will this program work? Is this person high potential? Can we trust that vendor's claim? Understanding probability—especially Bayes—helps us question assumptions, catch statistical spin, and avoid being duped by data that looks precise but isn't. It forces us to ask: What's the context? What do we already know? And how should new evidence shift our thinking? We don't need to be statisticians. But we do need to think like them, especially when someone's trying to sell us certainty.
REFERENCES
Gigerenzer, G. (2002). Calculated risks: How to know when numbers deceive you. Simon & Schuster.
Gould, S. J. (1985, June). The median isn’t the message. Discover, 6, 40–42.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
McGrayne, S. B. (2011). The theory that would not die: How Bayes’ rule cracked the Enigma code, hunted down Russian submarines, and emerged triumphant from two centuries of controversy. Yale University Press.
Richter, M. S. (2025). Why validity and reliability matter in L&D—And what you can do about it. LDA. https://members.ldaccelerator.com/c/articles/why-validity-and-reliability-matter-in-l-d-and-what-you-can-do-about-it-331833ab-15e7-4d6f-ba45-172eec40aa0c
Richter, M. S., & Quinn, C. N. (2024). The LDA Research Checklist. LDA. https://members.ldaccelerator.com/c/articles/the-lda-research-checklist-9a91c5b8-01f5-409d-af85-f67923185966
Silver, N. (2012). The signal and the noise: Why so many predictions fail—but some don’t. Penguin Press.
Spiegelhalter, D. (2019). The art of statistics: How to learn from data. Basic Books.
FOOTNOTES:
[1] We won't even get into the perilous issue of how you determined that 60%—let's assume your methodological approach was sound.
[2] Confidence interval and effect size: A confidence interval gives a range around a result that we’re reasonably sure contains the actual value (usually 95% confidence). It tells you how much wiggle room is in your estimate. Effect size tells you how big the impact is—not just whether it’s real, but whether it matters.
[3] A baseline is the starting point—what things look like before any intervention happens. It helps you measure change.
[4] The Monty Hall Problem is named after Monty Hall, the host of the American game show Let’s Make a Deal, which first aired in the 1960s. The probability puzzle gained wide attention in the 1990s when Marilyn vos Savant popularized it in her Parade magazine column. Despite being mathematically sound, her solution—to always switch doors—sparked widespread controversy, even among PhDs. For a formal treatment, see Morgan, J. P., & Chaganty, N. R. (1991). Let’s make a deal: The player’s dilemma. The American Statistician, 45(4), 284–287.
[5] Statistical significance indicates that the result is unlikely to be due to chance. But remember: “significant” doesn’t always mean “important.” A tiny, irrelevant difference can still be statistically significant if your sample is big enough.
[6] Mean, median, and outlier: The mean is the average—add up all the numbers and divide by how many there are. The median is the middle value when the numbers are arranged in order from lowest to highest. It’s less affected by extreme values. An outlier is a value that’s unusually far from the rest of the data—high or low—and can distort the mean, making things look better or worse than they really are.
[7] Benchmark and control group: A benchmark is an external standard or point of comparison—what others are achieving or what’s considered “good” in your industry. A control group is a group that doesn’t receive the intervention or treatment used to see what would’ve happened without it. Together, these help you judge whether your program is actually doing anything meaningful—or just riding on noise.
ENDNOTE:
[i] If you are like me, the first time you see this formula in action, you may wonder when addition came into play. It took me a bit of time to internalize that. So, just in case you need to see it, here is a walk-through…
Remember the core formula:

P(A|B) = [P(B|A) * P(A)] / P(B)
A = the thing you’re interested in (mastery)
B = the observed evidence (passing the test)
So we’re solving for: What’s the probability someone has mastered the content given that they passed?
To do that, we need to figure out how often people pass, regardless of whether they actually mastered it. That’s P(B), or in this case, P(Pass).
Why the addition?
People can pass the test for two different reasons:
They mastered the content and passed (true positive)
They didn’t master it but still passed (false positive)
So the total probability of anyone passing the test is:
P(Pass) = P(Pass|Mastery) * P(Mastery) + P(Pass|No Mastery) * P(No Mastery)
That’s why we add those two pathways. It’s not arbitrary—it’s because there are two ways a person could show a “pass” on the test.
Let’s recap with numbers:
40% of learners master → P(Mastery) = 0.40
60% don’t → P(No Mastery) = 0.60
True positive rate: 85%
False positive rate: 25%
So the full denominator:
P(Pass) = (0.85 * 0.40) + (0.25 * 0.60) = 0.34 + 0.15 = 0.49
We add 0.34 and 0.15 because those represent all the ways the person could have ended up with a passing score.
In Bayes, the denominator P(B) always sums up all the paths to the evidence you observed. You’re calculating: Of everyone who passed, how many actually mastered it? That’s why we include both groups—and that’s why addition is key here.