
Welcome to Week 1 of our series on evidence-based wellness. This series is built around Wellness Literacy: the skill of cutting through wellness noise by evaluating sources, understanding evidence strength, and turning what’s credible into routines you can actually do. Each week, we’ll highlight the most trustworthy expert guidance and translate the essentials into practical takeaways.
If you want the short weekly version, Slightly Smarter is publishing a 2-minute snapshot of each topic—you can subscribe free and follow along there, too.
The Snapshot: If you read the companion Slightly Smarter issue, you already know the short version: intelligence doesn't protect you from wellness misinformation, the book to read is Foolproof, and there are six manipulation tactics to watch for. This is the full analysis: the research, the framework, and how to verify it yourself.
The Research
Finding 1: Why smart people aren't protected
Sultan et al. (2024) synthesized 256,337 veracity judgments across 31 experiments and used a signal-detection framework to separate (a) discrimination (telling true from false) from (b) response bias (a general tendency to label things true or false). In their main model, analytical thinking showed a strong association with better discrimination (estimate 0.66, 95% CI [0.52, 0.81]), while education showed little to no meaningful relationship with discrimination (estimate 0.07, 95% CI [0.00, 0.13]). Further, Kahan et al. (2012) found that more scientifically literate individuals sometimes show greater polarization on charged topics, using their cognitive resources to defend existing beliefs more effectively.
The implication: being smart can actually make you better at fooling yourself.
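If you want the mechanics behind that discrimination/bias split, here's a minimal sketch of the textbook signal-detection decomposition (d′ for discrimination, c for response bias). This is the standard formulation, not Sultan et al.'s exact hierarchical model, and both the sdt_decompose helper and the example rates are illustrative, not their data:

```python
from statistics import NormalDist

def sdt_decompose(hit_rate: float, false_alarm_rate: float):
    """Split veracity judgments into discrimination (d') and response bias (c).

    hit_rate:         share of TRUE items correctly rated "true"
    false_alarm_rate: share of FALSE items incorrectly rated "true"
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)           # ability to tell true from false
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2  # overall lean toward saying "true"
    return d_prime, criterion

# Hypothetical rates. Reader A labels nearly everything "true" (strong bias,
# weak discrimination); Reader B discriminates with no overall lean.
print(sdt_decompose(0.90, 0.70))  # d' ≈ 0.76, c ≈ -0.90: gullible responder
print(sdt_decompose(0.75, 0.25))  # d' ≈ 1.35, c ≈ 0.00: genuine discriminator
```

The point of the split: a reader who rates almost everything "true" can look accurate on true headlines while discriminating poorly. Keeping d′ and c separate is what lets the meta-analysis say that analytical thinking improves discrimination while education barely moves it.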
Finding 2: The two-second intervention
Pennycook et al. (2021) tested a simple “accuracy nudge.” In a Twitter field experiment (n = 5,379), users who received a message prompting them to think about the accuracy of a single headline went on to share higher-quality news afterward.
The mechanism is the key idea: most of us evaluate content on interest or emotion; the prompt shifts attention to accuracy.
Finding 3: How fact-checkers beat professors
Wineburg & McGrew (2017) observed Ph.D. historians, Stanford undergrads, and professional fact checkers evaluating live websites. Historians and students tended to read vertically (staying on the site); fact checkers read laterally—leaving the site quickly and opening new tabs to see what other sources say—and reached more warranted conclusions faster.
The lesson: never evaluate a source by looking at the source alone. Go lateral.

The Framework
Based on the research, four practices that actually work:
1. The Accuracy Pause
Before sharing, buying, or acting on health information—stop. Ask yourself: "Is this actually accurate?" Not "Is this interesting?" or "Does this confirm what I believe?" Just: Is it accurate?
Why it works: Pennycook's research shows this simple question shifts your brain from social evaluation mode to accuracy evaluation mode.
2. Lateral Reading
Don't evaluate a website by reading it. Open new tabs. Search for what others say about that source. What's their reputation outside their own site?
Why it works: Wineburg and McGrew's research shows this is what professionals do, and it outperforms the "read carefully" approach that feels more thorough.
3. Tactic Recognition
Learn the manipulation playbook from Foolproof: emotional appeals, fake experts, conspiracy framing, false urgency, polarization, impersonation. Knowing the plays makes you harder to fool.
Why it works: Inoculation research, pioneered by Van der Linden (2023) and meta-analyzed by Lu et al. (2023), shows that pre-exposure to weakened doses of manipulation tactics builds lasting resistance.
4. Consensus Check
Does this claim contradict what major health organizations say? If one source is telling you something dramatically different from scientific consensus, demand extraordinary evidence.
Why it works: Isolated claims that contradict broad expert agreement are occasionally right, but the base rate is low, and extraordinary claims require extraordinary evidence. The quick calculation below shows how a low base rate swamps even a convincing-sounding claim.
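Here's a toy Bayes' rule calculation to put rough numbers on that base-rate logic. Every figure in it (the 2% base rate, the 80% and 20% rates) is an illustrative assumption, not a measured value, and posterior_true is just a hypothetical helper:

```python
def posterior_true(prior: float, sensitivity: float, false_positive: float) -> float:
    """P(claim is true | it passed your plausibility check), via Bayes' rule.

    prior:          base rate of consensus-contradicting claims being right
    sensitivity:    P(claim seems compelling | it's actually true)
    false_positive: P(claim seems compelling | it's actually false)
    """
    numerator = sensitivity * prior
    return numerator / (numerator + false_positive * (1 - prior))

# Illustrative numbers: even if your judgment is decent (80% sensitivity,
# 20% false-positive rate), a 2% base rate leaves the claim likely false.
print(round(posterior_true(prior=0.02, sensitivity=0.80, false_positive=0.20), 3))
# ≈ 0.075: "sounds convincing" moves a 2% prior to only about 7.5%
```

A claim that clears your personal plausibility filter still stays unlikely when the prior is small. That gap is what "extraordinary evidence" has to close.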
The Literacy Lesson
Why Smart People Are Vulnerable
The misinformation problem isn't about intelligence—it's about mode.
Most of us default to intuitive mode—fast, efficient, emotion-driven. This works fine for most daily decisions. But wellness claims are designed to exploit intuitive processing: emotional language, tribal signaling, confident delivery.
Sultan et al. (2024) found analytic thinking predicted accuracy better than IQ or education. The difference: whether you pause to evaluate or accept based on how the content feels.
The skill: Recognize when you're in intuitive mode and deliberately shift to analytic mode. The accuracy pause ("Is this actually true?") is the trigger. You don't need to be smarter. You need to be slower—at the right moments.
―――――――――――――――――――――――――――――
Verify This
Here's the meta-question: why should you trust me? I just told you to verify sources laterally. So do it. The full reference list is at the end of every article.
―――――――――――――――――――――――――――――
Coming Next Week
Nutrition: The Dietary Guidelines for Americans 2025-2030 just reversed decades of advice. We'll break down what actually changed, what the science supports, and what's still politics. The evaluation framework you learned this week? We're applying it immediately.
―――――――――――――――――――――――――――――
Editor's Note
Most wellness information isn't wrong. It's miscalibrated—small effects sold as transformations, correlations packaged as causation, preliminary findings presented as settled science. The problem isn't that experts are lying. It's that the incentive structure rewards overconfidence and punishes nuance. This series is about calibration. Matching your confidence to the actual evidence. — Brian
―――――――――――――――――――――――――――――
References
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732-735. https://doi.org/10.1038/nclimate1547
Lu, J., Shen, C., & Chen, H. (2023). Inoculation, boosters, and skepticism: A meta-analysis of psychological interventions against misinformation. Psychological Bulletin, 149(5-6), 283-315.
Wineburg, S., & McGrew, S. (2017). Lateral reading: Reading less and learning more when evaluating digital information. Stanford History Education Group Working Paper No. 2017-A1. https://doi.org/10.2139/ssrn.3048994
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A. A., Eckles, D., & Rand, D. G. (2021). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590-595. https://doi.org/10.1038/s41586-021-03344-2
Sultan, M., Tump, A. N., Ehmann, N., Lorenz-Spreen, P., Hertwig, R., Gollwitzer,
A., & Kurvers, R. H. J.M. (2024). Susceptibility to online misinformation: A systematic meta-analysis of demographic and psychological factors. Proceedings of the National Academy of Sciences, 121(47), e2409329121. https://doi.org/10.1073/pnas.2409329121
Van der Linden, S. (2023). Foolproof: Why misinformation infects our minds and how to build immunity. W. W. Norton.

