Risk is not what most people think it is.
In common use, the word conflates at least three distinct concepts: probability, magnitude, and uncertainty about uncertainty. Medicine conflates them routinely. Finance conflates them routinely. And this conflation is not merely semantic — it produces systematically wrong decisions at scale.
Probability vs. Uncertainty
Frank Knight’s distinction, made in 1921 and still under-applied, is precise: risk is a situation where outcomes are unknown but probabilities can be assigned. Uncertainty is a situation where even the probabilities are unknown.
Most significant decisions in medicine, investing, and policy operate under Knightian uncertainty, not quantifiable risk. The confidence intervals we report are intervals within a model, and the models are themselves guesses about the structure of the domain.
This is not nihilism. It is calibration. Acting under uncertainty requires a different epistemic posture than acting under quantified risk.
The Medical Instance
Consider the clinician reasoning about treatment selection for a complex patient.
The randomized controlled trial provides efficacy estimates for a population, one typically younger, with fewer comorbidities, and more adherent than the patient sitting across the desk. When we apply population-level effect sizes to individual patients, we are making an inference of unknown validity. The epidemiology does not tell us what will happen to this patient. It tells us what happened, on average, to patients who resembled them on the measured variables.
Unmeasured variables — the ones not in the dataset — are doing substantial work, and we cannot know how much.
This is not an argument against evidence-based medicine. It is an argument for epistemic humility about what evidence-based medicine provides: a prior, not a certainty.
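To make "a prior" concrete, here is a minimal sketch, assuming a normal-normal conjugate model and purely illustrative numbers: the trial's population estimate anchors the belief, and the individual patient's noisy early response shifts it.

```python
import math

# Sketch: treating an RCT effect estimate as a prior, not a certainty.
# Normal-normal conjugate update; all numbers are illustrative.

# Prior: population-level effect from the trial (point estimate and SE).
prior_mean, prior_se = 10.0, 2.0

# Likelihood: this patient's observed early response, measured noisily.
obs_mean, obs_se = 4.0, 3.0

# Precision-weighted (conjugate) update.
prior_prec = 1.0 / prior_se**2
obs_prec = 1.0 / obs_se**2
post_prec = prior_prec + obs_prec
post_mean = (prior_mean * prior_prec + obs_mean * obs_prec) / post_prec
post_se = math.sqrt(1.0 / post_prec)

print(f"prior:     {prior_mean:.1f} +/- {prior_se:.1f}")
print(f"observed:  {obs_mean:.1f} +/- {obs_se:.1f}")
print(f"posterior: {post_mean:.1f} +/- {post_se:.1f}")
# The estimate moves toward the individual's data in proportion to its
# precision: the trial anchors the belief but does not settle it.
```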
The Financial Instance
The same structure appears in finance, in sharper relief, because markets are reflexive in a way that biology is not. The model that accurately describes a mispricing tends, when widely adopted, to eliminate the mispricing it describes.
This means that any quantitative model of risk is partially self-defeating. The degree of self-defeat depends on adoption rate, time horizon, and how far from equilibrium the original estimate was.
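The shape of that feedback can be sketched with a toy model. Nothing here is estimated from markets; the decay rule, the doubling adoption, and every number are assumptions chosen only to show the dynamic.

```python
# Toy model of a self-defeating edge: each period, adopted capital closes
# part of the remaining mispricing, and adoption grows as the model's
# track record spreads. Purely illustrative, not an estimated market model.

edge = 0.05          # initial mispricing (5% expected excess return)
adoption = 0.01      # fraction of relevant capital using the model
for year in range(1, 11):
    edge *= (1.0 - adoption)           # adopters arbitrage away part of the gap
    adoption = min(1.0, adoption * 2)  # success attracts imitators
    print(f"year {year:2d}: edge {edge:.4f}, adoption {adoption:.2f}")
# The faster the model is adopted, the faster its own forecasts stop holding.
```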
The practical consequence: drawdown risk is almost always underestimated in quantitative frameworks that treat historical distributions as stable forward estimates. They are not stable. Empirical distributions have fatter tails than the fitted models assume, and they undergo regime changes that make historical data structurally misleading.
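A short Monte Carlo makes the tail gap visible. This sketch assumes numpy, matched daily volatility, and an illustrative Student-t with three degrees of freedom standing in for the fat-tailed alternative.

```python
import numpy as np

# Sketch: how a Gaussian fit understates drawdowns. Daily returns with
# matched volatility; parameters are illustrative, not calibrated.
rng = np.random.default_rng(0)
n_days, n_paths, vol = 2520, 2000, 0.01   # ~10 trading years, 1% daily vol

def max_drawdown(returns):
    """Worst peak-to-trough decline of each cumulative wealth path."""
    wealth = np.cumprod(1.0 + returns, axis=1)
    peaks = np.maximum.accumulate(wealth, axis=1)
    return ((peaks - wealth) / peaks).max(axis=1)

normal = rng.normal(0.0, vol, (n_paths, n_days))
df = 3                                          # heavy-tailed Student-t
t_raw = rng.standard_t(df, (n_paths, n_days))
student = t_raw * vol / np.sqrt(df / (df - 2))  # rescale to the same std

for name, r in [("gaussian", normal), ("student-t", student)]:
    dd = max_drawdown(r)
    print(f"{name:9s}: median DD {np.median(dd):.1%}, "
          f"99th pct {np.percentile(dd, 99):.1%}")
# Same measured volatility, very different tail: the Gaussian estimate of
# "worst case" is the one the market eventually exceeds.
```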
Asymmetric Stakes and the Asymmetry of Error
A principle that partially resolves the dilemma: when the downside of underestimating risk is catastrophic and irreversible, and the downside of overestimating risk is costly but recoverable, act conservatively.
This is not merely the precautionary principle restated. It is decision theory applied to asymmetric payoff structures. A 10% chance of losing everything is not exchangeable with a 20% chance of losing half, even though the expected losses are identical, because the continuation condition for future decision-making is violated by the catastrophic outcome and preserved by the lesser one.
Ruin is qualitatively different from loss.
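A simulation shows the qualitative difference. Both gambles below have the same per-round expected value; the numbers are illustrative, and the only structural difference is that one of them has an absorbing zero.

```python
import numpy as np

# Sketch: why ruin is an absorbing state, not just a larger loss.
# Two gambles with identical per-round expected value (+8%), repeated
# 50 times on reinvested wealth; all numbers are illustrative.
rng = np.random.default_rng(1)
n_paths, n_rounds = 100_000, 50

def play(p_loss, loss_frac, gain_frac):
    wealth = np.ones(n_paths)
    for _ in range(n_rounds):
        lose = rng.random(n_paths) < p_loss
        wealth *= np.where(lose, 1.0 - loss_frac, 1.0 + gain_frac)
    return wealth

# A: 10% chance of losing everything, 90% chance of +20%   -> EV +8%/round
# B: 20% chance of losing half,       80% chance of +22.5% -> EV +8%/round
for name, w in [("A (ruin possible)", play(0.10, 1.0, 0.20)),
                ("B (loss only)",     play(0.20, 0.5, 0.225))]:
    print(f"{name}: median wealth {np.median(w):.3f}, "
          f"ruined {np.mean(w == 0.0):.1%}")
# Expected values match, but A's paths hit an absorbing barrier: once at
# zero there is no continuation, so A's long-run median collapses while
# B keeps compounding.
```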
Toward Calibrated Confidence
What does good epistemic practice under uncertainty look like?
It looks like explicit acknowledgment of what is known, what is modeled, and what is guessed. It looks like forecasts with stated confidence intervals and mechanisms for revising them when evidence arrives. It looks like decisions that preserve optionality where possible, accept irreversibility only when the asymmetry is strongly favorable, and track the difference between the decision process and the outcome.
Good decisions under uncertainty produce bad outcomes routinely. Bad decisions produce good outcomes routinely. Conflating outcomes with process quality is the most common way that practitioners fail to learn from experience — they attribute the good outcome to their judgment and the bad outcome to bad luck, when the reverse may be equally plausible.
The cure for this is structural documentation: writing down the reasoning before the outcome is known, then evaluating the reasoning against the outcome rather than evaluating the outcome in isolation.
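One possible shape for such a record, sketched below: the reasoning and the stated probability are fixed before the outcome, and a proper scoring rule (here the Brier score, one choice among several) evaluates the forecast afterward.

```python
from dataclasses import dataclass

# Sketch of structural documentation: record the forecast and reasoning
# before the outcome, then score the forecast after. The record shape and
# the Brier scoring rule are one possible choice, not a prescribed standard.

@dataclass
class DecisionRecord:
    decision: str
    reasoning: str               # written down BEFORE the outcome is known
    p_success: float             # stated probability, 0..1
    outcome: bool | None = None  # filled in later

    def brier(self) -> float:
        """Squared error of the stated probability against the outcome."""
        assert self.outcome is not None, "outcome not yet known"
        return (self.p_success - float(self.outcome)) ** 2

journal = [
    DecisionRecord("start therapy X", "trial prior + early response", 0.7),
    DecisionRecord("hold position Y", "regime looks stable", 0.9),
]
journal[0].outcome = True   # good outcome, decent forecast
journal[1].outcome = False  # bad outcome: bad reasoning, or bad luck?

for rec in journal:
    print(f"{rec.decision}: p={rec.p_success}, brier={rec.brier():.2f}")
# Over many records, the mean Brier score evaluates the process; any
# single outcome evaluates almost nothing.
```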
It is not a comfortable practice. It makes the errors visible. That visibility is the point.