The Overconfidence Problem: Why Your 80% Sure Calls Are Wrong 40% of the Time
You have probably heard that humans are overconfident. But until you see the actual numbers, it is hard to appreciate just how wide the gap is between our stated confidence and reality.
The Gap Nobody Feels
Here is the core finding from decades of calibration research: when people say they are 90% confident in something, they are right about 70% of the time. When they say 80%, the actual hit rate is closer to 60%. This pattern shows up in study after study, across professions, cultures, and education levels.
The tricky part is that you cannot feel this gap. There is no internal alarm that goes off when your confidence is miscalibrated. A surgeon who says there is a 95% chance of a successful operation feels exactly the same level of certainty whether the true odds are 95% or 75%. The only way to detect overconfidence is to keep score over many predictions and check the numbers.
Concrete Examples That Might Sting
Let me make this tangible with some examples from domains where overconfidence is well documented.
Startup founders consistently overestimate their odds of success. Surveys show that founders rate their own startup's chance of success at 70% or higher on average. The base rate for venture-backed startups returning their invested capital is roughly 25 to 35%. For pre-seed companies, it is lower. The gap between perceived and actual odds is enormous, and it affects everything from fundraising strategy to hiring plans.
Venture capitalists are not immune. When VC partners rate a deal as "high conviction" at the investment committee stage, the implied confidence is something like 70 to 80% that the company will be a meaningful winner. Actual fund return data suggests that roughly 1 in 10 to 1 in 5 investments in a portfolio drive the majority of returns. That math does not support 70% conviction across the board.
Project timelines are another classic. Software engineers asked to give a 90% confidence interval for task completion (meaning they are 90% sure the task will be done within this range) miss the range about 50% of the time. That is not a small miss. Their 90% confidence intervals perform like 50% intervals.
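If you track your own estimates, the coverage check is simple arithmetic: count how often the actual completion time landed inside the stated range. Here is a minimal sketch in Python, with made-up task data standing in for a real estimate log.

```python
# Hypothetical task data: each entry is a stated 90% range (low, high) in days
# and the actual number of days the task took.
tasks = [
    ((2, 4), 5),    # miss: took longer than the upper bound
    ((1, 3), 2),    # hit
    ((5, 10), 12),  # miss
    ((3, 6), 4),    # hit
]

# Coverage: the fraction of outcomes that fell inside the stated interval.
hits = sum(low <= actual <= high for (low, high), actual in tasks)
print(f"Stated 90% intervals covered the outcome {hits / len(tasks):.0%} of the time")
```

Well-calibrated 90% intervals should cover the outcome about nine times out of ten; the engineers in these studies land closer to the 50% shown by this toy data.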
What the Research Actually Says
Philip Tetlock ran the most comprehensive study of prediction accuracy ever conducted. Over twenty years, he tracked thousands of predictions from political experts, economists, and intelligence analysts. The headline result: the average expert was barely better than chance, and simple statistical rules that just extrapolated from base rates often did better.
But the more interesting finding was about who did better. Tetlock identified a small group he called superforecasters who consistently outperformed. What set them apart was not raw intelligence or years of experience so much as calibration. When they said 70%, things happened about 70% of the time. When they said 30%, things happened about 30% of the time.
Kahneman's research added another dimension. He showed that overconfidence is not just about big predictions. It shows up in trivial knowledge questions, medical diagnoses, legal judgments, and financial forecasts. The bias is not domain-specific. It is a feature of how human brains process uncertainty.
Two Types of Overconfidence (They Are Different)
Researchers distinguish between two forms, and the difference matters for how you fix them.
Overconfidence in knowledge is about what you think you know. When someone answers trivia questions with 95% confidence and gets 30% wrong, that is knowledge overconfidence. The fix here is relatively simple: encounter more situations where you are wrong, and your internal confidence calibrator adjusts.
Overconfidence in prediction is about what you think will happen. This is harder to fix because outcomes are delayed and noisy. You predicted a product launch would succeed; it did, but maybe for reasons you did not anticipate. You predicted a hire would work out; they did not, but maybe external factors were the real cause. The signal-to-noise ratio is much worse for predictions, which is why you need a large sample of tracked predictions before patterns emerge.
A Quick Exercise You Can Do Right Now
Try this. Write down 10 yes-or-no predictions about the next two weeks. They can be about anything: work, sports, weather, politics, personal life. For each one, assign a confidence level between 50% and 99%.
Be honest with the numbers. If you genuinely think something is a coin flip, write 50%. If you are nearly certain, write 95%. Do not cluster everything at 80% because it feels safe.
Two weeks from now, check which predictions came true. Then look at your calibration. Did your 90% predictions happen about 9 out of 10 times? Did your 60% predictions happen about 6 out of 10 times? Most people who do this exercise for the first time discover that their confidence levels are consistently too high. It is a jarring experience, and it is exactly the kind of feedback that starts the improvement process.
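If you would rather let a script do the scoring, here is a minimal sketch that groups predictions by stated confidence and compares each group's confidence to its actual hit rate. The prediction data below is hypothetical; swap in your own ten entries.

```python
from collections import defaultdict

# Hypothetical two-week exercise: (stated confidence, did it come true?).
predictions = [
    (0.90, True), (0.90, False), (0.80, True), (0.80, True),
    (0.70, True), (0.70, False), (0.60, True), (0.60, False),
    (0.95, True), (0.50, False),
]

# Group outcomes by stated confidence level.
buckets = defaultdict(list)
for confidence, came_true in predictions:
    buckets[confidence].append(came_true)

# Compare what you said to what actually happened.
for confidence in sorted(buckets):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%}: right {hit_rate:.0%} ({sum(outcomes)}/{len(outcomes)})")
```

Ten predictions is a small sample, so treat the output as a first look rather than a verdict; the pattern gets trustworthy as the log grows.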
Can You Actually Fix This?
Yes, but not by trying to "feel less confident." That does not work. What works is structured feedback over time. The superforecasters Tetlock identified improved their calibration through deliberate practice: making many predictions, tracking outcomes, and reviewing where their confidence was miscalibrated.
The key insight is that calibration is a skill, not a personality trait. You can train it the same way you train any other skill. But you need a system that shows you where you are off and by how much. That is what a Brier score measures, and it is what makes prediction tracking fundamentally different from keeping a list of guesses.
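For the curious, the Brier score itself is just the mean squared gap between your stated probability and what happened (1 if it happened, 0 if it did not). A minimal sketch, using hypothetical forecasts:

```python
def brier_score(forecasts):
    """Mean squared gap between stated probability and outcome (1 or 0).
    0.0 is perfect; 0.25 is what always answering 50% earns; confident
    misses push the score toward 1.0."""
    return sum((p - (1.0 if came_true else 0.0)) ** 2
               for p, came_true in forecasts) / len(forecasts)

# Two confident hits and one confident miss land around 0.29.
print(brier_score([(0.9, True), (0.8, True), (0.9, False)]))
```

Because confident misses are punished quadratically, the score rewards exactly the habit calibration training builds: saying 60% when you mean 60%, and reserving 95% for things that happen 19 times out of 20.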
Find Out How Overconfident You Are
Our free Calibration Challenge asks you 10 questions and has you attach a confidence level to each answer. In 2 minutes, you will see exactly where your confidence departs from reality. No signup, no email, just your score.