Why Does Foresight Matter When Predictions Fail?
Business leaders and policymakers are constantly told to “prepare for the future.” But whose advice should you trust? Should you focus on professional foresight processes that map trends and scenarios, or rely on so-called “superforecasters” whose predictions are statistically validated?
And, more importantly: should foresight be judged only by whether its predictions come true?
The debate often comes down to one question: What’s the point of foresight—prediction, or preparation?
What Foresight Groups Actually Do (and Don’t Do)
Professional foresight groups—ranging from corporate teams to think tanks like the Institute for the Future—are often misunderstood. Their job isn’t to say, “Here’s exactly what will happen.” Instead, they help organisations:
- Spot early signals of change—from shifting demographics to emerging technologies.
- Explore multiple futures through scenario planning, so strategies aren’t blindsided. (See our guide to scenario planning.)
- Challenge assumptions and broaden strategic thinking, helping leaders consider risks and opportunities they might otherwise overlook.
In short, foresight is less about crystal-ball predictions and more about building resilience and agility. It equips organisations to navigate uncertainty by considering what could happen, not just what’s most likely.
The Role of Superforecasters
By contrast, “superforecasters” (a term popularised by Philip Tetlock’s Good Judgment Project) are individuals who consistently outperform experts at making probabilistic predictions about specific events. For example:
- “Will a major cyberattack disrupt the energy grid in the next 18 months?”
- “Will Country X default on its sovereign debt by 2027?”
Superforecasters excel when:
- The question is clear and measurable.
- The event has a definable time horizon.
- Data exists to inform probability estimates.
Their value lies in providing evidence-based probabilities, helping decision-makers allocate resources or hedge risks more precisely.
But they’re less useful when the challenge is to imagine open-ended futures where uncertainty is irreducible—like exploring how emerging technologies might reshape entire industries over decades.
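To make “statistically validated” concrete: forecasting tournaments such as the Good Judgment Project score forecasters with the Brier score, the mean squared difference between the probability assigned and what actually happened. Here is a minimal sketch in Python; the forecasts and outcomes are made-up numbers for illustration, not real tournament data.

```python
# Minimal sketch of the Brier score, the standard accuracy measure
# for probabilistic forecasts. Lower is better: 0.0 is perfect, and
# always answering 50% scores 0.25. Data below is illustrative only.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities (0-1) and
    outcomes (1 = the event happened, 0 = it didn't)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical probabilities a forecaster assigned to five yes/no questions:
forecasts = [0.80, 0.15, 0.60, 0.90, 0.30]
outcomes = [1, 0, 1, 1, 0]  # what actually happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")
```

A consistently low Brier score across many such questions is what separates superforecasters from the rest of us.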
Should We Judge Foresight by Accuracy Alone?
It’s tempting to evaluate foresight like we do a weather forecast: “Did the prediction happen or not?” But that’s a flawed metric when foresight’s primary purpose is to prompt action and preparation, not passive observation.
Consider the case of Y2K (the Year 2000 problem). In the late 1990s, foresight professionals warned that computer systems might fail when clocks rolled over to 1 January 2000. Governments and businesses invested billions to fix the problem. As a result, major disruptions were largely avoided.
Afterwards, critics claimed Y2K was overhyped—“See? Nothing happened!”—but this ignores the reality that the foresight was effective precisely because it drove preventive action. Had leaders waited to see if predictions were “valid” before acting, the outcome could have been catastrophic.
The same pattern played out with the hole in the ozone layer: foresight revealed the emerging problem, experts and governments acted to avert disaster (the 1987 Montreal Protocol phased out ozone-depleting chemicals), and because the worst never materialised, cynics dismissed the threat as a hoax.
Four Standards for Evaluating Foresight
If accuracy isn’t the sole benchmark, how should you evaluate foresight efforts? A more useful framework considers four dimensions:
- Relevance: Does the foresight address significant strategic risks or opportunities for your organisation? A weak signal about a niche technology might not be worth acting on—unless it aligns with your industry.
- Influence on Decision-Making: Did the foresight process lead to better decisions, more robust strategies, or greater organisational alignment? The value often lies in the discussions and strategic adjustments it sparks.
- Actionability: Were leaders able to use the foresight to allocate resources, launch initiatives, or mitigate risks in tangible ways?
- Accuracy (When Measurable): For specific, time-bound forecasts (like superforecasting questions), did reality align with the assigned probabilities? Accuracy still matters, but it’s not the only measure (a simple way to check it is sketched after this list).
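On that last point, “did reality align with the assigned probabilities?” is usually answered by checking calibration: of all the events a forecaster rated around 70% likely, did roughly 70% actually happen? Here is a minimal sketch in Python; the bucket count, forecasts, and outcomes are illustrative assumptions, not real forecasting records.

```python
# Minimal calibration check: group forecasts into probability buckets,
# then compare each bucket's average forecast with its observed hit rate.
# All forecasts and outcomes below are illustrative, not real data.
from collections import defaultdict

def calibration_table(forecasts, outcomes, n_buckets=5):
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        i = min(int(p * n_buckets), n_buckets - 1)  # p = 1.0 joins the top bucket
        buckets[i].append((p, o))
    for i in sorted(buckets):
        pairs = buckets[i]
        avg_p = sum(p for p, _ in pairs) / len(pairs)
        hit_rate = sum(o for _, o in pairs) / len(pairs)
        print(f"{i / n_buckets:.1f}-{(i + 1) / n_buckets:.1f}: "
              f"forecast {avg_p:.0%}, observed {hit_rate:.0%}, n={len(pairs)}")

# Hypothetical forecaster: probabilities assigned, and what actually happened.
forecasts = [0.9, 0.85, 0.7, 0.72, 0.3, 0.25, 0.1, 0.65]
outcomes = [1, 1, 1, 0, 0, 0, 0, 1]
calibration_table(forecasts, outcomes)
```

For a well-calibrated forecaster, the “forecast” and “observed” columns track closely; in Tetlock’s research, that closeness is a hallmark of superforecasters.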
For more on embedding foresight into your planning, see our guide to Strategy Development Frameworks.
How Leaders Can Use Both Approaches
The best organisations don’t treat foresight and superforecasting as rivals—they integrate both.
- Use foresight groups to scan the horizon, identify scenarios, and stress-test strategy. This is especially valuable for long-term innovation, geopolitical risks, or industry disruptions where probabilities are murky.
- Use superforecasters for specific, quantifiable predictions that inform resource allocation, risk hedging, or tactical decisions.
By combining exploratory foresight with data-driven forecasting, leaders get both breadth and precision—preparing for multiple possibilities while sharpening bets on the most probable outcomes.
The Takeaway for Strategy Leaders
If you’re a CEO, strategy director, or consultant, the key isn’t to ask, “Which is better—foresight or superforecasters?” The question should be: “What decisions am I trying to make, and what kind of future insight will best support them?”
- For exploring the unknown and building resilient strategies, foresight is invaluable—even if its “predictions” never come true.
- For making probability-driven bets on specific events, superforecasters can deliver statistically reliable insights.
In a world defined by complexity and uncertainty, your strategy toolkit needs both.
Want to build more resilient strategies?
- Book a strategy consultation with Chris C Fox Consulting to discuss how to integrate foresight into your planning.
- Schedule a demo of StratNavApp.com at this link to see how you can manage strategy development and execution collaboratively.
- Or try StratNavApp.com for free: