Susan's Business Book Reports

Superforecasting

Book Summary, December 2020, Susan Alban

Susan Voiceover

Businesspeople are constantly making decisions under uncertainty. We make probabilistic guesses daily about who will be successful in a given role, about which project to prioritize over another, about how to approach a negotiation.
Superforecasting brilliantly surfaces skills and practices that can improve and inform everyday decision-making processes for individuals and for teams. It also illustrates common pitfalls (anchoring on the inside/specific view, punditry, myopia, etc.).

Judging forecasts

Lack of specificity and documentation

Many forecasts use vague language like "could," "might," and "likely."
"Could" may mean 50/50 to me and 30/70 to you.
Being so vague hinders productive discussion
Other forecasts aren't specific enough about the exact outcome and timeline they expect.
This lack of specificity makes many forecasts very difficult to judge. To be judgeable, a forecast needs a specific percentage, a specific outcome, and a specific timeline.
It's also hard to judge a forecaster on a single forecast: among many forecasters, some will get any one call right by chance, which doesn't make them great forecasters. You need many forecasts per person to separate skill from luck.
E.g., if a meteorologist says there is a 70% chance of rain, you can't judge her based on one day. You need to look at the ~100 days she made this forecast: it should have rained on about 70 and stayed dry on about 30 (see the calibration sketch after this list).
E.g., everyone on Wall Street is trying to beat the market. Some people do it in any given year; a few do it for many years running. But holding those individuals up as exceptional disregards the many people who tried to do the same thing and failed.
Often forecasts aren't written down or documented, and people misremember what they said would(n't) happen —> revisionist historians.
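A minimal sketch of what judging documented forecasts looks like in code (the data is hypothetical; the book describes the idea, not an implementation): bucket forecasts by stated probability and compare each bucket's hit rate to its label.

```python
from collections import defaultdict

# Hypothetical record of documented forecasts: (stated probability, outcome).
forecasts = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True), (0.7, False),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

# Well-calibrated: the 70% bucket should come true ~70% of the time.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {prob:.0%}: happened {hit_rate:.0%} of {len(outcomes)} times")
```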

💡Susan voiceover - in the context of work/life (versus professional forecasting), consider substituting "judging a forecast" with "learning from a decision-making process." E.g., as capital allocators, we are constantly looking for ways to improve our investment decision-making processes. Doing this work is predicated on documenting our forecasts as specifically as possible, such that when we revisit them, we have a historical view of what we actually thought and why.

What is a great forecast?

A great forecast is both well-calibrated (events forecast at X% happen about X% of the time) AND high-resolution (the forecaster commits to probabilities near 0% or 100% when the evidence warrants it, ie, picks the poles and is decisive, rather than hedging near 50%).
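The book scores forecasters with Brier scores. A minimal sketch of one common single-outcome form (my numbers, not the book's) shows how it rewards exactly this combination: a calibrated, decisive forecaster beats one who is right but always hedges at 50%.

```python
def brier(probs, outcomes):
    """Mean of (p - outcome)^2; 0 is perfect, 0.25 is always saying 50%."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 1, 1, 0, 0]            # what actually happened
decisive = [0.9, 0.9, 0.8, 0.1, 0.2]  # calibrated AND high-resolution
hedger   = [0.5, 0.5, 0.5, 0.5, 0.5]  # never commits

print(brier(decisive, outcomes))  # ~0.022
print(brier(hedger, outcomes))    # 0.25
```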

Skills of great forecasters

Fox trumps the hedgehog

Forecasters who have one strong belief (hedgehog who knows one thing) will not do as well as a forecaster who can encompass multiple perspectives and points of view (fox)
Ie, dogmatic forecasters who organize all of their thinking around one big idea won’t do as well.
Be careful about having a specific underlying belief on which a lot of your argument or forecast is built. It can be difficult to remove that or reexamine and alter it. Having an ego attachment to something (e.g., I am an xyz expert) can make you too attached to a particular viewpoint or to being right in general.
An example of this is the television pundit who has one lens through which they see everything and make many judgements, e.g., the pundit who sees every decision or outcome through the lens of globalization.

Many perspectives

The dragonfly, with its many eye lenses, is the book's image for this!
Take multiple perspectives into account; a 4th, 5th, or 6th perspective will keep sharpening your judgement.
Extract wisdom from crowds (prediction markets can be better than individual pundits); a toy illustration follows.
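A toy illustration (all numbers invented) of why aggregation helps: when individual errors are at least partly independent, they cancel in the average, so the crowd mean lands closer to the truth than the typical individual guess.

```python
truth = 62  # suppose the true probability of some event is 62%

guesses = [45, 70, 55, 80, 60, 50, 75]  # seven independent forecasters
crowd_mean = sum(guesses) / len(guesses)

avg_individual_error = sum(abs(g - truth) for g in guesses) / len(guesses)
print(f"crowd mean {crowd_mean:.1f} (off by {abs(crowd_mean - truth):.1f})")
print(f"average individual error: {avg_individual_error:.1f}")
```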

Start with the outside view

BASE RATES: Is something likely to happen? Start with the most general version of the question and ask how often it occurs in the general population (the base rate of the occurrence), then zoom in further and further to get more specific/tailored.
Starting with this generalized benchmark and only then getting more specific matters because it anchors you in the right place. (Starting with the specific instance and then adjusting for general market/outside trends can anchor you inaccurately... anchor on the base rate; a sketch follows this list.)
Note: this draws extensively on the work of Daniel Kahneman, author of “Thinking, Fast and Slow”
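A minimal sketch of the anchoring order (all numbers invented for illustration): start from the base rate, then apply small, explicit inside-view adjustments.

```python
base_rate = 0.10  # e.g., share of comparable projects that ship on time

estimate = base_rate
# Inside-view adjustments, each deliberately small and stated explicitly
# (a crude multiplicative adjustment; fine while the numbers stay small):
estimate *= 1.5   # experienced team has shipped twice before
estimate *= 1.2   # scope is smaller than the typical comparable
print(f"adjusted estimate: {estimate:.0%}")  # ~18%, anchored at 10%
```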

Case / Fermi method

Enrico Fermi was a physicist who helped invent the atomic bomb and loved brainteasers.
The method: break a problem down into its structured components, e.g., how many piano tuners are there in Chicago?
Start with the population of Chicago, then what percent of households have pianos, how often a piano must be tuned, how many pianos a professional tuner can tune in a day/week/year, etc. (see the sketch after this list).
Breaking down and clearly stating underlying assumptions enables you to challenge and unpack them and triangulate toward improved answers.
Note: McKinsey, Bain, et al use this interview methodology to evaluate consulting candidates because they have determined that it is indicative of on-the-job success.
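A sketch of the piano-tuner estimate with every assumption stated so each can be challenged separately (the figures are the usual rough Fermi guesses, not data):

```python
population = 2_700_000            # people in Chicago (rough)
people_per_household = 2.5
households = population / people_per_household
share_with_piano = 1 / 20         # assume 5% of households own a piano
pianos = households * share_with_piano

tunings_per_piano_per_year = 1    # assume annual tuning
tunings_needed = pianos * tunings_per_piano_per_year

tunings_per_tuner_per_day = 4     # ~2 hours each, incl. travel
working_days = 250
tunings_per_tuner = tunings_per_tuner_per_day * working_days

print(round(tunings_needed / tunings_per_tuner))  # ~54 tuners
```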

Probabilistic thinking

Having a binary view is limiting and often inaccurate. Need to introduce a third “maybe” setting onto your dial.
Then set a probability that something is true. More specific probabilities (down to the single digit: 61%, 43%, etc.) proved more accurate over time than less specific ones (rounded to the nearest 5% or 10%); a toy illustration follows this list.
Understand that nothing is preordained and there is a lot of randomness: an almost unlimited number of paths exist, and the one that happened wasn't anointed or special.
There isn't special meaning in any particular event; it just is —> this outlook is associated with the best forecasters.
However happier people do ascribe meaning to events... just something to be aware of.
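On the granularity point above, a toy version (forecasts invented, and deliberately chosen so each rounding moves away from what occurred; the book's evidence is empirical, from rounding real forecasts): coarsening single-digit probabilities to the nearest 10% loses information and worsens the Brier score.

```python
def brier(probs, outcomes):
    """Mean of (p - outcome)^2; lower is better."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes = [1, 0, 1, 0, 1]
precise = [0.64, 0.46, 0.83, 0.37, 0.58]
rounded = [round(p, 1) for p in precise]  # 0.6, 0.5, 0.8, 0.4, 0.6

print("precise:", brier(precise, outcomes))  # lower (better)
print("rounded:", brier(rounded, outcomes))
```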

Check your work

Write down your methodology to distance yourself from your judgement and enable yourself to better check it.
Ask yourself to assume that your initial judgment is wrong, seriously consider why that may be, and then make another judgement —> this improves accuracy almost as much as getting a second estimate from another person (sketch after this list).
Wait a couple weeks before making a second estimate — helps harness the “wisdom of the crowds” from within yourself.
Intellectual humility: when you have a judgement, just believing you're right doesn't make it so. You need the humility to step back from your System 1 (quick, knee-jerk, instinctive) reaction and examine it. (Citing Daniel Kahneman.)
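A minimal sketch of this "crowd within" idea (both numbers hypothetical): make an estimate, assume it is wrong and reason about why, then average it with the second estimate.

```python
first_estimate = 0.70   # initial probability judgment

# After assuming the first answer is wrong (ideally weeks later):
second_estimate = 0.55  # hypothetical revised judgment

# Averaging lets partly independent errors cancel, like a tiny crowd.
combined = (first_estimate + second_estimate) / 2
print(f"combined estimate: {combined:.0%}")  # ~62%
```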

Working on teams

Teams of superforecasters became even stronger.
Good practices - dividing workload, calls/face-to-face interaction, writing/documenting/critiquing forecasts.
Having the leader leave the room can be important (e.g., JFK during the Cuban Missile Crisis deliberations, following the Bay of Pigs).
Premortem practice - team is told to assume a course of action has failed and to explain why — makes team members feel safe to express doubts they may have about the plan.
💡We do this before making an investment decision
DIVERSITY - "diversity trumps ability." "The aggregation of different perspectives is a potent way to improve judgement, but the key word is different. Combining uniform perspectives only produces more of the same, while slight variation will produce slight improvement. It is the diversity of the perspectives that makes the magic work."

Other elements

Regression to the mean: the more an outcome is due to luck, the steeper and quicker the regression to the mean will be.
Superforecasters did not regress to the mean at a steep rate, which demonstrates that forecasting is not luck, but skill!
Scope insensitivity: people would pay about the same amount to clean up an oil spill whether it endangered 2,000 birds, 20,000, or 200,000. It's important to train yourself and others not to be indifferent to or dismissive of scope/scale or timeline (it is very different to say something will happen in one week than in one year).
Asking good questions: good forecasts are the result of asking good and important questions. E.g., Thomas Friedman asking in 2002: is Iraq the way it is because of Saddam Hussein, or is Saddam Hussein the way he is because of Iraq (ie, a highly tribalized, artificial state)?
Good forecasts pick a side if there is an obvious winner/loser; extremizing algorithms that adjust aggregated forecasts to be more extreme can improve accuracy (a sketch follows).
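A sketch of one common extremizing transform (the exponent is a tuning parameter I've chosen for illustration; the book reports the technique, not this exact formula): aggregate the crowd, then push the mean toward the nearer pole to correct for shared caution.

```python
def extremize(p, a=2.5):
    """Push probability p away from 0.5; a > 1 controls how hard."""
    return p**a / (p**a + (1 - p)**a)

crowd = [0.65, 0.70, 0.72, 0.60, 0.68]
mean = sum(crowd) / len(crowd)
print(f"crowd mean {mean:.2f} -> extremized {extremize(mean):.2f}")  # 0.67 -> ~0.85
```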