What’s the Dr Pepper?
After all, what’s the worst that could happen?
“What’s the Dr Pepper?” is a piece of advice for dealing with decision making in complex systems, and I find myself asking the question a lot.
Complex systems differ from the dice-rolling, coin-flipping probability we were taught in a crucial way - the set of all possible outcomes is not enumerable, i.e. we cannot know every possibility and then rank them by likelihood. This is like rolling a die only to find it occasionally doesn’t give a numeric answer - say it lands on an edge, or you lose it - and some of those events can’t simply be re-rolled.
The vast majority of life’s decisions fall into this category of having a “non-enumerable” outcome set. Typically we cannot list the outcome set because we cannot enumerate the set of inputs either (which has interesting philosophical links, e.g. to a clockwork universe, but I won’t discuss that here or I’ll go on a total bender). What’s important is that in most situations, just as with a D6 roll, we know some of the possible outcomes of our decisions and usually “pretend” the rare ones won’t happen.
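To make that concrete, here’s a toy sketch (my own illustration, nothing more) of the gap between an enumerated model and the events a real roll can actually produce:

```python
# Toy illustration: the textbook model enumerates six outcomes and assigns
# each a probability, but a real roll can also produce events the model
# never listed - and those have no probability we can state.
from fractions import Fraction

# The enumerable world we were taught: every outcome listed and ranked.
die_model = {face: Fraction(1, 6) for face in range(1, 7)}
assert sum(die_model.values()) == 1  # the model believes it covers everything

# Outcomes a real roll can also produce, which the model silently "pretends"
# away - their likelihoods are simply unknown, not just small.
out_of_model = ["lands on an edge", "rolls under the sofa", "die gets lost"]
for event in out_of_model:
    print(f"P({event!r}) = ?  (not in the enumerated outcome set)")
```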
We do see probability problems being solved by companies like home insurers, and it might seem like they have the answer; they can’t know all the different events that will cause a payout (enumerate the outcome set), but they can predict the rate of payouts across a group. This is a subtly different problem - it’s a way of moving from a detailed understanding of the risks of an individual house (which we cannot know entirely) towards a statistical guess across a lot of houses. That guess is only useful across the population (i.e. many houses) and answers no questions about a specific house. This is why we don’t allow people to opt out of car insurance - it’s impossible to know or prove your own payout risk, even for a rational actor.
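As a rough sketch of that distinction (made-up numbers, purely illustrative):

```python
# Toy simulation: an insurer can estimate the payout rate across a population
# without being able to say anything useful about one specific house.
import random

random.seed(1)

CLAIM_RATE = 0.01   # assumed chance that any given house claims this year
HOUSES = 100_000    # size of the insured population

claims = [random.random() < CLAIM_RATE for _ in range(HOUSES)]

# Across the population the observed rate is stable enough to price policies...
print(f"observed claim rate: {sum(claims) / HOUSES:.4f}")  # close to 0.01

# ...but for any single house the best "prediction" is just the base rate
# again; the actual outcome is simply True or False, and the population rate
# tells us nothing extra about this particular house.
print(f"house #7 claimed this year: {claims[7]}")
```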
However, there are many times we end up trying to “insure ourselves” like this. We list out all the possible bad outcomes and try to apportion a probability to each. We then dedicate resources to mitigating each negative possibility in proportion to our guess at its likelihood.
For example, imagine you’re the only person alive on earth. You have a house you rely on for shelter, and you know losing it would mean serious hardship. In the old world you would buy insurance, but in this new world there is no insurance, so you have to protect against that hardship yourself. You might build a risk register like this:
| Possibility | Mitigation |
|---|---|
| Cooking fire | Make sure to cook outside; no fire in the house |
| Flood | Build on a hill |
| Electrical fire | Check fuses and sockets on a cycle; turn everything off as you leave |
| Hurricane | Add shutters to windows; pre-season roof check |
| ... | ... |
If you spent a long time on this you might think you’re safe. However, consider something else you know to be very likely true: every house ever built has been cared for, and yet is ultimately destroyed. What’s not possible is knowing what will destroy it - and yet that’s exactly what we’ve just tried to predict with the risk register, even though we know the house will be destroyed eventually (ideally, perhaps, after you die). It’s the same idea as knowing that eventually your die will slip under some furniture or land on a non-flat surface; you just don’t know how or when.
Instead of trying to enumerate the outcome set, in most cases it seems we’re better off side-stepping that problem altogether. We can do that by thinking about the worst thing that could happen and planning for that instead. This is preparing for the worst - the Dr Pepper event.
In our end-of-the-world house example, the worst that can happen is that we have nowhere to sleep because the house is destroyed. Therefore the most complete solution to all of the “house is destroyed” problems is to build another place to sleep. Redundancy is a common mitigation strategy that doesn’t come naturally to most people because it feels wasteful, but it’s the only thing that can protect us from the unpredictable house-destroying events we know exist (every house is destroyed by something!).
None of this is to say a risk register is a bad idea that won’t lead to sensible precautions, but like many complex systems, risk registers can become recursive when you try to account for unknowns. It’s like writing down the risk that you don’t know all the risks - true, but very difficult to act on.
Risk registers are useful in that they force us to run through some of the common failure modes of the systems we use, but they also give us tunnel vision. By giving each eventuality a rating and apportioning our resources in relation to that rating, we feel in control. That feeling of control is an illusion! Worse, the list often only ever grows, scar tissue forms around performing actions, and your progress slows.
I have come to believe that most of the time we should be focused on protecting assets in the worst-case scenario - the Dr Pepper event - because that requires no predictions and therefore avoids the problem of trying to guess everything that could go wrong.
Two books that heavily influenced my thinking on this topic are:
- The Black Swan: The Impact of the Highly Improbable (Nassim Nicholas Taleb)
- Team of Teams: New Rules of Engagement for a Complex World (General Stanley McChrystal)
And a bonus short video on complexity: