The case of Stanislav Petrov
On September 26, 1983, a duty officer of the Soviet early-warning nuclear strike system, Stanislav Petrov, found himself in a massively ambiguous situation. First, as he was sitting quietly in his underground bunker near Moscow, a loud siren went off. Next, a bright red message, START, flashed across his computer terminal. A satellite system named Oko, the Eye, reported that five ballistic missiles had just been launched from a US missile base in North Dakota.
Is this World War III? Should we launch a retaliatory strike? He had 15 minutes to make that decision. Being a duty officer, Petrov was not authorised to press the Big Red “End the World” Button himself. He was, however, expected to report the event up the chain of command, initiating a sequence that might lead to all-out nuclear war. He didn’t, and thus probably saved civilisation.
How did Petrov arrive at the decision?
By his own account, Petrov had only 50% confidence that this was a real nuclear attack. On one hand, a nuclear missile launch from North Dakota was entirely possible. Tensions were extreme: NATO’s nuclear command exercise nicknamed Able Archer 83 was on the horizon, and to Soviet analysts it looked very much like preparation for a massive first strike. Medium-range Pershing II missiles were about to be deployed to West Germany. On top of that, a Korean Boeing 747 had recently been shot down by a Soviet fighter, killing all 269 people on board, 62 of them US citizens, including a sitting US Congressman.
On the other hand, apart from the five missiles from North Dakota there was no indication that anything else had been launched from anywhere. The dominant nuclear doctrine, shared by both the US and the Soviets, held that one should strike simultaneously with one’s entire arsenal to blunt the retaliatory strike. Where were all the other missiles? For a few priceless minutes he waited to see if the ground radars would confirm the five incoming missiles. There was nothing on the radars. Petrov had to report something, so he reported a false alarm, which of course it really was.
Perhaps he just got lucky, and we are hailing him only because of survivorship bias? I would argue that he indeed made the right call. Given the gravity of the consequences, in situations like these we should err on the side of caution. The investigation later revealed that a rare alignment of sunlight on high-altitude clouds had created the illusion of a missile launch. Clouds, ironically, are an archetypal case of ambiguity.
What do you do when you see yellow?
Imagine you’re driving a car and approaching an intersection at the exact moment the traffic light turns yellow. The Vienna Convention on road traffic says that you must stop unless you are already so close that you cannot stop safely, but in this particular jurisdiction nobody would fine you if you proceed. What do you do?
Some people habitually, automatically stop, without thinking twice, or even once. Some people speed up. Of course, a lot depends on the situation: how far you are from the intersection, how busy it is, whether you are running late (of course you are), the weather, the local driving culture, and so on. It also depends on your personal risk tolerance. In any case, the call is yours. A yellow light is ambiguous; it doesn’t tell you what to do.
Are you keen on assessing all of the above factors in a split second, or might you be better off following a simple rule, “Don’t gamble with amber”? Following this rule minimises decision variance and makes you a much more consistent and predictable decision-maker. Perhaps you will lose a total of a few hours of your life waiting at intersections unnecessarily. However, if we all do this, quite a few lives will be saved. It’s the right thing to do. Generally, I believe it is a good idea to formulate Yellow Light Rules for oneself and follow them consistently.
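To make the variance claim concrete, here is a toy simulation, a minimal sketch with invented probabilities rather than real traffic data: the fixed rule yields the same decision every time, while case-by-case gambling is nearly as unpredictable as a coin flip.

```python
# Toy Monte Carlo sketch: compare the decision variance of a fixed
# "always stop" rule against ad-hoc gambling on the yellow light.
# All probabilities below are invented purely for illustration.
import random

random.seed(0)
TRIALS = 10_000

def adhoc_decision() -> int:
    """Gamble case by case: 1 = run the yellow, 0 = stop.
    The odds drift with mood, traffic, lateness, and so on."""
    return int(random.random() < random.uniform(0.2, 0.8))

def rule_decision() -> int:
    """'Don't gamble with amber': always stop."""
    return 0

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

print(f"ad-hoc variance: {variance([adhoc_decision() for _ in range(TRIALS)]):.3f}")  # ~0.25, the maximum for a yes/no choice
print(f"rule variance:   {variance([rule_decision() for _ in range(TRIALS)]):.3f}")   # 0.000, perfectly predictable
```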
Yes, here are a few examples
The place to look for examples is the judiciary. For ages the courts relied on elaborate practices like trials by ordeal or consulting oracles in ambiguous situations. Gradually, these were replaced by simple heuristics. One of the oldest known Yellow Light Rules is found in Roman law: In dubio pro reo. When in doubt, rule in favour of the accused. A similar principle is the rule of lenity: when a criminal law is ambiguous, interpret it for the benefit of the defendant. In contract law there is a rule called Contra Proferentem, which stipulates that an ambiguously worded contract should be interpreted against the interests of whoever drafted it. Elon Musk recently voiced a rule for Twitter moderation: “if in doubt, let the tweet exist”. You get the idea.
How do we decide which side deserves a preference?
A priori estimates. Most people are not criminals, therefore it is reasonable to presume innocence until guilt is proven. We might also assume the opposite: everybody is guilty of something, therefore everybody deserves a little time in pre-trial detention. The question remains, however: should we as a society have laws under which everybody is guilty of something? Aren’t they just a tiny bit too strict?
Recompense. The state is vastly more powerful than the defendant, and the landlord is more powerful than the tenant, so we should compensate for this power imbalance. Men are more powerful than women, therefore #BelieveAllWomen. Oh, wait, but that clashes with the presumption of innocence. We have to decide which rule gets priority.
Desired goals or values. In the absence of firm data, err on the side of your ultimate goals, your values, or your espoused virtues. If kindness is your virtue, what would be the kindest thing to do? If bravery is your virtue, what would be the bravest? If you’re trying to become more extraverted and find yourself agonising over whether to go to the party or not, go. If you’re trying to become more cautious in dealing with the environment, use the precautionary principle and don’t build the dam or the nuclear reactor.
Wait, but what’s the price?
There will be costs associated with using the rules, like time wasted at pointless parties or nuclear reactors never built. A huge problem with some of these rules, like the precautionary principle, is that there will always be doubts. If you’re really strict with the rule, you will never be able to build anything. Is that what you ultimately want? What are you really aiming at here? Are you trying to preserve the environment at all costs (including human lives), or do you prioritise humanity over non-human nature? Are you aiming at increasing a metric like quality-adjusted life years? Come to think of it, there might be data on how nuclear reactors affect quality-adjusted life years. If so, use a priori estimates instead.
Daryl Bem, a highly influential social psychology professor at Cornell, has taught students to “err on the side of discovery” since at least the 1980s. This led to a widespread practice dubbed “false-positive psychology”, in which researchers tortured the data until it told them what they wanted to hear. This, in turn, led to an almost complete unraveling of the field, an event known as “the replication crisis” of the 2010s. Erring on the side of discovery was good for researchers: it led to more publications, more promotions, more funding. But wait, what is the cost for humanity at large? Is the massive loss of trust in scientific results an acceptable side effect? It probably isn’t. Rather, like Petrov, researchers should err on the side of caution.
In the end, if all else fails, just toss a coin. A coin has only two sides: there is no yellow, it’s either red or green. If you don’t like the outcome of the toss, that’s understandable. Interpret that as a clear and unambiguous signal that you should do the exact opposite.