The Self-Driving Cars Government Conspiracy
So Google and Uber are making self-driving cars, which is pretty legit if you ask me. They will reduce prices and increase the availability of transportation.
But it could totally be a government conspiracy according to my Facebook friend.
Now hold on, before you laugh, I want to agree with him. Kind of.
There is a chance that it's a government conspiracy; it's just a really small chance. Most people would round it to 0. As rigorous thinkers, though, we should recognize that this chance isn't 0%.
It's a behavioral flaw in humans, called <a href="https://en.wikipedia.org/wiki/Neglect_of_probability">probability neglect</a>: we don't deal well with weighing the consequences or benefits of low-probability events, so we just round them to zero. But that's a huge mistake from an ROI view.
Consider the possibility that the government just wants all cars to be controlled by central computers so it can do something totally evil. Like drive everyone off a cliff, for example.
That would be a huge cost! If you value your own life at a billion dollars and other people's lives at, like, a million dollars each (because that's how you are), that adds up to a total cost of about 319 trillion dollars, given the population of the US today.
So if there were a 0.0001% chance that Google and Uber's self-driving car race is due to secret CIA funding and back-room deals, would the total expected negative value be 0?
No, we would expect a loss of 319 million dollars. So the value of self-driving cars would have to outweigh this negative for us to rationally support them. But I think self-driving cars deliver way more benefit than that anyway.
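If you want to check the arithmetic, here's a quick back-of-the-envelope sketch. All the numbers are the same made-up valuations from above, not real estimates:

```python
# Back-of-the-envelope expected-value check for the conspiracy scenario.
# All figures are the illustrative numbers from the text, not real estimates.

US_POPULATION = 319_000_000   # approximate US population
MY_LIFE = 1_000_000_000       # value of your own life: $1 billion
OTHER_LIFE = 1_000_000        # value of everyone else's life: $1 million each

# Total cost if the government drives everyone off a cliff.
total_cost = MY_LIFE + (US_POPULATION - 1) * OTHER_LIFE
print(f"Total cost: ${total_cost:,}")           # ~ $319 trillion

# Expected loss at a 0.0001% (one-in-a-million) conspiracy probability.
p_conspiracy = 0.0001 / 100                     # 0.0001% == 1e-6
expected_loss = p_conspiracy * total_cost
print(f"Expected loss: ${expected_loss:,.0f}")  # ~ $319 million
```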
In order to combat probability neglect, I propose a principle called the Possibility Principle. The principle simply states that <a href="http://biblehub.com/matthew/19-26.htm">all things are possible</a>.
More formally: any event that is structurally possible should be assigned a probability greater than 0.
What should the expected probability be? There are a few approaches I can think of, in order of recommendation (I sketch all three in code after the list):
- Even-split approach. There are known and unknown probabilities for various events, and they have to add up to 100%. For whichever events have unknown probability, just estimate that each gets an even share of the remainder.
- Ordinal approach. You don't know the probabilities, but you do know their rank frequency: event 1 occurs more often than event 2, which occurs more often than event 3, and so on. So you can use an ordinal distribution (.5, .25, .125, ...) instead of an even split.
- Alpha approach. In statistics we have an arbitrarily selected alpha level that indicates our risk tolerance. We can adopt a similar rule of thumb: any event I expect to occur with less than alpha probability gets estimated at alpha probability. So if my alpha is 1%, then the lowest probability in my model is 1%. Note, of course, this can't be used for cases with infinitely many structural possibilities.
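Here's a rough sketch of what these three approaches might look like in code. The function names and example numbers are mine, not any standard statistical API:

```python
# Rough sketches of the three estimation approaches. Names and example
# numbers are illustrative only.

def even_split(known: dict[str, float], unknown: list[str]) -> dict[str, float]:
    """Even-split: unknown events share the leftover probability equally."""
    remainder = 1.0 - sum(known.values())
    share = remainder / len(unknown)
    return {**known, **{event: share for event in unknown}}

def ordinal(events_by_rank: list[str]) -> dict[str, float]:
    """Ordinal: rank 1 gets 1/2, rank 2 gets 1/4, rank 3 gets 1/8, ...
    The last event absorbs the leftover tail so everything sums to 1."""
    probs = {e: 0.5 ** (i + 1) for i, e in enumerate(events_by_rank)}
    probs[events_by_rank[-1]] *= 2  # fold the tail into the last rank
    return probs

def alpha_floor(estimates: dict[str, float], alpha: float = 0.01) -> dict[str, float]:
    """Alpha: no structurally possible event gets a probability below alpha.
    (Only sensible with finitely many possibilities, as noted above; the
    floored values may need renormalizing to sum back to 1.)"""
    return {event: max(p, alpha) for event, p in estimates.items()}

# Example: the conspiracy theory under each approach.
print(even_split({"normal market competition": 0.98}, ["conspiracy", "other"]))
print(ordinal(["normal market competition", "conspiracy", "other"]))
print(alpha_floor({"normal market competition": 0.9999, "conspiracy": 0.000001}))
```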
Note the principle only covers structurally possible events. What does green taste like? Nothing: green is a color, it doesn't have a flavor, and artistic license doesn't apply here.
It is also not structurally possible for 2 + 2 to equal 7, or for variously defined games to reach outcomes their own rules forbid (you can't roll a 7 on a single six-sided die).
So if we just train ourselves to use the Possibility Principle, it's an easy way to fight probability neglect. I also think it has a side benefit: it leads to a more optimistic mental approach to things.