The past three years have made a simple solution to the Fermi Paradox obvious.
Any advanced civilization will eventually discover nuclear and (in our case) biological weapons. These weapons present a systemic risk to any civilization still confined to a single planet. AI may also present a systemic risk, but that is less clear.
The civilization now has two choices:
- Stop all technological progress. Create a global totalitarian state that has the teeth to make sure nobody is developing nuclear weapons, biological weapons, or artificial intelligence.
In the best imaginable case, this approach eliminates that particular set of tail risks. In the human case, we are far too corrupt for it to plausibly work. It is possible, though unlikely, that other life forms do not suffer the corruption problems that plague humans. Even for those species, this approach still guarantees eventual extinction: sooner or later, the asteroid is coming. Being multiplanetary is not optional.
- Continue to advance technology until the species can become multiplanetary and eventually multi-galactic.
In the long term, this approach is the only way to mitigate the tail risks of nuclear and biological warfare. In the short term, however, it increases those same tail risks to levels that are probably not survivable.
In other words, we’re in the 4th quarter, backed up against our own end zone, down by 6, with 3 seconds left, and the choice is between taking a knee and throwing a Hail Mary.
- I had this thought watching Peter Thiel say the same thing, in different terms.
- The Twin Nuclei Problem
- Read Taleb’s Incerto series, particularly Antifragile.