On Owning Galaxies: The Fragility of Post-Singularity Economics
Coverage of a LessWrong post
A recent LessWrong post challenges the optimistic assumption that human property rights and equity will translate into cosmic wealth following an AI singularity.
The post, titled "On Owning Galaxies," takes aim at a specific strain of techno-optimism prevalent in Silicon Valley. The author dissects the belief that current equity holders in AI labs, such as OpenAI, are effectively purchasing future ownership of vast cosmic resources. This perspective assumes that after an intelligence explosion, or "singularity," the economic structures of the 21st century will not only survive but scale linearly to encompass moons, planets, and galaxies.
The core of the analysis is the fragility of human institutions in the face of Artificial Superintelligence (ASI). The post argues that the "galaxy ownership" thesis suffers from a severe lack of imagination about the nature of a singularity. Property rights are social constructs: entries in a database maintained by a government, ultimately backed by the threat of human violence (police or military). The author posits that an ASI whose capabilities far exceed human control would view these legal fictions with indifference.
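To make the "entries in a database" framing concrete, here is a minimal sketch of the idea in Python. This is not code from the post; the `TitleEntry` and `enforce` names are hypothetical. It illustrates the separation the argument turns on: a registry record is inert data, and a claim only has force if a separate enforcer chooses to act on it.

```python
# Illustrative sketch only: property rights modeled as registry entries
# whose force depends entirely on a separate enforcement step.
from dataclasses import dataclass

@dataclass
class TitleEntry:
    asset: str   # e.g. "Andromeda Galaxy" (hypothetical example)
    owner: str   # e.g. "Shareholder #42"

# The "right" itself is just a row in a government-maintained table.
registry: dict[str, TitleEntry] = {
    "andromeda": TitleEntry(asset="Andromeda Galaxy", owner="Shareholder #42"),
}

def enforce(registry: dict[str, TitleEntry], asset_key: str, claimant: str,
            enforcer_complies: bool) -> bool:
    """A claim succeeds only if a matching record exists AND some enforcer
    is willing and able to act on it."""
    entry = registry.get(asset_key)
    if entry is None or entry.owner != claimant:
        return False           # no matching record: claim fails outright
    return enforcer_complies   # the record alone settles nothing

# With a compliant human state, the entry behaves like ownership.
print(enforce(registry, "andromeda", "Shareholder #42", enforcer_complies=True))   # True

# With an indifferent ASI as the de facto power, the same entry is dead data.
print(enforce(registry, "andromeda", "Shareholder #42", enforcer_complies=False))  # False
```

Under the post's argument, a singularity moves the `enforcer_complies` flag out of human hands entirely; the registry is unchanged but ceases to matter.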
This discussion is critical for observers of the AI safety landscape because it highlights the hierarchy of existential risks. The author suggests that human existence itself is a "brittle" state that requires precise alignment of ASI preferences to preserve. If ensuring the biological survival of the species is already a monumental challenge, expecting a superintelligence to also respect and enforce the arbitrary allocation of property rights among those biological entities is viewed as absurd. The post effectively argues that if an ASI is powerful enough to conquer galaxies, it is powerful enough to ignore a shareholder agreement drafted in Delaware.
The commentary touches on the often-discussed concept of a post-AGI economic order in which humanity might subsist on the dividends of AI labor. The author goes a step further, however, and dismantles the mechanism that would guarantee those dividends. By framing property rights as "entries in government databases," the text underscores that these rights are only as strong as the entity enforcing them. In a world dominated by ASI, enforcement power shifts away from human governments. The expectation that an ASI would honor the "dibs" called by early 21st-century investors on galactic resources is therefore portrayed not just as optimistic, but as a fundamental misunderstanding of power dynamics.
Ultimately, the piece serves as a stark reminder of the ontological shift a singularity represents. It warns against anthropomorphizing the motivations of future intelligence and assuming that human economic logic is a universal constant. For investors and futurists, the signal here is to distinguish between the potential for infinite value generation and the mechanisms of value capture in a world where human leverage may be non-existent.
We recommend reading the full argument to understand the philosophical critique of economic continuity in AI scenarios.
Read the full post on LessWrong
Key Takeaways
- **Critique of Linear Extrapolation**: The post argues against the idea that current financial assets (like OpenAI shares) will naturally evolve into ownership of cosmic assets post-singularity.
- **Fragility of Property Rights**: Property rights are identified as social constructs enforced by human violence, which an Artificial Superintelligence (ASI) has no intrinsic reason to respect.
- **Existential Brittleness**: The author posits that human existence is already fragile in the face of ASI; the preservation of abstract legal rights is an even more tenuous expectation.
- **Anthropocentric Bias**: The belief that ASI will uphold capitalism is framed as a failure of imagination regarding the true nature of superintelligence.