PSEEDR

Defining AGI Through the Lens of Mind Upload Sustainability

Coverage of lessw-blog

· PSEEDR Editorial

In a recent post, lessw-blog explores a critical thought experiment: if the entire human population were instantly converted into high-fidelity digital uploads, would civilization possess the physical agency required to survive?

The post asks: "If all humans were turned into high-fidelity mind uploads tomorrow, would we be self-sustaining?" While the premise sounds like pure science fiction, the core of the argument addresses an immediate technical and philosophical divide in the AI community over what constitutes a truly autonomous system.

The post contextualizes this thought experiment by contrasting two distinct definitions of AGI. The first is the standard economic definition, often cited by organizations like OpenAI, which describes AGI as highly autonomous systems that outperform humans at most economically valuable work, essentially "all cognitive labor." The second definition, attributed in the post to Vitalik Buterin, raises the bar significantly: an AGI is only truly general if it can independently continue civilization even if biological humans disappear. This implies not just cognitive processing, but the physical capability to maintain the substrate upon which that intelligence runs.

The Physical Substrate Problem

The significance of this inquiry lies in the dependency of digital minds, whether uploaded humans or synthetic AGI, on physical infrastructure. The author prompts readers to consider the logistics of a civilization that exists entirely within servers. Without biological hands to maintain power grids, repair cooling systems, and manufacture replacement hardware, a purely digital society faces an immediate existential threat.

This thought experiment serves as a stress test for our current technological trajectory. It highlights that "intelligence" in a vacuum is insufficient for survival. If a society of high-fidelity human uploads (who presumably retain all human knowledge and creativity) cannot operate the physical machinery of the world through robotics and automation, then "cognitive labor" is not enough to sustain a civilization.

Why This Matters

For researchers and strategists in the AI safety and risk domains, this distinction is vital. It suggests that the development of AGI is not merely a software challenge but a robotics and infrastructure challenge. A system that can write poetry or solve mathematical theorems but cannot physically repair its own power source is, by the stricter definition, not fully autonomous. This reframes the conversation around AGI from one of intellectual capability to one of physical agency and self-preservation.

We recommend reading the full post to explore the nuances of this definition and the community discussion surrounding the requirements for a self-sustaining digital civilization.

Read the full post at LessWrong

Key Takeaways

  • The post contrasts the definition of AGI as "cognitive labor" against a stricter definition requiring the physical ability to sustain civilization.
  • The thought experiment highlights the absolute dependency of digital intelligence on physical infrastructure like power, cooling, and hardware maintenance.
  • It argues that high-fidelity mind uploads, despite possessing human-level intelligence, might fail to survive without advanced robotics to manipulate the physical world.
  • The discussion reframes AGI development as a challenge of physical agency and autonomy, rather than solely software capability.
