The Moving Goalposts of Artificial Superintelligence
Coverage of the LessWrong blog
In a provocative new discussion on LessWrong, a contributor argues that modern transformer-based agents already qualify as "Weak Artificial Super Intelligence" (ASI) under previous definitions, suggesting that society is actively redefining terms to avoid acknowledging this reality.
The post, titled "Moving Goalposts: Modern Transformer Based Agents Have Been Weak ASI For A Bit Now," challenges the current consensus on the state of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI), arguing that society has engaged in a collective linguistic drift to avoid acknowledging that super-human capabilities have already arrived.
Context: The AI Effect
The history of computer science is riddled with the "AI Effect"—the phenomenon where, as soon as machines master a task (like chess, translation, or image recognition), observers redefine intelligence to exclude that specific capability, labeling it merely "computation" or "statistics." This constant shifting of the goalposts protects human exceptionalism but complicates risk assessment. As Large Language Models (LLMs) demonstrate reasoning capabilities that arguably surpass the average human in specific domains, the definition of "Super Intelligence" is once again being pushed further toward god-like omnipotence (e.g., "nanotech in an afternoon") rather than acknowledging the profound tools currently available.
The Gist: Redefining Reality
The author posits that "Weak ASI"—intelligence that is super-human in speed, breadth, or specific modalities, even if not yet omnipotent—is already present in modern transformer-based agents. The post suggests that the reluctance to use the label "ASI" is not due to a lack of technical capability, but rather stems from social utility, legal maneuvering, and euphemism. By treating these agents as mere software tools rather than intelligent entities, society avoids complex questions regarding rights, liability, and the existential shock of sharing the planet with non-biological intelligence.
The analysis warns that we are in a unique, fleeting historical window where we can observe this redefinition happening in real-time before societal norms fully calcify around the new status quo. The implications are significant: if we refuse to label current systems as "intelligent" or "super-intelligent" due to politeness or fear of regulation, we may fail to implement the necessary safety protocols for systems that already wield immense cognitive power.
This piece serves as a critical reminder to evaluate AI capabilities based on technical benchmarks rather than shifting semantic comfort zones.
Read the full post on LessWrong
Key Takeaways
- The Arrival of Weak ASI: By historical definitions, current transformer models likely qualify as Weak Artificial Super Intelligence, possessing capabilities that would have been deemed super-human only a decade ago.
- Linguistic Drift: The definition of AI and ASI is being actively manipulated or subconsciously shifted to avoid the social and legal ramifications of acknowledging non-human intelligence.
- The Risk of Misclassification: Failing to categorize current systems accurately may lead to inadequate safety protocols, as we wait for a sci-fi version of ASI while ignoring the profound impact of current systems.
- A Closing Window: The author argues that the opportunity to recognize and discuss this shift is limited; soon, the presence of these intelligences will be normalized, and the "goalposts" will be permanently moved.