Mobilizing Against AGI: The "Don't Build It" Conditional March
Coverage of a LessWrong blog post
In a recent post on LessWrong titled 'Conditional Kickstarter for the "Don't Build It" March,' the author proposes a "conditional kickstarter" mechanism to organize a massive protest aimed at halting Artificial General Intelligence (AGI) research. The proposal introduces a pledge system designed to overcome the inertia and coordination failures often associated with large-scale protests.
The Context: From Theory to Activism
The conversation surrounding AI existential risk (x-risk) has largely been confined to academic papers, technical forums, and open letters. While awareness of the potential dangers of superintelligence is growing, translating that intellectual concern into tangible political pressure remains a significant hurdle. The primary obstacle is the "collective action problem": individuals are hesitant to commit time and resources to a cause if they believe they will be the only ones showing up. A sparsely attended protest can often be more damaging to a movement than no protest at all, as it signals a lack of public support.
The Gist: Assurance Contracts for Social Change
The LessWrong post proposes a solution using a mechanism similar to crowdfunding platforms like Kickstarter. The author has launched a pledge page where individuals can commit to attending a protest in Washington D.C. aimed at banning AGI research. However, the commitment is conditional: the march will only be triggered if 100,000 people sign up. If the threshold is not met, the event does not happen, and no one is obligated to travel.
This "assurance contract" model is designed to lower the barrier to entry. It allows concerned citizens to signal their intent without the risk of participating in a failed event. The author argues that a gathering of this magnitude would be impossible for policymakers to ignore, serving as definitive proof that AI safety is a substantial public concern rather than a fringe issue.
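The core of the assurance-contract logic described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual pledge-page implementation; the class name, method names, and the duplicate-handling choice are assumptions for the sketch, with only the 100,000 threshold taken from the post.

```python
# Minimal sketch of an assurance-contract pledge tracker.
# Hypothetical design: the real pledge page's internals are not described
# in the post; only the 100,000-pledge trigger condition is.

from dataclasses import dataclass, field


@dataclass
class AssuranceContract:
    threshold: int                      # pledges required to trigger the event
    pledges: set = field(default_factory=set)

    def pledge(self, person_id: str) -> None:
        # A set means a duplicate pledge from the same person counts once.
        self.pledges.add(person_id)

    @property
    def triggered(self) -> bool:
        # No one is obligated to attend unless the threshold is met.
        return len(self.pledges) >= self.threshold


march = AssuranceContract(threshold=100_000)
march.pledge("alice")
print(march.triggered)  # False: one pledge, so no obligation yet
```

The key property is that `triggered` stays `False` below the threshold, which is what removes the downside risk for early pledgers: committing costs nothing unless enough others commit too.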
Current Status and Viability
At the time of the post's publication, the initiative had gathered 711 signatures. The immediate tactical goal is to surpass 1,000 sign-ups to demonstrate initial viability and encourage wider sharing. The post serves as both a technical explanation of the protest's mechanism and a call to action for those who believe AGI development poses an existential threat to humanity.
This initiative represents an interesting experiment in modern activism, applying game-theoretic principles to real-world governance challenges. Whether or not it reaches its ambitious target, it highlights the increasing urgency with which the AI safety community is seeking to engage the broader public.
For those interested in the intersection of AI governance, activism, and coordination mechanisms, the full post offers a detailed look at the rationale behind this strategy.
Read the full post on LessWrong
Key Takeaways
- The initiative proposes a conditional protest to ban AGI research, triggered only if 100,000 people pledge to attend.
- The 'conditional kickstarter' model aims to solve the collective action problem by removing the risk of attending a low-turnout event.
- The goal is to demonstrate to policymakers that AI existential risk is a mainstream public concern warranting strict regulation.
- The project is currently in the early validation phase, seeking to cross the 1,000-signature mark to prove viability.
- This represents a shift in AI safety advocacy from intellectual debate to organized, physical activism.