Curated Digest: We can prevent progress! Conceptual clarity, and inspiration from the FDA
Coverage of lessw-blog
A critical examination from lessw-blog challenges the common assertion that technological progress cannot be stopped, using the FDA's regulatory impact as a comparative model for AI risk mitigation.
The Hook: In a recent post, lessw-blog discusses the feasibility and desirability of intentionally slowing technological progress, specifically challenging the fatalistic view that "we can't prevent progress" when it comes to artificial intelligence. The piece examines how society views technological momentum and the regulatory mechanisms available to manage it.
The Context: The contemporary debate around AI safety and governance frequently encounters a significant philosophical roadblock. Proponents of rapid, unconstrained development often argue that technological momentum is an unstoppable force of nature: a runaway train that society must simply adapt to rather than attempt to brake. This deterministic mindset can severely stifle meaningful discussions about proactive regulation, safety guardrails, and catastrophic risk mitigation. Understanding whether technological progress can, and more importantly should, be managed is a critical societal question as artificial intelligence capabilities scale rapidly toward potentially dangerous thresholds. If policymakers and technologists operate under the assumption that intervention is impossible, the default path leaves humanity vulnerable to unmitigated risks.
The Gist: To dismantle this fatalism, lessw-blog argues that the very concept of "progress" is frequently conflated in public discourse. The term is often used interchangeably to mean both "increasing technological understanding and tools" and "things generally getting better for humanity." By explicitly separating these two definitions, the author provides much-needed conceptual clarity.

Preventing "things from getting better" is historically common: lessw-blog cites the Bronze Age collapse as a stark reminder that societal improvement is not a guaranteed upward trajectory. Furthermore, if AI were to cause catastrophic harm, that harm itself would stop things from getting better. Preventing "increasing technological understanding and tools," meanwhile, is also demonstrably possible in the modern era. The post points directly to the United States Food and Drug Administration (FDA) and its historical impact on the pharmaceutical industry as concrete evidence that regulatory bodies can significantly slow the pace of technological and scientific output in order to prioritize safety and efficacy. By drawing this parallel, the author challenges readers to consider a critical question: if one acknowledges that the FDA successfully slows pharmaceutical advancement to protect public safety, why would similar regulatory friction be impossible or irrelevant for artificial intelligence?
Key Takeaways:
- Challenging Technological Fatalism: The post systematically dismantles the common argument that technological progress is an unstoppable force, particularly regarding AI development.
- Deconstructing Progress: It separates the broad idea of "progress" into two distinct concepts: the mere accumulation of technological tools versus the actual improvement of human conditions.
- Historical Precedent for Regression: The author uses historical examples, such as the Bronze Age collapse, to illustrate that societal improvement can be halted or entirely reversed.
- The FDA as a Regulatory Model: The FDA's deliberate regulatory friction in the pharmaceutical industry is presented as strong evidence that institutional mechanisms can successfully slow technological advancement.
Conclusion: For professionals engaged in AI safety, governance, and public policy, this analysis provides a highly useful framework for countering technological determinism. By shifting the conversation from "can we stop it?" to "how have we managed similar risks before?", the author opens the door for more pragmatic regulatory design. Read the full post to explore the complete argument and its profound implications for the future of AI regulation.