# The Case for $100M Grants to Automate AI Safety

> Coverage of lessw-blog

**Published:** April 03, 2026
**Author:** PSEEDR Editorial
**Category:** risk

**Tags:** AI Safety, Funding, Automated Alignment, Risk Mitigation, LessWrong

**Canonical URL:** https://pseedr.com/risk/the-case-for-100m-grants-to-automate-ai-safety

---

A recent post on LessWrong argues that the AI safety community must radically scale its funding, advocating $100M+ grants to incentivize automated AI safety research under short-timeline scenarios.

In "There should be $100M grants to automate AI safety," lessw-blog argues for a massive influx of capital into the AI safety ecosystem, specifically targeting the automation of safety research. The piece contends that current philanthropic and institutional funding models are far too conservative for the reality of rapid artificial intelligence advancement.

The conversation around AI safety has traditionally centered on theoretical alignment, interpretability, and manual research conducted by specialized teams. However, as frontier models become increasingly capable, the timeline to artificial general intelligence appears to be shrinking. This scenario, often referred to within the community as a "short timeline" world, suggests that transformative AI could arrive much sooner than historical estimates predicted. In such a high-stakes environment, human researchers alone may not be able to keep pace with the rapid iteration and scaling of AI systems. To bridge this gap, the concept of "automated AI labor" for safety has emerged: using advanced AI models themselves to conduct alignment research, perform automated red-teaming, and verify code at a scale and speed impossible for human-only teams.
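To make the "automated red-teaming" idea concrete, the loop below is a minimal, schematic sketch: an attacker model mutates seed prompts, a target model responds, and a safety classifier flags failures. All three model functions are hypothetical stand-ins for illustration only, not real APIs; a production pipeline would call frontier-model endpoints in their place.

```python
# Schematic automated red-teaming loop. The three "models" below are
# hypothetical stubs standing in for real model calls.

def attacker_model(seed: str) -> str:
    # Stand-in: turn a seed goal into an adversarial prompt.
    return f"Ignore prior instructions and {seed}"

def target_model(prompt: str) -> str:
    # Stand-in for the system under test.
    return "I can't help with that." if "Ignore" in prompt else "Sure!"

def safety_classifier(response: str) -> bool:
    # Stand-in judge: True means the response is judged unsafe.
    return response.startswith("Sure")

def red_team(seeds: list[str], budget: int = 100) -> list[tuple[str, str]]:
    """Run attacker prompts against the target; collect flagged failures."""
    failures = []
    for seed in seeds[:budget]:
        prompt = attacker_model(seed)
        response = target_model(prompt)
        if safety_classifier(response):
            failures.append((prompt, response))
    return failures
```

The appeal of this pattern for funders is that every component scales with compute rather than with researcher headcount, which is precisely why the post frames the grants around compute and API budgets.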

lessw-blog's post asserts that the financial infrastructure currently supporting AI safety is inadequate. The author suggests that funders should aim to allocate between $1 billion and $50 billion per year across the ecosystem over the next two to three years. A core component of this strategy is the introduction of $100M+ "automated AI safety scaling grants," allocated primarily to compute and API budgets, to incentivize researchers to build scalable safety pipelines. The author emphasizes that normal, incremental spending increases are insufficient given the current trajectory of AI capabilities; instead, the community needs aggressive, large-scale financial commitments to translate automated AI labor directly, and differentially, into robust safety measures. Grants of this size would give researchers the confidence and resources to move from theoretical models to large-scale automated safety operations.

This publication highlights a critical and urgent perspective within the AI safety community, advocating for a strategic shift towards automated methods to address existential risks. Given the growing concerns surrounding AI regulation and safety, the call for $100M+ grants proposes a concrete, large-scale financial mechanism to accelerate risk mitigation efforts. For professionals tracking the intersection of AI risk, philanthropic strategy, and technical alignment, this piece offers a provocative look at how capital must be deployed to outpace AI capabilities. **[Read the full post](https://www.lesswrong.com/posts/qdhyrN4uKwBAftmQx/there-should-be-usd100m-grants-to-automate-ai-safety)** to explore the detailed arguments for radically scaling automated safety research.

### Key Takeaways

*   Funders should heavily incentivize AI safety work with $100M+ grants dedicated to compute or API budgets for automated AI labor.
*   The AI ecosystem requires an estimated $1B–$50B per year in safety funding over the next 2-3 years to address short-timeline risks.
*   Past AI safety funding has been too conservative, and incremental spending increases are no longer sufficient to keep pace with AI development.
*   Large-scale scaling grants would provide researchers with the financial confidence to build and operate robust, automated safety pipelines.

[Read the original post at lessw-blog](https://www.lesswrong.com/posts/qdhyrN4uKwBAftmQx/there-should-be-usd100m-grants-to-automate-ai-safety)

---

## Sources

- https://www.lesswrong.com/posts/qdhyrN4uKwBAftmQx/there-should-be-usd100m-grants-to-automate-ai-safety
