{
  "@context": "https://schema.org",
  "@type": "TechArticle",
  "id": "bg_3d3d478ec8c1",
  "canonicalUrl": "https://pseedr.com/risk/curated-digest-you-cant-trust-violence",
  "alternateFormats": {
    "markdown": "https://pseedr.com/risk/curated-digest-you-cant-trust-violence.md",
    "json": "https://pseedr.com/risk/curated-digest-you-cant-trust-violence.json"
  },
  "title": "Curated Digest: You can't trust violence",
  "subtitle": "Coverage of lessw-blog",
  "category": "risk",
  "datePublished": "2026-04-12T12:04:28.772Z",
  "dateModified": "2026-04-12T12:04:28.772Z",
  "author": "PSEEDR Editorial",
  "tags": [
    "AI Safety",
    "Existential Risk",
    "AI Governance",
    "Ethics",
    "LessWrong"
  ],
  "wordCount": 489,
  "sourceUrls": [
    "https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence"
  ],
  "contentHtml": "\n<p class=\"mb-6 font-serif text-lg leading-relaxed\">lessw-blog addresses a critical ethical and strategic challenge for the AI safety movement: the emergence of violence committed in the name of AI risk reduction, and the community's unequivocal denunciation of it.</p>\n<p><strong>The Hook</strong></p><p>In a recent post, lessw-blog discusses the troubling rise of violent rhetoric and actions on the periphery of the AI safety movement. Titled \"You can't trust violence,\" the post directly confronts recent incidents and online calls for violence against AI companies, drawing a hard line between legitimate existential risk advocacy and unacceptable rogue actions. The author provides a firsthand account of navigating these extremes, signaling a pivotal moment for the community's public posture.</p><p><strong>The Context</strong></p><p>As artificial intelligence capabilities accelerate at an unprecedented pace, the discourse surrounding existential risk (x-risk) has intensified dramatically. The topic is critical because the AI safety community relies heavily on public credibility, international cooperation, and legitimate regulatory engagement to implement meaningful safeguards. When fringe actors resort to violence, such as a recent Molotov cocktail incident targeting OpenAI CEO Sam Altman, which the post identifies as the first violent act committed in the name of AI safety, it threatens to discredit decades of rigorous, peaceful advocacy. Understanding how the core community polices its boundaries, manages internal extremism, and responds to public scrutiny is essential for anyone tracking the future of AI governance, corporate security, and international regulation.</p><p><strong>The Gist</strong></p><p>lessw-blog's post explores these volatile dynamics by detailing the author's own proactive measures, including personally warning major AI companies about potential violent intentions from Sam Kirchner, a former leader of the \"Stop AI\" group. The author argues forcefully that violence is not only morally indefensible but strategically disastrous: it would inevitably backfire by justifying severe government crackdowns on the movement, hindering transparent public oversight, and destroying the fragile international cooperation required to manage global AI risks. The post also draws a crucial distinction between rogue vigilantism and Eliezer Yudkowsky's controversial advocacy for state-enforced policies, framing Yudkowsky's stance as a call for the legitimate application of government power and international treaties rather than an endorsement of grassroots violence. Ultimately, the author pushes back against critics who conflate the factual claim that AI poses unacceptable risks with an inherently violent ideology, reinforcing that the movement's core objective is the preservation of humanity through peaceful, structural approaches to AI alignment.</p><p><strong>Conclusion</strong></p><p>For a deeper understanding of the internal dynamics, ethical boundaries, and strategic imperatives of the AI safety movement during a period of heightened tension, this piece is highly recommended. <a href=\"https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence\">Read the full post</a>.</p>\n\n<h3 class=\"text-xl font-bold mt-8 mb-4\">Key Takeaways</h3>\n<ul class=\"list-disc pl-6 space-y-2 text-gray-800\">\n<li>The core AI safety community unequivocally denounces violence against AI companies and researchers.</li><li>Recent fringe incidents, including a Molotov cocktail attack, threaten to discredit the movement and justify government crackdowns.</li><li>Proactive self-policing is occurring, with community members warning AI companies about individuals expressing violent intentions.</li><li>The post draws a clear distinction between rogue vigilantism and advocacy for state-enforced international AI regulations.</li><li>Violence is viewed as strategically disastrous, destroying the cooperation needed to mitigate existential risks.</li>\n</ul>\n\n<p class=\"mt-8 text-sm text-gray-600\">\n<a href=\"https://www.lesswrong.com/posts/Sih2sFHEgusDEuxtZ/you-can-t-trust-violence\" target=\"_blank\" rel=\"noopener\" class=\"text-blue-600 hover:underline\">Read the original post at lessw-blog</a>\n</p>\n"
}