Center on Long-Term Risk Announces Fellowship for S-Risk Reduction

Coverage of lessw-blog

PSEEDR Editorial

In a recent post on LessWrong, the Center on Long-Term Risk (CLR) announced the launch of the CLR Fundamentals Program and opened applications for it. The program is an introductory fellowship aimed at expanding the pool of researchers working to mitigate "s-risks": risks of astronomical suffering associated with transformative AI.

While the broader AI safety community frequently focuses on existential risks (x-risks) such as human extinction or permanent loss of control, CLR occupies a distinct and critical niche: it investigates scenarios where transformative AI (TAI) leads to outcomes that are not merely fatal to civilization but actively dystopian, characterized by large-scale suffering. This distinction matters for a comprehensive risk portfolio, since preventing extinction does not automatically guarantee a positive future. The CLR Fundamentals Program is designed to bridge the gap for individuals already familiar with TAI concepts who wish to specialize in these specific, often neglected failure modes.

The program's curriculum highlights the technical and strategic complexity of avoiding s-risks. Unlike general alignment research, CLR emphasizes multiagent AI safety. As autonomous systems proliferate, the interactions between powerful AI agents (and their human operators) introduce game-theoretic risks. If these systems fail to cooperate, the resulting conflicts could escalate to catastrophic levels. To address this, the program explores Safe Pareto Improvements (SPIs), which are theoretical mechanisms designed to ensure cooperation and mutual benefit between competing agents, minimizing the incentive for conflict.
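The game-theoretic intuition behind SPIs can be conveyed with a toy example. The sketch below is our own illustrative assumption, not CLR's formalism: the action names, payoff numbers, and the "arbitration" commitment are invented solely to show the defining property of a Safe Pareto Improvement, namely that changing how a conflict outcome is resolved leaves both agents at least as well off in every case.

```python
# Toy illustration of the Pareto criterion behind Safe Pareto Improvements (SPIs).
# All names and payoffs are illustrative assumptions, not drawn from CLR's materials.

# Payoffs (agent_a, agent_b) in a stylised "demand game":
# each agent either makes an aggressive Demand or Yields.
baseline_game = {
    ("demand", "demand"): (-10, -10),  # mutual escalation: catastrophic for both
    ("demand", "yield"):  (6, 2),
    ("yield",  "demand"): (2, 6),
    ("yield",  "yield"):  (4, 4),
}

# Suppose both agents' principals commit in advance that, if both demand,
# the dispute is settled by a fair arbitration that splits the surplus instead
# of escalating. Only the conflict cell changes; no outcome gets worse for anyone.
spi_game = dict(baseline_game)
spi_game[("demand", "demand")] = (4, 4)

def is_safe_pareto_improvement(old, new):
    """True if every outcome in `new` is at least as good for both players as in `old`."""
    return all(
        new[cell][0] >= old[cell][0] and new[cell][1] >= old[cell][1]
        for cell in old
    )

print(is_safe_pareto_improvement(baseline_game, spi_game))  # True
```

CLR's actual research concerns how such commitments could be made credible and safe between far more capable agents; the toy matrix only conveys the "no one is worse off in any outcome" criterion that gives SPIs their name.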

Additionally, the syllabus covers the Model Personas agenda. As Large Language Models (LLMs) become more sophisticated, they simulate various agents or "personas." Understanding the nature of these simulations is crucial for predicting how advanced systems might behave in complex social or strategic environments. The program also touches on AI governance, epistemology, and the specific dangers posed by malevolent or fanatical actors who might intentionally leverage TAI to cause harm.

This initiative represents a significant opportunity for researchers looking to pivot into high-impact AI safety work. It is not a general introduction to AI; rather, it is a specialized on-ramp for those ready to engage with CLR's priority research areas. The program seeks applicants who view s-risk reduction as a potential career priority and possess a baseline understanding of the current AI safety discourse.

The deadline for applications is Monday, January 19th. For professionals and students monitoring the AI safety landscape, this program offers a structured pathway to contribute to one of the field's most challenging and morally urgent sub-domains.

Read the original post at lessw-blog
