# Savage's Axioms and the Expected Utility Dilemma: A Curated Digest

> Coverage of lessw-blog

**Published:** April 22, 2026
**Author:** PSEEDR Editorial
**Category:** risk

**Tags:** Decision Theory, Expected Utility, AI Alignment, Savage's Axioms, Rationality

**Canonical URL:** https://pseedr.com/risk/savages-axioms-and-the-expected-utility-dilemma-a-curated-digest

---

A recent analysis from lessw-blog challenges the foundational coherence defense of Expected Utility theory, revealing critical incompatibilities between Savage's Axioms and Strict Dominance with profound implications for AI alignment.

**The Hook**

In a recent post, lessw-blog discusses a fascinating theoretical friction within decision theory, specifically focusing on the tension between Savage's Axioms and the principle of Strict Dominance in Expected Utility (EU) theory. The piece, titled "Savage's Axioms Make Dominated Acts EU Maxima," unpacks how these foundational mathematical rules interact in scenarios involving countably or uncountably infinite state spaces.

**The Context**

Expected Utility theory forms the bedrock of rational decision-making frameworks, heavily relied upon in economics, game theory, and artificial intelligence. A primary defense of EU, often called the "coherence defense," is that it inherently protects agents against exploitation, such as "money pumps," by ensuring they never choose strictly dominated acts (actions that are worse than another available action in every possible state). However, as AI systems grow more complex and operate in environments with effectively infinite numbers of variables, the mathematical guarantees of the underlying decision theories face unprecedented stress testing.

For AI/ML researchers, particularly those focused on risk, regulation, and safety, this is not merely an academic exercise. The alignment of artificial general intelligence relies heavily on the assumption that an ideal rational agent will consistently avoid strictly dominated strategies. If the theoretical basis for this avoidance is weaker than commonly assumed, it introduces a profound vulnerability: an AI system programmed to maximize expected utility under these axioms might exhibit unexpected, exploitable behaviors, leading to suboptimal or even dangerous outcomes in critical, real-world applications.
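The finite-state case, where the coherence defense works as advertised, can be sketched in a few lines. The states, probabilities, and payoffs below are hypothetical, chosen only to illustrate the point: when the state space is finite and every state has positive probability, an act that pays strictly less in every state also has strictly lower expected utility, so EU maximization can never select it.

```python
# Hypothetical finite decision problem: three states, a full probability
# distribution, and two acts where B pays strictly less than A in every
# state (B is strictly dominated by A).
states = ["s1", "s2", "s3"]
probs = {"s1": 0.5, "s2": 0.3, "s3": 0.2}

acts = {
    "A": {"s1": 10, "s2": 4, "s3": 7},
    "B": {"s1": 9,  "s2": 3, "s3": 6},
}

def expected_utility(act):
    """State-by-state probability-weighted sum of the act's payoffs."""
    return sum(probs[s] * act[s] for s in states)

eu = {name: expected_utility(act) for name, act in acts.items()}
print(eu)  # A's EU strictly exceeds B's, so dominated B is never an EU maximum
```

In finite state spaces this gap is guaranteed, which is exactly what the post argues breaks down once the state space becomes infinite.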

**The Gist**

lessw-blog has released an analysis demonstrating that Savage's Axioms (specifically P1-P7), often considered more fundamental than the von Neumann-Morgenstern (vNM) framework, can lead to dominated acts tying with dominating acts in EU calculations. The author argues that if the state space is countably infinite, Savage's Axioms and Strict Dominance cannot hold simultaneously for a preference relation. While the vNM framework provides one approach to expected utility, Savage's formulation is often preferred because it derives both probabilities and utilities directly from preferences; the post highlights that this derivation comes at a cost when dealing with infinity. If the state space is uncountably infinite, the same incompatibility persists under the axiom of constructibility.

In practical terms, this means it is mathematically possible to construct an act that pays strictly more than a dominated act in every state, yet the dominated act paradoxically remains an EU maximum. This theoretical vulnerability suggests that relying purely on Savage's formulation of EU maximization might not guarantee protection against strictly dominated choices in complex environments.
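To build intuition for how a dominated act can tie with a dominating one, here is a standard textbook-style construction (not the post's own proof, and the acts `f` and `g` are our illustrative choices): over states 1, 2, 3, ..., let f(n) = 1/n and g(n) = 0. Then f pays strictly more than g in every state, so f strictly dominates g. But under a finitely additive "uniform" measure that assigns each individual state probability 0, both acts have expected utility 0. Approximating that measure by uniform averages over {1, ..., N} shows f's average payoff vanishing as N grows:

```python
# f(n) = 1/n strictly dominates g(n) = 0 state by state, yet its average
# utility under a uniform measure on {1, ..., N} shrinks toward 0 (the EU
# of g) as N grows -- the tie emerges in the infinite limit.
def avg_utility_f(N):
    """Uniform average of f(n) = 1/n over the first N states: H_N / N."""
    return sum(1.0 / n for n in range(1, N + 1)) / N

for N in (10, 1_000, 100_000):
    print(N, avg_utility_f(N))  # decreases toward 0 = EU of the dominated act g
```

The harmonic sum H_N grows only logarithmically, so H_N / N tends to 0: in the limiting measure, the dominated act g is just as much an EU maximum as the act f that beats it everywhere.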

**Conclusion**

This deep dive into decision theory exposes potential flaws in the theoretical underpinnings of rational agency, carrying significant weight for AI safety and alignment research. To fully grasp the mathematical proofs and the nuances of how these axioms interact, we highly recommend reviewing the original work. [Read the full post](https://www.lesswrong.com/posts/8ppB4ixfoKdGDqeHf/savage-s-axioms-make-dominated-acts-eu-maxima-9)

### Key Takeaways

*   Savage's Axioms (P1-P7) and the principle of Strict Dominance are mathematically incompatible in countably infinite state spaces.
*   In uncountably infinite state spaces, this incompatibility remains true under the axiom of constructibility.
*   Under Savage's framework, dominated acts can tie with dominating acts, allowing a dominated act to still be an Expected Utility maximum.
*   This finding challenges the coherence defense of Expected Utility theory, which assumes EU maximization inherently blocks exploitation and money pumps.
*   The theoretical vulnerability has significant implications for AI alignment, suggesting that AI systems relying on EU maximization might be susceptible to suboptimal or exploitable decision-making in complex environments.

[Read the original post at lessw-blog](https://www.lesswrong.com/posts/8ppB4ixfoKdGDqeHf/savage-s-axioms-make-dominated-acts-eu-maxima-9)

---

## Sources

- https://www.lesswrong.com/posts/8ppB4ixfoKdGDqeHf/savage-s-axioms-make-dominated-acts-eu-maxima-9
