PSEEDR

The Geometry of Doubt: Analyzing Disjunctive and Conjunctive Fallacies

Coverage of lessw-blog

· PSEEDR Editorial

In a recent analysis, lessw-blog dissects the mathematical symmetry between two opposing errors in probabilistic reasoning, illustrating how slight biases in estimation can lead to massive distortions in final judgments.

The post examines a nuanced failure mode in probabilistic reasoning: the symmetry between the "multiple-stage fallacy" and its reverse counterpart in disjunctive arguments. As artificial intelligence systems become increasingly integrated into high-stakes decision-making and risk assessment, the ability to accurately model the likelihood of complex events is critical. This publication highlights how intuitive biases can compromise even rigorous mathematical frameworks.

The Context: The Fragility of Decomposition
In systems engineering and AI safety, practitioners often rely on decomposition to estimate the probability of rare or complex events. Because it is difficult to intuit the likelihood of a catastrophic failure or a specific breakthrough, analysts break the event down into smaller, manageable components. The assumption is that estimating the probability of these sub-components is easier and more accurate. However, lessw-blog argues that this methodology is susceptible to systematic error accumulation, depending on whether the components are linked in a chain (conjunction) or represent alternative pathways (disjunction).

The Gist: Multiplying vs. Summing Errors
The author contrasts two specific mathematical behaviors:

  • The Multiple-Stage Fallacy (Conjunction): When calculating the probability that a series of independent events will all occur (e.g., Event A AND Event B AND Event C), one multiplies their individual probabilities. If an analyst is skeptical and slightly underestimates each individual probability, the multiplication of these small numbers results in a final probability that is artificially low. This can lead to the dismissal of valid risks because the chain of events seems statistically impossible.
  • The Reverse Fallacy (Disjunction): Conversely, when calculating the probability that at least one of several mutually exclusive events will occur (e.g., Event A OR Event B OR Event C), one sums their probabilities. The post identifies a "reverse" fallacy here: if an analyst slightly overestimates the likelihood of each individual option, often due to a cognitive bias that favors "moderate" probabilities over negligible ones, the sum can inflate the final probability significantly. This makes an event appear inevitable when it may actually be unlikely.
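The contrast above can be made concrete with a short numerical sketch. The probabilities and bias below are hypothetical illustrations, not figures from the original post:

```python
# Illustrative sketch (hypothetical numbers): how a small per-estimate
# bias distorts conjunctive (multiplied) vs. disjunctive (summed) totals.
import math

bias = 0.1  # assumed analyst bias on each individual estimate

# Conjunction: all five stages must occur, so probabilities multiply.
true_stages = [0.8] * 5
true_and = math.prod(true_stages)                        # 0.8**5 ~ 0.33
skeptic_and = math.prod(p - bias for p in true_stages)   # 0.7**5 ~ 0.17

# Disjunction: five mutually exclusive options, so probabilities sum.
true_options = [0.05] * 5
true_or = sum(true_options)                              # 0.25
optimist_or = sum(p + bias for p in true_options)        # 0.75

print(f"conjunction: true={true_and:.3f}, underestimated={skeptic_and:.3f}")
print(f"disjunction: true={true_or:.2f}, overestimated={optimist_or:.2f}")
```

A uniform bias of 0.1 per estimate cuts the conjunctive total roughly in half while tripling the disjunctive total, which is the asymmetry the post is describing.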

Why It Matters
The core insight is that humans, and potentially AI models trained on human reasoning, struggle with the extremes of probability. We tend to round very small numbers up to "plausible" and very high numbers down to "likely," avoiding the edges of the distribution. In complex arguments with many steps or many options, these slight deviations do not cancel out; they compound. For PSEEDR readers working in ML robustness or safety, this highlights a vulnerability in logical decomposition: the structure of the argument may be sound, but the sensitivity to input error is higher than intuitively expected.

We recommend reading the full analysis to understand the mathematical mechanics behind these biases.

Read the full post at LessWrong

Key Takeaways

  • The 'multiple-stage fallacy' occurs when multiplying slightly underestimated probabilities in a chain, making plausible events seem impossible.
  • The 'reverse multiple-stage fallacy' occurs when summing slightly overestimated probabilities in a disjunction, making unlikely events seem inevitable.
  • Both fallacies stem from a cognitive bias where humans find moderate-range probabilities more intuitive than extremes.
  • Decomposing complex AI risks into sub-events is structurally sound but highly sensitive to systematic input errors.

