PSEEDR

The 'd' Parameter: Analyzing the Math Behind AGI Timelines

Coverage of lessw-blog

· PSEEDR Editorial

A recent analysis from lessw-blog highlights how minute adjustments to the 'd' parameter in forecasting models can shift the predicted arrival of superintelligence by decades.

In a recent post, lessw-blog discusses the mathematical assumptions that underpin AI Futures Timelines models, focusing on how sensitive growth predictions are to specific variables. As the industry attempts to gauge the arrival time of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI), the debate often centers on compute availability or algorithmic efficiency. However, this analysis suggests that the mathematical function chosen to represent progress, and specifically its "d" parameter, may be the most critical factor of all.

The Context: The Shape of Progress

Forecasting the trajectory of AI development is not merely an academic exercise; it dictates capital allocation, safety research priorities, and national policy. Most observers are familiar with the concept of exponential growth, often referenced in relation to Moore's Law. However, theoretical models for AI often grapple with the possibility of superexponential growth: a scenario in which the rate of progress itself accelerates, potentially producing a singularity-like event.

Understanding the difference between these growth modes is essential. If progress is merely exponential, society has a predictable runway to adapt. If it is superexponential, the window for preparation could close much faster than anticipated. The lessw-blog post explores the mechanics of this distinction.

The Gist: The Sensitivity of 'd'

The author argues that the "d" parameter in timeline models is the primary determinant of the growth trajectory. The analysis breaks down three distinct scenarios based on this value:

  • d < 1 (Superexponential): This indicates a trajectory that curves upward toward a vertical asymptote, implying a rapid takeoff to AGI.
  • d = 1 (Exponential): This represents steady, compounding growth, similar to historical computing trends.
  • d > 1 (Subexponential): This suggests diminishing returns or a slower rate of advancement.
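The three regimes can be illustrated with a toy model (a sketch, not necessarily the post's exact formula): suppose each successive doubling of a capability metric takes d times as long as the previous one, with the first doubling taking t0 years. The names t0 and time_for_n_doublings are illustrative.

```python
def time_for_n_doublings(n: int, d: float, t0: float = 1.0) -> float:
    """Total years elapsed after n doublings, where doubling k takes t0 * d**k years."""
    return sum(t0 * d**k for k in range(n))

for d in (0.9, 1.0, 1.1):
    elapsed = [round(time_for_n_doublings(n, d), 1) for n in (10, 20, 40)]
    print(f"d = {d}: years after 10/20/40 doublings = {elapsed}")

# For d < 1 the geometric series converges: total time never exceeds
# t0 / (1 - d), however many doublings remain. That bound is the
# "vertical asymptote" of a superexponential takeoff.
# For d = 1 every doubling costs the same (ordinary exponential growth);
# for d > 1 each doubling takes longer than the last (subexponential).
```

Under this toy formulation the regime boundary at d = 1 is exactly the point where the series switches from converging (finite takeoff date) to diverging (no finite asymptote).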

According to the post, many current models assume a value of d < 1. This assumption rests on the intuition of an "infinite 80% time horizon": the idea that the length of human tasks an AI can complete with 80% reliability eventually becomes unbounded, feeding compounding optimization loops. However, the author notes that small changes in "d" drastically alter the timeline. A minor shift toward 1 can move the predicted date for ASI from the near future to decades away. This sensitivity highlights the fragility of current forecasts: they depend heavily on a variable that is difficult to verify empirically.
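That sensitivity near d = 1 can be made concrete with a toy assumption (illustrative, not the post's actual model): if each successive doubling of a capability metric takes d times as long as the last, then for d < 1 the total time to takeoff is capped at t0 / (1 - d). Small moves in d stretch that cap dramatically (t0 = 1 year here is an arbitrary choice).

```python
# Toy sensitivity check: the finite-time cap t0 / (1 - d) for d < 1.
t0 = 1.0  # years for the first doubling (illustrative)
for d in (0.80, 0.90, 0.95, 0.99):
    cap = t0 / (1 - d)
    print(f"d = {d:.2f} -> takeoff no later than ~{cap:.0f} years out")
# prints caps of 5, 10, 20, and 100 years respectively
```

A shift of d from 0.80 to 0.99, well within plausible estimation error, moves the latest possible takeoff date by nearly a century, which is the fragility the post is pointing at.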

The post also contrasts these superexponential assumptions with the default exponential trajectories found in other analyses, such as the METR time horizons graph, illustrating a divergence in how different groups model the future of intelligence.

Why This Matters

For strategists and researchers, this highlights a significant margin of error in timeline predictions. If the underlying assumption of superexponential growth (d < 1) is incorrect, or if the "d" value is even slightly higher than estimated, the urgency and the strategies required to manage the AI transition change fundamentally.

We recommend reading the full analysis to understand the mathematical arguments and the specific implications of the "infinite time horizon" concept.

Read the full post at lessw-blog

Key Takeaways

  • The 'd' parameter is identified as the most significant variable in AI Futures Timelines models, dictating the shape of the growth curve.
  • Values of d < 1 imply superexponential growth and a rapid approach to AGI, while d = 1 implies standard exponential growth.
  • Forecasting models are highly sensitive to 'd'; small adjustments can shift ASI arrival predictions by years or decades.
  • Current models often assume superexponential growth based on the concept of an 'infinite 80% time horizon' for task accuracy.
  • The analysis contrasts these superexponential assumptions with other models, such as METR, which may rely on exponential baselines.