PSEEDR

Design Sketches for "Angels-on-the-Shoulder": Augmenting Human Agency

Coverage of lessw-blog

· PSEEDR Editorial

In a recent post on LessWrong, the author outlines a conceptual framework for "angels-on-the-shoulder": AI tools specifically engineered to improve human decision-making and reduce unforced errors during the transition to advanced AI systems.

The post positions these "angels-on-the-shoulder" as a class of AI tools designed to sit firmly on the user's side of the table. As artificial intelligence systems become increasingly capable, the interaction between human cognition and algorithmic outputs becomes a critical safety surface. The author argues that rather than simply consuming AI outputs, humans need customized tools that actively assist in navigation, ensuring decisions are well-informed and aligned with the user's true values.

Contextualizing the Need for "Angels"
The current digital landscape is dominated by algorithms optimizing for engagement metrics, often at the expense of user intent or well-being. This creates an environment where "unforced errors" in judgment are common, driven by distraction or misinformation. The "angels-on-the-shoulder" framework proposes a shift toward systems that prioritize "endorsed decisions": choices that a user would stand by upon reflection, rather than impulsive reactions triggered by dark patterns or engagement loops.

The Core Proposition
The author presents design sketches for how these supportive systems might function. The goal is to create tools that make individuals situationally aware and less prone to cognitive pitfalls. Two specific implementations are highlighted:

  • Aligned Recommender Systems: Unlike standard feeds that maximize time-on-site, these systems would filter and present information based on the user's explicit goals and long-term values, acting as a protective filter against noise.
  • Personalized Learning Systems: These tools would adapt to the user's specific knowledge gaps, facilitating rapid upskilling and ensuring that the human operator remains competent and comprehending in an increasingly complex environment.
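To make the first idea concrete, here is a minimal sketch of the contrast between an engagement-maximizing feed and a goal-aligned one. Everything in it is illustrative, not from the post: the `Item` structure, the tag-overlap scoring, and the sample feed are all hypothetical stand-ins for whatever representation of "explicit goals and long-term values" a real system would use.

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    title: str
    tags: set = field(default_factory=set)
    predicted_engagement: float = 0.0  # what a standard feed would rank by

def aligned_rank(items, user_goals):
    """Rank items by overlap with the user's explicitly stated goals,
    deliberately ignoring predicted engagement."""
    def goal_score(item):
        return len(item.tags & user_goals)
    return sorted(items, key=goal_score, reverse=True)

feed = [
    Item("Outrage clip", {"drama"}, predicted_engagement=0.9),
    Item("Linear algebra primer", {"math", "learning"}, predicted_engagement=0.2),
    Item("Career planning guide", {"learning", "career"}, predicted_engagement=0.4),
]
user_goals = {"learning", "math"}

for item in aligned_rank(feed, user_goals):
    print(item.title)
```

An engagement sort would surface the outrage clip first; the goal-aligned sort surfaces the primer. A production system would need far richer value representations, but the design choice is the same: the ranking objective belongs to the user, not the platform.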

Why It Matters
This discussion is particularly significant for the broader field of AI alignment. By focusing on practical, near-term applications that enhance human agency, the post addresses a crucial gap: how to ensure humans remain the pilots of their own lives as the machinery around them grows more powerful. These "angels" serve as a necessary bridge, helping society navigate the transition to advanced AI systems without ceding control or cognitive autonomy.

For those interested in the intersection of UX design, AI safety, and human augmentation, this post offers a pragmatic look at what beneficial AI assistants could look like.

Read the full post on LessWrong

Key Takeaways

  • The concept of "angels-on-the-shoulder" refers to AI tools designed to minimize human error and maximize endorsed decisions.
  • Current algorithmic incentives often conflict with user agency; these tools aim to realign technology with human intent.
  • Proposed implementations include aligned recommender systems that filter for value rather than engagement.
  • Personalized learning systems are suggested as a method to maintain human competence in complex environments.
  • These tools are presented as critical infrastructure for safely navigating the transition to more advanced AI.
