Google Open Sources A2UI: A Secure Protocol for Agent-Driven Interfaces

New protocol shifts from code generation to declarative intents to secure AI-generated UIs.

· 3 min read · PSEEDR Editorial

On December 15, 2025, Google released A2UI (Agent-to-User Interface), an open-source protocol designed to standardize how artificial intelligence agents render visual interfaces across platforms. Currently in Public Preview (v0.8), the project addresses critical security and latency challenges inherent in allowing Large Language Models (LLMs) to generate user interfaces.

As AI agents transition from simple text-based chat interfaces to executing complex workflows, the industry has struggled with a fundamental interface problem: how to safely display dynamic controls (such as forms, dashboards, and data visualizations) generated by a stochastic model. Google's introduction of A2UI attempts to solve this by shifting the paradigm from code generation to declarative data transmission.

The Security-First Architecture

The prevailing method for 'Generative UI' often involves LLMs writing executable code (such as React components or raw HTML/JavaScript) which is then rendered by the client. While flexible, this approach introduces significant security risks, primarily the potential for Cross-Site Scripting (XSS) attacks or the execution of hallucinated, malicious logic.

According to the official release documentation, A2UI mitigates this by utilizing a strict, declarative JSON format. Instead of sending code, the agent sends a JSON payload describing the intent of the interface (e.g., "display a confirmation button with ID 'submit'"). The client-side application then maps these intents to pre-built, secure local components. This architecture ensures that no arbitrary code is executed on the user's device, maintaining a strict separation between the AI's logic and the application's rendering engine.
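The pattern can be sketched in a few lines: the agent emits a JSON intent, and the client looks it up in a registry of trusted, pre-built components rather than executing anything. Note that the field names below ("component", "id", "label") are illustrative placeholders, not the official v0.8 schema.

```python
import json

# Pre-built, trusted components registered by the host application.
# The agent can only reference these; it never ships executable code.
COMPONENT_REGISTRY = {
    "button": lambda props: f"<Button id={props['id']!r}>{props['label']}</Button>",
    "text": lambda props: f"<Text>{props['content']}</Text>",
}

def render_intent(payload: str) -> str:
    """Map a JSON intent to a local component; never eval/exec agent output."""
    intent = json.loads(payload)
    factory = COMPONENT_REGISTRY[intent["component"]]  # KeyError if unknown
    return factory(intent["props"])

agent_payload = '{"component": "button", "props": {"id": "submit", "label": "Confirm"}}'
print(render_intent(agent_payload))  # <Button id='submit'>Confirm</Button>
```

Because the rendering path is a dictionary lookup rather than code evaluation, a hallucinated or malicious payload can at worst fail to render; it cannot inject script into the host application.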

Optimized for LLM Streaming

A distinct technical advantage of A2UI is its native support for incremental updates, a feature specifically designed for the token-by-token generation nature of LLMs. Traditional Server-Driven UI (SDUI) frameworks often require a complete payload before rendering. In contrast, A2UI supports a flat list of components with ID references, enabling the interface to update in real-time as the model generates the JSON stream. This reduces perceived latency, allowing users to interact with parts of the UI while the rest is still being constructed.
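The flat, ID-referenced structure described above can be illustrated with a small sketch (again using hypothetical field names, not the published schema): each streamed message either adds a component or patches an existing one by ID, so the client can repaint after every message instead of waiting for a complete payload.

```python
# Surface state: a flat map of components keyed by ID, as described above.
surface: dict[str, dict] = {}

def apply_update(update: dict) -> None:
    """Merge one streamed component message into the surface state."""
    surface.setdefault(update["id"], {}).update(update)

# Simulated incremental stream from the model. Components reference each
# other by ID, so "root" can be patched later to attach new children.
stream = [
    {"id": "root", "type": "column", "children": ["title"]},
    {"id": "title", "type": "text", "content": "Searching flights..."},
    {"id": "root", "children": ["title", "results"]},  # patch root by ID
    {"id": "results", "type": "list", "content": "3 flights found"},
]
for update in stream:
    apply_update(update)  # a real renderer could repaint here, per message

print(sorted(surface))  # ['results', 'root', 'title']
```

The key design point is that updates are idempotent merges keyed by ID, so partial state is always renderable, which is what lets the interface appear while later tokens are still arriving.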

Cross-Platform Interoperability

The v0.8 release emphasizes framework agnosticism. Google has provided initial renderers for both Web (supporting Lit and Angular) and Flutter. This allows a single AI agent to output one JSON payload that renders as a native web component in a browser and a native widget on a mobile device. This contrasts with competitor solutions like Vercel's AI SDK, which is currently heavily optimized for the React ecosystem. By leveraging Flutter, Google positions A2UI as a viable solution for mobile-first agentic applications.

Strategic Implications and Limitations

The release of A2UI places Google in direct competition with Microsoft's Adaptive Cards, an established standard for declarative UI in enterprise chat applications. However, A2UI appears more specifically tuned for the generative AI era, where the interface structure is not static but generated on the fly based on conversation context.

Despite its advantages, the protocol introduces implementation overhead. Because the system relies on a pre-registered library of client-side components, developers must implement matching UI components across all target platforms (Web, iOS, Android) before an agent can utilize them. If an agent attempts to call a component that does not exist in the client's registry, the UI cannot be rendered, limiting the infinite flexibility often promised by generative AI.

Google has released the project under the Apache 2.0 license, signaling a desire to establish A2UI as an industry standard for the emerging agent economy.
