Proteus
Intent-based adaptive interface: one phrase → composed view, governed by a design-system registry.
Problem
Static UIs force users through menus and fixed layouts. Generative “AI UI” can adapt to intent but risks inconsistent, inaccessible, or unsafe output. The gap: adaptive UX that stays on-brand and within guardrails—no hallucinated components or arbitrary markup.
Solution
Proteus uses a three-layer pipeline:
- Intent — natural language → schema-bound intent; domains and actions are allowlisted.
- Compose — map the intent to a layout drawn from a design-system registry; no net-new components.
- Render — stream a LayoutSpec of registered components only; props are data, never code (no eval).

The result: one phrase yields a composed view (e.g. map + table + CTA) that always comes from the registry and stays governance-safe.
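The Intent layer above can be sketched as a validator that only admits allowlisted values. This is a minimal illustration, not Proteus's actual implementation: the field names follow the IntentSpec described below, but the allowlist contents and the `validateIntent` helper are hypothetical.

```typescript
// Hypothetical IntentSpec shape, following the fields named in the README.
type IntentSpec = {
  domain: string;         // e.g. "risk"
  entities: string[];     // e.g. ["APAC"]
  visualization: string;  // e.g. "map"
  actions: string[];      // e.g. ["export"]
};

// Illustrative allowlists — the real registry would define these.
const ALLOWED_DOMAINS = new Set(["risk", "sales", "compliance"]);
const ALLOWED_VISUALIZATIONS = new Set(["map", "table", "chart"]);
const ALLOWED_ACTIONS = new Set(["export", "drilldown", "alert"]);

// Parser output (LLM or rule-based) is treated as untrusted: anything
// outside the schema or the allowlists is rejected, never improvised.
function validateIntent(raw: unknown): IntentSpec | null {
  if (typeof raw !== "object" || raw === null) return null;
  const r = raw as Record<string, unknown>;
  if (typeof r.domain !== "string" || !ALLOWED_DOMAINS.has(r.domain)) return null;
  if (!Array.isArray(r.entities) || !r.entities.every((e) => typeof e === "string")) return null;
  if (typeof r.visualization !== "string" || !ALLOWED_VISUALIZATIONS.has(r.visualization)) return null;
  if (!Array.isArray(r.actions) || !r.actions.every((a) => typeof a === "string" && ALLOWED_ACTIONS.has(a))) return null;
  return {
    domain: r.domain,
    entities: r.entities as string[],
    visualization: r.visualization,
    actions: r.actions as string[],
  };
}

// "show me risk in APAC" → a valid, allowlisted intent; an unknown domain → null.
console.log(validateIntent({ domain: "risk", entities: ["APAC"], visualization: "map", actions: ["export"] }));
console.log(validateIntent({ domain: "weather", entities: [], visualization: "map", actions: [] }));
```

The point of returning `null` rather than a best-effort guess is the governance guarantee: free-form parser output never reaches the Compose layer.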
Why it matters
Adaptive interfaces reduce friction and support diverse workflows (e.g. “show me risk in APAC”) without building every path by hand. Governance—registry-only components, schema-bound intent, no raw HTML or eval—keeps the experience predictable and accessible. It fits internal tools, dashboards, and RegTech/compliance views where consistency and safety are non-negotiable.
Tech choices
- Next.js — App shell, routing, and future API routes for intent/composition; good fit for a single-page “command” surface.
- Schema-bound intent — IntentSpec (domain, entities, visualization, actions) is validated and allowlisted; an LLM or a rule-based parser can fill it, but the output is never free-form.
- Component registry — only pre-built atoms/molecules/templates; composition chooses which components appear and how they’re arranged, never inventing new UI.
- No eval, data-only props — the renderer receives a LayoutSpec (JSON); components get data, not code. This prevents injection and keeps a11y and design tokens consistent.
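To make the registry and data-only-props rules concrete, here is a hypothetical sketch of a registry-gated renderer. The component implementations are stand-ins that return strings rather than real React elements, and the registry contents (`Map`, `Table`, `CTA`) are illustrative, not Proteus's actual component set.

```typescript
// A LayoutSpec node names a component and carries plain-data props — no code.
type LayoutNode = { component: string; props: Record<string, unknown> };
type LayoutSpec = { nodes: LayoutNode[] };

type Component = (props: Record<string, unknown>) => string;

// Registry of pre-built components; composition may pick these and nothing else.
const REGISTRY: Record<string, Component> = {
  Map: (p) => `<Map region=${JSON.stringify(p.region)}>`,
  Table: (p) => `<Table rows=${JSON.stringify(p.rows)}>`,
  CTA: (p) => `<CTA label=${JSON.stringify(p.label)}>`,
};

function render(spec: LayoutSpec): string[] {
  const out: string[] = [];
  for (const node of spec.nodes) {
    const impl = REGISTRY[node.component];
    if (!impl) continue;      // unregistered component: dropped, never improvised
    out.push(impl(node.props)); // props are plain data; nothing is eval'd
  }
  return out;
}

// "show me risk in APAC" might compose to map + table + CTA; a node that
// smuggles in an unregistered component is silently dropped.
const spec: LayoutSpec = {
  nodes: [
    { component: "Map", props: { region: "APAC" } },
    { component: "Table", props: { rows: 12 } },
    { component: "CTA", props: { label: "Export" } },
    { component: "RawHtml", props: { html: "<script>…</script>" } }, // not in registry
  ],
};
console.log(render(spec));
```

Because the renderer resolves names only through the registry and passes props as inert data, there is no path from a LayoutSpec to arbitrary markup or executed code.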