AI-Driven UI Paradigm

Fidus doesn't have fixed screens or predetermined flows. Instead, an LLM analyzes context in real-time and dynamically decides what interface to render—creating situational UI that adapts to each moment.

No Fixed Screens

The same user query can produce different UIs based on context, time, location, and user expertise.

LLM Decides UI Form

Chat, form, widget, wizard—the LLM chooses the best interface pattern for each situation.

Context-Adaptive

UI complexity adapts to user expertise, urgency, available time, and current activity.

A Day with Fidus

Watch how the same AI assistant dynamically adapts its interface throughout the day based on context, user input, and real-time signals—without any fixed screens or predetermined flows.

[Interactive phone demo: watch how Fidus adapts throughout the day, starting at 8:15 AM — automatic loop, no interaction needed]

Traditional vs AI-Driven UI

Understanding the fundamental difference between predetermined interfaces and context-adaptive UI.

| Aspect | Traditional UI | AI-Driven UI (Fidus) |
|---|---|---|
| Navigation | Fixed screens, user navigates to features | No fixed screens, features surface contextually |
| UI Decisions | Hardcoded in JavaScript: if morning → show weather | LLM analyzes context and decides UI form |
| Form Complexity | Same form for all users (expert and beginner) | Adaptive: chat for beginners, quick form for experts |
| Dashboard | Static widgets, same for everyone | Opportunity Surface with dynamic, context-relevant cards |
| User Control | Auto-hide notifications after X seconds | User dismisses (swipe/X), no auto-hide |
| Response Format | Predetermined: "Create Budget" always shows form | Context-dependent: text, form, widget, or wizard |
| Proactivity | Rule-based: if Friday 7am → show coffee budget alert | Signal-based: LLM detects patterns and suggests |
| Extensibility | New UI requires code changes and deployment | New components added to registry, LLM learns via RAG |

New UI Patterns in AI-Driven Systems

Fidus introduces 8 novel interaction patterns that emerge from context-aware, LLM-driven UI decisions. These patterns are fundamentally different from traditional UI paradigms.

1. Context-Driven UI Rendering

The LLM analyzes context (time, user history, data complexity, urgency) and dynamically selects the UI form—text response, widget, form, or wizard—at runtime. The same query produces different UIs in different contexts.

Example: "Show my budget"

Context 1: Stable Mid-Month

User has stable spending, mid-month, no concerns

LLM Decision: Simple text response

"Your October budget: 660 EUR spent of 1000 EUR (66%). You're on track!"
Context 2: Near Limit, End-Month

User is at 95% of budget, 3 days left in month

LLM Decision: OpportunityCard with urgency indicator

Shows visual progress bar, breakdown by category, and action buttons: "View Transactions", "Adjust Budget"
When to use: For all user queries where multiple UI forms are possible. Let the LLM decide based on:
  • Urgency: High urgency → OpportunityCard with visual emphasis
  • Data complexity: Large datasets → Interactive widgets
  • User expertise: Beginner → Conversational flow, Expert → Quick form
  • Time context: Morning vs evening affects UI presentation
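The structured decision behind this pattern can be sketched as a small TypeScript type. Names and shapes below are illustrative assumptions, not Fidus's actual API:

```typescript
// Illustrative sketch only: names and shapes are assumptions, not Fidus's API.
type UIForm = "text" | "form" | "widget" | "wizard" | "card";

interface UIDecision {
  form: UIForm;
  component: string;              // e.g. "OpportunityCard"
  reasoning: string;              // explainability: why this form was chosen
  props: Record<string, unknown>; // pre-filled by the LLM from context
}

// The same query ("Show my budget") can yield different decisions:
const midMonth: UIDecision = {
  form: "text",
  component: "TextResponse",
  reasoning: "Stable spending, mid-month, no urgency",
  props: { message: "Your October budget: 660 EUR spent of 1000 EUR (66%). You're on track!" },
};

const nearLimit: UIDecision = {
  form: "card",
  component: "OpportunityCard",
  reasoning: "95% spent with 3 days left: urgent and actionable",
  props: { urgency: "urgent", progress: 95 },
};
```

Keeping `reasoning` as a first-class field is what makes the decisions debuggable rather than opaque.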

2. Contextual Opportunity Surfacing (Proactive Cards)

The system proactively detects opportunities based on signals (calendar events, budget thresholds, weather patterns) and surfaces relevant, dismissible cards on the Opportunity Surface (Dashboard). No fixed widgets—only context-relevant cards.

Examples of Opportunity Cards

Urgent
Budget Alert

Detected: 95% of food budget spent, 3 days left in month

Suggestion
Travel Booking

You have a flight to Berlin tomorrow, but no hotel booked

Pattern Detected
Recurring Expense

Coffee purchases every Monday at 9am—create recurring budget?

When to use: For proactive suggestions that are contextually relevant right now. Key characteristics:
  • User controls dismissal: Swipe or X button, never auto-hide timers
  • Signal-based: Triggered by data signals, not hardcoded rules
  • Urgency levels: Urgent (red), Suggestion (blue), Pattern (neutral)
  • Actionable: Always include 1-2 action buttons for immediate response
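One possible supporting design (an assumption, not Fidus's documented pipeline) is a deterministic pre-filter that decides which raw signals are even worth handing to the LLM. The thresholds below are invented for illustration:

```typescript
// Invented thresholds; the real system would let the LLM weigh signals in context.
type Urgency = "urgent" | "suggestion" | "pattern";

interface BudgetSignal {
  spentPct: number; // percentage of budget already spent
  daysLeft: number; // days remaining in the period
}

function classifyBudgetSignal(s: BudgetSignal): Urgency | null {
  if (s.spentPct >= 90 && s.daysLeft <= 5) return "urgent"; // red card
  if (s.spentPct >= 75) return "suggestion";                // blue card
  return null;                                              // surface nothing
}
```

Returning `null` matters: an empty Opportunity Surface is the correct output when nothing is contextually relevant.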

3. Adaptive Form Complexity

Instead of one-size-fits-all forms, the LLM adapts form complexity based on user expertise and intent clarity. Expert users with clear intent get quick forms; beginners get conversational wizards.

Example: Budget Creation

Expert User, Clear Intent

Query: "Create food budget 500 EUR monthly"

LLM Decision: Quick form with pre-filled values
Category: Food ✓
Amount: 500 EUR ✓
Period: Monthly ✓
Beginner User, Unclear Intent

Query: "I want to save money"

LLM Decision: Conversational wizard
Assistant: I can help! What area would you like to budget for?
When to use: For any data input task. LLM analyzes:
  • User expertise: Track past interactions to classify as beginner/intermediate/expert
  • Intent clarity: Parse query for completeness (all fields vs partial)
  • Task complexity: Simple tasks → form, complex → wizard, ambiguous → chat
  • User preference: Learn if user prefers forms vs conversation over time
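Intent clarity can be approximated by checking which fields a query already specifies. This sketch is a hypothetical heuristic (field names and regexes invented), not the LLM-based analysis the text describes:

```typescript
// Hypothetical heuristic: count which budget fields a query already fills.
interface BudgetIntent {
  category?: string;
  amount?: number;
  period?: string;
}

function parseBudgetIntent(query: string): BudgetIntent {
  const intent: BudgetIntent = {};
  const amount = query.match(/(\d+)\s*EUR/i);
  if (amount) intent.amount = Number(amount[1]);
  const period = query.match(/\b(monthly|weekly|yearly)\b/i);
  if (period) intent.period = period[1].toLowerCase();
  const category = query.match(/\b(food|travel|rent)\b/i);
  if (category) intent.category = category[1].toLowerCase();
  return intent;
}

function suggestFormStyle(intent: BudgetIntent): "quick-form" | "wizard" {
  const filled = [intent.category, intent.amount, intent.period].filter(Boolean).length;
  return filled === 3 ? "quick-form" : "wizard"; // complete intent → quick form
}
```

With "Create food budget 500 EUR monthly" all three fields resolve and a quick form is chosen; "I want to save money" resolves none and falls back to the wizard.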

4. Dynamic Search & Filtering (Context-Based Filters)

Instead of showing all possible filter options upfront, the LLM suggests relevant filters based on context, query intent, and data distribution. Filters adapt to what's actually useful right now.

Example: Transaction Search

Query: "Show my transactions"

No specific context

LLM Suggested Filters:
This Month · Large Amounts (>100 EUR) · Uncategorized
Query: "Why is my food budget high?"

Specific category context

LLM Suggested Filters:
Food Category · Last 7 Days · Top Merchants
When to use: For search, filtering, and data exploration interfaces. LLM considers:
  • Query context: Extract category, time, amount hints from user query
  • Data distribution: Suggest filters that actually narrow results meaningfully
  • User patterns: Learn frequently-used filter combinations
  • Progressive disclosure: Show 3-4 relevant filters first, "More filters" for rest
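A rough sketch of this filter-suggestion logic, with query-hint matching standing in for the LLM's analysis (chip labels taken from the example above, regexes invented):

```typescript
// Query-hint matching stands in for the LLM's context analysis.
function suggestFilters(query: string): string[] {
  const q = query.toLowerCase();
  const filters: string[] = [];
  if (q.includes("food")) filters.push("Food Category");
  if (/why|high|expensive/.test(q)) filters.push("Last 7 Days", "Top Merchants");
  if (filters.length === 0) {
    // No hints in the query: fall back to generally useful defaults.
    filters.push("This Month", "Large Amounts (>100 EUR)", "Uncategorized");
  }
  return filters.slice(0, 4); // progressive disclosure: at most 3-4 chips up front
}
```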

5. Generated Form Inputs (LLM-Suggested Fields)

The LLM analyzes user intent and suggests not just form fields, but also pre-filled values, smart defaults, and contextually-relevant field options based on history and patterns.

Example: Appointment Creation

Query: "Schedule dentist appointment"
LLM-Generated Form:
Title: Dentist Appointment (suggested)
Location: Dr. Schmidt Dental, Main St 42 (from history)
Duration: (based on past dentist appointments)
Suggested Times: Tomorrow 10:00 · Friday 14:00 · Next Monday 9:00 (considering your calendar and dentist hours)
When to use: For forms where historical data or context can improve input accuracy:
  • Repeat actions: Pre-fill based on previous similar entries
  • Smart defaults: Analyze patterns (e.g., dentist appointments usually 60 min)
  • Option generation: Create relevant options from calendar, contacts, locations
  • Validation: LLM can catch inconsistencies (e.g., appointment duration vs type)
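The history-based pre-fill can be sketched as follows; the data shape and function are hypothetical stand-ins for whatever the domain supervisor actually stores:

```typescript
// Hypothetical sketch: derive form defaults from similar past entries.
interface PastAppointment {
  title: string;
  location: string;
  minutes: number;
}

function smartDefaults(history: PastAppointment[], keyword: string) {
  const similar = history.filter(a => a.title.toLowerCase().includes(keyword.toLowerCase()));
  if (similar.length === 0) return null; // no history: show an empty form
  const last = similar[similar.length - 1];
  const avgMinutes = Math.round(similar.reduce((sum, a) => sum + a.minutes, 0) / similar.length);
  return { title: last.title, location: last.location, minutes: avgMinutes };
}
```

Location comes from the most recent similar entry, while duration is averaged over all of them, matching the "dentist appointments usually take about the same time" intuition.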

6. Smart Action Buttons (Generated from Context)

Action buttons aren't hardcoded—the LLM generates contextually relevant actions based on data state, user permissions, and current context. Same data entity shows different actions in different situations.

Example: Calendar Event Actions

Context: Upcoming Event (2 hours away)
Team Meeting
Today 14:00 - 15:00
Context: Past Event (completed yesterday)
Team Meeting
Yesterday 14:00 - 15:00
When to use: For any data display where actions depend on state and context:
  • Time-dependent: Past events → review actions, upcoming → preparation actions
  • State-dependent: Pending budget → approve/reject, exceeded → adjust/view details
  • Permission-aware: Only show actions user has permission to execute
  • Context-specific: On mobile → "Get Directions", on desktop → "Open in Maps"
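A minimal sketch of state- and permission-aware action generation; the action labels are invented examples, and in Fidus the LLM (not a conditional) would produce them:

```typescript
// Invented labels; illustrates state- and permission-dependent actions only.
function eventActions(startMs: number, nowMs: number, canEdit: boolean): string[] {
  const actions = startMs > nowMs
    ? ["Get Directions", "Prepare Notes", "Reschedule"] // upcoming: preparation
    : ["Review Notes", "Add Follow-up Task"];           // past: review
  // Permission-aware: only show mutations the user may actually perform.
  return canEdit ? actions : actions.filter(a => a !== "Reschedule");
}
```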

7. Progressive Disclosure (Based on User Expertise)

Information and options are revealed progressively based on user expertise level and interaction patterns. Beginners see simplified views; experts see advanced options immediately.

Example: Budget Configuration

Beginner View: Essential Settings
Monthly Limit: 1000 EUR
Category: Food & Dining

Expert View: All Settings
Monthly Limit: 1000 EUR
Category: Food & Dining
Alert Threshold: 80%
Rollover: Enabled
Auto-categorization: ML-based
When to use: For configuration screens and data-heavy displays:
  • User classification: Track feature usage to classify beginner/intermediate/expert
  • Essential first: Always show 3-5 most important options immediately
  • Contextual expansion: "Show Advanced" only appears if relevant
  • Learn preferences: If user always expands, start showing advanced by default
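The disclosure rule above can be captured in a few lines; setting names come from the example, the expansion-preference flag is an assumed learned signal:

```typescript
// Setting names from the budget example; alwaysExpands is an assumed learned preference.
type Level = "beginner" | "intermediate" | "expert";

const ESSENTIAL = ["Monthly Limit", "Category"];
const ADVANCED = ["Alert Threshold", "Rollover", "Auto-categorization"];

function visibleSettings(level: Level, alwaysExpands: boolean): string[] {
  // Experts, and users who always expand anyway, see everything immediately.
  if (level === "expert" || alwaysExpands) return [...ESSENTIAL, ...ADVANCED];
  return ESSENTIAL; // essential first; "Show Advanced" reveals the rest
}
```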

8. Temporal UI (Time-Sensitive Elements)

UI elements that appear, transform, or disappear based on time context. Not rule-based ("every morning show X"), but LLM-driven relevance decisions that consider time as one of many signals.

Examples of Time-Context UI Changes

Morning (7-9 AM)
Today's Overview Card
Might include:
  • Weather for commute
  • First appointment time
  • Traffic conditions
  • Breakfast budget reminder
Work Hours (9-5 PM)
Focus Mode Active
Might include:
  • Next meeting countdown
  • Urgent tasks only
  • Minimal notifications
  • Quick expense logging
Evening (6-10 PM)
Reflection & Planning
Might include:
  • Today's spending summary
  • Tomorrow's agenda
  • Uncategorized transactions
  • Meal planning suggestions
When to use: For dashboard and opportunity surfacing, NOT fixed schedules:
  • LLM decides relevance: Time is ONE signal, not the only signal
  • User patterns matter: Night owl users don't get morning cards at 7am
  • Override capability: User can always request any information regardless of time
  • Examples not rules: Document as "might show" not "always shows"

UI Form Decision Matrix

The LLM chooses UI form based on multiple context signals. This matrix shows typical mappings, but remember: these are examples, not rules. The LLM weighs all factors dynamically.

| User Query | Context Signals | User Level | LLM Decision → UI Form |
|---|---|---|---|
| "Show my budget" | Mid-month; 66% spent; on track | Any | Text Response: "Your October budget: 660 EUR of 1000 EUR (66%). You're on track!" |
| "Show my budget" | End of month; 95% spent; 3 days left | Any | OpportunityCard: visual card with urgency indicator, progress bar, category breakdown, actions |
| "Create food budget 500 EUR monthly" | Complete intent; all params present | Expert | Quick Form (pre-filled): all fields pre-filled from query, one-click submit |
| "I want to save money" | Vague intent; missing params | Beginner | Conversational Wizard: step-by-step guided conversation with option buttons |
| "Schedule dentist" | Previous visits exist; calendar available | Intermediate | Form with Smart Suggestions: location from history, duration from past visits, suggested time slots |
| "Show my transactions" | No specific query; large dataset | Any | Widget with Dynamic Filters: transaction list with LLM-suggested filters (This Month, Large Amounts, Uncategorized) |
| "Why is my food budget high?" | Specific category; analysis needed | Any | Widget with Context Filters: transaction widget pre-filtered to Food category, Last 7 Days, Top Merchants |
| "Plan trip to Paris" | Multi-step task; many decisions | Beginner | Multi-Step Wizard: Dates → Flights → Hotels → Activities (step-by-step with confirmations) |
| "Book Paris Apr 5-12" | Clear params; multi-step task | Expert | Inline Widget Sequence: flight options → hotel options → quick actions (Book All, Compare) |
| (No query, morning 7 AM) | Time: 7:00 AM; workday; commute pattern | Any | Proactive Card: Today's Overview with weather, first meeting, traffic, budget reminder |
⚠️ Important: This matrix shows typical examples, not deterministic rules. The LLM considers ALL context signals simultaneously and may choose different UI forms based on factors not shown here (user preferences, recent history, device type, etc.).

UI Decision Layer Architecture

The LLM-based UI Decision Layer is the brain of Fidus's adaptive interface. It receives context, consults the Component Registry via RAG, and returns structured UI decisions—NOT hardcoded rules.

How It Works

1️⃣ Context Gathering

System collects: user query, time, location, calendar state, budget status, user expertise level, device type, recent interactions

2️⃣ LLM + RAG Decision

LLM receives context + Component Registry (via RAG). It analyzes signals and selects optimal UI form from available components

3️⃣ Structured Response

LLM returns JSON schema with UI form type, component name, pre-filled data, and rendering instructions

Component Registry (RAG Knowledge Base)

The Component Registry is a structured knowledge base that the LLM queries via RAG to understand available UI components, their use cases, and required props.

Example: Component Registry Entry
{
  "componentName": "OpportunityCard",
  "description": "Proactive card for time-sensitive opportunities",
  "whenToUse": [
    "Urgent alerts (budget exceeded, calendar conflict)",
    "Time-sensitive suggestions (flight soon, no hotel)",
    "Pattern-based recommendations (recurring expense detected)"
  ],
  "requiredProps": {
    "urgency": "urgent | important | normal",
    "title": "string",
    "description": "string",
    "actions": "Array<{ label: string, action: string }>"
  },
  "examples": [
    "Budget alert: 95% spent, 3 days left in month",
    "Travel reminder: Flight tomorrow at 9am, no hotel booked"
  ]
}
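The retrieval step over such entries can be sketched as follows; keyword scoring here is a stand-in for a real embedding-based RAG lookup, and the function name is invented:

```typescript
// Keyword scoring stands in for an embedding-based RAG retrieval.
interface RegistryEntry {
  componentName: string;
  whenToUse: string[];
}

function retrieveCandidates(
  entries: RegistryEntry[],
  contextTerms: string[],
  k = 2
): RegistryEntry[] {
  // Score each entry by how many context terms its use cases mention.
  const score = (e: RegistryEntry) =>
    contextTerms.filter(t =>
      e.whenToUse.some(w => w.toLowerCase().includes(t.toLowerCase()))
    ).length;
  return [...entries].sort((a, b) => score(b) - score(a)).slice(0, k);
}
```

Only the top-k entries would be injected into the LLM prompt, keeping the Component Registry scalable without bloating context.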

Example: LLM Prompt with Context

System Prompt to LLM
You are the UI Decision Layer for Fidus, a privacy-first AI assistant.

TASK: Analyze the user query and context, then decide which UI form to render.

CONTEXT:
- User Query: "Show my budget"
- Time: 2024-10-28 14:30 (Monday afternoon, work hours)
- User Level: Intermediate (used Fidus for 3 months)
- Budget Status:
  - Category: Food & Dining
  - Limit: 1000 EUR/month
  - Spent: 950 EUR (95%)
  - Days Left: 3
  - Trend: 15% over average daily spending
- Device: Mobile (iPhone)
- Recent Activity: User checked budget 3 times today

AVAILABLE COMPONENTS (from Component Registry):
1. Text Response - Simple text answer
2. OpportunityCard - Urgent/proactive alerts with actions
3. BudgetWidget - Visual budget display with chart
4. DetailCard - Detailed breakdown with sections

DECISION CRITERIA:
- Urgency: Is this time-sensitive?
- Data Complexity: Simple vs complex data?
- User Expertise: Beginner (explain) vs Expert (concise)?
- Context: Does context suggest specific UI form?

Respond with structured JSON UI decision.

Example: LLM Response (Structured JSON)

LLM Output
{
  "uiDecision": {
    "reasoning": "User is at 95% of budget with only 3 days left, AND has checked budget 3 times today, indicating concern. This is urgent and actionable. OpportunityCard is best fit.",

    "selectedComponent": "OpportunityCard",

    "props": {
      "urgency": "urgent",
      "title": "Budget Alert: Food & Dining",
      "description": "You've spent 950 EUR of 1000 EUR (95%). Only 3 days left in October.",
      "visualData": {
        "progress": 95,
        "breakdown": [
          { "category": "Restaurants", "amount": 450, "percentage": 47 },
          { "category": "Groceries", "amount": 350, "percentage": 37 },
          { "category": "Coffee/Snacks", "amount": 150, "percentage": 16 }
        ]
      },
      "actions": [
        { "label": "View Transactions", "action": "navigate:transactions?category=food", "variant": "primary" },
        { "label": "Adjust Budget", "action": "navigate:budget/edit", "variant": "secondary" }
      ],
      "dismissible": true
    },

    "alternativeConsidered": "BudgetWidget was considered, but OpportunityCard better conveys urgency and provides immediate actions."
  }
}

Key Architecture Principles

✅ What the LLM Does

  • Analyzes ALL context signals simultaneously
  • Queries Component Registry via RAG
  • Reasons about urgency, complexity, user level
  • Selects optimal component from registry
  • Pre-fills props with context data
  • Provides reasoning for decision (explainability)

❌ What the LLM Does NOT Do

  • Follow hardcoded if/else rules
  • Use predetermined time-based triggers
  • Ignore context in favor of defaults
  • Generate UI components from scratch
  • Make decisions without Component Registry
  • Skip reasoning/explainability

Key Examples: See AI-Driven UI in Action

These are the most impactful examples that demonstrate the paradigm shift from traditional to AI-driven UI. Each example shows how the same user intent produces different interfaces based on context.

🌅 1. A Day with Fidus — Interactive Phone Demo

Watch realistic scenarios throughout a day where Fidus adapts its UI based on time, context, and user activity. The phone demo cycles through multiple scenarios automatically, showing budget queries, calendar conflicts, travel bookings, and proactive suggestions—all rendered differently based on context.

What you'll see:
  • Same query ("Show budget") → different UIs (text vs urgent card)
  • Flight cards with primary/secondary button hierarchy
  • Form filling animation with progressive field reveal
  • Booking confirmation with swipeable dismissal
  • Dashboard showing the completed booking
🎯 2. Context-Driven UI Rendering Pattern

The foundation of AI-driven UI: the LLM analyzes context signals (time, urgency, data complexity, user expertise) and dynamically selects the UI form. See the budget query example showing text response vs OpportunityCard based on budget status and timing.

Key insight: The same user query produces completely different UIs:
  • Mid-month, 66% spent → Simple text: "You're on track!"
  • End-month, 95% spent → Urgent card with actions and breakdown
📝 3. Adaptive Form Complexity Pattern

Forms adapt to user expertise and intent clarity. Expert users with complete parameters get quick pre-filled forms, while beginners with vague requests get conversational wizards. The budget creation example shows both extremes.

The contrast:
  • Expert: "Create food budget 500 EUR monthly" → Quick form, one click
  • Beginner: "I want to save money" → Guided conversation with options
🧠 4. UI Form Decision Matrix — LLM Decision Mapping

A comprehensive table showing 10 real-world scenarios where user query + context signals + user level map to specific UI forms. This matrix illustrates the complexity of context-aware UI decisions and emphasizes that these are examples, not deterministic rules.

Why it matters: Shows that AI-driven UI decisions consider multiple factors simultaneously, not just one rule like "if morning then show weather." The LLM weighs urgency, user level, data complexity, and context to choose the optimal UI form.
💡 Pro tip: Start with the Interactive Phone Demo to see the paradigm in action, then dive into specific patterns to understand the underlying principles. The Decision Matrix ties everything together with concrete examples.

Implementation Architecture

How do you actually build an AI-driven UI system? This section provides developer guidance on designing, implementing, and extending the context-adaptive interface architecture.

What Gets Designed & Built

1. Component Registry

A structured knowledge base (JSON/YAML) documenting all available UI components:

  • Component name and description
  • When to use (conditions)
  • Required props and types
  • Example scenarios
  • Visual examples (screenshots)
/registry/components/
├─ OpportunityCard.json
├─ BudgetWidget.json
└─ DetailCard.json

2. Domain Context Schemas

TypeScript/Zod schemas defining context structure for each domain:

  • • User state (expertise level, preferences)
  • • Domain data (budget, calendar, travel)
  • • Temporal context (time, location)
  • • Recent activity history
  • • Device and platform info
/schemas/context/
├─ BudgetContext.ts
├─ CalendarContext.ts
└─ UserContext.ts

3. UI Decision Prompts

System prompts for the LLM that include:

  • • Role definition (UI Decision Layer)
  • • Decision criteria (urgency, complexity, etc.)
  • • Component Registry reference (RAG)
  • • Output format (structured JSON schema)
  • • Examples of good decisions
/prompts/
├─ ui-decision-layer.md
└─ component-registry-rag.md

4. Response Templates

Zod schemas for LLM response validation:

  • • UI decision structure
  • • Component name (enum)
  • • Props (typed by component)
  • • Reasoning field (explainability)
  • • Alternatives considered
/schemas/responses/
├─ UIDecisionSchema.ts
└─ ComponentPropsSchema.ts
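A dependency-free sketch of the checks such a response schema encodes (the text above specifies Zod; this plain-TypeScript guard only illustrates the same contract, and the component names are taken from the prompt example):

```typescript
// Plain-TS stand-in for the Zod validation described above; shapes are illustrative.
interface UIDecisionPayload {
  selectedComponent: string;
  reasoning: string;
  props: Record<string, unknown>;
}

const KNOWN_COMPONENTS = new Set([
  "TextResponse", "OpportunityCard", "BudgetWidget", "DetailCard",
]);

function validateDecision(raw: unknown): UIDecisionPayload {
  const d = raw as Partial<UIDecisionPayload>;
  if (typeof d?.selectedComponent !== "string" || !KNOWN_COMPONENTS.has(d.selectedComponent)) {
    throw new Error("Unknown component: the LLM may only pick from the registry");
  }
  if (typeof d.reasoning !== "string" || d.reasoning.length === 0) {
    throw new Error("Reasoning is required for explainability");
  }
  return { selectedComponent: d.selectedComponent, reasoning: d.reasoning, props: d.props ?? {} };
}
```

Rejecting unknown component names at the boundary is what keeps a hallucinated LLM output from ever reaching the renderer.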

Where to Work in the Codebase

🎯 Core Supervisor (Orchestration)
packages/api/fidus/domain/
└─ orchestration/
Orchestration supervisor (core, built-in) receives user query, gathers context, calls UI Decision Layer (LLM), and returns structured UI response with component + props.
🔌 External Domain Supervisors (MCP)
External MCP Servers
(added by admin at runtime)
Domain supervisors (Calendar, Finance, Travel, etc.) are external MCP servers. Admin adds/removes them at runtime via MCP plugin system—not hardcoded in core.
🤖 UI Decision Agent (LLM Layer)
packages/api/fidus/agents/
└─ ui_decision_agent.py
LLM agent that receives context + Component Registry (RAG), applies decision criteria, returns structured JSON with component + props.
🎨 Dynamic UI Renderer (Frontend)
packages/web/components/
└─ dynamic-renderer.tsx
Dynamic renderer receives UI decision JSON, validates props, renders the selected component with pre-filled data.
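The renderer's dispatch step can be sketched as a lookup table, with React swapped for string output so the idea stays runnable in isolation; registry contents here are illustrative:

```typescript
// React swapped for string output to keep the sketch self-contained.
type Renderer = (props: Record<string, unknown>) => string;

const rendererRegistry: Record<string, Renderer> = {
  TextResponse: p => String(p.message ?? ""),
  OpportunityCard: p => `[${p.urgency}] ${p.title}`,
};

function renderDecision(component: string, props: Record<string, unknown>): string {
  const render = rendererRegistry[component];
  // Graceful fallback: never crash on a component the frontend doesn't know yet.
  if (!render) return `Unsupported component: ${component}`;
  return render(props);
}
```

In the real `dynamic-renderer.tsx`, each entry would return JSX and props would be validated first, but the dispatch shape is the same.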
💡 Core vs. External Architecture: Fidus has a minimal core (Orchestration + Proactivity supervisors) and external domain supervisors (Calendar, Finance, Travel) that are MCP servers added dynamically by admins. This allows extending Fidus with new domains without modifying core code.

Adding New UI Components

  1. Build the Component
     Create the React component in packages/web/components/ or add it to @fidus/ui if reusable.
  2. Document in Component Registry
     Create a JSON entry with: name, description, whenToUse, requiredProps, examples. This becomes RAG knowledge for the LLM.
  3. Add to Dynamic Renderer
     Update DynamicRenderer.tsx to handle the new component name and validate its props with a Zod schema.
  4. Test with Context Variations
     Write tests that provide different contexts to the UI Decision Layer and verify it selects the new component when appropriate.
  5. Update Documentation
     Add an example to this page under the relevant pattern section, showing when the LLM would choose this component.

Do's and Don'ts

Do

  • Let the LLM decide: Provide context and Component Registry, let LLM choose UI form based on reasoning.
  • Use structured outputs: Validate LLM responses with Zod schemas to ensure type-safe component rendering.
  • Document components thoroughly: Component Registry is LLM's knowledge base—clear docs = better decisions.
  • Test context variations: Test same query with different contexts to verify adaptive behavior.
  • Track LLM reasoning: Log reasoning field from UI decisions for debugging and improving prompts.

Don't

  • Hardcode UI logic: Avoid if (morning) show weather—let LLM decide based on full context.
  • Skip Component Registry: LLM can't choose components it doesn't know about via RAG.
  • Use auto-hide timers: User controls dismissal—never setTimeout(hide, 3000).
  • Create fixed screens: No CalendarScreen.tsx—create CalendarWidget.tsx that appears contextually.
  • Ignore reasoning field: LLM reasoning helps debug bad decisions and improve prompts over time.