AI-Driven UI Paradigm
Fidus doesn't have fixed screens or predetermined flows. Instead, an LLM analyzes context in real time and dynamically decides what interface to render—creating situational UI that adapts to each moment.
No Fixed Screens
The same user query can produce different UIs based on context, time, location, and user expertise.
LLM Decides UI Form
Chat, form, widget, wizard—the LLM chooses the best interface pattern for each situation.
Context-Adaptive
UI complexity adapts to user expertise, urgency, available time, and current activity.
A Day with Fidus
Watch how the same AI assistant dynamically adapts its interface throughout the day based on context, user input, and real-time signals—without any fixed screens or predetermined flows.
Traditional vs AI-Driven UI
Understanding the fundamental difference between predetermined interfaces and context-adaptive UI.
| Aspect | Traditional UI | AI-Driven UI (Fidus) |
|---|---|---|
| Navigation | Fixed screens, user navigates to features | No fixed screens, features surface contextually |
| UI Decisions | Hardcoded in JavaScript: if morning → show weather | LLM analyzes context and decides UI form |
| Form Complexity | Same form for all users (expert and beginner) | Adaptive: chat for beginners, quick form for experts |
| Dashboard | Static widgets, same for everyone | Opportunity Surface with dynamic, context-relevant cards |
| User Control | Auto-hide notifications after X seconds | User dismisses (swipe/X), no auto-hide |
| Response Format | Predetermined: "Create Budget" always shows form | Context-dependent: text, form, widget, or wizard |
| Proactivity | Rule-based: if Friday 7am → show coffee budget alert | Signal-based: LLM detects patterns and suggests |
| Extensibility | New UI requires code changes and deployment | New components added to registry, LLM learns via RAG |
New UI Patterns in AI-Driven Systems
Fidus introduces 8 novel interaction patterns that emerge from context-aware, LLM-driven UI decisions. These patterns are fundamentally different from traditional UI paradigms.
1. Context-Driven UI Rendering
The LLM analyzes context (time, user history, data complexity, urgency) and dynamically selects the UI form—text response, widget, form, or wizard—at runtime. The same query produces different UIs in different contexts.
Example: "Show my budget"
Context 1: Stable Mid-Month
User has stable spending, mid-month, no concerns
"Your October budget: 660 EUR spent of 1000 EUR (66%). You're on track!"
Context 2: Near Limit, End-Month
User is at 95% of budget, 3 days left in month
Shows visual progress bar, breakdown by category, and action buttons: "View Transactions", "Adjust Budget"
- Urgency: High → OpportunityCard with visual emphasis
- Data complexity: Large datasets → interactive widgets
- User expertise: Beginner → conversational flow; Expert → quick form
- Time context: Morning vs evening affects UI presentation
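The budget example above can be made concrete as data: the same query carries different context signals, and the decision layer returns a different structured choice for each. The sketch below is illustrative—the type names are assumptions, not the actual Fidus schema, and the final branch stands in for the LLM call, which in the real system weighs all signals rather than following a fixed rule.

```typescript
// Illustrative sketch: context signals for "Show my budget" and the
// structured decision returned for each situation. Names and shapes are
// assumptions, not the actual Fidus schema.
interface BudgetUiContext {
  query: string;
  percentSpent: number;    // share of the monthly budget already used
  daysLeftInMonth: number;
  userLevel: "beginner" | "intermediate" | "expert";
}

interface UiDecision {
  selectedComponent: "TextResponse" | "OpportunityCard";
  reasoning: string;
}

// Context 1: stable mid-month -> a plain text answer is the likely pick.
const stableMidMonth: BudgetUiContext = {
  query: "Show my budget",
  percentSpent: 66,
  daysLeftInMonth: 16,
  userLevel: "intermediate",
};

// Context 2: near the limit at end of month -> an urgent card is likelier.
const nearLimit: BudgetUiContext = {
  ...stableMidMonth,
  percentSpent: 95,
  daysLeftInMonth: 3,
};

// Stand-in for the LLM call: a real system sends the context plus the
// Component Registry to the model and parses its JSON reply instead of
// branching on a hardcoded threshold like this.
function decisionFor(ctx: BudgetUiContext): UiDecision {
  if (ctx.percentSpent >= 90 && ctx.daysLeftInMonth <= 5) {
    return { selectedComponent: "OpportunityCard", reasoning: "near limit, little time left" };
  }
  return { selectedComponent: "TextResponse", reasoning: "on track, no urgency" };
}
```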
2. Contextual Opportunity Surfacing (Proactive Cards)
The system proactively detects opportunities based on signals (calendar events, budget thresholds, weather patterns) and surfaces relevant, dismissible cards on the Opportunity Surface (Dashboard). No fixed widgets—only context-relevant cards.
Examples of Opportunity Cards
Budget Alert
Detected: 95% of food budget spent, 3 days left in month
Travel Booking
You have a flight to Berlin tomorrow, but no hotel booked
Recurring Expense
Coffee purchases every Monday at 9am—create recurring budget?
- User controls dismissal: Swipe or X button, never auto-hide timers
- Signal-based: Triggered by data signals, not hardcoded rules
- Urgency levels: Urgent (red), Suggestion (blue), Pattern (neutral)
- Actionable: Always include 1-2 action buttons for immediate response
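The "user controls dismissal" principle can be sketched as a tiny surface store: cards only ever leave via an explicit dismiss call, and there is deliberately no timer anywhere. The class and field names are illustrative, not the actual Fidus API.

```typescript
// Minimal sketch of an Opportunity Surface honoring user-controlled
// dismissal. Cards leave only via dismiss(); there is no setTimeout.
type Urgency = "urgent" | "important" | "normal";

interface OpportunityCard {
  id: string;
  urgency: Urgency;
  title: string;
  actions: { label: string; action: string }[]; // always 1-2 actions
}

class OpportunitySurface {
  private cards = new Map<string, OpportunityCard>();

  surface(card: OpportunityCard): void {
    this.cards.set(card.id, card); // deliberately no auto-hide timer here
  }

  dismiss(id: string): void {
    this.cards.delete(id); // triggered only by swipe or the X button
  }

  visible(): OpportunityCard[] {
    return Array.from(this.cards.values());
  }
}
```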
3. Adaptive Form Complexity
Instead of one-size-fits-all forms, the LLM adapts form complexity based on user expertise and intent clarity. Expert users with clear intent get quick forms; beginners get conversational wizards.
Example: Budget Creation
Expert User, Clear Intent
Query: "Create food budget 500 EUR monthly"
Beginner User, Unclear Intent
Query: "I want to save money"
- User expertise: Track past interactions to classify as beginner/intermediate/expert
- Intent clarity: Parse query for completeness (all fields vs partial)
- Task complexity: Simple tasks → form, complex → wizard, ambiguous → chat
- User preference: Learn if user prefers forms vs conversation over time
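The "intent clarity" signal can be precomputed before the LLM is consulted: check whether a budget-creation query already carries category, amount, and period. This result is one input to the decision, not the decision itself. The heuristics and category list below are illustrative assumptions.

```typescript
// Sketch: extract an intent-completeness signal from a budget query.
// Categories, regexes, and field names are illustrative assumptions.
const KNOWN_CATEGORIES = ["food", "travel", "rent", "transport"];

interface IntentSignal {
  category?: string;
  amountEur?: number;
  period?: string;
  complete: boolean; // all three fields present -> quick form is plausible
}

function intentCompleteness(query: string): IntentSignal {
  const q = query.toLowerCase();
  const category = KNOWN_CATEGORIES.find((c) => q.includes(c));
  const amountMatch = q.match(/(\d+)\s*eur/);   // e.g. "500 EUR"
  const periodMatch = q.match(/month|week|year/); // e.g. "monthly"
  return {
    category,
    amountEur: amountMatch ? Number(amountMatch[1]) : undefined,
    period: periodMatch ? periodMatch[0] : undefined,
    complete: Boolean(category && amountMatch && periodMatch),
  };
}
```

An expert query like "Create food budget 500 EUR monthly" comes back complete, pointing toward a pre-filled quick form; "I want to save money" comes back empty, pointing toward the conversational wizard.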
4. Dynamic Search & Filtering (Context-Based Filters)
Instead of showing all possible filter options upfront, the LLM suggests relevant filters based on context, query intent, and data distribution. Filters adapt to what's actually useful right now.
Example: Transaction Search
Query: "Show my transactions"
No specific context
Query: "Why is my food budget high?"
Specific category context
- Query context: Extract category, time, amount hints from user query
- Data distribution: Suggest filters that actually narrow results meaningfully
- User patterns: Learn frequently-used filter combinations
- Progressive disclosure: Show 3-4 relevant filters first, "More filters" for rest
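A minimal sketch of these criteria, assuming hypothetical transaction and filter shapes: filters come from the query and from the data's actual distribution, capped for progressive disclosure. Thresholds and labels are illustrative, not Fidus's real logic.

```typescript
// Sketch: derive a short list of contextually useful filters instead of
// showing every option up front. All thresholds are illustrative.
interface Txn { amountEur: number; category: string; categorized: boolean }

function suggestFilters(query: string, txns: Txn[]): string[] {
  const filters: string[] = [];
  const q = query.toLowerCase();

  // Query context: a mentioned category becomes a pre-applied filter.
  const cat = ["food", "travel", "transport"].find((c) => q.includes(c));
  if (cat) filters.push(`Category: ${cat}`);

  // Data distribution: only suggest filters that narrow results meaningfully.
  const avg = txns.reduce((s, t) => s + t.amountEur, 0) / Math.max(txns.length, 1);
  if (txns.some((t) => t.amountEur > 2 * avg)) filters.push("Large Amounts");
  if (txns.some((t) => !t.categorized)) filters.push("Uncategorized");

  // Progressive disclosure: at most four filters up front.
  return filters.slice(0, 4);
}
```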
5. Generated Form Inputs (LLM-Suggested Fields)
The LLM analyzes user intent and suggests not just form fields, but also pre-filled values, smart defaults, and contextually-relevant field options based on history and patterns.
Example: Appointment Creation
Query: "Schedule dentist appointment"
- Repeat actions: Pre-fill based on previous similar entries
- Smart defaults: Analyze patterns (e.g., dentist appointments usually 60 min)
- Option generation: Create relevant options from calendar, contacts, locations
- Validation: LLM can catch inconsistencies (e.g., appointment duration vs type)
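The "smart defaults from history" idea can be sketched as a small helper: for "Schedule dentist appointment", take the most common duration and the most recent location from past dentist visits. The data shape and field names are hypothetical.

```typescript
// Sketch: derive pre-fill defaults from previous similar entries.
// History shape and field names are illustrative assumptions.
interface PastAppointment { kind: string; durationMin: number; location: string }

function smartDefaults(kind: string, history: PastAppointment[]) {
  const similar = history.filter((a) => a.kind === kind);
  if (similar.length === 0) return undefined; // nothing to pre-fill

  // Most common duration among similar entries.
  const counts = new Map<number, number>();
  for (const a of similar) counts.set(a.durationMin, (counts.get(a.durationMin) ?? 0) + 1);
  const durationMin = Array.from(counts.entries()).sort((x, y) => y[1] - x[1])[0][0];

  // Most recent location for that appointment type.
  const location = similar[similar.length - 1].location;
  return { durationMin, location };
}
```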
6. Smart Action Buttons (Generated from Context)
Action buttons aren't hardcoded—the LLM generates contextually relevant actions based on data state, user permissions, and current context. Same data entity shows different actions in different situations.
Example: Calendar Event Actions
Context: Upcoming Event (2 hours away)
Context: Past Event (completed yesterday)
- Time-dependent: Past events → review actions, upcoming → preparation actions
- State-dependent: Pending budget → approve/reject, exceeded → adjust/view details
- Permission-aware: Only show actions user has permission to execute
- Context-specific: On mobile → "Get Directions", on desktop → "Open in Maps"
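The time-dependence criterion can be precomputed like this sketch; in Fidus the LLM makes the final action selection, and this only derives the phase it would reason over. Action labels and the event shape are illustrative.

```typescript
// Sketch: time-dependent action candidates for a calendar event.
// In the real system the LLM picks the final set; labels are illustrative.
interface CalendarEvent { title: string; startMs: number; endMs: number }

function actionCandidates(event: CalendarEvent, nowMs: number): string[] {
  if (nowMs > event.endMs) {
    // Past events get review-style actions.
    return ["Add Notes", "Log Expense", "Schedule Follow-up"];
  }
  // Upcoming events get preparation-style actions.
  return ["Get Directions", "Review Agenda", "Reschedule"];
}
```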
7. Progressive Disclosure (Based on User Expertise)
Information and options are revealed progressively based on user expertise level and interaction patterns. Beginners see simplified views; experts see advanced options immediately.
Example: Budget Configuration
Beginner View
Expert View
- User classification: Track feature usage to classify beginner/intermediate/expert
- Essential first: Always show 3-5 most important options immediately
- Contextual expansion: "Show Advanced" only appears if relevant
- Learn preferences: If user always expands, start showing advanced by default
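A possible shape for the expertise signal behind progressive disclosure, with explicitly made-up thresholds: classify from interaction history, and let habitual "Show Advanced" use flip the default.

```typescript
// Sketch: classify user level from usage stats as a progressive-disclosure
// signal. All thresholds and field names are illustrative assumptions.
type Level = "beginner" | "intermediate" | "expert";

interface UsageStats {
  sessions: number;
  advancedFeaturesUsed: number;    // e.g. rollover rules, custom categories
  advancedPanelExpansions: number; // how often "Show Advanced" was opened
}

function classifyUser(u: UsageStats): Level {
  if (u.sessions < 5) return "beginner";
  if (u.advancedFeaturesUsed >= 3 || u.advancedPanelExpansions >= 5) return "expert";
  return "intermediate";
}

// "Learn preferences": users who always expand start with advanced open.
const startExpanded = (u: UsageStats): boolean => classifyUser(u) === "expert";
```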
8. Temporal UI (Time-Sensitive Elements)
UI elements that appear, transform, or disappear based on time context. Not rule-based ("every morning show X"), but LLM-driven relevance decisions that consider time as one of many signals.
Examples of Time-Context UI Changes
Today's Overview Card
- Weather for commute
- First appointment time
- Traffic conditions
- Breakfast budget reminder
Focus Mode Active
- Next meeting countdown
- Urgent tasks only
- Minimal notifications
- Quick expense logging
Reflection & Planning
- Today's spending summary
- Tomorrow's agenda
- Uncategorized transactions
- Meal planning suggestions
- LLM decides relevance: Time is ONE signal, not the only signal
- User patterns matter: Night owl users don't get morning cards at 7am
- Override capability: User can always request any information regardless of time
- Examples not rules: Document as "might show" not "always shows"
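The "time is one signal" principle can be sketched as a relevance score where time-of-day fit is merely one weighted term, so a night owl's morning card stays below the display threshold. The weights and threshold are illustrative, not Fidus's actual scoring.

```typescript
// Sketch: card relevance as a weighted sum of signals, where time-of-day
// fit is only one contributor. Weights and threshold are illustrative.
interface TemporalSignals {
  hourMatchesCard: boolean;      // e.g. it is morning and this is a morning card
  userActiveAtThisHour: boolean; // learned activity pattern
  urgent: boolean;
}

function cardRelevance(s: TemporalSignals): number {
  let score = 0;
  if (s.hourMatchesCard) score += 0.3;      // time contributes...
  if (s.userActiveAtThisHour) score += 0.4; // ...but user patterns weigh more
  if (s.urgent) score += 0.5;               // and urgency can override both
  return score;
}

const SHOW_THRESHOLD = 0.5;
```

With these weights, a 7am morning card scores 0.7 for a user who is actually active then, but only 0.3 for a night owl, so it is not shown.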
UI Form Decision Matrix
The LLM chooses UI form based on multiple context signals. This matrix shows typical mappings, but remember: these are examples, not rules. The LLM weighs all factors dynamically.
| User Query | Context Signals | User Level | LLM Decision → UI Form |
|---|---|---|---|
| "Show my budget" | • Mid-month • 66% spent • On track | Any | Text Response "Your October budget: 660 EUR of 1000 EUR (66%). You're on track!" |
| "Show my budget" | • End of month • 95% spent • 3 days left | Any | OpportunityCard Visual card with urgency indicator, progress bar, category breakdown, actions |
| "Create food budget 500 EUR monthly" | • Complete intent • All params present | Expert | Quick Form (pre-filled) Form with all fields pre-filled from query, one-click submit |
| "I want to save money" | • Vague intent • Missing params | Beginner | Conversational Wizard Step-by-step guided conversation with option buttons |
| "Schedule dentist" | • Previous visits exist • Calendar available | Intermediate | Form with Smart Suggestions Form with location from history, duration from past visits, suggested time slots |
| "Show my transactions" | • No specific query • Large dataset | Any | Widget with Dynamic Filters Transaction list with LLM-suggested filters (This Month, Large Amounts, Uncategorized) |
| "Why is my food budget high?" | • Specific category • Analysis needed | Any | Widget with Context Filters Transaction widget pre-filtered to Food category, Last 7 Days, Top Merchants |
| "Plan trip to Paris" | • Multi-step task • Many decisions | Beginner | Multi-Step Wizard Dates → Flights → Hotels → Activities (step-by-step with confirmations) |
| "Book Paris Apr 5-12" | • Clear params • Multi-step task | Expert | Inline Widget Sequence Flight options → Hotel options → Quick actions (Book All, Compare) |
| (No query - Morning 7am) | • Time: 7:00 AM • Workday • Commute pattern | Any | Proactive Card Today's Overview: Weather, first meeting, traffic, budget reminder |
UI Decision Layer Architecture
The LLM-based UI Decision Layer is the brain of Fidus's adaptive interface. It receives context, consults the Component Registry via RAG, and returns structured UI decisions—NOT hardcoded rules.
How It Works
Context Gathering
System collects: user query, time, location, calendar state, budget status, user expertise level, device type, recent interactions
LLM + RAG Decision
LLM receives context + Component Registry (via RAG). It analyzes signals and selects optimal UI form from available components
Structured Response
LLM returns JSON schema with UI form type, component name, pre-filled data, and rendering instructions
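The three steps above can be sketched as one function: gather context, call the LLM (injected here so the sketch stays self-contained and testable), and accept only a structured decision whose component actually exists in the registry. All names are illustrative assumptions.

```typescript
// Sketch of the decision pipeline. The LLM call is injected as a function
// so the example is self-contained; names are illustrative assumptions.
interface RegistryEntry { componentName: string; whenToUse: string[] }

interface UiDecision {
  selectedComponent: string;
  reasoning: string; // explainability is part of the contract
  props: Record<string, unknown>;
}

type LlmCall = (context: object, registry: RegistryEntry[]) => UiDecision;

function decideUi(context: object, registry: RegistryEntry[], llm: LlmCall): UiDecision {
  // Step 2: LLM + RAG over the Component Registry.
  const decision = llm(context, registry);
  // Step 3: only components known to the registry may be rendered.
  if (!registry.some((e) => e.componentName === decision.selectedComponent)) {
    throw new Error(`Unknown component: ${decision.selectedComponent}`);
  }
  return decision;
}
```

In use, a real model client replaces the stub; rejecting unknown component names is what keeps the LLM from "generating UI from scratch".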
Component Registry (RAG Knowledge Base)
The Component Registry is a structured knowledge base that the LLM queries via RAG to understand available UI components, their use cases, and required props.
{
"componentName": "OpportunityCard",
"description": "Proactive card for time-sensitive opportunities",
"whenToUse": [
"Urgent alerts (budget exceeded, calendar conflict)",
"Time-sensitive suggestions (flight soon, no hotel)",
"Pattern-based recommendations (recurring expense detected)"
],
"requiredProps": {
"urgency": "urgent | important | normal",
"title": "string",
"description": "string",
"actions": "Array<{ label: string, action: string }>"
},
"examples": [
"Budget alert: 95% spent, 3 days left in month",
"Travel reminder: Flight tomorrow at 9am, no hotel booked"
]
}
Example: LLM Prompt with Context
You are the UI Decision Layer for Fidus, a privacy-first AI assistant.
TASK: Analyze the user query and context, then decide which UI form to render.
CONTEXT:
- User Query: "Show my budget"
- Time: 2024-10-28 14:30 (Monday afternoon, work hours)
- User Level: Intermediate (used Fidus for 3 months)
- Budget Status:
- Category: Food & Dining
- Limit: 1000 EUR/month
- Spent: 950 EUR (95%)
- Days Left: 3
- Trend: 15% over average daily spending
- Device: Mobile (iPhone)
- Recent Activity: User checked budget 3 times today
AVAILABLE COMPONENTS (from Component Registry):
1. Text Response - Simple text answer
2. OpportunityCard - Urgent/proactive alerts with actions
3. BudgetWidget - Visual budget display with chart
4. DetailCard - Detailed breakdown with sections
DECISION CRITERIA:
- Urgency: Is this time-sensitive?
- Data Complexity: Simple vs complex data?
- User Expertise: Beginner (explain) vs Expert (concise)?
- Context: Does context suggest specific UI form?
Respond with structured JSON UI decision.
Example: LLM Response (Structured JSON)
{
"uiDecision": {
"reasoning": "User is at 95% of budget with only 3 days left, AND has checked budget 3 times today, indicating concern. This is urgent and actionable. OpportunityCard is best fit.",
"selectedComponent": "OpportunityCard",
"props": {
"urgency": "urgent",
"title": "Budget Alert: Food & Dining",
"description": "You've spent 950 EUR of 1000 EUR (95%). Only 3 days left in October.",
"visualData": {
"progress": 95,
"breakdown": [
{ "category": "Restaurants", "amount": 450, "percentage": 47 },
{ "category": "Groceries", "amount": 350, "percentage": 37 },
{ "category": "Coffee/Snacks", "amount": 150, "percentage": 16 }
]
},
"actions": [
{ "label": "View Transactions", "action": "navigate:transactions?category=food", "variant": "primary" },
{ "label": "Adjust Budget", "action": "navigate:budget/edit", "variant": "secondary" }
],
"dismissible": true
},
"alternativeConsidered": "BudgetWidget was considered, but OpportunityCard better conveys urgency and provides immediate actions."
}
}
Key Architecture Principles
✅ What the LLM Does
- Analyzes ALL context signals simultaneously
- Queries Component Registry via RAG
- Reasons about urgency, complexity, user level
- Selects optimal component from registry
- Pre-fills props with context data
- Provides reasoning for decision (explainability)
❌ What the LLM Does NOT Do
- Follow hardcoded if/else rules
- Use predetermined time-based triggers
- Ignore context in favor of defaults
- Generate UI components from scratch
- Make decisions without Component Registry
- Skip reasoning/explainability
Key Examples: See AI-Driven UI in Action
These are the most impactful examples that demonstrate the paradigm shift from traditional to AI-driven UI. Each example shows how the same user intent produces different interfaces based on context.
1. A Day with Fidus — Interactive Phone Demo
Watch realistic scenarios throughout a day where Fidus adapts its UI based on time, context, and user activity. The phone demo cycles through multiple scenarios automatically, showing budget queries, calendar conflicts, travel bookings, and proactive suggestions—all rendered differently based on context.
- Same query ("Show budget") → different UIs (text vs urgent card)
- Flight cards with primary/secondary button hierarchy
- Form filling animation with progressive field reveal
- Booking confirmation with swipeable dismissal
- Dashboard showing the completed booking
2. Context-Driven UI Rendering Pattern
The foundation of AI-driven UI: the LLM analyzes context signals (time, urgency, data complexity, user expertise) and dynamically selects the UI form. See the budget query example showing text response vs OpportunityCard based on budget status and timing.
- Mid-month, 66% spent → Simple text: "You're on track!"
- End-month, 95% spent → Urgent card with actions and breakdown
3. Adaptive Form Complexity Pattern
Forms adapt to user expertise and intent clarity. Expert users with complete parameters get quick pre-filled forms, while beginners with vague requests get conversational wizards. The budget creation example shows both extremes.
- Expert: "Create food budget 500 EUR monthly" → Quick form, one click
- Beginner: "I want to save money" → Guided conversation with options
4. UI Form Decision Matrix — LLM Decision Mapping
A comprehensive table showing 10 real-world scenarios where user query + context signals + user level map to specific UI forms. This matrix illustrates the complexity of context-aware UI decisions and emphasizes that these are examples, not deterministic rules.
Implementation Architecture
How do you actually build an AI-driven UI system? This section provides developer guidance on designing, implementing, and extending the context-adaptive interface architecture.
What Gets Designed & Built
1. Component Registry
A structured knowledge base (JSON/YAML) documenting all available UI components:
- Component name and description
- When to use (conditions)
- Required props and types
- Example scenarios
- Visual examples (screenshots)
├─ OpportunityCard.json
├─ BudgetWidget.json
└─ DetailCard.json
2. Domain Context Schemas
TypeScript/Zod schemas defining context structure for each domain:
- User state (expertise level, preferences)
- Domain data (budget, calendar, travel)
- Temporal context (time, location)
- Recent activity history
- Device and platform info
├─ BudgetContext.ts
├─ CalendarContext.ts
└─ UserContext.ts
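A possible shape for such schemas as plain TypeScript types (the section above pairs them with Zod for runtime validation); all field names here are illustrative assumptions, not the actual Fidus schemas.

```typescript
// Sketch: domain context schemas composed into the decision context.
// Field names are illustrative assumptions.
interface BudgetContext {
  category: string;
  limitEur: number;
  spentEur: number;
  daysLeftInMonth: number;
}

interface UserContext {
  level: "beginner" | "intermediate" | "expert";
  device: "mobile" | "desktop";
  recentQueries: string[];
}

// The full context handed to the UI Decision Layer composes the domains.
interface DecisionContext {
  user: UserContext;
  budget?: BudgetContext; // present only when the query touches budgets
  now: string;            // ISO timestamp
}

const example: DecisionContext = {
  user: { level: "intermediate", device: "mobile", recentQueries: ["Show my budget"] },
  budget: { category: "Food & Dining", limitEur: 1000, spentEur: 950, daysLeftInMonth: 3 },
  now: "2024-10-28T14:30:00Z",
};
```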
3. UI Decision Prompts
System prompts for the LLM that include:
- Role definition (UI Decision Layer)
- Decision criteria (urgency, complexity, etc.)
- Component Registry reference (RAG)
- Output format (structured JSON schema)
- Examples of good decisions
├─ ui-decision-layer.md
└─ component-registry-rag.md
4. Response Templates
Zod schemas for LLM response validation:
- UI decision structure
- Component name (enum)
- Props (typed by component)
- Reasoning field (explainability)
- Alternatives considered
├─ UIDecisionSchema.ts
└─ ComponentPropsSchema.ts
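A dependency-free sketch of what such validation enforces (the real templates use Zod): an LLM reply is rendered only if it parses into the expected shape, with a non-empty reasoning field and a component name drawn from the registry. The component list and names are illustrative.

```typescript
// Sketch: validate an LLM UI decision before rendering. The real codebase
// uses Zod; this hand-rolled check shows the same contract without a
// dependency. Names are illustrative.
const KNOWN_COMPONENTS = ["TextResponse", "OpportunityCard", "BudgetWidget", "DetailCard"];

interface ValidatedDecision {
  selectedComponent: string;
  reasoning: string;
  props: Record<string, unknown>;
}

function parseUiDecision(raw: unknown): ValidatedDecision | null {
  if (typeof raw !== "object" || raw === null) return null;
  const d = raw as Record<string, unknown>;
  if (typeof d.selectedComponent !== "string" || !KNOWN_COMPONENTS.includes(d.selectedComponent)) return null;
  if (typeof d.reasoning !== "string" || d.reasoning.length === 0) return null; // explainability is required
  if (typeof d.props !== "object" || d.props === null) return null;
  return d as unknown as ValidatedDecision;
}
```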
Where to Work in the Codebase
└─ orchestration/
└─ ui_decision_agent.py
└─ dynamic-renderer.tsx
Adding New UI Components
1. Build the Component: Create the React component in `packages/web/components/` or add it to `@fidus/ui` if reusable.
2. Document in Component Registry: Create a JSON entry with name, description, whenToUse, requiredProps, and examples. This becomes RAG knowledge for the LLM.
3. Add to Dynamic Renderer: Update `DynamicRenderer.tsx` to handle the new component name and validate its props with a Zod schema.
4. Test with Context Variations: Write tests that provide different contexts to the UI Decision Layer and verify it selects the new component when appropriate.
5. Update Documentation: Add an example to this page under the relevant pattern section, showing when the LLM would choose this component.
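Step 3's renderer can be sketched as a name-to-renderer map with an explicit fallback for unknown names. Render functions return strings here to keep the sketch framework-free (the real renderer returns React elements); all names are illustrative.

```typescript
// Sketch: dynamic renderer mapping validated component names to render
// functions, with a fallback for unknown names. Strings stand in for
// React elements; names are illustrative.
type Renderer = (props: Record<string, unknown>) => string;

const renderers = new Map<string, Renderer>([
  ["TextResponse", (p) => String(p.text ?? "")],
  ["OpportunityCard", (p) => `[card] ${String(p.title ?? "")}`],
]);

function renderDynamic(component: string, props: Record<string, unknown>): string {
  const render = renderers.get(component);
  if (!render) return "[fallback] Unsupported component"; // never crash the surface
  return render(props);
}

// Registering a new component is one line once it is documented in the registry:
renderers.set("BudgetWidget", (p) => `[budget] ${String(p.category ?? "")}`);
```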
Do's and Don'ts
✓ Do
- Let the LLM decide: Provide context and Component Registry, let LLM choose UI form based on reasoning.
- Use structured outputs: Validate LLM responses with Zod schemas to ensure type-safe component rendering.
- Document components thoroughly: Component Registry is LLM's knowledge base—clear docs = better decisions.
- Test context variations: Test same query with different contexts to verify adaptive behavior.
- Track LLM reasoning: Log reasoning field from UI decisions for debugging and improving prompts.
✗ Don't
- Hardcode UI logic: Avoid `if (morning) show weather`—let the LLM decide based on full context.
- Skip the Component Registry: The LLM can't choose components it doesn't know about via RAG.
- Use auto-hide timers: The user controls dismissal—never `setTimeout(hide, 3000)`.
- Create fixed screens: No `CalendarScreen.tsx`—create a `CalendarWidget.tsx` that appears contextually.
- Ignore the reasoning field: LLM reasoning helps debug bad decisions and improve prompts over time.