
Lab 070: Agent UX Patterns — Chat, Adaptive Cards & Proactive Notifications

Level: L100 Path: All paths Time: ~60 min 💰 Cost: Free — Mock interaction data (no Teams or Azure Bot Service required)

What You'll Learn

  • Core UX patterns for AI agent interactions in enterprise environments
  • Design effective chat interfaces with typing indicators and source citations
  • Build Adaptive Cards for structured data display and user input
  • Implement proactive notification patterns for agent-initiated messages
  • Apply accessibility best practices to agent UX
  • Measure UX quality using user satisfaction metrics

Prerequisite

Familiarity with chatbot concepts is recommended. No front-end development experience is required — this lab analyzes UX patterns using mock interaction data.

Introduction

An AI agent's intelligence is only as effective as its user experience. Poor UX — missing typing indicators, no source citations, inaccessible Adaptive Cards — erodes user trust and adoption. Great agent UX follows established patterns:

| UX Pattern | Purpose | Impact |
|---|---|---|
| Typing Indicator | Shows the agent is processing | Reduces perceived latency |
| Source Citation | Links answers to source documents | Builds trust and verifiability |
| Adaptive Cards | Structured display with actions | Enables rich interactions |
| Proactive Notifications | Agent-initiated messages | Keeps users informed |
| Error Messaging | Clear, actionable error states | Reduces frustration |
| Accessibility | Screen reader support, keyboard nav | Ensures inclusive access |

The Scenario

You are a UX Designer auditing an enterprise agent's interaction patterns. You have data on 12 UX patterns used across the organization, including satisfaction scores, implementation status, and accessibility compliance. Your job: identify high-impact patterns, find gaps, and recommend improvements.


Prerequisites

| Requirement | Why |
|---|---|
| Python 3.10+ | Run analysis scripts |
| pandas | Analyze UX pattern data |

pip install pandas

Quick Start with GitHub Codespaces

Open in GitHub Codespaces

All dependencies are pre-installed in the devcontainer.

📦 Supporting Files

Download these files before starting the lab

Save all files to a lab-070/ folder in your working directory.

| File | Description | Download |
|---|---|---|
| broken_ux.py | Bug-fix exercise (3 bugs + self-tests) | 📥 Download |
| ux_patterns.csv | Dataset | 📥 Download |

Step 1: Understanding Agent UX Principles

Effective agent UX follows a layered approach:

User Input → [Typing Indicator] → Agent Processing → [Response Formatting]
                                                            ↓
                                                   ┌── Plain Text Chat
                                                   ├── Adaptive Card
                                                   ├── Source Citation
                                                   └── Error Message
                                                            ↓
                                              [Accessibility Check] → User

Key principles:

  1. Responsiveness — Always acknowledge user input immediately (typing indicators)
  2. Transparency — Cite sources and explain confidence levels
  3. Structure — Use Adaptive Cards for complex data, plain text for simple answers
  4. Proactivity — Notify users of important events without requiring a prompt
  5. Accessibility — Ensure all interactions work with screen readers and keyboard navigation

Why UX Matters for Agent Adoption

Research shows that agents with proper UX patterns (source citations, typing indicators, clear errors) have 2-3x higher user retention than agents with bare text responses. Users trust agents more when they can verify answers and understand the agent's state.


Step 2: Load and Explore UX Patterns

The dataset contains 12 UX patterns with satisfaction scores and implementation data:

import pandas as pd

patterns = pd.read_csv("lab-070/ux_patterns.csv")
print(f"Total patterns: {len(patterns)}")
print(f"Categories: {sorted(patterns['category'].unique())}")
print("\nAll patterns:")
print(patterns[["pattern_id", "pattern_name", "category", "satisfaction_score"]]
      .to_string(index=False))

Expected:

Total patterns: 12

Step 3: Satisfaction Analysis

Identify the highest and lowest satisfaction patterns:

print("Patterns ranked by satisfaction score:")
ranked = patterns.sort_values("satisfaction_score", ascending=False)
print(ranked[["pattern_name", "category", "satisfaction_score"]].to_string(index=False))

highest = patterns.loc[patterns["satisfaction_score"].idxmax()]
print(f"\nHighest satisfaction: {highest['pattern_name']} ({highest['satisfaction_score']})")
print(f"Average satisfaction: {patterns['satisfaction_score'].mean():.2f}")

Expected:

Highest satisfaction: Source Citation (4.8)
Average satisfaction: 4.17

Source Citations Win

Source Citation has the highest satisfaction score (4.8 out of 5.0). Users strongly prefer agents that link answers to verifiable sources — it builds trust and allows users to dive deeper. This pattern should be implemented in every enterprise agent.
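A minimal sketch of the citation pattern for a plain-text channel. The `cite` helper and the file names are invented for illustration, not part of the lab files:

```python
def cite(answer: str, sources: list[str]) -> str:
    """Append numbered inline markers and a source footer to an agent answer."""
    refs = " ".join(f"[{i}]" for i in range(1, len(sources) + 1))
    footer = "\n".join(f"[{i}] {s}" for i, s in enumerate(sources, 1))
    return f"{answer} {refs}\n\nSources:\n{footer}"

print(cite("Remote work is capped at 3 days/week.",
           ["HR-Policy-2024.pdf", "Manager-FAQ.docx"]))
```

In a production agent the footer entries would be links back to the retrieved documents so users can verify the answer themselves.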


Step 4: Category Analysis

Analyze patterns by category:

print("Average satisfaction by category:")
cat_stats = patterns.groupby("category").agg(
    count=("pattern_id", "count"),
    avg_satisfaction=("satisfaction_score", "mean")
).sort_values("avg_satisfaction", ascending=False)
print(cat_stats.to_string())

Categories group related patterns (e.g., "trust" patterns like source citations and confidence indicators, "responsiveness" patterns like typing indicators and streaming).


Step 5: Accessibility Compliance Check

Check which patterns meet accessibility standards:

accessible = patterns[patterns["accessible"]]
not_accessible = patterns[~patterns["accessible"]]
print(f"Accessible patterns: {len(accessible)} / {len(patterns)}")
print(f"Non-accessible patterns: {len(not_accessible)}")

if len(not_accessible) > 0:
    print("\nPatterns needing accessibility fixes:")
    print(not_accessible[["pattern_name", "category", "satisfaction_score"]].to_string(index=False))

Accessibility Gaps

Any non-accessible pattern is a compliance risk and excludes users who rely on assistive technologies. Adaptive Cards must include altText for images, label for inputs, and proper speak properties for screen readers.
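A hedged example of what those properties look like in a card payload. `speak`, `altText`, and `label` are real fields in the Adaptive Cards schema; the card content below (expense approval) is invented for illustration, expressed as a Python dict to match the rest of the lab:

```python
import json

# Illustrative accessible Adaptive Card: speak for screen readers,
# altText on the image, and a label on the text input.
card = {
    "type": "AdaptiveCard",
    "version": "1.5",
    "speak": "Expense report 4821 is awaiting your approval.",
    "body": [
        {"type": "TextBlock", "text": "Expense Report #4821", "wrap": True},
        {"type": "Image", "url": "https://example.com/receipt.png",
         "altText": "Scanned receipt for a $120 taxi fare"},
        {"type": "Input.Text", "id": "comment", "label": "Approval comment"},
    ],
    "actions": [{"type": "Action.Submit", "title": "Approve"}],
}

print(json.dumps(card, indent=2))
```

Without `altText` a screen reader announces nothing useful for the image, and without `label` the input field is ambiguous to assistive technologies.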


Step 6: UX Quality Dashboard

Build a comprehensive UX quality report:

total = len(patterns)
avg_sat = patterns["satisfaction_score"].mean()
highest_name = patterns.loc[patterns["satisfaction_score"].idxmax(), "pattern_name"]
highest_score = patterns["satisfaction_score"].max()
accessible_count = patterns["accessible"].sum()

dashboard = f"""
╔════════════════════════════════════════════════════════╗
║     Agent UX Patterns — Quality Report                 ║
╠════════════════════════════════════════════════════════╣
║ Total Patterns:              {total:>5}                     ║
║ Average Satisfaction:        {avg_sat:>5.2f}                     ║
║ Highest Satisfaction:  {highest_name:>12} ({highest_score})           ║
║ Accessible Patterns:         {accessible_count:>5} / {total}                ║
║ Categories:                  {patterns['category'].nunique():>5}                     ║
╚════════════════════════════════════════════════════════╝
"""
print(dashboard)

πŸ› Bug-Fix ExerciseΒΆ

The file lab-070/broken_ux.py has 3 bugs in how it analyzes UX pattern data:

python lab-070/broken_ux.py
| Test | What it checks | Hint |
|---|---|---|
| Test 1 | Pattern count | Should count all rows with len(), not unique categories |
| Test 2 | Highest satisfaction pattern | Should use idxmax(), not idxmin() |
| Test 3 | Average satisfaction | Should use mean(), not median() |
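If you get stuck, the corrected logic should look roughly like this. It is sketched against a small invented DataFrame (the real script reads the lab CSV, and its exact layout may differ):

```python
import pandas as pd

# Invented mini-dataset standing in for lab-070/ux_patterns.csv.
patterns = pd.DataFrame({
    "pattern_id": [1, 2, 3],
    "pattern_name": ["Typing Indicator", "Source Citation", "Error Messaging"],
    "category": ["responsiveness", "trust", "trust"],
    "satisfaction_score": [4.2, 4.8, 3.9],
})

total = len(patterns)                        # Test 1: count all rows, not unique categories
top = patterns.loc[patterns["satisfaction_score"].idxmax(),
                   "pattern_name"]           # Test 2: idxmax(), not idxmin()
avg = patterns["satisfaction_score"].mean()  # Test 3: mean(), not median()

print(total, top, round(avg, 2))  # 3 Source Citation 4.3
```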

🧠 Knowledge Check

Q1 (Multiple Choice): Why are typing indicators important for AI agent UX?
  • A) They make the agent smarter
  • B) They reduce perceived latency and signal that the agent is actively processing the request
  • C) They are required by Microsoft Teams
  • D) They improve the agent's response accuracy
βœ… Reveal Answer

Correct: B) They reduce perceived latency and signal that the agent is actively processing the request

Typing indicators provide immediate visual feedback that the agent received the user's message and is working on a response. Without them, users may think the agent is broken or unresponsive, especially during longer processing times. This simple pattern significantly improves perceived responsiveness and user trust.
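The acknowledge-then-process order can be sketched framework-agnostically. Here `send` stands in for whatever channel API delivers activities (in Bot Framework this would be a "typing" activity), and `handle_message` is a hypothetical handler:

```python
def handle_message(text, send):
    """Acknowledge input immediately, then do the slow agent work."""
    send({"type": "typing"})                    # instant feedback: agent is working
    answer = text.upper()                       # placeholder for slow processing
    send({"type": "message", "text": answer})   # final response

events = []
handle_message("hello", events.append)          # capture what would be sent
print([e["type"] for e in events])  # ['typing', 'message']
```

The key design point is ordering: the typing event goes out before any expensive work starts, so the user never stares at a silent chat window.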

Q2 (Multiple Choice): What is the primary benefit of Adaptive Cards over plain text responses?
  • A) They are faster to render
  • B) They enable structured data display with interactive elements like buttons, inputs, and formatted layouts
  • C) They work without internet
  • D) They are simpler to implement
βœ… Reveal Answer

Correct: B) They enable structured data display with interactive elements like buttons, inputs, and formatted layouts

Adaptive Cards transform agent responses from plain text into rich, interactive experiences. They can display tables, images, action buttons, input forms, and formatted text — enabling users to interact with data directly rather than typing follow-up queries. They are particularly effective for approval workflows, data summaries, and multi-step processes.

Q3 (Run the Lab): Which UX pattern has the highest user satisfaction score?

Sort patterns by satisfaction_score descending and check the top entry.

✅ Reveal Answer

Source Citation with a satisfaction score of 4.8

Source Citation is the highest-rated UX pattern (4.8 out of 5.0). Users strongly prefer agents that link answers to verifiable source documents, as it builds trust and allows them to verify information. This pattern should be a default in every enterprise agent.

Q4 (Run the Lab): What is the average satisfaction score across all patterns?

Compute patterns['satisfaction_score'].mean().

✅ Reveal Answer

4.17 average satisfaction

The average satisfaction score across all 12 UX patterns is 4.17 out of 5.0, indicating generally positive user reception. However, the variance between the highest (4.8) and lowest scores suggests that some patterns need improvement to match the quality of top performers.

Q5 (Run the Lab): How many UX patterns are in the dataset?

Check len(patterns).

✅ Reveal Answer

12 patterns

The dataset contains 12 UX patterns spanning categories like trust (source citations, confidence indicators), responsiveness (typing indicators, streaming), structure (Adaptive Cards, carousels), proactivity (notifications, suggestions), and accessibility (screen reader support, keyboard navigation).


Summary

| Topic | What You Learned |
|---|---|
| Chat UX | Design responsive chat with typing indicators and streaming |
| Source Citations | Build trust by linking answers to verifiable sources |
| Adaptive Cards | Display structured data with interactive elements |
| Proactive Notifications | Enable agent-initiated messages for timely updates |
| Accessibility | Ensure inclusive UX with screen reader and keyboard support |
| Satisfaction Metrics | Measure and compare UX pattern effectiveness |

Next Steps

  • Lab 069 β€” Declarative Agents (configure agent behavior via manifests)
  • Lab 066 β€” Copilot Studio Governance (govern agent deployments)
  • Lab 008 β€” Responsible AI (foundational UX and safety principles)