Research Process

FIRST: We Measured the Risk

We began by establishing a comprehensive assessment of risk levels and their perception in human-robot interactions. Our approach:

  • 📊 Quantified perceived risk across 8 different interaction scenarios
  • 🧠 Measured emotional and cognitive responses to AI system failures
  • 📈 Developed a risk classification model for human-agent interactions

🔑 Key Result: We identified that subjective perception of risk varies widely between users, but follows predictable patterns based on domain expertise and prior AI experience.
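As a rough illustration of that pattern, perceived risk could be modeled as a discount on objective risk driven by expertise and experience. The weights and scale below are illustrative assumptions, not fitted values from our study:

```python
# Hypothetical sketch of the risk-perception pattern: perceived risk
# falls with domain expertise and prior AI experience.
# Weights are illustrative, not fitted values from the study.

def perceived_risk(base_risk, expertise, ai_experience):
    """Estimate perceived risk on a 0-1 scale.

    base_risk     -- objective risk of the scenario (0-1)
    expertise     -- domain expertise (0-1)
    ai_experience -- prior experience with AI systems (0-1)
    """
    # Experienced users discount risk; novices inflate it (illustrative weights).
    adjustment = 1.0 - 0.3 * expertise - 0.2 * ai_experience
    return max(0.0, min(1.0, base_risk * adjustment))

novice = perceived_risk(0.6, expertise=0.1, ai_experience=0.1)
expert = perceived_risk(0.6, expertise=0.9, ai_experience=0.8)
```

Under these assumed weights, the same 0.6-risk scenario is perceived as roughly 0.57 by a novice and 0.34 by an experienced expert, mirroring the expertise-driven spread we observed.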
THEN: We Tested Types of Apologies

We explored various apology strategies and examined their effectiveness in rebuilding trust after AI failures. Our methodology:

  • 📝 Basic Text (37% effective): simple text-based "I'm sorry" messages
  • 🔊 Voice Apology (59% effective): audio apologies with tone variation
  • 📋 Explanatory (72% effective): detailed explanations of what went wrong
  • 🎭 Emotional Expression (85% effective): multimodal apologies with emotional indicators
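These effectiveness rates suggest a simple selection rule: use the strongest strategy the output channel can support. A minimal sketch follows; the per-strategy channel requirements are our assumptions, not measured constraints:

```python
# The four apology strategies with their measured effectiveness rates,
# used to pick the strongest strategy a given channel can deliver.
# Channel requirements per strategy are illustrative assumptions.

EFFECTIVENESS = {
    "basic_text": 0.37,
    "voice": 0.59,
    "explanatory": 0.72,
    "emotional_expression": 0.85,
}

REQUIRES = {
    "basic_text": set(),
    "voice": {"audio"},
    "explanatory": set(),
    "emotional_expression": {"audio", "visual"},
}

def best_strategy(channel_capabilities):
    """Return the most effective strategy the channel can deliver."""
    feasible = [s for s, req in REQUIRES.items() if req <= channel_capabilities]
    return max(feasible, key=EFFECTIVENESS.get)

best_strategy({"audio", "visual"})  # -> "emotional_expression"
best_strategy(set())                # -> "explanatory" (text-only channel)
```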

NEXT: We Measured Different Trust Metrics

We developed a comprehensive trust measurement framework that captured both explicit and implicit indicators of trust restoration:

Behavioral Metrics

  • ⏱️ Response Time: how quickly users respond to agent suggestions
  • 🔄 Re-engagement Rate: willingness to use the system again after failure
  • 🎯 Task Completion: whether users complete assigned tasks with AI assistance

Psychological Metrics

  • 🧠 Cognitive Load: mental effort required when using the system
  • 😌 Comfort Level: self-reported comfort with the system
  • 💭 Perceived Reliability: user belief in the system's future performance
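One way these six metrics could be combined is a weighted composite score. The equal weights and the inversion of cognitive load below are illustrative choices, not the study's actual aggregation:

```python
# Illustrative composite trust score combining the behavioral and
# psychological metrics above. Equal within-group weights are an
# assumption; the study's actual aggregation is not reproduced here.

def trust_score(metrics):
    """Combine six 0-1 metrics into a single 0-1 trust score.

    Cognitive load counts inversely: higher load means lower trust.
    """
    behavioral = (
        metrics["response_speed"]        # faster responses -> higher value
        + metrics["re_engagement"]       # willingness to return after failure
        + metrics["task_completion"]     # tasks completed with AI assistance
    ) / 3
    psychological = (
        (1.0 - metrics["cognitive_load"])  # invert: less effort is better
        + metrics["comfort"]
        + metrics["perceived_reliability"]
    ) / 3
    return 0.5 * behavioral + 0.5 * psychological
```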
FINALLY: We Created Guidelines

Based on our empirical findings, we developed a framework for effective agent apologies that can be implemented across different AI systems:

01. Timing is Critical
Deliver apologies immediately after the error is detected, before users have to report the issue.

02. Personalize the Apology
Acknowledge the specific impact on the user rather than using generic apology templates.

03. Explain What Happened
Provide a clear, transparent explanation of what went wrong in accessible language.

04. Outline Corrective Action
Describe specific steps being taken to prevent the same error from recurring.

05. Express Appropriate Emotion
Use multimodal cues (voice tone, expressions) that match the severity of the error.

06. Offer Compensation
When appropriate, provide meaningful compensation or remediation for the error.
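The six guidelines can be sketched as a single apology-building routine. The message templates, field names, and severity threshold below are hypothetical examples, not code from a deployed system:

```python
# Sketch of an apology generator applying guidelines 01-06 above.
# Templates and the severity threshold for compensation are
# illustrative, not taken from a deployed system.

def build_apology(error, impact, cause, fix, severity, compensation=None):
    """Assemble an apology payload following guidelines 01-06."""
    message = (
        "I'm sorry I {}. ".format(error)           # 02: personalize
        + "I understand this {}. ".format(impact)  # 02: acknowledge impact
        + "This happened because {}. ".format(cause)  # 03: explain
        + "To prevent it, I will {}.".format(fix)     # 04: corrective action
    )
    payload = {
        "deliver": "immediately",       # 01: timing
        "message": message,
        "emotion_intensity": severity,  # 05: emotion matched to severity
    }
    if severity > 0.6 and compensation:  # 06: compensate when warranted
        payload["compensation"] = compensation
    return payload

apology = build_apology(
    error="failed to notify you about the schedule change",
    impact="caused you to miss your meeting",
    cause="a calendar sync failure",
    fix="prioritize these notifications",
    severity=0.8,
    compensation="priority support for the next week",
)
```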

Key Findings

High Risk or Low Risk? Doesn't Matter

Risk level showed no significant impact on trust recovery. Our experiments with 124 participants revealed that trust recovery rates remained consistent regardless of the perceived risk level.

Trust Recovery by Risk Level

  • High Risk: 73%
  • Low Risk: 71%

Trust recovery percentage after agent apology (p > 0.05, not significant)
"The perceived risk level of the interaction context did not significantly affect participants' willingness to trust the agent again after receiving an apology."
— From our ICSR 2024 paper
💡

Something Else Is at Play

Delivery method appears more important than risk level. Our research shows that how the apology is delivered significantly impacts trust recovery rates.

  • 🤖 Basic Text: 37%
  • 🔊 Voice Only: 59%
  • 🎭 Emotion + Voice: 85%

Adding emotional expression (facial cues, vocal tone) to agent apologies dramatically improved trust restoration.
🔍

Personalization Matters

Apologies that acknowledge specific user concerns showed 43% higher trust restoration rates. Personalized apologies demonstrated that the agent understood what went wrong.

⚠️ Generic Apology
"I'm sorry for the error. I'll try to do better next time."

Personalized Apology
"I'm sorry I failed to notify you about the schedule change. I understand this caused you to miss your important meeting, and I'll make sure to prioritize these notifications in the future."
⏱️

Timing Is Critical

Immediate apologies were 2.7x more effective than delayed responses in rebuilding trust. The longer the agent waits to apologize, the less effective the apology becomes.

  • Immediate response: 92% effective
  • 5-minute delay: 63% effective
  • 30+ minute delay: 34% effective
💫
Key Insight: Immediate apologies create a perception that the agent is actively monitoring and addressing errors, which significantly improves trust recovery.
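Only the three measured points (0 s, 5 min, 30+ min) come from the study; a piecewise-linear interpolation between them gives a rough estimate for intermediate delays:

```python
# Piecewise-linear interpolation of the reported timing results
# (92% at 0 s, 63% at 5 min, 34% at 30+ min). Intermediate values
# are illustrative; only the three anchor points come from the study.

POINTS = [(0, 0.92), (300, 0.63), (1800, 0.34)]  # (delay in seconds, effectiveness)

def apology_effectiveness(delay_s):
    """Interpolate expected apology effectiveness for a given delay."""
    if delay_s <= POINTS[0][0]:
        return POINTS[0][1]
    for (t0, e0), (t1, e1) in zip(POINTS, POINTS[1:]):
        if delay_s <= t1:
            return e0 + (e1 - e0) * (delay_s - t0) / (t1 - t0)
    return POINTS[-1][1]  # beyond 30 min: flat at the last measurement
```

For example, a 2.5-minute delay interpolates to roughly 78% effectiveness, already well below the immediate-response rate.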

Research Publications

ICSR 2024
Published

"The Impact of Perceived Risk on Trust in Human-Robot Interaction"

Tang, L., Bosch, N.

This study investigates how perceived risk impacts human decisions to trust.

👁️ 247 Views
📄 82 Downloads
🔗 1 Citation

Abstract

This paper examines how the timing, personalization, and delivery of robot apologies influence trust recovery after failures. Through a mixed-methods study with 124 participants, we found that immediate, personalized apologies yielded 43% higher trust recovery rates compared to generic, delayed responses. Our findings provide design guidelines for implementing effective trust repair mechanisms in human-robot interaction scenarios.

Key Findings

  • Immediate apologies were 2.7x more effective than delayed responses
  • Personalized apologies showing specific understanding of the error yielded 43% higher trust recovery
  • Multimodal apologies combining voice, text, and visual cues performed best
  • Risk level of the task did not significantly impact apology effectiveness

Publication Details

Conference: International Conference on Social Robotics (ICSR 2024)
Date: October 15-17, 2024
Location: Singapore
Pages: 118-127
ICHMS 2025
Upcoming

When Robots Say Sorry in High-Stake Environment: Emotional Connection Might Matter More Than Explanations

Tang, L., Bashir, M.

Comparing apology effectiveness across various autonomous systems including embodied robots and virtual agents.

📅 Jun 05 Presentation

Abstract

This research presents a comparative analysis of trust repair strategies across multiple autonomous system platforms, including physical robots, virtual agents, and voice assistants. Our experiments with 210 participants reveal that embodiment significantly impacts apology effectiveness, with physical robots achieving 27% higher trust recovery compared to non-embodied systems. We outline platform-specific design recommendations for implementing effective trust repair mechanisms.

Research Highlights

  • First cross-platform comparison of trust repair mechanisms
  • Analysis of 4 distinct agent types: physical robots, virtual agents, voice assistants, and text-based AI
  • Exploration of embodiment's role in apology effectiveness
  • Development of platform-specific guidelines for trust repair

Publication Details

Conference: International Conference on Human-Machine Systems (ICHMS 2025)
Date: June 3-6, 2025
Location: Berlin, Germany
Journal
Under Review
⏱️ Feb 18 Submitted

Abstract

This paper introduces a comprehensive framework for designing effective apology strategies in human-agent interactions. Drawing from our multi-year research program and data from over 350 participants, we identify seven key dimensions that influence apology effectiveness: timing, personalization, explanation depth, remedy proposal, embodiment, emotional expression, and follow-up actions. Our framework provides theoretically grounded, empirically validated guidelines for implementing trust repair mechanisms across diverse autonomous systems.

Framework Components

  • Seven-dimension model for apology design
  • Decision tree for selecting appropriate apology strategies
  • Context-aware apology generation algorithm
  • Evaluation metrics for measuring apology effectiveness
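The decision-tree component might look like the sketch below; since the paper is still under review, the branch conditions and thresholds here are illustrative placeholders rather than the framework's actual tree:

```python
# Hypothetical sketch of a decision tree for choosing an apology
# strategy from context. Branch conditions and thresholds are
# illustrative placeholders, not the framework's published tree.

def select_strategy(severity, embodied, user_expertise):
    """Pick an apology strategy from context.

    severity       -- error severity (0-1)
    embodied       -- whether the agent has a physical or virtual body
    user_expertise -- user's domain expertise (0-1)
    """
    if severity < 0.3:
        return "brief_acknowledgment"        # minor slip: keep it light
    if embodied:
        return "multimodal_emotional"        # leverage gesture, gaze, tone
    if user_expertise > 0.7:
        return "technical_explanation"       # experts want the root cause
    return "plain_explanation_with_remedy"   # default for non-embodied agents
```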

Publication Details

Journal: Journal of Trust in Automation
Status: Under first round of peer review
Submitted: February 18, 2025

Practical Applications

Our research findings are being applied across multiple domains where trust between humans and AI systems is critical. Here's how our apology framework is making a difference:

Rescue Robots: Trust Recovery in Critical Scenarios

🚨 High Stakes ⏱️ Time-Critical 👥 Multi-User

In emergency response scenarios, trust between human responders and rescue robots is paramount. We've developed specialized trust repair protocols optimized for high-pressure, time-critical environments.

📋

Implementation Scenario

When a search-and-rescue robot fails to navigate around an obstacle in a disaster zone, its immediate apology includes:

  1. Instant acknowledgment of the navigation error
  2. Clear explanation of what environmental factor caused the failure
  3. Immediate alternative solution such as requesting manual override
  4. Status updates every 5 seconds until resolution
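The apology-then-update cadence above can be sketched as an event schedule; only the 5-second update interval comes from the protocol, and the other timings are illustrative:

```python
# Sketch of the rescue-robot protocol's cadence: apology at detection,
# then a status update every 5 seconds until resolution. Only the 5 s
# interval comes from the protocol; other values are illustrative.

def protocol_events(resolution_s, interval_s=5.0):
    """Return (time, event) pairs from error detection to resolution."""
    events = [(0.0, "apologize_and_explain")]
    t = interval_s
    while t < resolution_s:
        events.append((t, "status_update"))
        t += interval_s
    events.append((resolution_s, "resolved"))
    return events

protocol_events(12.0)
# [(0.0, 'apologize_and_explain'), (5.0, 'status_update'),
#  (10.0, 'status_update'), (12.0, 'resolved')]
```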
📊

Future Research

Planned implementation of this protocol with urban search and rescue teams will evaluate:

  • ?% Maintained trust after failure
  • 2.3x Faster error recovery
  • ?% Operator satisfaction
Rescue Robot Trust Repair Protocol

⚠️ Error Detection → 🔊 Immediate Apology → 🔄 Alternative Solution → Resolution

  • Error occurs: 0 s
  • Apology delivered: 0.8 s
  • Solution proposed: 1.2 s
  • Trust recovered: 4.5 s

Critical design element: Sub-second apology response time

Healthcare Robots: Empathetic Trust Repair

💊 Safety-Critical 🧠 Emotionally Sensitive 👴 Vulnerable Users

Healthcare robots require specialized apology frameworks that balance accountability with reassurance. We developed protocols specifically for healthcare scenarios where emotional factors play a significant role in trust.

📋

Implementation Scenario

When an assistive robot fails to dispense medication at the scheduled time, its apology includes:

  1. Empathetic acknowledgment using a calm, reassuring tone
  2. Clear, non-technical explanation of what happened
  3. Immediate notification to healthcare staff
  4. Reassurance about safety measures and backup systems
  5. Follow-up check after resolution to ensure patient comfort
📊

Results

Implementation in assisted living facilities showed:

  • 92% Continued use after failure
  • 78% Reduced anxiety after apology
  • 3.2x Faster trust recovery than standard protocols
Healthcare Robot Apology Components

  • 🗣️ Vocal Tone: calibrated to convey reassurance and empathy
  • 💬 Word Choice: simple, non-technical vocabulary
  • 👁️ Visual Cues: calming colors and expressions
  • 📱 Staff Alert: immediate notification system

Critical design element: Tone calibration for empathetic delivery

Customer Service AI: Building Commercial Trust

💼 Brand Impact 💰 Revenue Implications 🔄 High Volume

For customer service AI, trust recovery directly impacts brand perception and customer retention. We've developed frameworks optimized for business contexts where multiple stakeholders are involved.

📋

Implementation Scenario

When a customer service AI provides incorrect information about a product, its apology includes:

  1. Clear accountability without deflecting responsibility
  2. Immediate correction with verified information
  3. Tangible compensation such as a discount or free service
  4. Explanation of improvement steps being taken
  5. Follow-up communication to ensure satisfaction
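Tangible compensation (step 3) could be tiered by the estimated cost of the error; the tier boundaries and offers below are hypothetical examples, not figures from the deployed e-commerce implementations:

```python
# Illustrative compensation tiers for the customer-service apology flow.
# Tier boundaries and offers are hypothetical examples, not values from
# the deployed implementations.

def choose_compensation(error_cost, loyal_customer):
    """Map estimated customer impact (in dollars) to a tangible remedy."""
    if error_cost < 5:
        return "sincere apology only"
    if error_cost < 50:
        return "10% discount code"
    offer = "full refund"
    if loyal_customer:
        offer += " plus free expedited shipping"  # extra remedy for loyalty
    return offer
```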
📊

Results

Implementation across e-commerce platforms showed:

  • 67% Reduction in customer churn after AI errors
  • 41% Increase in post-error purchases
  • 88% Positive sentiment after apology delivery
Customer Service Apology Impact

Metric               Standard Response   Enhanced Apology
Customer Retention   56%                 89%
Positive Reviews     32%                 78%
Repeat Purchases     44%                 72%

Critical design element: Tangible compensation as part of the apology

Virtual Agents in VR: Immersive Trust Repair

🌐 Immersive Context 👥 Social Presence 🎮 Interactive

Virtual reality presents unique opportunities for trust repair through embodied presence. We've developed frameworks that leverage the immersive nature of VR to create more effective trust recovery interactions.

📋

Implementation Scenario

When a VR training assistant provides incorrect guidance in a simulation, its apology includes:

  1. Spatial approach - moving to an appropriate distance
  2. Embodied gestures that convey accountability
  3. Eye contact and facial expressions calibrated to convey sincerity
  4. Interactive correction with user participation
  5. Spatial memory markers to indicate where the error occurred
📊

Results

Implementation in VR training environments showed:

  • 96% Users felt the apology was "authentic"
  • 3.8x Higher trust recovery vs. text-only apologies
  • 93% Continued engagement with the VR agent
VR Agent Apology Elements

  • 👁️ Eye Contact: 90%
  • 🤲 Hand Gestures: 85%
  • 🧍 Proxemics: 75%
  • 🎭 Facial Expression: 95%
  • 🔊 Voice Modulation: 88%

Critical design element: Multimodal expression of accountability

Liang Tang