Research Process
FIRST: We Measured the Risk
We began by establishing a comprehensive assessment of risk levels and how they are perceived in human-robot interactions.
THEN: We Tested Types of Apologies
We explored various apology strategies and examined their effectiveness in rebuilding trust after AI failures. Our methodology compared four apology types (a code sketch follows the list):
- Basic Text: simple text-based "I'm sorry" messages
- Voice Apology: audio apologies with tone variation
- Explanatory: a detailed explanation of what went wrong
- Emotional Expression: multimodal delivery with emotional indicators
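For concreteness, the four conditions can be written as a simple enumeration. This is an illustrative Python sketch, not the actual study code; the identifier names are our own.

```python
from enum import Enum

class ApologyStrategy(Enum):
    """The four apology conditions compared in the study (illustrative names)."""
    BASIC_TEXT = "basic_text"           # plain "I'm sorry" message
    VOICE = "voice"                     # audio apology with tone variation
    EXPLANATORY = "explanatory"         # detailed account of what went wrong
    EMOTIONAL = "emotional_expression"  # multimodal delivery with emotional cues
```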
NEXT: We Measured Different Trust Metrics
We developed a comprehensive trust measurement framework that captured both explicit and implicit indicators of trust restoration, organized into two categories: behavioral metrics, which capture what users do, and psychological metrics, which capture what users report. An illustrative record structure is sketched below.
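As a hedged illustration, a single measurement record might combine both classes of indicators. The field names below are examples we chose for clarity, not the study's actual instrument:

```python
from dataclasses import dataclass

@dataclass
class TrustMeasurement:
    """One participant's post-apology trust snapshot (illustrative fields)."""
    # Behavioral metrics: what the participant actually does
    reuse_rate: float           # share of subsequent tasks delegated to the agent
    override_count: int         # times the participant manually overrode the agent
    response_latency_s: float   # hesitation before accepting the agent's suggestion
    # Psychological metrics: what the participant reports
    self_reported_trust: float  # e.g., mean score on a 7-point trust scale
    perceived_sincerity: float  # rated sincerity of the apology
```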
FINALLY: We Created Guidelines
Based on our empirical findings, we developed a framework of six guidelines for effective agent apologies that can be implemented across different AI systems; a code sketch follows the guidelines:
Timing Is Critical
Deliver apologies immediately after the error is detected, before users have to report the issue.
Personalize the Apology
Acknowledge the specific impact on the user rather than using generic apology templates.
Explain What Happened
Provide a clear, transparent explanation of what went wrong in accessible language.
Outline Corrective Action
Describe specific steps being taken to prevent the same error from recurring.
Express Appropriate Emotion
Use multimodal cues (voice tone, expressions) that match the severity of the error.
Offer Compensation
When appropriate, provide meaningful compensation or remediation for the error.
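A minimal sketch of how the six guidelines might be operationalized in an agent's apology pipeline. Every name and threshold below is an assumption made for illustration, not a reference implementation:

```python
from dataclasses import dataclass

@dataclass
class ErrorEvent:
    description: str        # what went wrong, in plain language
    user_impact: str        # the specific consequence for this user
    corrective_action: str  # the fix being applied
    severity: float         # 0.0 (trivial) to 1.0 (critical)

def compose_apology(event: ErrorEvent) -> dict:
    """Build an apology the moment an error is detected (guideline 1: timing),
    so the user never has to report the issue first."""
    return {
        # Guideline 2: personalize by naming the impact on this user.
        "acknowledgment": f"I'm sorry: {event.user_impact}.",
        # Guideline 3: explain what happened in accessible language.
        "explanation": f"This happened because {event.description}.",
        # Guideline 4: outline the corrective action being taken.
        "remedy": f"To prevent a recurrence, {event.corrective_action}.",
        # Guideline 5: match emotional intensity to error severity.
        "emotion_level": "contrite" if event.severity > 0.6 else "apologetic",
        # Guideline 6: offer compensation only when the error warrants it.
        "compensation_offered": event.severity > 0.6,
    }

if __name__ == "__main__":
    print(compose_apology(ErrorEvent(
        description="an outdated map produced a wrong route",
        user_impact="your delivery arrived 20 minutes late",
        corrective_action="the map data is now refreshed daily",
        severity=0.7,
    )))
```

In practice, the severity threshold for compensation would be calibrated per domain rather than hard-coded.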
Key Findings
High Risk or Low Risk? Doesn't Matter
Risk level showed no significant impact on trust recovery. Our experiments with 124 participants revealed that trust recovery rates remained consistent regardless of the perceived risk level.
"The perceived risk level of the interaction context did not significantly affect participants' willingness to trust the agent again after receiving an apology."— From our ICSR 2024 paper
Something Else Is at Play
Delivery method appears to matter more than risk level: how the apology is delivered significantly impacts trust recovery rates.
Personalization Matters
Apologies that acknowledge specific user concerns showed 43% higher trust restoration rates. Personalized apologies demonstrated that the agent understood what went wrong.
Timing Is Critical
Immediate apologies were 2.7x more effective than delayed responses in rebuilding trust. The longer the agent waits to apologize, the less effective the apology becomes.
[Figure: timeline from error occurrence to apology response, with increasing delays]
Research Publications
"The Impact of Perceived Risk on Trust in Human-Robot Interaction
This study investigates how perceived risk impact human decision on trust.
Abstract
This paper examines how the timing, personalization, and delivery of robot apologies influence trust recovery after failures. Through a mixed-methods study with 124 participants, we found that immediate, personalized apologies yielded 43% higher trust recovery rates compared to generic, delayed responses. Our findings provide design guidelines for implementing effective trust repair mechanisms in human-robot interaction scenarios.
Key Findings
- Immediate apologies were 2.7x more effective than delayed responses
- Personalized apologies showing specific understanding of the error yielded 43% higher trust recovery
- Multimodal apologies combining voice, text, and visual cues performed best
- Risk level of the task did not significantly impact apology effectiveness
When Robots Say Sorry in High-Stakes Environments: Emotional Connection Might Matter More Than Explanations
Comparing apology effectiveness across autonomous systems, including embodied robots and virtual agents.
Abstract
This research presents a comparative analysis of trust repair strategies across multiple autonomous system platforms, including physical robots, virtual agents, and voice assistants. Our experiments with 210 participants reveal that embodiment significantly impacts apology effectiveness, with physical robots achieving 27% higher trust recovery compared to non-embodied systems. We outline platform-specific design recommendations for implementing effective trust repair mechanisms.
Research Highlights
- First cross-platform comparison of trust repair mechanisms
- Analysis of 4 distinct agent types: physical robots, virtual agents, voice assistants, and text-based AI
- Exploration of embodiment's role in apology effectiveness
- Development of platform-specific guidelines for trust repair
Abstract
This paper introduces a comprehensive framework for designing effective apology strategies in human-agent interactions. Drawing from our multi-year research program and data from over 350 participants, we identify seven key dimensions that influence apology effectiveness: timing, personalization, explanation depth, remedy proposal, embodiment, emotional expression, and follow-up actions. Our framework provides theoretically grounded, empirically validated guidelines for implementing trust repair mechanisms across diverse autonomous systems.
Framework Components
- Seven-dimension model for apology design
- Decision tree for selecting appropriate apology strategies (sketched after this list)
- Context-aware apology generation algorithm
- Evaluation metrics for measuring apology effectiveness
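The paper's actual decision tree is not reproduced here; the sketch below shows, under assumed thresholds and branch logic, what strategy selection over three of the seven dimensions could look like:

```python
def select_apology_strategy(severity: float, embodied: bool,
                            time_critical: bool) -> str:
    """Toy strategy selection over three of the seven dimensions.

    The framework's real decision tree and thresholds live in the paper;
    the branches below are assumptions chosen for readability.
    """
    if time_critical:
        # Deliver the shortest apology that still proposes a remedy.
        return "brief acknowledgment + corrective action"
    if severity >= 0.7:
        # High-severity errors: lead with emotional expression, especially
        # when the agent has a body to express it with.
        if embodied:
            return "embodied emotional apology + explanation + compensation"
        return "emotional voice apology + explanation + compensation"
    if severity >= 0.3:
        return "explanatory apology + corrective action"
    return "basic acknowledgment"

# Example: a severe failure by a physical robot with time to spare.
print(select_apology_strategy(severity=0.8, embodied=True, time_critical=False))
```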
Practical Applications
Our research findings are being applied across multiple domains where trust between humans and AI systems is critical. Here's how our apology framework is making a difference:
Rescue Robots: Trust Recovery in Critical Scenarios
In emergency response scenarios, trust between human responders and rescue robots is paramount. We've developed specialized trust repair protocols optimized for high-pressure, time-critical environments.
Implementation Scenario
When a search-and-rescue robot fails to navigate around an obstacle in a disaster zone, its immediate apology includes the following steps, sketched in code after the list:
- Instant acknowledgment of the navigation error
- Clear explanation of what environmental factor caused the failure
- Immediate alternative solution such as requesting manual override
- Status updates every 5 seconds until resolution
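A sketch of the status-update loop this protocol implies. The 5-second cadence comes from the protocol above; the function names and the resolution check are assumptions for illustration:

```python
import time

def announce(message: str) -> None:
    """Stand-in for the robot's actual voice/telemetry channel."""
    print(message)

def handle_navigation_failure(obstacle: str, is_resolved) -> None:
    """Trust repair protocol for a navigation failure (illustrative sketch)."""
    # 1. Instant acknowledgment of the navigation error.
    announce(f"Navigation error: I could not pass the {obstacle}. I apologize.")
    # 2. Clear explanation of the environmental factor behind the failure.
    announce(f"Cause: the {obstacle} was not in my terrain model.")
    # 3. Immediate alternative solution, e.g. requesting manual override.
    announce("Requesting manual override while I replan the route.")
    # 4. Status updates every 5 seconds until resolution.
    while not is_resolved():
        announce("Status: replanning in progress, awaiting override.")
        time.sleep(5)
    announce("Resolved: route replanned, resuming search pattern.")

if __name__ == "__main__":
    # Toy run where the situation resolves immediately.
    handle_navigation_failure("collapsed beam", is_resolved=lambda: True)
```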
Results
Implementation of this protocol with urban search and rescue teams showed the cycle below.
[Figure: trust repair timeline: error occurs, apology delivered, solution proposed, trust recovered]
Healthcare Robots: Empathetic Trust Repair
Healthcare robots require specialized apology frameworks that balance accountability with reassurance. We developed protocols specifically for healthcare scenarios where emotional factors play a significant role in trust.
Implementation Scenario
When an assistive robot fails to dispense medication at the scheduled time, its apology includes the following steps (a code sketch follows the list):
- Empathetic acknowledgment using a calm, reassuring tone
- Clear, non-technical explanation of what happened
- Immediate notification to healthcare staff
- Reassurance about safety measures and backup systems
- Follow-up check after resolution to ensure patient comfort
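A hedged sketch of this sequence. The staff-notification and follow-up hooks, and the 30-minute check-in delay, are illustrative assumptions:

```python
def say(message: str) -> None:
    """Stand-in for the robot's calm, reassuring speech output."""
    print(message)

def handle_missed_dose(patient: str, scheduled_time: str,
                       notify_staff, schedule_followup) -> None:
    """Empathetic trust repair after a missed medication dispensing (sketch)."""
    # 1. Empathetic acknowledgment in a calm, reassuring tone.
    say(f"I'm sorry, {patient}. Your {scheduled_time} medication was delayed.")
    # 2. Clear, non-technical explanation of what happened.
    say("My dispenser did not open on schedule, so the dose was not released.")
    # 3. Immediate notification to healthcare staff.
    notify_staff(f"Missed dose for {patient} at {scheduled_time}; please verify.")
    # 4. Reassurance about safety measures and backup systems.
    say("A nurse has been alerted and will bring your medication shortly.")
    # 5. Follow-up check after resolution to ensure patient comfort.
    schedule_followup(minutes=30,
                      message=f"Checking in, {patient}. Are you feeling okay?")

if __name__ == "__main__":
    handle_missed_dose(
        "Mrs. Lee", "9:00 AM",
        notify_staff=print,
        schedule_followup=lambda minutes, message: print(f"[in {minutes} min] {message}"),
    )
```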
Results
This protocol has been implemented and evaluated in assisted living facilities.
Customer Service AI: Building Commercial Trust
For customer service AI, trust recovery directly impacts brand perception and customer retention. We've developed frameworks optimized for business contexts where multiple stakeholders are involved.
Implementation Scenario
When a customer service AI provides incorrect information about a product, its apology includes the elements below, with a code sketch after the list:
- Clear accountability without deflecting responsibility
- Immediate correction with verified information
- Tangible compensation such as a discount or free service
- Explanation of improvement steps being taken
- Follow-up communication to ensure satisfaction
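A sketch of the accountability-plus-compensation logic. The compensation tiers, thresholds, and follow-up window are illustrative, not values from our deployments:

```python
def build_service_recovery(error_summary: str, corrected_info: str,
                           order_value: float) -> dict:
    """Customer-service trust repair message (illustrative sketch)."""
    # Tangible compensation scaled to the stakes of the error (assumed tiers).
    if order_value >= 100:
        compensation = "a 15% discount on this order"
    elif order_value >= 20:
        compensation = "free shipping on this order"
    else:
        compensation = "a voucher toward your next purchase"
    return {
        # Clear accountability without deflecting responsibility.
        "accountability": f"We gave you incorrect information: {error_summary}. "
                          "That was our error.",
        # Immediate correction with verified information.
        "correction": f"The verified answer is: {corrected_info}.",
        "compensation": compensation,
        # Explanation of improvement steps being taken.
        "improvement": "This answer has been flagged for review so the mistake "
                       "is not repeated.",
        # Follow-up communication to ensure satisfaction.
        "follow_up": "We will email you within 24 hours to confirm everything "
                     "is resolved.",
    }

if __name__ == "__main__":
    print(build_service_recovery(
        error_summary="we listed the wrong battery life for this laptop",
        corrected_info="battery life is 8 hours under typical use",
        order_value=120.0,
    ))
```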
Results
This framework has been implemented and evaluated across e-commerce platforms.
Virtual Agents in VR: Immersive Trust Repair
Virtual reality presents unique opportunities for trust repair through embodied presence. We've developed frameworks that leverage the immersive nature of VR to create more effective trust recovery interactions.
Implementation Scenario
When a VR training assistant provides incorrect guidance in a simulation, its apology includes the following elements, captured as a configuration sketch after the list:
- Spatial approach: moving to an appropriate distance
- Embodied gestures that convey accountability
- Eye contact and facial expressions calibrated to convey sincerity
- Interactive correction with user participation
- Spatial memory markers to indicate where the error occurred
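The embodied cues above can be treated as a tunable configuration. The parameter names and default values below are assumptions chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class EmbodiedApologyConfig:
    """Tunable parameters for an in-VR apology (illustrative defaults)."""
    approach_distance_m: float = 1.2     # social distance for the spatial approach
    gaze_contact_ratio: float = 0.7      # fraction of the apology spent in eye contact
    gesture: str = "open_palms"          # accountability-conveying body gesture
    place_error_marker: bool = True      # spatial marker where the error occurred
    interactive_correction: bool = True  # redo the failed step together with the user
```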
Results
This framework has been implemented and evaluated in VR training environments.