
“When AI makes mistakes, the right apology can transform broken trust into even stronger human-machine relationships.”
"I'm Sorry" - Can Agents Rebuild Trust?
Exploring how automated apologies impact human trust in AI and autonomous systems
The Experiment
Real Humans
Testing with authentic human participants in controlled research environments.
Robot Mistakes
Analyzing various error scenarios and their impact on trust.
Disasters
Studying how trust responds in critical, high-stakes error situations.
Virtual Reality
Immersive testing environment for realistic interactions.
Research Process
FIRST: We Measured the Risk
Comprehensive assessment of risk levels and their perception in human-robot interactions.
THEN: We Tested Types of Apologies
Exploring various apology strategies and their effectiveness in rebuilding trust.
NEXT: We Measured Different Trust Metrics
Examining response time, engagement level, and other key trust indicators; a scoring sketch follows these steps.
FINALLY: We Created Guidelines
Developing a framework for effective agent apologies based on empirical data.
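To make the trust-metric step concrete, here is a minimal scoring sketch. It assumes a hypothetical TrustObservation record, an illustrative 30-second re-engagement window, and equal weighting of the indicators; none of these names or numbers come from the study itself.

    from dataclasses import dataclass

    @dataclass
    class TrustObservation:
        """One participant's post-error measurements (hypothetical schema)."""
        response_time_s: float      # seconds until the user re-engages the agent
        engagement_level: float     # interaction-log engagement, normalized to 0..1
        self_reported_trust: float  # post-task Likert rating on a 1..7 scale

    def composite_trust_score(obs: TrustObservation,
                              max_response_time_s: float = 30.0) -> float:
        """Fold the indicators into one 0..1 score; equal weights are an assumption."""
        timeliness = max(0.0, 1.0 - obs.response_time_s / max_response_time_s)
        likert = (obs.self_reported_trust - 1.0) / 6.0  # map 1..7 onto 0..1
        return (timeliness + obs.engagement_level + likert) / 3.0

    # Example: re-engaged after 12 s, moderately engaged, rated trust 5/7.
    print(composite_trust_score(TrustObservation(12.0, 0.7, 5.0)))  # ~0.66

Combining several indicators into one score simply makes the conditions easier to compare; the individual metrics can still be analyzed separately.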
Key Findings
High Risk or Low Risk? Doesn't Matter
Risk level showed no significant impact on trust recovery.
Something Else Is at Play
Trust recovery appears to hinge less on the level of risk than on how the robot delivers its apology.
Personalization Matters
Apologies that acknowledge specific user concerns showed 43% higher trust restoration rates.
Timing Is Critical
Immediate apologies were 2.7x more effective than delayed responses in rebuilding trust.
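As one way to apply the two findings above in a deployed agent, the sketch below composes an apology that is meant to be sent immediately and that names the user's specific concern. The function, its parameters, and the wording are hypothetical illustrations; only the "respond immediately" and "acknowledge specific concerns" rules reflect the reported results.

    def compose_apology(error_description: str, user_concern: str | None = None) -> str:
        """Build an apology that follows the two findings: the caller should
        deliver it immediately, and it should name the user's specific concern."""
        message = f"I'm sorry. I {error_description}, and that was my mistake."
        if user_concern:
            # Personalization: acknowledge the concern the error actually affected.
            message += f" I know {user_concern} is important to you, and here is how I will fix it."
        else:
            message += " I will fix it right away."
        return message

    # Example: a delivery robot that dropped a package the user needed urgently.
    print(compose_apology("dropped your package", "getting it to you on time"))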
Research Publications
ICSR 2024
Presented and Published (In Print)
ICHMS 2025
To Be Presented (In Print)
Journal of Trust in Automation
Submitted (Under Review)
Practical Applications
Rescue Robots
Implementing trust-rebuilding protocols for critical emergency response scenarios.
Healthcare Robots
Developing trust-recovery mechanisms for assistive care scenarios.
Customer Service AI
Designing effective apology frameworks for service failures.
Virtual Agents in VR
Creating immersive trust repair interactions in virtual reality environments.