Neural Horizons Substack
Neural Horizons Substack Podcast
Under-Trust and Algorithm Aversion

When Users Say “No, Thanks” to AI

As part of our continued exploration of risks in the human-AI dyad, we examine algorithm aversion: the tendency of users and professionals to under-use or reject reliable AI systems because their trust in them is fragile. This skepticism is often triggered by cognitive biases, such as overreacting to an algorithm’s minor mistakes, or by a psychological need to preserve personal agency and professional authority.

Concerns about accountability and the “moral crumple zone” also discourage adoption: human operators often fear they will bear the legal blame for an automated system’s error. To close this trust gap, we suggest that developers prioritize explainability, communicate machine uncertainty, and design systems that allow for shared human-AI control. Ultimately, we argue that calibrated trust—neither blind reliance nor total rejection—is essential for maximizing safety and effectiveness in high-stakes fields like medicine and aviation.
