We take a look at the emerging field of AI therapy, exploring both its potential and the substantial risks it poses to vulnerable users. We acknowledge the appeal of AI for mental health, given its accessibility and scale, and note early positive results in controlled settings. However, the bulk of the discussion focuses on the ethical and psychological dangers of relying on unregulated AI, citing vivid failures and real harm, including cases where chatbots contributed to suicidal behavior or provided harmful advice about eating disorders. We introduce emerging frameworks such as the Cognitive Susceptibility Taxonomy (CST) and the Robo-Psychology DSM, which categorize risks like dependency, artificial empathy, and unintentional manipulation. Finally, we discuss the current and future state of regulation, arguing that AI must be integrated responsibly as an adjunct to human care, with mandated transparency, safety filters, and strict accountability structures to prevent further harm.
Neural Horizons Substack Podcast
I'm Peter Benson, and I enjoy investigating quantum computing, AI, cyber-psychology, AI governance, and whatever piques my interest at their intersections.
Listen on
Substack App
RSS Feed
Recent Episodes