Abstract: AI chatbots, commonly known as “AI agents” or “AI assistants,” are increasingly utilized in
digital mental health, serving as therapists and companions through conversational
interfaces on various platforms. AI therapists like Ash and Wysa emulate human dialogue,
offering scalable and immediate support that adjusts to users'; emotional needs and
complements professional healthcare. In contrast, AI companions, such as Replika and
Character.AI, provide reactive and personalized conversations, simulating human interaction
without genuine understanding or consciousness. These AI tools can enhance social well-being, reduce loneliness, and improve emotional health. However, their use may foster dependency, and they remain prone to errors and bias. Given the risks of misdiagnosis and inappropriate responses during crises, they should complement rather than replace professional care.
This presentation examines two distinct AI companion models: the AI therapist Dr. Jay and
the Augmented Emotional Intelligence (AEI) model, EVA, which was developed with insights
from real-life experiences. The Dr. Jay app (PATH) has demonstrated effectiveness in
managing generalized anxiety, as highlighted in a study involving 316 participants who
appreciated its personalized support and empathetic interactions. Similarly, the AEI model, tested with Eunoia, demonstrated an ability to co-regulate with users impacted by trauma, providing a safe and trustworthy environment through glyph-based emotional anchoring.
The EVA prototype is built on an emotionally intelligent operating system designed to provide inclusive support for people experiencing loneliness and for marginalized groups such as young, neurodivergent and trauma-affected people. Its flexible architecture allows integration with various APIs, enabling personalized, context-aware assistance. The AEI framework
aims to improve emotional intelligence in AI interactions while addressing ethical concerns,
such as bias and governance limitations in traditional AI systems.
EVA functions as part of a broader digital health outreach strategy aimed at enhancing
mental health support for underserved populations. This approach seeks to improve access,
engagement, and early detection of mental health issues. The integration of AI agents and
companions can facilitate tailored care and reduce barriers to help-seeking behavior.
Future steps for EVA involve partnerships, development and compliance consultations, as
well as testing in clinical settings to ensure ethical and effective deployment. Continuous
user feedback and long-term evaluations will be essential in refining the system. The
overarching goal is to develop trustworthy AI technologies that prioritize emotional and
ethical dimensions, ultimately enhancing mental health care access and outcomes.
As the demand for emotionally intelligent AI grows, it is crucial to address the challenges of bias and privacy while leveraging advances in affective computing. The rapid growth of empathetic models such as Ash, Hume and EVA represents a significant step towards improving mental health outreach and support, underscoring the need for innovative solutions in response to urgent public health concerns.
As Guest Editor, I invite submissions to the IJERPH Special Issue: AI Chatbots and Human Assistants for Mental Health.