Name
Co-creating Responsible AI for Mental Health with Adolescents
Time
11:40 AM - 11:50 AM (EST)
Description

Can Artificial Intelligence (AI) help youth navigate life's challenges, particularly in relation to mental well-being? This talk explores how we can harness the potential of AI, especially Large Language Models (LLMs), to support adolescent well-being while ensuring that the solutions we develop are safe, equitable, and youth-friendly. A growing number of mental health interventions integrate AI, including chatbots, predictive algorithms, and diagnostic tools. At the same time, general-purpose AI platforms, such as "companion chatbots," are being used by adolescents and young adults (AYA; ages 12–25) to explore or express mental health concerns, even though these tools were not originally designed for that purpose. With rates of mental health challenges rising among young people, many turn to technology for support, and AI holds significant promise for delivering accessible, personalized mental health care. However, despite its growing integration into the adolescent mental health landscape, current ethical frameworks for AI and digital mental health often overlook the unique developmental, psychological, and technological contexts of AYA. Moreover, AI systems are frequently developed without meaningful engagement with youth, particularly those from marginalized groups such as racial/ethnic minorities, LGBTQ+ youth, and socio-economically disadvantaged communities. A lack of Indigenous and race-based data further contributes to harmful bias in AI, which disproportionately affects these populations. As a result, AI technologies are shaping young people's lives without adequately reflecting their diverse needs, perspectives, or well-being. To date, global policy and industry efforts have not sufficiently addressed this gap.

In this talk, I will present findings from my work co-creating a prototype LLM-based assistant with adolescents from diverse backgrounds. The assistant is designed to support healthy behaviors and social skills and is built on the open-source Erasmian Large Language Model (ELM). ELM is grounded in community participation, prioritizes privacy and fair labor practices, and offers greater transparency and educational potential than commercial models such as ChatGPT. I will also discuss my current policy-oriented research at Stanford University and Hopelab in San Francisco, conducted as part of a Commonwealth Fund Harkness Fellowship. This work examines how responsible AI can be better integrated into youth digital mental health through policy innovation, industry transformation, and direct collaboration with young people themselves.

The talk will conclude with key priorities for future research and an open invitation to collaborate on shaping the future of responsible AI in youth mental health.

Caroline Figueroa
Location Name
Regatta Room