Tammie Foster, PhD
Research Fellow
Flinders University
AUSTRALIA
Dr Tammie Foster is a Research Fellow at Flinders University, working within the Digital Health Research Lab. With both lived and research experience in mental health and autism, her work focuses on developing and trialling innovative digital health programs to improve access to care, particularly for vulnerable and underserved populations.
Dr Foster’s current research includes AI-enhanced evaluations of remorse in sentencing and the co-design of culturally responsive digital throughcare solutions for Aboriginal communities. She is also involved in global collaborations focused on person-centred digital mental health pathways that support individual choice, autonomy, and continuity of care across service systems.
Her PhD research explored the complexities of remorse expression in autistic offenders and led to the development of the Offender Remorse Evaluation (ORE) measure. As an early career researcher, she is passionate about interdisciplinary collaboration, combining cutting-edge technology with community engagement to ensure scalable, evidence-informed solutions that deliver real-world mental health impact.
The rapid evolution of digital mental health interventions has yet to extend fully into forensic psychology and offender rehabilitation, leaving a significant gap in equitable legal assessment. Judicial systems currently rely heavily on subjective human evaluations of offender remorse, which often leads to inconsistent and biased assessments, particularly for neurodivergent individuals such as autistic offenders, whose expressions of remorse may not align with neurotypical expectations.
This presentation introduces the Offender Remorse Evaluation (ORE) measure (Foster, 2025), a novel tool I developed as part of my PhD research to systematically assess remorse indicators in offender testimony. The measure has demonstrated effectiveness in distinguishing between autistic and neurotypical expressions of remorse, addressing a critical gap in legal evaluations and sentencing outcomes. Traditional remorse assessments often rely on facial expressions, verbal cues, and perceived affect, which can disadvantage neurodivergent individuals who may struggle with conventional emotional expression. The ORE measure provides a structured, evidence-based framework to improve the fairness and reliability of these evaluations.
Building on these findings, my ongoing research expands the ORE measure into the digital sphere by integrating natural language processing (NLP) and sentiment analysis to assess remorse more objectively. This involves comparing human evaluations of remorse against AI-generated assessments, enabling a data-driven approach to understanding how remorse is expressed, interpreted, and judged in legal contexts.
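To make this concrete, the minimal sketch below shows how a sentiment-analysis pass might sit alongside human remorse coding; the choice of NLTK's VADER analyser, the excerpt texts, and the binary coding scheme are illustrative assumptions rather than the actual study pipeline.

```python
# Illustrative sketch only: pairs an off-the-shelf sentiment score with a
# human-assigned remorse code for each transcript excerpt. The excerpts,
# codes, and choice of analyser are hypothetical placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER
analyzer = SentimentIntensityAnalyzer()

# Hypothetical human-coded excerpts: 1 = indicator judged present, 0 = absent
human_coded = [
    {"excerpt": "I know what I did was wrong and I take full responsibility.",
     "admission_of_responsibility": 1},
    {"excerpt": "The whole thing just happened; there was nothing I could do.",
     "admission_of_responsibility": 0},
]

for record in human_coded:
    scores = analyzer.polarity_scores(record["excerpt"])
    # 'compound' is VADER's overall sentiment, normalised to [-1, 1]
    print(f"sentiment {scores['compound']:+.2f} | "
          f"human admission code {record['admission_of_responsibility']} | "
          f"{record['excerpt']}")
```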
By testing ChatGPT-generated responses against human-coded transcripts, this research explores whether AI can accurately and fairly assess remorse indicators, such as admission of responsibility, self-transformation, and explicit/implicit remorse expressions. The findings will refine the ORE measure by identifying patterns in language use, emotional tone, and narrative structures associated with genuine remorse. This process also offers insights into bias mitigation, ensuring that digital tools do not reinforce existing prejudices in forensic assessments.
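One way to quantify how closely AI-generated assessments track human coding is chance-corrected agreement per indicator; the fragment below is a minimal sketch assuming both sources produce binary present/absent codes (the example codes are hypothetical).

```python
# Minimal sketch: agreement between hypothetical human and AI codes for one
# remorse indicator, using raw agreement and Cohen's kappa (chance-corrected).
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary codes for "admission of responsibility" across ten excerpts
human_codes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
ai_codes    = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(human_codes, ai_codes)
raw_agreement = sum(h == a for h, a in zip(human_codes, ai_codes)) / len(human_codes)

print(f"Raw agreement: {raw_agreement:.0%}  Cohen's kappa: {kappa:.2f}")
```

Kappa discounts the agreement expected by chance, which matters when an indicator is present in most excerpts; reporting it separately for each indicator also makes it easier to see where human and AI judgements diverge.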
Key Aspects Covered
AI-driven transcript analysis: Investigating how ChatGPT-generated responses compare with human evaluations in detecting remorse indicators.
Refinement of the ORE measure: Enhancing AI’s ability to assess remorse indicators (a brief illustrative sketch follows this list), including:
- Admission of responsibility (acknowledgment of wrongdoing).
- Self-transformation (evidence of personal growth or rehabilitation).
- Explicit expressions (direct statements of remorse).
- Implicit expressions (indirect indications of remorse).
Implications for digital mental health: Strengthening AI-assisted decision-making in forensic and clinical settings, supporting fairer rehabilitation pathways through objective remorse evaluation.
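The sketch referenced above shows one way these indicators could be represented as a structured record and put to a large language model for scoring; the prompt wording, the 0-2 rating scale, the model name, and the OREIndicators record are placeholder assumptions, not the published ORE protocol.

```python
# Illustrative only: a structured record for the remorse indicators and a
# placeholder prompt asking an LLM to rate each one on a 0-2 scale.
from dataclasses import dataclass
from openai import OpenAI

@dataclass
class OREIndicators:
    admission_of_responsibility: int  # acknowledgment of wrongdoing
    self_transformation: int          # evidence of personal growth or rehabilitation
    explicit_remorse: int             # direct statements of remorse
    implicit_remorse: int             # indirect indications of remorse

def build_prompt(testimony: str) -> str:
    return (
        "Rate the following offender testimony on each indicator from 0 (absent) "
        "to 2 (clearly present): admission of responsibility, self-transformation, "
        "explicit remorse, implicit remorse. Reply with four numbers only.\n\n"
        f"Testimony:\n{testimony}"
    )

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": build_prompt("I accept that what I did was wrong ...")}],
)

reply = response.choices[0].message.content
ratings = [int(tok) for tok in reply.split() if tok.isdigit()]
print(OREIndicators(*ratings) if len(ratings) == 4 else f"Unexpected reply: {reply}")
```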
This presentation highlights the interdisciplinary collaboration required to ensure ethical AI adoption in forensic psychology, emphasising:
- Bias mitigation in AI-based remorse assessments.
- Neurodiversity considerations in digital forensic evaluations.
- Judicial fairness in rehabilitation strategies for vulnerable populations.
By leveraging AI-powered digital solutions, this research advances mental health equity, ensuring that neurodivergent voices are fairly represented in legal and rehabilitative settings. It offers a scalable, data-driven approach to remorse evaluation, with potential applications in sentencing decisions, parole assessments, and rehabilitation programs. Ultimately, this work contributes to a more inclusive and just legal system, in which remorse is evaluated through scientifically validated, bias-aware methodologies rather than subjective perception alone.