Health equity, as defined by the World Health Organization (WHO), is the ‘absence of unfair, avoidable or remediable differences among groups of people’. In high-income countries such as the UK, significant mental health inequalities persist, driven by factors such as socioeconomic status, gender, and ethnicity. Digital mental health technologies (DMHTs) offer the potential to address these disparities by improving access, enabling early intervention, and delivering personalised, confidential care. However, if not designed and regulated with equity in mind, these tools risk reinforcing existing biases and endangering patient safety. For instance, AI systems used in mental health care have been shown to misclassify users and provide suboptimal care based on race, gender, or socioeconomic background.
Regulators are responsible for ensuring that healthcare products are safe, effective, and accessible. Given the potential harms posed by inequitable DMHTs, equity must be prioritised throughout regulatory processes. However, achieving equity-centred regulation remains challenging due to fragmented regulatory frameworks, poor coordination among stakeholders, underrepresentation of key populations in datasets, and low levels of public trust in digital health systems.
With the high-level aim of maximising equity-centred regulation of DMHTs, a scoping review was conducted with three key objectives: (1) to define digital mental health equity and its subcomponents; (2) to assess how equity is reflected across the product lifecycle in regulatory guidance and standards from key regulators in high-income, largely English-speaking jurisdictions (the UK MHRA, the US FDA, and the EU); and (3) to develop practical toolkits to help regulators and developers integrate equity into their workflows and decision-making processes. The analysis revealed key opportunities for clarification in current regulations and informed a set of actionable recommendations to maximise equity in DMHTs:
- Co-design mitigates bias by grounding development in the needs of end users and building trust with communities.
- Data diversity is crucial to ensuring that AI models perform reliably across different demographic groups. Regulators advocate for representative datasets, inclusive validation studies, and performance analyses across subgroups (illustrated in the sketch after this list).
- Transparency promotes accountability through detailed documentation of data sources and algorithmic limitations, together with public disclosure of AI model characteristics.
- Monitoring must extend beyond deployment, with regular audits and real-world evaluations across the total product lifecycle to identify and address emerging biases.
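To make the subgroup performance analysis mentioned above concrete, the following is a minimal sketch in Python of what such an audit might look like for a binary mental-health screening classifier. The data, the column names (`group`, `y_true`, `y_pred`), the choice of metrics, and the 80% disparity heuristic are all illustrative assumptions for this example; none are requirements drawn from MHRA, FDA, or EU guidance.

```python
# Minimal sketch of a per-subgroup performance audit for a binary
# screening classifier. Data, column names, and the disparity
# threshold are illustrative assumptions, not regulatory requirements.
import pandas as pd
from sklearn.metrics import precision_score, recall_score

# Hypothetical evaluation data: one row per user, with the true label,
# the model's prediction, and a demographic group attribute.
df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "y_true": [1, 0, 1, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 1, 0, 1, 0, 1, 0, 0],
})

rows = []
for group, sub in df.groupby("group"):
    rows.append({
        "group": group,
        "n": len(sub),
        # Sensitivity: of users who needed care, how many were flagged?
        "sensitivity": recall_score(sub["y_true"], sub["y_pred"],
                                    zero_division=0),
        # Precision: of users flagged, how many actually needed care?
        "precision": precision_score(sub["y_true"], sub["y_pred"],
                                     zero_division=0),
    })

report = pd.DataFrame(rows)

# Flag any subgroup whose sensitivity falls below 80% of the best
# subgroup's (an illustrative "four-fifths"-style heuristic, not a
# threshold taken from any regulator's guidance).
best = report["sensitivity"].max()
report["flagged"] = report["sensitivity"] < 0.8 * best

print(report)
```

In practice, an audit of this kind would run on held-out real-world data, stratify by intersecting attributes (for example ethnicity and socioeconomic status together), and feed into ongoing post-market surveillance rather than serving as a one-off pre-release check.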
To operationalise these principles, an equity lens must be applied across the total product lifecycle (TPLC), covering intended purpose, qualification and classification, clinical evaluation, and post-market surveillance, so that specific risks can be identified and mitigated at each regulatory stage. Maximising impact also demands a whole-systems approach, with coordinated collaboration across the digital mental health ecosystem, especially with developers and end users. Additionally, novel regulatory mechanisms, including regulatory sandboxes, provide controlled environments in which to test digital tools for broader applicability across diverse populations.
Closing the digital divide in mental health requires more than innovation: it demands equity-centred regulation, grounded in co-design, data diversity, transparency, and continuous monitoring, from development to deployment.