Referrals to child and adolescent mental health services (CAMHS) in the UK have risen to unprecedented levels in recent years; in 2022, there were 1,425,193 mental health service referrals for children and young people (CYP). As a result, CYP often face long waits after referral – frequently over a month – before being seen by mental health services.
In this healthcare ecosystem, it is vital that clinicians working with CYP can efficiently assess, diagnose, and treat mental health disorders. However, there are relatively few evidence-based standards for determining which CYP should receive a diagnosis – or even consensus amongst clinicians about whether diagnosing CYP is useful, as opposed to simply deciding on and providing the most effective course of treatment. Researchers have therefore investigated whether standardised diagnostic assessments, such as the Development and Wellbeing Assessment (DAWBA), help clinicians make diagnoses for CYP, with evidence from randomised controlled trials (RCTs) indicating that use of the DAWBA increased agreement between DAWBA-generated diagnoses and clinicians’ assessments (Aebi et al., 2012; Ford et al., 2013).
To contribute to a more robust evidence base around the use of standardised assessments in CYP’s mental healthcare, the authors of this paper designed an RCT to assess whether the DAWBA helps clinicians make diagnoses, and whether implementing the DAWBA in UK CAMHS is cost-effective.

A new trial tested whether adding the DAWBA tool to CAMHS assessments would improve diagnosis rates, outcomes, or cost-effectiveness.
Methods
Participants in this RCT were 5–17-year-olds who had been referred to their local CAMHS in eight NHS Trusts across England. Participants were randomly assigned to either a control ‘assessment-as-usual’ group, or a treatment group where they additionally completed an online DAWBA questionnaire, which was used to create a report with algorithm-generated diagnostic predictions.
As this was a pragmatic trial, researchers did not collect data on whether clinicians read the DAWBA reports. However, reminders of the report’s availability were periodically uploaded to the clinical records of those who had completed the DAWBA, and the trial team visited sites to remind clinicians to check it. The primary outcome was diagnosis of an emotional disorder within 12 months of randomisation. Secondary outcomes included both participant-related outcomes, such as symptom levels, and service-related outcomes, such as discharge rates.
Results
A total of 1,225 participants were recruited (42% male, 86% White). Mean age at randomisation was 11.9 years in the intervention group and 12.0 years in the control group (SD = 3.1 years).
After a small amount of attrition (<1%), there were 610 participants in the DAWBA group and 609 in the control group. For those aged 5–11, parents completed the outcome measures; for 11–17-year-olds, parents and/or the CYP could complete them, with parents as the primary respondent and CYP as the secondary respondent for 11–15-year-olds, and vice versa for 16–17-year-olds. Outcome questionnaire completion at 12 months was 77% amongst caregivers and 62% amongst CYP; 80% of those in the DAWBA group completed the DAWBA questionnaire.
Primary outcome
Sixty-eight participants (11%) in the DAWBA group received a diagnosis of an emotional disorder within 12 months of randomisation, versus 72 (12%) in the control group – no significant difference. Stratifying the results by sex and age did not reveal any differences between the control and DAWBA groups.
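As a rough sanity check on these headline figures (a simple unadjusted comparison; the paper’s formal analysis adjusts for the randomisation stratification factors), the raw proportions work out as:

\[
\frac{68}{610} \approx 11.1\%, \qquad \frac{72}{609} \approx 11.8\%, \qquad \text{risk difference} \approx -0.7 \text{ percentage points,}
\]

a gap far too small to be distinguishable from chance in a sample of this size.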
Secondary outcomes
There were no differences between the control and DAWBA groups in any of the participant-related secondary outcomes, including CYP and parental depression or anxiety symptoms.
There were also no differences between the control and DAWBA groups in any of the service-related secondary outcomes, including diagnosis of an emotional disorder within 18 months of randomisation, as well as referral, discharge, and the offer and initiation of treatment.
Health economic analysis
The DAWBA cost £10 per participant to administer. Other primary costs were calculated according to NICE guidelines (National Institute for Health and Care Excellence, 2023), with additional secondary costs (e.g., out-of-pocket expenses) also captured.
Some primary and secondary costs were higher in the intervention group (e.g., outpatient care), but overall, differences between the two groups were small and non-significant. There was no significant difference in quality of life between the two groups amongst either the CYP or parents, as assessed via standardised questionnaires (e.g., the EQ-5D), with only small, non-significant differences amongst the CYP across the two quality-of-life measures employed.
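For readers less familiar with NICE-style health economic evaluation, here is a minimal sketch of the standard comparison (illustrative of the general approach, not the paper’s exact specification): mean costs and quality-adjusted life years (QALYs, typically derived from measures such as the EQ-5D) are compared between trial arms via an incremental cost-effectiveness ratio,

\[
\text{ICER} = \frac{\Delta C}{\Delta E} = \frac{\bar{C}_{\text{DAWBA}} - \bar{C}_{\text{control}}}{\bar{E}_{\text{DAWBA}} - \bar{E}_{\text{control}}},
\]

where \(\Delta C\) is the difference in mean cost per participant and \(\Delta E\) the difference in mean QALYs. An intervention is conventionally judged cost-effective if this ratio falls below a willingness-to-pay threshold (NICE typically uses £20,000–£30,000 per QALY gained). With both \(\Delta C\) and \(\Delta E\) small and non-significant here, the trial provides no convincing evidence that the DAWBA would clear such a threshold.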

Clinicians didn’t diagnose more emotional disorders when given DAWBA reports – and no service or cost improvements were found.
Conclusions
In this study, administering the DAWBA questionnaire to CYP and their parents did not improve the rate at which clinicians diagnosed emotional disorders, nor did it improve mental health, service-related, or economic outcomes. These results indicate that simply introducing the DAWBA into the CAMHS assessment process may not, on its own, lead to clinical or economic benefits for services. However, rates of diagnosis in this sample of CYP referred to CAMHS were similar to rates of diagnosis in the general population, suggesting that disorders may have been under-diagnosed.

Real-world implementation meant uptake of the DAWBA was patchy – so did the tool fail, or was it ignored?
Strengths and limitations
Strengths
- The authors employed a rigorous methodology. This is a large study compared with previous RCTs evaluating standardised diagnostic assessments (e.g., Aebi et al., 2012), meaning the study was highly powered (i.e., more likely to be able to detect changes associated with the DAWBA implementation if they were present; see the illustrative power sketch after this list). Randomised participants in both groups were also matched for age, gender, and recruiting site, thus reducing bias.
- The decision to use algorithmic diagnosis predictions in the DAWBA report gives the study ecological validity and makes the results highly applicable to clinical services, as it would not be realistic to have a member of staff perform and review every DAWBA assessment.
- Likewise, the analysis of health economic outcomes allowed the authors to evaluate a critical question facing CAMHS: is it worth introducing new assessments? Delivering cost-effective care is a key consideration for policymakers and commissioners working to improve CAMHS (see e.g., Griffin et al., 2022), and with digital tools often being proposed as a cost-saving solution (Gentili et al., 2022), these results provide crucial evidence that standardised diagnostic assessments may not be worth their implementation cost.
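To illustrate why the sample size matters (a generic two-proportion power calculation, not the trial’s actual pre-specified one), the approximate number of participants required per arm to detect a difference between proportions \(p_1\) and \(p_2\) at two-sided significance level \(\alpha\) with power \(1-\beta\) is:

\[
n \approx \frac{\left(z_{1-\alpha/2} + z_{1-\beta}\right)^2 \left[\, p_1(1-p_1) + p_2(1-p_2) \,\right]}{(p_1 - p_2)^2}.
\]

For example, detecting an increase in diagnosis rates from 12% to 18% at \(\alpha = 0.05\) with 80% power requires roughly 550 participants per arm – close to the ~610 per group achieved in this trial – whereas a much smaller study could only reliably detect implausibly large effects.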
Limitations
- Related to the first strength, statements in clinical notes that were not clearly diagnoses were marked as ‘uncertain’ and excluded from the main analyses. Clinicians adjudicated these uncertain cases, and it would have been informative to know more about the criteria used in adjudication. Including uncertain cases in a follow-up analysis brought the rate of diagnosis closer to what was anticipated from service and audit data, suggesting these cases may have clinical validity. Given the thorough adjudication these cases underwent, and given (non-significant) patterns of a higher proportion of uncertain cases in the DAWBA group (28% vs. 22%) as well as higher rates of some types of referral acceptance in the DAWBA group, it would have been interesting to see whether diagnosis rates differed between the groups once uncertain cases were included. However, as this RCT was pre-registered, such additional analyses may not have been possible within the pre-specified analysis plan.
- The fact that this RCT was structured as a pragmatic trial affects the interpretability of the findings. Specifically, because data were not collected on whether clinicians actually read the DAWBA reports, the results cannot be attributed to a specific explanation; beyond uptake, there is, for example, the question of how much value clinicians assign to the DAWBA specifically, and to algorithmic processes more generally. Nevertheless, this study answers a useful question: given a realistic implementation strategy, wherein clinicians may or may not use information from a standardised assessment available to them, does the existence of that assessment increase the rate of diagnosis in services?

The implementation strategy used in this pragmatic RCT was realistic to how the DAWBA might be applied in services, but this makes it difficult to answer the question: why didn’t using the DAWBA increase the rate of diagnosis?
Implications for practice
In addition to the primary results, there were several informative findings in this study. For example, 80% of those invited to complete a DAWBA did so, and data from the process evaluation (see Thomson et al., 2025) indicated that the CYP and families who completed the DAWBA found it useful. This indicates that introducing standardised diagnostic assessments may be valued by service users, even if it doesn’t necessarily change the rate of diagnosis.
However, diagnosis rates in this study were similar to those in the general population (Sadler et al., 2018), which indicates that clinicians in this study may have been under-diagnosing mental health disorders. This finding indicates the presence of larger issues in CAMHS, including the fact that not all CAMHS clinicians view giving CYP a diagnosis as a useful part of clinical practice, an issue which clinicians highlighted directly in qualitative interviews for the process evaluation (Thomson et al., 2025). In this time of high service demand, the threshold for diagnosis – as well as for accepting a referral, and other outcomes – may be overly high, excluding CYP who would benefit from receiving care.
In theory, standardised assessments give clinicians more information with which to make diagnoses, and despite these results, they may still have something to offer clinical services. Because this was structured as a pragmatic trial, however, it is difficult to know whether the DAWBA itself or its algorithmic predictions failed to offer helpful information, or whether clinicians did not value – or even look at – the report.
However, what we can take from this study is that simply throwing another diagnostic tool at CYP mental health services is not necessarily going to be beneficial. Whatever the reason, introducing a new diagnostic tool did not increase the number of CYP receiving diagnoses, reduce the cost of their care, or improve their mental health. More RCTs of other standardised diagnostic assessments – including non-pragmatic RCTs that collect data on clinicians’ use of the assessment – as well as further qualitative studies of clinician and service user attitudes to standardised assessments, will help to answer some of these questions.

While service users may value the opportunity to complete a standardised questionnaire, it may not address key barriers in the pathway to diagnosis. Further research is needed to understand this.
Statement of interests
No conflicts of interest to declare.
Links
Primary paper
Sayal, K., Wyatt, L., Partlett, C., Ewart, C., Bhardwaj, A., Dubicka, B., … & Montgomery, A. (2025). The clinical and cost effectiveness of a STAndardised DIagnostic Assessment for children and adolescents with emotional difficulties: the STADIA multi-centre randomised controlled trial. Journal of Child Psychology and Psychiatry, 66(6), 805-820.
Other references
Aebi, M., Kuhn, C., Metzke, C. W., Stringaris, A., Goodman, R., & Steinhausen, H. C. (2012). The use of the development and well-being assessment (DAWBA) in clinical practice: a randomized trial. European Child & Adolescent Psychiatry, 21, 559-567.
Ford, T., Last, A., Henley, W., Norman, S., Guglani, S., Kelesidi, K., … & Goodman, R. (2013). Can standardized diagnostic assessment be a useful adjunct to clinical assessment in child mental health services? A randomized controlled trial of disclosure of the Development and Well-Being Assessment to practitioners. Social Psychiatry and Psychiatric Epidemiology, 48, 583-593.
Gentili, A., Failla, G., Melnyk, A., Puleo, V., Tanna, G. L. D., Ricciardi, W., & Cascini, F. (2022). The cost-effectiveness of digital health interventions: a systematic review of the literature. Frontiers in Public Health, 10, 787135.
Griffin, N., Wistow, J., Fairbrother, H., Holding, E., Sirisena, M., Powell, K., & Summerbell, C. (2022). An analysis of English national policy approaches to health inequalities: ‘transforming children and young people’s mental health provision’ and its consultation process. BMC Public Health, 22(1), 1084.
National Institute for Health and Care Excellence (2023). NICE health technology evaluations: The manual. NICE process and methods [PMG36].
Sadler, K., Vizard, T., Ford, T., Goodman, A., Goodman, R., & McManus, S. (2018). Mental Health of Children and Young People in England, 2017: Trends and characteristics. NHS Digital.
Thomson, L., Newman, K., Ewart, C., Bhardwaj, A., Dubicka, B., Marshall, T., … & Sayal, K. (2025). Barriers and facilitators to using standardised diagnostic assessments in child and adolescent mental health services: a qualitative process evaluation of the STADIA trial. European Child & Adolescent Psychiatry, 1-15.