To screen or not to screen – important factors to consider
There has been considerable recent interest in the use of screening tests to assist teachers in identifying children with DLD. This interest is very well-intentioned, and is in part a response to the idea that many children with DLD are ‘missed’ by the education system.
Screening is not without challenges, though, and screening for under-5s is currently not recommended by the US Preventive Services Task Force (Wallace et al. 2015). The reason is that we lack evidence that screening is beneficial – i.e. that identifying children with a screening tool (a) is better than the usual referral methods or (b) results in better long-term language outcomes for children who screen positive.
Screening at school entry might be better, but accuracy of tests varies considerably. Unfortunately, people can publish screening tests without subjecting them to rigorous peer review and therefore important information about the screen’s accuracy can be missing or difficult to interpret.
So before you start screening school-aged children for language disorder, here are some things to look for and think about:
Has the screen been developed from a robust, population sample? In screening studies, size really does matter: you need enough children in the sample to provide a robust representation of the condition you are interested in. Also note, simply demonstrating that a clinical group scores lower than a non-clinical group on a measure is not evidence that it would make a good screening tool. Look for studies with more children in the sample than you have in your school!
What is the lag time between screen and diagnostic test? Particularly in the early years, there can be lots of movement in child language and early language delay is not always predictive of language disorder. Unfortunately, most screening studies provide the diagnostic test at virtually the same time as the screen, so we just don’t know what the long-term predictive value of the screen is. Diagnostic accuracy tends to decrease as the interval between test and screen increases, so be careful.
The key metrics to look for are sensitivity, specificity, AUC (or diagnostic accuracy), and positive predictive value. Here is what they tell you:
- Sensitivity: tells you what proportion of children with language disorder will be identified by the screening test. The closer the figure is to 1, the more children with the condition will be captured by the screen. This DOES NOT mean that all children who screen positive have the disorder though…
- Specificity: tells you what proportion of typically developing children are correctly identified by the screen. The closer this figure is to 1, the fewer ‘false positives’ the test will flag.
- AUC (area under the curve): essentially this gives a value that can be interpreted as the overall ‘diagnostic accuracy’ of the test. Most of the time we might consider an AUC of .80 or better to have ‘acceptable’ accuracy, but you will soon see that once we take the prevalence rate into account, this can yield a very large number of false positives.
- Positive predictive value: You might find a test that reports sensitivity of 100%, but if the false positive rate is high, it will be a pretty useless test for figuring out who might actually have a language disorder. PPV takes this into account to tell you what percentage of children who screen positive are likely to actually have the disorder. The closer this value is to 1, the better.
- Once you have these values, you are in a better position to determine what the impact of screening will be on your school or service. SCALES estimates that ~10% of children at Year 1 have some form of language disorder. Let’s say your school has 3 forms for each year group, from nursery to Year 6, or roughly 630 children. We would therefore estimate that ~63 children have language disorder and ~567 do not. Now I’m going to show you the impact of two screens:
One published and well-used screen reports a sensitivity of .88 and a specificity of .58 (note, this is for the preschool version, but I couldn’t find metrics for the school-aged version). This means it identifies 88% of children who truly have DLD (55/63) but only 58% of children without DLD (329/567). That means that 238 of the typical children will get an amber or red score, even though they don’t have DLD!!
The other figure comes from a published report of a class-based screening tool with an AUC of .84. Treating that as roughly 84% sensitivity and specificity, it correctly identifies 84% of children with DLD (53/63) and 84% of their typical-language peers (476/567). That means that 10 children with DLD are missed, while 91 children without language disorder screen positive!
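If you want to run these numbers for your own school or service, the arithmetic above is easy to reproduce. Here is a minimal sketch: the cohort size (630) and SCALES prevalence (~10%) come from the examples above, while the function name and the rounding to whole children are my own choices.

```python
# Sketch of the confusion-matrix arithmetic behind the two worked examples.
# Assumptions: rounding to whole children; for the second screen, the AUC of
# .84 is treated as if sensitivity = specificity = .84, as in the text.

def screen_impact(n_children, prevalence, sensitivity, specificity):
    """Return (true pos, false neg, true neg, false pos, PPV) for a screen."""
    with_dld = round(n_children * prevalence)   # children who truly have DLD
    without_dld = n_children - with_dld         # typically developing children
    tp = round(with_dld * sensitivity)          # correctly flagged
    fn = with_dld - tp                          # missed by the screen
    tn = round(without_dld * specificity)       # correctly passed
    fp = without_dld - tn                       # flagged despite no disorder
    ppv = tp / (tp + fp)                        # chance a positive is a true case
    return tp, fn, tn, fp, ppv

# Screen 1: sensitivity .88, specificity .58
print(screen_impact(630, 0.10, 0.88, 0.58))   # 55 caught, 238 false positives
# Screen 2: AUC .84 read as 84% sensitivity and specificity
print(screen_impact(630, 0.10, 0.84, 0.84))   # 53 caught, 91 false positives
```

Note what the PPV tells you in each case: even for the more accurate screen, only around a third of children who screen positive would actually have DLD.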
When a condition affects a minority of the population, it is a fact of life that a screening test will flag more people without the problem than people with it. So probably the most critical thing to establish before screening is WHAT WILL YOU DO ABOUT THE FALSE POSITIVES??
If we had endless resource, this might not be such a conundrum, but we don’t – we struggle to provide adequate specialist provision for children we absolutely know have language disorders. So screening has the potential to use up valuable resource on children who don’t need it.
This is not to say that I am against early identification, but I do think it might be premature to advocate screening. We do not know that screening is better than our current systems of support and referral. While we wait for appropriate evidence, I suggest the following:
Speech-language therapists: all referrals are valid referrals because someone is concerned about the child’s development. Not all referred children will have language disorder though or require treatment. Screening is likely to increase referrals, not reduce them.
Teachers: in my experience teachers are very good at knowing when there is a problem, but don’t always attribute this to language. If a child is not making progress in reading or learning, has behaviour problems or is very shy and withdrawn, consider oral language as a possible underlying issue. If the child of concern has a positive family history of speech-language therapy or SEN, or is from a disadvantaged background, definitely refer. There are more signs of DLD in this film: https://www.youtube.com/watch?v=JAsf_Wqjz4g
Finally, we did report in SCALES that many children with DLD were not referred to speech-language therapy. Sometimes that is because teachers did not recognise DLD was the problem. But it also happened because parents were reluctant, or because the waiting lists for evaluation were too long. Therapists, teachers and families working together could help to address this and is likely to be more beneficial in the long run than a blunt screening test.
REFERENCE: Wallace, I., Berkman, N., Watson, L., et al. (2015). Screening for Speech and Language Delay in Children 5 Years Old and Younger: A Systematic Review. Pediatrics, 136(2), e448-e462. DOI: https://doi.org/10.1542/peds.2014-3889