
Lipreading and Literacy: How might visual speech perception help reading development?

Lizzie Buchanan-Worster joined the LiLaC lab as an ESRC post-doctoral research fellow in October. In this blog she discusses her previous and current work on the relationship between lipreading and reading in deaf and hearing children.

_______________________

Over the next year, I will be writing papers from my PhD and learning more from the SCALES team about running and analysing longitudinal studies.

The most common question I get when I talk about my work is, “Does that mean you know how to lipread then?”. The answer is, “Yes, everybody does to some extent”. When we think of people speaking, we usually think about what we hear rather than what we see. Most people can understand what others are saying without being able to see them, such as on the phone or from another room. However, what we see when someone is speaking is really important. We may not realise we are using visual information when we hear someone speak, but it can change what we hear. In 1976, two researchers, McGurk and MacDonald, found that if you play someone a /ba/ sound while showing them a video of someone saying /ga/, they combine the two inputs and hear a /da/ sound, an illusion now known as the McGurk effect. You can see this for yourself in the following video.

[Video: demonstration of the McGurk effect]

This illusion is so strong that, even when you know what is being played to you, you can’t help but hear the /da/ sound. Of course, we don’t often encounter illusions like this in everyday life, but the effect shows how important visual information is when we listen to someone speak.

________________________________

To get a sense of how we use visual speech information in everyday life, imagine yourself in a noisy café or bar. Your friend yells over to ask if you want something to drink. To work out what she is saying and reply, you first need to pick her voice out of the background noise and then piece together her message. Both steps are made much easier by combining sound and image: the visual information helps you locate the source of the sound and makes the message more robust against the noise.


Now imagine you are a child who has been deaf from a young age and is trying to learn a spoken language. Although you may still get some sound through hearing aids or cochlear implants, the quality of that sound is very different to what most hearing people experience and depends a lot on the properties of the implant (https://www.youtube.com/watch?v=SpKKYBkJ9Hw). So what you see when someone is speaking becomes especially important, both for understanding them and for learning new words.


On average, deaf children struggle to learn to read. Many people find this surprising because deaf children can see the words on the page. However, what’s written on a page is a representation of spoken language: the shape ‘A’, for example, represents an /a/ sound. We all had to learn this mapping from print to sound when we learnt to read. Being able to break words down into separate ‘sounds’, or build them up from ‘sounds’, is really helpful when children are learning to read. But deaf children have reduced access to the sounds they need to map onto the letters. This makes it harder for them to sound words out letter by letter, because their representations of what the letters stand for are likely to be less robust and well defined than hearing children’s.


I am interested in how both deaf and hearing children use visual speech information to learn about the structure of words as they learn to read. Children who are better at speechreading (lipreading) also tend to be better at reading. Speechreading is difficult because many speech sounds look identical on the lips, as in the words mat, pat and bat. Even so, there is huge variability in speechreading skill, and some people are very successful at understanding speech without any sound at all.

We think that speechreading allows deaf children to build up a representation of the ‘sound’ structure of spoken language, which they can then map onto the letters on the page. Speechreading may help hearing children in a similar way. During my PhD, I found that the correlation between children’s speechreading and reading skills, in both deaf and hearing groups, was partly explained by their phonics skills (the ability to manipulate ‘sounds’ within words). This is consistent with the idea that speechreading helps children build their phonics skills, which in turn help them to read.


However, all the data in this study were collected at a single timepoint, which means we cannot draw conclusions about the direction of these relationships. It may be that children who are better at reading build their phonics skills further through reading, and that those with good phonics skills then become better at speechreading, rather than the other way around. To address this, the next step is to run similar studies longitudinally and test whether these relationships hold over time. I’m looking forward to learning more about this from my time in the LiLaC lab this year!