Computers can’t replace the vital language inputs that caregivers provide, finds research that highlights the importance of social interaction.
Infants learn language from high-quality, responsive interactions with other people, usually their parents, for a host of reasons that research by my team and others is helping us to understand. They don’t learn from apps, television, or videos, even those that are labelled as “educational”.
This reality flies in the face of a common misconception that screen time can be learning time. It’s an understandable mistake. Babies stare at the screen, enraptured by what they see. But that attention and focus are misinterpreted as signals that they’re learning. It’s true that babies are drawn to visually changing fields, colours, sounds and movement, and they find them interesting. But that doesn’t mean they’re processing the language that accompanies the otherwise appealing images and sounds.
“Babies were asked, for example, to point out objects such as a flower. Those babies who had interacted with their mothers knew the word. Those who had watched the DVD didn’t learn the word.”
In a study by Patricia Kuhl, for example, an adult spoke Mandarin to babies raised by English-speaking parents. Other babies instead watched a video of someone speaking Mandarin in precisely the same way. Infants exposed to the live speaker learned to discriminate among the speech sounds of Mandarin. In contrast, those who watched the video learned virtually nothing.
In another study, mothers were asked to take every opportunity during everyday interactions to name things around them for their babies, such as “flower”, “grass” and “water”. Other moms were given access to a DVD that contained the same words and their pictures. They played the DVD to their babies repeatedly. Later, the babies were asked to point out what they had been taught. Babies who had encountered the words in the course of interactions with their mothers had learned them. Those who watched the DVD had not.
Why is social interaction vital to language development?
There seem to be several reasons why social interaction matters, particularly during infancy, for learning language. Combining findings across many studies, a picture begins to emerge that spotlights the importance of responsive and synchronous communications, infants’ understanding of others’ intentions, the rich social cues contained in caregivers’ interactions, and the ways that caregivers gradually modify their language and other behaviors to accommodate infants’ growing skills.
Babies benefit from responsive, synchronous interactions with others
The first reason that social interactions support language learning is the synchrony between what infants do and how caregivers respond. Synchrony between a baby’s actions and perceptions is generally important. When a baby reaches out and touches, say, a doll, touch and look are temporally aligned. In the same way, babies learn language because, when they touch the doll, they hear and gradually learn the word “doll”.
And just like adults, they notice when this synchrony is disrupted, for example, when the soundtrack of a film goes out of sync. One study by Philippe Rochat filmed babies as they kicked their legs so they could see the video of their own actions, mirror-like, in real time. When the playback was briefly delayed, the infants became upset by the asynchronous display.
A critical factor – and one that an app struggles to mimic – is the in-the-moment responsiveness of parents or caregivers to their babies’ interests. For example, a baby might vocalize, gesture towards, look at or reach for a toy truck and—just as they do so—the caregiver might say the object’s name or describe what’s going on. This helps infants learn the words for the objects and events of their world. An app or DVD can’t easily figure out what infants are looking at or touching, so it can’t react in the same way.
“Parents employ an array of skills to help infants to understand… No computer can know so well the infant who is watching the video – parents are better than any algorithm.”
The connection between babies’ actions and the responses of their caregivers is also important at an emotional level. Moms and dads engage their babies in back-and-forth reciprocal interactions, smiling, speaking, and gesturing in turn—and babies love this. In fact, babies can get quite upset when their actions become unlinked from their caregiver’s reactions. Researchers have demonstrated this through what’s called the “still face paradigm”, in which mom is instructed to keep her expression frozen even when the baby coos, babbles or laughs. In the presence of a mother’s frozen behaviors, a baby becomes upset, cries and does whatever it takes to encourage mom to re-engage and respond.
Babies understand that humans have intentions
Another reason that interactions with parents are more meaningful than interactions with apps is that infants recognize that people are distinctive in having intentions and goals. At this early age, they don’t seem to recognize intentionality in characters or in moving images on a screen.
Amanda Woodward’s research showed babies a hand that was reaching for one of two objects. In the experiment, the babies’ gaze was tracked. When the hand reached instead for the other object, the babies suddenly paid attention—they were surprised because they thought the person liked the first object. In contrast, when the toys were grabbed by a mechanical claw, the babies didn’t react—they didn’t care which toy the claw picked up. Social intentions involving live humans, as compared with simple actions, seem to have much more meaning to a baby.
Caregivers offer many supportive cues to infant language learners
A third factor is that parents use an array of skills to help infants to understand. For example, melodic contours in their speech can direct a baby’s focus. Tone and pitch changes align with the word that’s being learned. These modulations are accompanied by physical cues to meaning, such as: ‘Wow, there’s your cup,’ while shaking the cup, or pointing at it or touching it. These social cues tell the baby where to look and what the person is talking about. Looks, gestures, and changes in voice can function much like a spotlight during a play—they focus the baby’s attention on what’s important. It may be possible for a computer to highlight a salient object, but it’s still not as good as a live, three-dimensional person interacting in real time.
Building on existing knowledge
A fourth factor in infant language development is the way a parent—better than an app or a DVD—can build on or scaffold existing language. When a baby says “Ba”, mom might say “Ball”, building on what the baby already knows. This reflects the caregiver’s skill at attuning to the baby’s developmental level. Once babies know that something is a cup or a spoon or a ball, parents don’t keep telling them. Instead, a parent responds with new words that are not yet in the child’s vocabulary. Sentence lengths increase, and grammar is extended. New words are introduced, gradually expanding the child’s knowledge.
No computer knows so well who is watching the video—parents are better than any algorithm. Good practice also develops language by doing much more than using simple imperatives such as “look at that,” “stop” or “listen”. Parents build language by using lexically diverse words about, for example, colours, smells and tastes.
Some people, such as some fathers, may not be so closely attuned to the baby if they spend less time at home, but this might serve an important purpose. Jean Berko Gleason published a fascinating paper based on filming fathers with their babies. She found that, for the most part, moms were with the babies more than the dads, so when the baby said “haa”, mom understood that the infant wanted water. But Dad might not know, so he tended to ask the baby more questions. As a result, the infant modified its language and said “water”. Berko Gleason called fathers “the bridge to the outside world”. Their unfamiliarity and lower levels of attunement meant that they challenged infants to reframe what they said more clearly to fit in with the wider world.
What should parents do?
What’s the message to parents and policy-makers? As a start, it’s important to distinguish between the value of live social interactions and the reputedly informative inputs of apps. However, not all screen time is the same. In fact, there’s good news for parents who can’t be with their children because of travel and other reasons, or for grandparents who live far away. It’s that video chat apps such as Skype can help with language learning.
An important study demonstrated that it’s not video technology per se that delays language learning in infants. The problem is a lack of reciprocity between the baby and the medium. Kathy Hirsh-Pasek conducted a study that examined whether babies could learn from Skype interactions. For example, if babies saw someone on Skype showing them one of their toys, such as a teddy, could the babies learn language? The answer was yes, highlighting that this medium’s interactivity provides better support for language learning than just watching a DVD or an app.
Nonetheless, research highlights the many reasons that parents and caregivers are central to language learning by infants around 1 or 2 years old—and why they don’t learn language by watching TV or viewing DVDs and apps. And children won’t fall behind on their technical skills with computers if they wait till ages 3 or 4.
Research by my team and others on infant language learning suggests that well-attuned human interaction at this early stage helps infants talk and understand the people around them. These are the skills that underpin so much of what they will do and study in the future. We should be careful not to let our fascination with technology (or the fascination of babies) delay or hamper richly responsive, in-the-moment interactions, which are unparalleled building blocks of language development.
Header photo: Derek Σωκράτης Finch. Creative Commons.