To Predict A Baby’s First Words, Look At What They See

A baby’s most likely first words are based upon their visual experience, report researchers.

Drawing on theories of statistical learning, researchers found that the number of times an object enters an infant’s field of vision “tips the scales” in favor of associating certain words with certain objects.

“We think that children’s first words are predictable based on their visual experience with objects and the prevalence of those objects in their visual world,” says Linda Smith, a professor in the psychological and brain sciences department at Indiana University and senior author of the study.

“Visual memory may be the initial key to getting words stuck on objects—familiar visual objects like table, shirt, bottle, or spoon,” she adds. “It’s an aggregated experience; those very first words may be learned—slowly and incrementally—for a few visually pervasive objects. This may be how infants begin to break into language before their first birthday.”

Late talkers

The study’s results could also help inform interventions for children with delayed speech and other language disorders.


“Difficulty learning words could stem from visual processing problems,” Smith adds. “Children who are late talkers have slow or age-delayed visual processing skills for objects, for example. Children with autism have object-processing problems as well.”

Although many researchers have studied infants’ first words to understand learning, Smith says none has approached the question from the visual side.

“While studying language acquisition from the ‘word side’ may benefit those studying later stages of language learning—at the ages of 18 months to 3 years—it cannot account for how children break into language,” she says.

Under the new theory, which Smith and colleagues call the Pervasiveness Hypothesis, a few highly prevalent objects stand out to infants among the “clutter” of other less frequent objects to become their first words.

Head-mounted cameras

To conduct their study, the researchers looked at videos that showed the visual field of eight children, five girls and three boys, between eight and ten months old, the period before children engage in verbal interactions with parents and caregivers.

Mealtime scenes from the babies' points of view. (Credit: Indiana University)

The videos came from head-mounted cameras worn by the children for an average of 4.4 hours. Caregivers were told the cameras would record the children's daily activities, not words or objects specifically, and they could choose when to activate the camera.

For the study, researchers examined mealtime scenes, defined as any eating by anyone at any time or location: in cars, at playtime, or in a high chair, for example. The recordings yielded 917,207 mealtime frames, with one image sampled every five seconds. Five objects were recorded in each frame, for a total of 745 objects across the recordings.
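To make the tallying step concrete, here is a minimal sketch (not the authors' code) of how object frequencies might be counted from annotated frames; the frame lists and labels below are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical frame annotations (not the study's data): one list per sampled frame
# (one frame every five seconds), naming the objects labeled in that frame.
frames = [
    ["table", "bowl", "spoon", "cup", "chair"],
    ["table", "bottle", "shirt", "plate", "food"],
    # ... roughly 917,000 mealtime frames in the actual study
]

# Count how many sampled frames each object appears in.
object_frequency = Counter()
for objects in frames:
    object_frequency.update(set(objects))  # count each object at most once per frame

# The most pervasive objects are the ones that dominate the infant's visual experience.
for name, count in object_frequency.most_common(15):
    print(f"{name}: seen in {count} frames")
```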

Using an accepted method to index child vocabulary, the researchers then divided the named objects into “first nouns,” which are acquired by half of all 16-month-olds; “early nouns,” which are known by half of all 30-month-olds; and “late nouns,” which are acquired at later stages of learning.

First nouns include words such as table, shirt, chair, bowl, cup, bottle, food, spoon, and plate.

The study’s results revealed a strong correlation between the most frequently appearing objects and “first nouns,” with the top 15 of these words all naming objects that appeared in the images collected for the study.
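The comparison described here could be sketched as follows, assuming object frequencies tallied as above; the category assignments and counts are invented placeholders, not the study's data:

```python
from statistics import median

# Hypothetical category assignments mirroring the first/early/late split from
# vocabulary norms; the names and numbers below are placeholders only.
noun_category = {
    "table": "first", "bottle": "first", "spoon": "first", "cup": "first",
    "fork": "early", "napkin": "early",
    "ladle": "late",
}

# Frame counts per object, as produced by a tally like the one sketched earlier.
object_frequency = {
    "table": 5200, "bottle": 3100, "spoon": 2900, "cup": 2700,
    "fork": 800, "napkin": 450, "ladle": 40,
}

# Group frame counts by acquisition category and compare their typical frequency.
by_category = {"first": [], "early": [], "late": []}
for noun, frames_seen in object_frequency.items():
    category = noun_category.get(noun)
    if category:
        by_category[category].append(frames_seen)

for category, counts in by_category.items():
    print(f"{category} nouns: median frames = {median(counts)}")
```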

“The comparison of first and early nouns was particularly striking, since both sets of object names are acquired quite early in childhood and refer to objects common in households with infants,” says Elizabeth Clerkin, a PhD student in the department of psychological and brain sciences and first author of the study.

“That infants’ visual environment during mealtime consistently involves a very small number of objects—and the names of these high-frequency objects are among those normally learned first by infants—suggests visual experience is doing the heavy lifting in very early word learning,” she adds.

Visual cues

Whether children with speech disorders are failing to pick up on visual regularities in their environment or simply live in households with fewer such regularities, Smith says it is vital to explore the role of both words and vision in language learning.

“Taking account of the visual brings a whole new dimension of word-learning into view,” she adds. “If all you ever worry about is the word side of word-learning, you may be missing half the problem: visual cues that aid language learning.”

In addition to Smith and Clerkin, coauthors of the study are from Indiana University Bloomington and Georgia Institute of Technology. The research appears in Philosophical Transactions of the Royal Society B.

Partial funding came from the National Science Foundation. The study grew out of a larger NSF grant to the university to create a collection of over 500 million images to track the visual regularities in the lives of children from birth through age 24 months.

Source: Indiana University
