Understanding Sign Language Development: Acquisition of Grammatical Facial Expressions

Sofia Fausone
May 11, 2021

In natural sign languages, facial expressions go beyond communication cues in spoken language to serve a grammatical purpose. Insight into their acquisition helps our understanding of early linguistic development in general. For deaf children who are vulnerable to language deprivation, this information can be critical.

Sofia Fausone, LING 116

Different facial expressions are required for the formation of different words in ASL (4)

The study of sign language acquisition is not only useful for understanding scientific questions about language development, but has the potential to positively impact children in the deaf community. Studies have shown that early exposure to sign language, usually seen where deaf children have deaf parents, correlates with positive cognitive development when compared with late exposure. Unfortunately, deaf children who grow up in households without early exposure to sign language experience language deprivation.

Without linguistic input from a fluent sign language user, deaf children receive only minimal input from spoken language in early life (Henner et al., 2016). The preschool years have been shown to be among the most important periods for linguistic, as well as general cognitive, development, so language deprivation can have a strong impact during this time. In deaf communities, it has been shown to negatively affect cognitive and social function, as well as mental health and substance use (Henner et al., 2016; Black and Glickman, 2006). Several studies have demonstrated that deaf children of deaf parents do not show this type of impairment. With full exposure to sign language from infancy, children experience positive linguistic and cognitive development with lasting effects.

This poses a particular challenge to hearing parents of deaf children. To reduce the risk of language deprivation, should they learn sign language as quickly as possible? Research suggests that this alone may not be enough. Should they enroll their child in a school that uses sign language as soon as possible? Or try speech therapy that focuses on spoken language? Research in the many areas that inform expert advice is ongoing, and not necessarily clear-cut.

Data about ASL schooling for deaf of hearing children (deaf children of hearing parents) may hold a clue. Henner et al. (2016) compiled data from 688 deaf students enrolled in different schools for the deaf across the United States, correlating age of exposure to sign language with performance on ASL judgement and reasoning tests. A majority of the students took a syntactic judgement assessment, and all took an analogical reasoning assessment. Perhaps unsurprisingly, children with earlier exposure to sign language performed better on these tasks, though the relation was not linear. Crucially, the data suggest that performance dropped when age of entry into an ASL school exceeded six years, with an even more considerable dip after twelve years. The impact of first exposure at twelve years old was similar in size to the initial impact of not growing up in a signing household. This type of research can provide useful, actionable information for parents of deaf children, and the more research done, the better expert recommendations can become.

Research into seemingly more linguistically motivated areas of sign language lays the groundwork for future, similarly applicable knowledge. As we learn more about the development of both spoken and sign languages, we can understand how to facilitate language learning in vulnerable communities, like those of deaf children with hearing parents.

Studies about sign language acquisition have been done across a variety of languages, including American Sign Language (ASL), French Sign Language, Japanese Sign Language, and more. Collectively, they have shown that there are significant similarities across different signed languages. As in spoken language learning, there is evidence that acquiring grammatical rules is an abstract process. Furthermore, children’s sign language learning seems to develop similarly among different types of sign languages. Therefore, we can likely view studies in separate sign languages as indicative of sign language acquisition in general.

Infants who are exposed to sign language, usually deaf children of deaf parents, experience many language acquisition milestones that parallel those we see for spoken language. For example, they begin to babble with their hands around the same age that children learning spoken language babble vocally (Mayberry, 2006). Both groups begin to transition to word formation around 10 months, with variations of several months. Due to differences in the structure of spoken and sign languages, there are developmental differences around this age as well.

In sign languages, the three linguistic dimensions are roughly defined by handshape, movement, and location. Interestingly, babies tend to manually babble in one of two spaces (around the face or in front of the body). This seems to be idiosyncratic, and the babble makes more use of handshape parameters. Around the time of their first word, babies are more likely to make handshape errors than movement or location errors. This may suggest higher variation among handshapes and greater difficulty physically constructing and differentiating between them. In spoken language, changes in the larynx and vocal cords coincide with the development and use of certain sounds and phonemes. There appears to be this type of parallel development with signing and motor skills as well.

One area of interest is the development of facial expressions and head movements. Facial expressions are utilized in sign language in ways that expand upon the facial communication we see in spoken languages.

Consider the following video of an ASL signer using facial expressions that carry grammatical and semantic significance. If you don’t know sign language, you may find that the signer’s facial expressions hint at a particular meaning or emotion.

YouTube tutorial on practicing phrases involving facial expression (2)

Now, consider similar signs once you have been told what they mean:

YouTube tutorial on learning facial expression vocabulary (1)

Here, we see clearly that the expressions correlate semantically with the words and emotions they express. But they do not merely aid understanding; they are required by the rules of the sign language (in this example, ASL).

Indeed, grammatical structures like relative clauses, topics, and conditionals all have obligatory facial expression markers (Reilly et al., 1990). Development of non-linguistic, or affective, facial expression is important across all types of languages. For children who are also learning grammatical facial expressions, this adds another level of complexity. Mayberry and Squires (2006) discuss how children learning sign language first indicate negation with a non-linguistic headshake. This happens around a year of age, and negative signs appear around six months later. Notably, they appear without the linguistic headshake of sign language. It seems that facial expressions first develop in the same way they would for children learning spoken language.

Non-manual markers, like grammatical facial expressions, develop after children begin to use negative signs at around two years of age. It takes a full decade more for children to use non-manual markers with the same efficacy as fluent signing adults. This indicates the level of complexity that linguistic facial expression adds to sign language, especially when combined with another form of facial expression that develops earlier.

To understand this complexity, we will look at the three main uses of grammatical facial expression in sign language. First, these facial expressions may act as single lexical items, combining with hand movements to form the meaning of a single word. We saw this use in the videos above.

Facial expressions are also used to modify manual signs, acting as adverbials. For example, puffing out the cheeks is a lexical facial expression that can be added to a manually-signed sentence to describe something as “very big”.

Examples of adverbials; note the expression for “puff” (3)

Mayberry (2006) notes that these types of facial expressions are used only after children can use hand signs of the same meaning. Accordingly, we see that linguistic facial expressions develop after standard facial expressions and signing hand movements are acquired.

Finally, grammatical facial expressions may mark syntactic structures, including conditionals, questions, negation, topics, and relative clauses (Reilly et al., 1990). Expressions can also indicate ordering within a sentence. For example, Liddell (1986) shows that a head thrust accompanies the last sign of an antecedent clause.

Below, we see several examples of grammatical facial expression in statements and questions. Note that head nodding is particularly powerful in distinguishing different meanings:

Examples of grammatical facial expression (4)

Grammatical facial signals are highly complex and have been compared to intonation in spoken language. In contrast with non-linguistic facial expressions, grammatical facial expressions are tightly rule-governed and constrained (Baker-Shenk, 1983). This helps explain why it takes much longer for children to fully acquire them.

To learn these complexities, there is evidence that affective facial expressions aid the development of linguistic facial expressions through shared semantic meaning. For example, the pre-linguistic facial expression for sadness becomes grammatical when used along with the sign for “cry”. Certain syntactic structures formed with facial expressions are much more difficult to acquire. Conditionals in particular have especially elaborate non-manual signals (Reilly et al., 1990). In ASL, a sentence is not interpreted as a conditional without its precisely defined sequence of corresponding facial expressions. Results from Reilly et al. (1990) show that children understand manually signed conditionals before they understand conditionals signaled with facial expressions. In their study of 14 deaf children of deaf parents, the researchers found no comprehension of conditionals signaled by facial expressions before the age of five. After this age, children quickly became proficient at tasks testing this comprehension, and Reilly et al. (1990) cite age eight as the age of mastery for these tasks.

This study also found that the production of grammatical facial features developed after the production of manual conditional signs. Since the same is true for comprehension, the researchers developed a four-step model of non-manual marker acquisition. It begins with the juxtaposition of two unmarked propositions. As grammar develops, the markers that signal conditionals are strictly manual. Then, some non-manual signals appear, only together with their corresponding manual conditional signs. Finally, more non-manual markers appear, and grammatical facial expressions are applied throughout the antecedent clause. It is important to note that other explanations, particularly for differences in the acquisition of affective and linguistic expressions, are viable. This evidence may alternatively suggest that children use an acquisition strategy that inherently differentiates between lexical and syntactic knowledge. That would require modifications to the model above, but would lead to similar conclusions about the complex nature of grammatical facial acquisition.

By looking at the way children at different ages construct sentences, and studying their development over time, researchers are able to develop models like the one above. This type of naturalistic evidence both informs linguistic models and supports the conclusions they lead to. One example supporting the idea that conditionals are first signaled by manual markers comes from the signing of a four-year-old. When prompted to produce a conditional sentence, this child manually signed the word “suppose”. Older children, in similar situations, used grammatical facial markers to create “if, then” statements. The younger child found a different word to substitute for what might more naturally have been expressed as “if”, suggesting that non-manual markers, once developed, simplify such statements (Reilly et al., 1990). Given their complexity and reliance on proficiency with the underlying manual structure, it makes sense that grammatical facial expressions develop only later.

Further evidence suggests that when children begin to use these non-manual markers, they treat them as individual parts of speech (Reilly et al., 1990). These markers seem to be understood and acquired incrementally. In part, this leads to the conclusion that children do not acquire grammatical facial expressions holistically, as they do affective facial expressions. Rather, children seem to learn non-manual markers in a remarkably analytical, componential manner. This is important information about language development, and there are several parallels to spoken language acquisition.

Brown (1973) demonstrated that children learning spoken languages generally use lexical items to make a semantic distinction before they adopt the grammar adults use. For example, a child might use the word “tomorrow” to signal time before learning past tense conjugation and verb agreement. In ASL, we have seen a similar initial reliance on lexical items, with grammatical patterns (such as facial expressions) developing later. The fact that these patterns hold across modes of language that appear quite different from each other suggests an important universal principle of children’s language acquisition.

For sign language in particular, evidence has made it clear that there are natural ages at which children begin to develop more complicated components of their language, like grammatical facial expressions. Reilly et al. (1990) concludes that affective and linguistic facial expressions are not only utilized in very different ways, but are developed very differently as well. After a child has acquired lexical knowledge and begins to learn syntax, non-manual markers develop quickly in an ordered, highly linguistic way. As we have seen, evidence suggests that this happens naturally among deaf children of deaf parents at five years of age. Pinpointing crucial language-learning ages may help hearing parents of deaf children, and the experts advising them, decide how to best avoid language deprivation. This topic is, of course, hugely complicated, and there are alternative medical approaches for deafness that are being studied and developed as well.

Nevertheless, the more information we have about acquisition of sign languages, the better we can support children in the deaf community in reaching their linguistic, and cognitive, milestones.


[Media cited with numbers]

Baker-Shenk, C. (1983). A microanalysis of the non-manual components of questions in American Sign Language. Unpublished PhD dissertation, University of California, Berkeley.

Black, P. A., and Glickman, N. S. (2006). Demographics, psychiatric diagnoses, and other characteristics of North American deaf and hard-of-hearing inpatients. J. Deaf Stud. Deaf Educ. 11, 303–321. doi: 10.1093/deafed/enj042

Brown, R. (1973). A first language. Cambridge, MA: Harvard University Press.

(1) Chsasl. (2017, July 26). Facial Expressions Vocabulary | ASL — American Sign Language. Retrieved from https://www.youtube.com/watch?v=vnap3MRYkL8

(2) Chsasl. (2017, July 28). Facial Expressions — Practice Phrases | ASL — American Sign Language. Retrieved from https://www.youtube.com/watch?v=U1G4GaX1O20

Henner, J., Caldwell-Harris, C. L., Novogrodsky, R., & Hoffmeister, R. (2016). American Sign Language Syntax and Analogical Reasoning Skills Are Influenced by Early Acquisition and Age of Entry to Signing Schools for the Deaf. Frontiers in Psychology, 07. doi:10.3389/fpsyg.2016.01982

Liddell, S. (1986). Head thrust in ASL conditional marking. Sign Language Studies, 52, 243–262.

(3) Learn A New Language: ASL For Beginners. (2015, August 18). Retrieved from https://www.quartoknows.com/blog/quartoexplores/learn-a-new-language-asl-for-beginners

Mayberry, R., & Squires, B. (2006). Sign Language: Acquisition. Encyclopedia of Language & Linguistics, 291–296. doi:10.1016/b0-08-044854-2/00854-3

(4) O, C. (2015, February 11). 5 Ways Sign Language Benefits the Hearing: How ASL Improves Communication. Retrieved from https://www.speechbuddy.com/blog/language-development/5-ways-sign-language-benefits-the-hearing/

Reilly, J., Mcintire, M., & Bellugi, U. (1990). The acquisition of conditionals in American Sign Language: Grammaticized facial expressions. Applied Psycholinguistics, 11(4), 369–392. doi:10.1017/S0142716400009632

(5) UniBergen. (2020, June 26). OpenPose. Retrieved from https://www.youtube.com/watch?v=_avV5W8k7ZI&t=12s