Speaking in sign: Neural pathways of language processing and production
Lake Forest College
Lake Forest, Illinois 60045
The anatomy of the brain is divided into numerous areas, each serving multiple functions and containing large numbers of neurons. Completing any function of the mind or body requires the coordinated activity of several of these areas and the neurons within them. When a person has a disability such as deafness, the pathways most commonly used for a given function are damaged or underdeveloped and cannot be utilized. The brain and body must then compensate by recruiting alternate pathways or behaviors that accomplish the same overall function. To communicate, people who are deaf often rely on sign language, replacing hearing with sight and movement. This raises the question of whether a different modality of language still engages the same pathways and areas of language processing.
The networks required for producing and processing both sign and speech have previously been localized largely to the left hemisphere; comprehending both sign language and spoken words activates the left fronto-temporal region. Because signed and spoken language differ in nature, one based on sound and speech and the other on vision and motion, the production pathways can be predicted to differ, since different senses are employed. It is less clear, however, whether the processing of a language's meaning and context is similar or vastly different between the two modalities. The researchers proposed that signed and spoken language are "fundamentally expressions of the same underlying system" and would therefore share some commonalities in their processing pathways. They tested this hypothesis in an experiment comparing language-processing areas in native English speakers and native ASL signers who were deaf from birth. Both groups were tested under a "phrase" condition and a "list" condition. In both conditions, subjects viewed an image consisting of a background color and the silhouette of a colored object. In the phrase condition, subjects named or signed the object together with its color. In the list condition, they named the background color and the object without combining them. This gave the researchers one condition in which subjects processed the information syntactically and one in which the same stimuli evoked no syntax.
Brain activity associated with planning the phrases was measured using magnetoencephalography (MEG), a non-invasive neuroimaging technique that offers millisecond temporal accuracy and spatial localization of the signals generated by neuronal firing. MEG produces magnetic source imaging (MSI) using superconducting quantum interference devices (SQUIDs) as detectors and amplifiers. These sensors detect the magnetic fields produced by tangential current dipoles arising from the electrical activity of neurons. Unlike EEG signals, MEG fields pass through the brain and skull without distortion, yielding higher spatial and temporal resolution. This resolution allowed the researchers to determine which brain regions were sensitive to the phrasal structure of language, and whether the time course of language processing was consistent between sign and speech. Basic phrase building in English has previously been localized to the ventromedial prefrontal cortex (vmPFC) and the left anterior temporal lobe (LATL). As control regions, the angular gyrus (AG) and the left inferior frontal gyrus (LIFG) were analyzed, both known for integrative processing and comprehension in English speakers. Alongside the MEG measurements, a representational similarity analysis (RSA) tested the correlation between the neural data and a computational model, heightening the precision of the analysis.
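The core logic of an RSA can be sketched in a few lines. The idea is to summarize both the neural data and the model as pairwise dissimilarity matrices across conditions, then correlate the two matrices; a high correlation means the model and the brain organize the stimuli similarly. The sketch below uses randomly generated placeholder data (the stimulus counts, feature dimensions, and array names are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: response patterns for 6 stimuli (rows)
# across 50 features (columns). In a real analysis these would
# come from MEG source estimates per condition.
neural_patterns = rng.normal(size=(6, 50))

# Hypothetical model predictions for the same 6 stimuli.
model_patterns = rng.normal(size=(6, 50))

# Representational dissimilarity matrices (RDMs): pairwise
# correlation distance between stimulus patterns, returned as
# condensed upper-triangle vectors (6 choose 2 = 15 entries).
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(model_patterns, metric="correlation")

# RSA statistic: rank correlation between the two RDMs.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RSA similarity (Spearman rho) = {rho:.3f}")
```

Rank correlation is the conventional choice here because it assumes only a monotonic, not linear, relationship between model and neural dissimilarities.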
The study found that phrase composition did not produce a significant difference between sign and speech. In both languages, the "phrase" condition required more planning time than the "list" condition: for signed language, the phrase condition had a mean onset latency of 761 ms versus 717 ms for the list condition; for spoken language, the means were 917 ms and 897 ms, respectively. Based on these data, the researchers also noted that utterance onset was generally faster for sign than for speech. The MEG scans showed that both the LATL and the vmPFC were active during phrase building for sign and speech, consistent with previous research on spoken English. No effects of phrasal structure appeared in the AG or LIFG, indicating that the timing and planning of language comprehension and production are similar between sign and speech. Analysis of the right hemisphere showed no language-specific activity, in line with previous knowledge of language production and comprehension. The RSA results showed combinatory profiles that were similar between ASL and English, as measured against the computational model.
The pathways involved in producing and comprehending any language are extensive and complex, and further research will be needed to reach a holistic understanding of them. Gaps remain in our understanding of the processing of complex language, syntax, grammar, and language structure. This study confirmed previous findings that the vmPFC and the LATL are responsible for the majority of phrasal syntax processing in both signed and spoken language.
Blanco-Elorrieta, E., Kastner, I., Emmorey, K., & Pylkkänen, L. (2018). Shared neural correlates for building phrases in signed and spoken language. Scientific Reports, 8(1), 5492. doi:10.1038/s41598-018-23915-0
Singh, S. P. (2014). Magnetoencephalography: Basic principles. Annals of Indian Academy of Neurology, 17(Suppl 1), S107-S112.
Eukaryon is published by students at Lake Forest College, who are solely responsible for its content. The views expressed in Eukaryon do not necessarily reflect those of the College.
Articles published within Eukaryon should not be cited in bibliographies. Material contained herein should be treated as personal communication and should be cited as such only with the consent of the author.