Imagine what it would be like if you couldn’t communicate with others. Language is a critical aspect of our lives, and it may be one of the oldest academic pursuits in the world. In fact, the oldest known text on linguistics was written over two millennia ago by the Indian scholar Pāṇini. Far more recently, new technology, such as electrophysiology and neuroimaging, has allowed for a fuller understanding of how language is produced and understood in the human brain.
Historically, the primary method for understanding how the brain produces complex functions was the lesion study: examining how damage to a specific brain area affects a person's abilities. In the nineteenth century, a French physician and scientist named Pierre Paul Broca studied two patients who could not produce speech but could understand speech spoken to them. This disorder is a type of aphasia, a medical condition in which a patient has difficulty producing or understanding language.
Both patients could say only a few words. After they died, autopsies revealed that both had damage to a similar area of the left frontal lobe. This region was evidently critical to speech production, and it has come to be known as Broca's area. When patients cannot produce speech smoothly or quickly but can still understand the meaning of speech, the condition is called Broca's aphasia.
Similarly, another scientist named Karl Wernicke noticed that some of his patients were able to produce speech fluently but had lost the ability to understand the meaning of speech. When these patients spoke, they produced meaningless strings of words. Wernicke found that damage to an area in the left temporal lobe produces these symptoms; that region is now called Wernicke's area, and the condition Wernicke's aphasia. These two discoveries were important in showing that certain brain areas are dedicated to specific behaviors, and they identified left temporal and frontal areas as being especially important for language.
However, as technology improves, scientists are able to dig deeper to understand the subtleties of language in the human brain. One example of such technology involves individuals who come to the hospital with intractable epilepsy: epilepsy that cannot be controlled by medication. When these patients suffer multiple seizures a day, a drastic but sometimes necessary option is to perform surgery to remove the part of the brain causing seizures.
In order to identify this area precisely, patients undergo surgery to place electrodes directly on the surface of the brain, then stay in the hospital for one to two weeks while the electrodes record information about activity in the brain. During this time, if the patient is willing, scientists can play sounds or short movie clips, all while the electrodes are recording. This recording technique is called electrocorticography (ECoG). ECoG allows scientists to gather incredibly detailed information about how specific brain areas respond to speech and what role they play in speech perception and speech production.
Arguably the most amazing aspect of language is how it develops. For many years, scientists assumed that speech and language had to be “built-in” or genetically programmed because infants pick up language so quickly and so effortlessly. At around 2 years old, toddlers are adding up to 10 words a day to their vocabulary! Although we still do not understand everything about how children pick up language so quickly, scientists now think that no language-specific processes are required to learn speech. Instead, general-purpose processes like Hebbian learning are probably used to develop language. Hebbian learning describes how, when two neurons are active at the same time, the connection between them grows stronger.
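The Hebbian rule can be sketched in a few lines of code. This is a hypothetical toy example (the function name, learning rate, and numbers are illustrative, not from any real model):

```python
# Toy sketch of Hebbian learning: "neurons that fire together, wire together."
# The weight between two neurons strengthens only when both are active at once.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the connection only when both neurons fire together."""
    if pre_active and post_active:
        weight += rate
    return weight

w = 0.0
# The two neurons fire together on five occasions...
for _ in range(5):
    w = hebbian_update(w, pre_active=True, post_active=True)
# ...so the connection between them has strengthened.
print(round(w, 1))  # -> 0.5
```

Real Hebbian models use graded activity levels and weight decay, but the core idea is just this: repeated co-activation leaves a stronger connection behind.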
This concept can be applied to word learning as well. For example, think about a mom trying to teach her toddler the word “ball.” Mom might be pointing at the ball, but how does the toddler know exactly what she means? Mom could mean the color of the ball, the size of the ball, the floor, the other toy right next to the ball… This seems like an impossible problem. However, when we start thinking about learning over multiple instances, we realize there is another piece of information.
The first time the toddler hears “ball” and sees mom pointing, he or she doesn’t know what it means. But over time, the toddler hears “ball” and sees a ball many times. The color of the ball might change, the exact shape and size might change, the floor and the other toys might change. The only thing that doesn’t change is the fact that it’s a ball. So over many instances, the toddler learns what a ball is. The connection between the word “ball” and the concept of ball grows stronger, while the other connections fade away. This is just one example of how a simple process can explain a seemingly difficult aspect of language learning.
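The learning story above can be simulated as a toy model (hypothetical code; the features, counts, and update rates are illustrative, not from the cited study): every time the word “ball” is heard, features present in the scene strengthen their link to the word, absent features fade, and over many scenes the only feature that never varies ends up with the strongest connection.

```python
import random

# Toy cross-situational word-learning sketch. Each "scene" pairs the word
# "ball" with the ball itself plus two distractor features that vary.
random.seed(0)
distractors = ["red color", "large size", "floor", "teddy bear", "blocks"]
weights = {}  # association strength between the word "ball" and each feature

for scene in range(50):
    # The ball is always present; the distractors change scene to scene.
    features = ["ball"] + random.sample(distractors, 2)
    for f in features:
        # Hebbian-style update: co-occurrence strengthens the link.
        weights[f] = weights.get(f, 0.0) + 1.0
    # Links to features absent from this scene fade away.
    for f in list(weights):
        if f not in features:
            weights[f] = max(0.0, weights[f] - 0.5)

# The feature most strongly linked to the word is the one that never varied.
best = max(weights, key=weights.get)
print(best)  # -> ball
```

Because the distractors come and go while the ball is present in every scene, only the ball's connection is reinforced every time and never decays, so it wins out over many instances, just as described above.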
These examples, along with other types of experiments and technology, are just beginning to map out exactly how our brains develop to make sense of the complex sounds we call speech so seamlessly.
Image by Jooyeun Lee.
Konnikova, Maria. “The Man Who Couldn’t Speak and How He Revolutionized Psychology.” Scientific American. http://blogs.scientificamerican.com/literally-psyched/the-man-who-couldnt-speakand-how-he-revolutionized-psychology/
“Word Learning Emerges from the Interaction of Online Referent Selection and Slow Associative Learning.” https://www.ncbi.nlm.nih.gov/pubmed/23088341