
Nov. 14, 2008 | Research Highlight | Biology

Learning by osmosis

New brain images show subconscious learning in action and could be used to monitor language rehabilitation

Figure 1: Topographic diagrams of the ERPs obtained from EEG recordings, 400 ms after the start of a new tone word. (a) First session of high learners, (b) last session of average learners, and (c) last session of low learners. The similarity between (a) and (b) suggests statistical learning is taking place. By contrast, the activity in (c) suggests the statistical learning is not as effective. Reproduced from Ref. 1 © 2008 Seoul National University

When you listen to someone speaking, it may seem as though the words are segmented by pauses, much like the words on this page are separated by spaces. In reality, however, you hear a continuous stream of sounds that your brain must organize into meaningful chunks. One process that mediates this ability is called statistical learning, by which the brain automatically keeps track of how often events, such as sounds, occur together. Now a team of RIKEN scientists has found a signature pattern of brain activity that can predict a person’s degree of achievement in this type of task [1].

The team, led by Kazuo Okanoya of the RIKEN Brain Science Institute, presented volunteers with a 20-minute recording of an artificial language, which they heard passively in three 6.6-minute sessions. While the recording played, the participants’ brain activity was measured using electroencephalography (EEG). The researchers then analyzed how the EEG patterns related to events in the recorded language.

This language, instead of being composed of pronounceable syllables, contained only tones, similar to keyboard notes. “We used nonsense tone words to detect basic perceptual processes that are independent of linguistic faculty,” explains team member Dilshat Abla. This way, the researchers could focus on the brain-activity signature of general statistical learning rather than on the specific case of language. The recording heard by the participants consisted of six ‘words’ of three tones each, but since the words were played together without gaps, their composition would not have been immediately obvious. The participants were told to relax and listen to the streaming sound, and at the end of the experiment they were tested on which tone triplets came from their recording and which were randomly generated.
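The statistical regularity that makes such a stream learnable can be illustrated with a small sketch. The article does not describe the authors’ analysis method, so the following Python example simply shows the general idea behind statistical segmentation: within a three-tone word the next tone is fully predictable, while across a word boundary it is not. The tone labels, word inventory, and stream length below are invented for illustration, not the study’s actual stimuli.

```python
import random
from collections import defaultdict

# Hypothetical inventory: six three-tone 'words' (invented labels, not the study's stimuli)
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I"),
         ("J", "K", "L"), ("M", "N", "O"), ("P", "Q", "R")]

# Concatenate randomly chosen words into one continuous stream with no gaps,
# mimicking how the tone words were played back to back
random.seed(0)
stream = [tone for _ in range(300) for tone in random.choice(words)]

# Count adjacent tone pairs and how often each tone appears as the first of a pair
pair_counts = defaultdict(int)
first_counts = defaultdict(int)
for a, b in zip(stream, stream[1:]):
    pair_counts[(a, b)] += 1
    first_counts[a] += 1

def transitional_probability(a, b):
    """P(b | a): how predictable tone b is, given that tone a just occurred."""
    return pair_counts[(a, b)] / first_counts[a]

# Within a word, the transition is deterministic: "A" is always followed by "B"
print(transitional_probability("A", "B"))  # 1.0

# Across a word boundary, the next word is chosen at random, so the
# transitional probability is low (roughly 1/6 with six equally likely words)
print(transitional_probability("C", "D"))
```

A learner (or brain) tracking these probabilities can posit word boundaries wherever predictability drops sharply, which is the essence of segmenting a continuous stream without pauses.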

The participants succeeded in this discrimination, showing that they had performed statistical learning without exerting conscious effort. Those who earned average scores on this test showed a distinctive pattern of brain activity in the third listening session. These electrical signatures, known as event-related potentials (ERPs), tended to occur 400 milliseconds after the start of a new tone word. Those who scored lowest did not exhibit these ERPs in any session, suggesting that they were not segmenting the start of each word as effectively (Fig. 1).

The highest-scoring volunteers did show these ERPs, but only in their first session. Abla explains that the effect is “largest during the discovery phase of the statistical structure,” and represents the process rather than the result of statistical learning.

References

  • 1. Abla, D., Katahira, K., & Okanoya, K. On-line assessment of statistical learning by event-related potentials. Journal of Cognitive Neuroscience 20, 952–964 (2008). doi: 10.1162/jocn.2008.20058
