- The human brain perceives sounds during sleep, but it is unable to group those sounds into meaningful sequences.
- Scientists used magnetoencephalography to show that speech is not recognized during non-rapid and rapid eye movement sleep.
The ability to learn during sleep, known as hypnopedia, was popularized in the late 1950s. It refers to attempts to convey information to a sleeping person through audio. The concept has never gained broad acceptance, owing to a lack of scientific evidence.
However, some previous studies demonstrated that sleeping humans, as well as sleeping animals, can acquire elementary associations such as stimulus-reflex responses. Still, it is unclear whether more sophisticated forms of learning are possible during sleep.
Now, scientists at the ULB Neuroscience Institute have shown that while our brain can perceive sounds during sleep, it cannot group these sounds into an appropriate order to extract meaning. This grouping ability is present only during wakefulness.
Recording and Monitoring Brain Activity During Sleep
To capture the cerebral activity reflecting the statistical learning of sound sequences, the scientists used magnetoencephalography (MEG), a non-invasive technique for examining ongoing brain activity on a millisecond-by-millisecond basis. They recorded cerebral activity both during wakefulness and during slow-wave sleep, when brain activity is highly synchronized.
Twenty-one young adults took a 90-minute afternoon nap in the MEG scanner. While in unequivocal non-rapid eye movement (NREM) sleep, they were exposed to sounds below the awakening threshold. These sounds were either segmented into groups of three elements (tritones) or randomly ordered.
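The two conditions can be sketched in code. This is a hypothetical illustration of the stimulus logic, not the study's actual materials: the tone labels, pool size, and helper names are assumptions. The key idea is that in the structured stream, within-group order is fixed, so tone-to-tone transition statistics implicitly mark the group boundaries.

```python
import random

TONES = ["A", "B", "C", "D", "E", "F", "G", "H", "I"]  # illustrative pool of 9 tones
TRITONES = [TONES[i:i + 3] for i in range(0, 9, 3)]    # 3 fixed three-tone groups

def structured_stream(n_groups, rng=random):
    """Concatenate randomly ordered tritones; within each group the tone
    order is fixed, so statistical regularities reveal the boundaries."""
    stream = []
    for _ in range(n_groups):
        stream.extend(rng.choice(TRITONES))
    return stream

def random_stream(n_tones, rng=random):
    """Same tones, but with no embedded triplet structure."""
    return [rng.choice(TONES) for _ in range(n_tones)]
```

In the structured condition, a listener who tracks transition probabilities can segment the continuous stream into tritones; in the random condition, no such segmentation is possible.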
Image credit: Babbel Magazine
Although MEG responses showed detection of the individual sounds during sleep, no responses reflecting statistical grouping were detected. During wakefulness, by contrast, MEG responses revealed grouping of the sounds into sets of three elements.
In other words, during sleep only tone-related frequency-tagged MEG responses were detected, indicating perception of individual tones. During wakefulness, all participants showed robust tritone-related responses. These results indicate that the associations embedded in the stream's statistical regularities are not detected during sleep, but are implicitly learned during wakefulness.
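The frequency-tagging logic can be illustrated with a toy model. All parameters here are assumptions for the sketch: tones presented at 3 Hz produce a spectral peak at 3 Hz regardless of learning, while grouping into triplets adds a peak at 1 Hz (the tritone rate). A response modeled on sleep contains only the tone-rate component; one modeled on wakefulness contains both.

```python
import numpy as np

fs = 100.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)    # 30 s of simulated signal
f_tone, f_group = 3.0, 1.0      # tone rate and triplet (tritone) rate

tone_only = np.sin(2 * np.pi * f_tone * t)                   # "sleep-like" response
grouped = tone_only + 0.5 * np.sin(2 * np.pi * f_group * t)  # "wake-like" response

def peak_power(signal, freq):
    """Spectral amplitude at the bin nearest the given frequency."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Grouping shows up only as extra power at the tritone rate:
assert peak_power(grouped, f_group) > 10 * peak_power(tone_only, f_group)
```

Both simulated signals show a strong peak at the tone rate; only the grouped one shows a peak at the tritone rate, which is the signature the MEG analysis looks for.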
In this study, the researchers examined only non-rapid eye movement sleep; it remains to be ascertained whether the finding also applies to rapid eye movement sleep.
Another study examining higher-level auditory parsing across sleep stages found similar neural tracking of stimulus acoustics during wakefulness and sleep regardless of intelligibility. Higher-order linguistic tracking, of words and sentences, was detected for intelligible audio only during wakefulness and was absent during both non-rapid and rapid eye movement sleep.
Since the technique used in this study involves higher-order detection of statistical regularities embedded in audio streams, the researchers predict that such learning will also be absent during rapid eye movement sleep, an educated guess that remains to be verified.
Future investigations of statistical learning using frequency-tagged responses should carefully examine the mechanisms underlying these tone-related responses.