Your brain is a little like a radio station. Whilst we can’t tune into your precise thoughts, or the specific songs in your head, a recent finding from Takako Fujioka and colleagues takes us one step closer by deepening our understanding of how the brain keeps the beat in music.
Magnetoencephalography (MEG) is one technique used to measure brain activity from the scalp. By measuring changes in magnetic flux all over the head, and using signal-processing analysis, researchers can choose a point (a “virtual electrode”) and estimate the signal generated at that point; it’s like choosing where we want our radio station to be (eg Sydney or London). However, unlike a normal radio station, each virtual electrode broadcasts across a huge range of frequencies (1 Hz to over 200 Hz); so we not only choose the location but, with additional analysis, can also home in on the specific frequency band we are interested in.
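To make the idea of “tuning in” to one frequency band concrete, here is a minimal sketch of the standard approach: band-pass filter a single channel to the band of interest and take its instantaneous power envelope. This is only an illustration of the general technique, not the authors' actual MEG pipeline; the function name and parameters are mine.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(signal, fs, low=15.0, high=25.0):
    """Band-pass a single-channel signal (here to the 15-25 Hz beta band)
    and return its instantaneous power envelope (squared Hilbert magnitude)."""
    nyq = fs / 2.0
    b, a = butter(4, [low / nyq, high / nyq], btype="band")
    filtered = filtfilt(b, a, signal)          # zero-phase filtering
    return np.abs(hilbert(filtered)) ** 2      # analytic-signal power

# Toy signal: a 20 Hz component (inside the beta band) plus a 5 Hz component
# that the filter should suppress.
fs = 600.0
t = np.arange(0, 2.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 5 * t)
power = band_power(sig, fs)
```

Away from the edges, the power envelope sits near 1 (the squared amplitude of the surviving 20 Hz component), while the 5 Hz component is filtered out.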
In this case, the researchers looked at the 15–25 Hz range, known as the “beta band”. They already knew that the amount of beta-band activity decreases around 200 ms after a tone is heard (this is known as event-related desynchronization, or ERD). They were interested in whether a beat that is accented to form a rhythm (either physically accented or subjectively imagined) would show a difference in this activity.
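ERD is usually quantified as the percent change in band power after an event relative to a pre-event baseline, with negative values indicating desynchronization. A minimal sketch of that arithmetic, with window choices and function name of my own invention:

```python
import numpy as np

def event_related_desync(power, fs, event_idx,
                         base_win=(-0.2, 0.0), test_win=(0.1, 0.3)):
    """Percent change in band power after an event, relative to a
    pre-event baseline. Negative values indicate desynchronization (ERD).
    Windows are in seconds relative to the event sample."""
    b0 = event_idx + int(base_win[0] * fs)
    b1 = event_idx + int(base_win[1] * fs)
    t0 = event_idx + int(test_win[0] * fs)
    t1 = event_idx + int(test_win[1] * fs)
    baseline = power[b0:b1].mean()
    return 100.0 * (power[t0:t1].mean() - baseline) / baseline

# Toy power trace: beta power drops from 1.0 to 0.5 starting 100 ms
# after an event at sample 600 (fs = 1000 Hz).
fs = 1000.0
power = np.ones(1200)
power[700:] = 0.5
erd = event_related_desync(power, fs, event_idx=600)   # -50.0 (% change)
```

A drop from 1.0 to 0.5 gives an ERD of −50%, matching the kind of post-tone suppression the study measures.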
Participants listened to a steady pulse every 390 ms, with an accented beat (that is, a louder tone) occurring in two different patterns: a march (ONE two ONE two…) or a waltz (ONE two three ONE…). They also listened to the steady, unaccented pulse and, for a block of trials, had to imagine the accent on the downbeat (“ONE”) of each of these meters as instructed. A few catch trials were thrown in to check that participants were paying attention: a high-pitched tone was played on one of the beats, and participants had to report whether it fell on a downbeat or an upbeat.
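The stimulus structure is simple enough to sketch directly: an isochronous pulse at a 390 ms inter-onset interval, with every second (march) or third (waltz) beat flagged as the accented downbeat. This is a toy reconstruction of the pattern as described above, not the authors' stimulus code.

```python
def accent_pattern(meter, n_beats, ioi=0.390):
    """Onset times (in seconds) and accent flags for an isochronous pulse.
    meter=2 gives a march (ONE two), meter=3 a waltz (ONE two three);
    the 390 ms inter-onset interval matches the study's stimuli."""
    onsets = [i * ioi for i in range(n_beats)]
    accents = [i % meter == 0 for i in range(n_beats)]  # True on downbeats
    return onsets, accents

march_onsets, march_accents = accent_pattern(2, 6)
waltz_onsets, waltz_accents = accent_pattern(3, 6)
```

In the imagery condition the same `onsets` would be played unaccented, with the `accents` pattern supplied only in the participant's head.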
Using this elegant paradigm, Fujioka et al. placed virtual electrodes in the left and right auditory cortices and found that the amount of beta ERD at 200 ms after the beat was similar for physically present and merely imagined accents. There were, however, clear differences between the types of beats, with greater beta ERD after the downbeat than after the upbeat (ie the following beat).
Taking their analysis one step further, they used a technique known as beamforming, a type of spatial filter. Rather than placing a virtual electrode at a chosen point, a beamformer for a specific frequency band (again, in this case the beta band) estimates the most likely sources within the whole brain contributing to the activity measured on the scalp. Whilst there was little difference between perception and imagery in the auditory cortex, they identified a notably larger number of brain areas involved in the more effortful task of imagining the accents rather than simply hearing them. Even within imagery, the more complex waltz rhythm recruited a wider range of sensorimotor and frontoparietal regions than the march condition.
These results extend our understanding of beta-band oscillations, specifically their role in predicting and anticipating timed events. Fujioka et al. suggest the beta-band activity may reflect a translation of timing information into auditory-motor coordination. As such, it may have far-reaching applications in establishing biomarkers for rehabilitation and learning in clinical conditions such as stuttering, motor impairment from Parkinson’s disease or stroke, and potentially even dyslexia in children.
But the reason I find this paper inspiring is its combination of an elegant paradigm, building on a series of previously well-designed studies, and a focus on answering a specific question. This is the scientific process at its best; and whilst we may be a long way from tuning into the song in your head, discovering what makes the downbeat different takes us one step closer.
For more information about the concept behind a #3MinutePaper check out this blog.