Machine Learning Can Tell If You’re A Musician By Monitoring Your Brain Responses

  • How your brain reacts while listening to music can reveal whether you’re a musician or not. 
  • Researchers utilized fMRI brain data and applied different computational methods to extract special features of the presented music. 
  • The machine learning model predicted the musicianship of listeners with 77% accuracy. 

Machine learning has seen many astonishing advances in the last couple of years. Some of them may lead to breakthroughs in technology used by billions of people. The field of artificial intelligence is developing very quickly, and while it holds significant promise, we are also seeing many challenges in the rapid push to integrate these systems into bigger platforms.

Recently, scientists at the University of Jyväskylä and the AMI Center (Finland) and Aarhus University (Denmark) developed a system that can reveal whether you’ve received musical training by monitoring your brain responses while you listen to music.

By applying digital music analysis and machine learning methods to brain imaging data collected while participants listened to music, the research team was able to tell, with significant accuracy, whether the listeners were professional musicians or not.

The Study

The research utilized fMRI (functional magnetic resonance imaging) brain data – the team gathered data from 36 musicians and non-musicians (18 of each) while they listened to music from different genres. Different computational methods were then applied to extract the special features of the presented music.

Rather than relying on static representations of brain activity, it is better to model how tunes and melodies are processed in the brain over time. This is exactly what the researchers did: they took temporal dynamics into consideration, which improved the results remarkably. As the final step of modeling, they used machine learning techniques to build a model that accurately estimates musicianship from a combination of different brain regions.
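The study works this out with full fMRI pipelines, but the core idea of swapping a static summary for a dynamic one can be sketched in a few lines. In the sketch below, the array names, the 300-sample length, the 2-second sampling and the 3-sample hemodynamic lag are illustrative assumptions rather than values from the study, and random data stands in for real recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: one subject, one brain parcel, one musical feature, both sampled
# once per fMRI volume (e.g. every 2 seconds) for 300 volumes.
bold = rng.standard_normal(300)      # parcel-averaged BOLD time series
feature = rng.standard_normal(300)   # e.g. a pulse-clarity time course of the music

# Static summary: a single number, the mean activation over the whole piece.
static_summary = bold.mean()

# Dynamic summary: how closely the parcel's activity tracks the musical feature,
# after shifting the feature by an assumed hemodynamic delay of 3 samples (~6 s).
lag = 3
tracking = np.corrcoef(feature[:-lag], bold[lag:])[0, 1]

print(f"static mean activation:        {static_summary:.3f}")
print(f"dynamic stimulus tracking (r): {tracking:.3f}")
```

Tracking scores of this kind, computed for every region and every musical feature, give a classifier a description of how the music is processed over time rather than a single activation snapshot.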

Brain regions best predicting musicianship (nine discriminative regions) | Credit: JYU

The regions that best predict musicianship lie predominantly in the temporal and frontal areas of the right hemisphere of the brain. They were found to be associated with attention and engagement, the processing of musical conventions, and sound features such as tonality and pitch.

The results show the noticeable impact of musical training on neural responses to music, strong enough to distinguish musicians’ brains from non-musicians’ brains even after excluding other factors such as music familiarity and preference.

Reference: Scientific Reports | doi:10.1038/s41598-018-19177-5 | University of Jyväskylä

Machine Learning Model

Whole-brain fMRI data were collected from both musicians and non-musicians while they listened to three musical pieces from different genres. Six musical features, representing high-level (tonality and rhythm) and low-level (timbre) aspects of music perception, were computed from the acoustic signals.
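The article does not name the audio-analysis software, so the sketch below uses the open-source librosa library purely as an illustration of how tonal, rhythmic and timbral time courses can be computed from an audio file and resampled to the fMRI sampling rate. The file name, the 2-second repetition time and the particular descriptors (which are not the paper’s exact six features) are assumptions.

```python
import numpy as np
import librosa

# Load the stimulus audio (the path is a placeholder).
y, sr = librosa.load("stimulus.wav", sr=22050)

hop = 512          # analysis hop size in samples
tr_seconds = 2.0   # assumed fMRI repetition time

# Low-level (timbre): spectral centroid ("brightness") and RMS energy per frame.
brightness = librosa.feature.spectral_centroid(y=y, sr=sr, hop_length=hop)[0]
energy = librosa.feature.rms(y=y, hop_length=hop)[0]

# Rhythm: onset strength as a rough pulse-related time course.
pulse = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop)

# Tonality: frame-wise peakedness of the chroma vector as a crude key-clarity proxy
# (a single dominant pitch class suggests a clear tonal centre).
chroma = librosa.feature.chroma_cqt(y=y, sr=sr, hop_length=hop)
key_clarity = chroma.max(axis=0) / (chroma.sum(axis=0) + 1e-9)

def to_tr(series, sr=sr, hop=hop, tr=tr_seconds):
    """Average frame-wise values within each TR window to match the fMRI sampling."""
    frames_per_tr = int(round(tr * sr / hop))
    n = len(series) // frames_per_tr
    return series[: n * frames_per_tr].reshape(n, frames_per_tr).mean(axis=1)

columns = [to_tr(s) for s in (brightness, energy, pulse, key_clarity)]
n_volumes = min(len(c) for c in columns)
features = np.column_stack([c[:n_volumes] for c in columns])
print(features.shape)  # (number of fMRI volumes, number of musical features)
```

Each column of `features` is one musical descriptor sampled at the same rate as the brain data, which is what makes the encoding and decoding steps described below possible.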

Classification into non-musicians and musicians was then carried out based on the parcellated fMRI time series and the musical features. The machine learning model predicted the musicianship of listeners with 77% accuracy, using 9 regions that include frontal and temporal cortical regions and the cingulate gyrus.
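Neither the classifier nor its exact inputs are spelled out in the article, so the following is only a minimal sketch of how a subject-level classification with leave-one-subject-out cross-validation could be run in scikit-learn. The random matrix stands in for one descriptor per subject and per candidate region (for instance, how strongly each region tracks the musical features), and the linear support vector machine is an assumed choice.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 36 subjects (18 musicians, 18 non-musicians), one descriptor per candidate region.
n_subjects, n_regions = 36, 9
X = rng.standard_normal((n_subjects, n_regions))  # placeholder region-wise scores
y = np.array([1] * 18 + [0] * 18)                 # 1 = musician, 0 = non-musician

# Standardize the features and fit a linear SVM, leaving one subject out at a time.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

# With placeholder random data this will sit near the 50% chance level;
# the study reports 77% with real region-wise descriptors.
print(f"cross-validated accuracy: {scores.mean():.2f}")
```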

Accuracy

Notable accuracy was obtained for limbic, auditory and motor regions, most significantly for the medial orbitofrontal region, the right superior temporal gyrus and the anterior cingulate cortex. The impressive 77% accuracy figure was achieved by combining different musical features for voxel-wise fMRI encoding through linear regression.
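The encoding step named here, predicting each voxel’s response from the combined musical features with linear regression and scoring the fit on held-out time points, can be sketched as follows. The array shapes, the simple train/test split and the correlation-based score are assumptions used for illustration, with random data in place of real recordings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

n_volumes, n_features, n_voxels = 300, 6, 500
features = rng.standard_normal((n_volumes, n_features))  # musical feature time courses
bold = rng.standard_normal((n_volumes, n_voxels))        # voxel-wise BOLD time series

# Fit on the first part of the run and evaluate on the rest (a simple split for illustration).
split = int(0.8 * n_volumes)
model = LinearRegression().fit(features[:split], bold[:split])
predicted = model.predict(features[split:])

def column_corr(a, b):
    """Pearson correlation between matching columns of two arrays."""
    a = (a - a.mean(axis=0)) / a.std(axis=0)
    b = (b - b.mean(axis=0)) / b.std(axis=0)
    return (a * b).mean(axis=0)

# Encoding accuracy per voxel: correlation between predicted and observed held-out BOLD.
voxel_accuracy = column_corr(predicted, bold[split:])
print(f"mean voxel-wise encoding accuracy: {voxel_accuracy.mean():.3f}")
```

Averaging such voxel-wise accuracies within anatomical regions yields one score per region and subject, which is the kind of input the classification sketch above assumes.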

When it comes to decoding musical features from the blood-oxygen-level-dependent (BOLD) time series, notable accuracy was obtained for rhythmic and timbral features in the majority of listeners. However, accuracy for the high-level key feature varied among listeners, which indicates high inter-participant variability in neural processing.
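Decoding runs the regression in the opposite direction: the brain signal (here a parcel-averaged version) is used to predict each musical feature’s time course, and the held-out correlation becomes that listener’s decoding accuracy for the feature. Ridge regression is used below because there are many brain predictors; that regularization choice, like the shapes and the random placeholder data, is an assumption.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_volumes, n_parcels, n_features = 300, 100, 6
bold = rng.standard_normal((n_volumes, n_parcels))       # parcel-averaged BOLD series
features = rng.standard_normal((n_volumes, n_features))  # musical feature time courses

# Fit a regularized linear decoder on the first part of the run, test on the rest.
split = int(0.8 * n_volumes)
decoder = Ridge(alpha=10.0).fit(bold[:split], features[:split])
predicted = decoder.predict(bold[split:])

# One decoding accuracy per musical feature for this listener.
for i in range(n_features):
    r = np.corrcoef(predicted[:, i], features[split:, i])[0, 1]
    print(f"feature {i}: held-out correlation r = {r:.3f}")
```

Repeating this for every participant is what exposes the variability the study reports: rhythmic and timbral features decode reliably for most listeners, while the key feature does not.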

Simply put, the novelty of the technique lies in integrating functional neuroimaging measures, obtained in a naturalistic listening environment, with computational acoustic feature extraction, while taking the dynamics of neural processing into account.

Read: Scientists Create Artificial Synapse To Put Brain On A Chip

These are significant contributions that complement the latest brain-reading techniques, which decode what listeners hear from brain activity in realistic environments. The study also deepens our understanding of how the brain processes specific acoustic characteristics, such as intonation in speech.

Written by
Varun Kumar

