AI can examine brain activity to ID the music in your ears

The sound of music can speak to the soul in myriad ways, and different kinds of music affect the brain in measurably different ways.

New research published Feb. 2 in Scientific Reports used functional magnetic resonance imaging (fMRI) data and computational algorithms to demonstrate that a musical piece can be identified by observing neurological responses to its characteristic acoustic features.

"Our approach was capable of identifying musical pieces with improving accuracy across time and spatial coverage," said lead researcher Sebastian Hoefle, a doctoral candidate at the Federal University of Rio de Janeiro in Brazil. "Specifically, we showed that the distributed information in auditory cortices and the entropy of musical pieces enhanced overall identification accuracy up to 95 percent."

Researchers investigated the fMRI brain responses of six participants who listened to 40 musical pieces without lyrics spanning various genres, including rock, pop, jazz, classical and folk. Participants had an average age of 31; five of the six were women, all had some degree of musical experience, and all had normal hearing and no history of psychiatric or neurological disorders, according to study methods.

A two-stage decoding model was then used to identify which musical piece each participant had listened to. Beyond identification, the model offered further insight into the spatiotemporal organization of auditory cortex function: it assessed how musical features are represented across voxels, the small volume elements that make up an fMRI scan, according to the researchers.
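To make the two-stage idea concrete, here is a minimal sketch of an encode-then-identify pipeline. The ridge-regression encoder, the correlation-based matcher and all variable names are illustrative assumptions for this article, not the authors' published implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative shapes (hypothetical, not the study's actual dimensions):
# train_features: musical features during training pieces, (n_samples, n_features)
# train_voxels:   fMRI voxel responses to those pieces, (n_samples, n_voxels)
rng = np.random.default_rng(0)
train_features = rng.normal(size=(300, 20))
train_voxels = train_features @ rng.normal(size=(20, 500)) + rng.normal(size=(300, 500))

# Stage 1 (encoding): learn how each voxel responds to the musical features.
encoder = Ridge(alpha=1.0).fit(train_features, train_voxels)

def identify(measured_voxels, candidate_features):
    """Stage 2 (decoding by identification): predict the voxel pattern each
    candidate piece *should* evoke, then pick the candidate whose predicted
    pattern best correlates with the measured brain response."""
    measured = measured_voxels.mean(axis=0)
    scores = []
    for features in candidate_features:
        predicted = encoder.predict(features).mean(axis=0)
        scores.append(np.corrcoef(predicted, measured)[0, 1])
    return int(np.argmax(scores))
```

With two candidate pieces this reduces to a forced choice between two correlation scores, which is how a two-option identification accuracy like the one reported below can be computed.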

Neural signature data gathered from each participant through fMRI were fed into a computer model trained to associate brain activity with the tonality, rhythm, timbre and other acoustic features of each piece.
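The article does not say which software extracted these musical features, but off-the-shelf audio libraries can compute rough stand-ins for them. The sketch below uses the open-source librosa package, and the specific mapping of features to families (chroma for tonality, onset strength for rhythm, MFCCs for timbre) is an illustrative assumption, not the study's actual feature set.

```python
import numpy as np
import librosa

def musical_features(path):
    """Extract rough proxies for the feature families named in the study:
    tonality, rhythm and timbre."""
    y, sr = librosa.load(path, sr=22050, mono=True)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # tonality: 12 pitch classes
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre: spectral envelope
    onset = librosa.onset.onset_strength(y=y, sr=sr)    # rhythm: onset strength
    # Average over time so each piece is summarized by one feature vector.
    return np.concatenate([chroma.mean(axis=1), mfcc.mean(axis=1), [onset.mean()]])
```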

Lastly, the musical pieces were played for participants a second time while the computer identified which piece was being heard. According to study results, it was correct 77 percent of the time when choosing between two options and 74 percent of the time when choosing among 10 options, well above the respective chance levels of 50 and 10 percent.
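For illustration, accuracies like these could be estimated with a loop such as the one below, which reuses the hypothetical identify function from the earlier sketch; the trial structure and names are assumptions, not the study's evaluation code.

```python
def identification_accuracy(trials, n_candidates, rng):
    """Estimate how often the true piece is picked out of n_candidates.
    Chance level is 1 / n_candidates: 50% for two options, 10% for ten."""
    correct = 0
    for measured_voxels, true_features, distractors in trials:
        picks = rng.choice(len(distractors), size=n_candidates - 1, replace=False)
        candidates = [true_features] + [distractors[i] for i in picks]
        # identify() (sketched earlier) returns the index of the best match;
        # the true piece sits at index 0 here.
        correct += identify(measured_voxels, candidates) == 0
    return correct / len(trials)
```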

"The combination of encoding and decoding models in the musical domain has the potential to extend our comprehension of how complex musical information is structured in the human auditory cortex," Hoefle said. "This will foster the development of models that can ultimately decode music from brain activity and may open the possibility of reconstructing the contents of auditory imagination, inner speech and auditory hallucinations."

The researchers believe this approach can inform future neural decoding and reconstruction algorithms.

""

A recent graduate from Dominican University (IL) with a bachelor’s in journalism, Melissa joined TriMed’s Chicago team in 2017 covering all aspects of health imaging. She’s a fan of singing and playing guitar, elephants, a good cup of tea, and her golden retriever Cooper.
