For Big Data Analytics, Music May Hold the Key

September 12, 2023

Thanks to advances in digital processing and storage, these days it’s easy to amass vast troves of data on practically everything. Making sense of it is less straightforward.

Scientists and scholars at universities, along with corporate researchers in the private sector, typically deploy analytic techniques such as relational databases, machine-learning algorithms and visualizations to interpret and act on information gleaned from their investigations. While effective, these tools aren’t always enough, particularly when applied to large, complex, “unstructured” data caches — think global weather patterns, the structure of proteins, or the birth and death of stars.

Jonathan Middleton, a professor of music theory and composition at EWU, has spent the last decade creating music that provides interpretive insight into a variety of unlikely data sources — perhaps most memorably, the DNA of a downed rosewood tree. Now, in a paper recently published by the journal Frontiers in Big Data, Middleton has demonstrated that similar techniques in “sonification,” or transforming digital data into sound, could be a game-changer in all sorts of big-data interpretation.

The study was conducted over three years with researchers from the Human-Computer Interaction Group at Finland’s Tampere University. Middleton, its lead author, says he and his co-investigators were chiefly concerned with showing that a custom-built “data-to-music” algorithm could enhance engagement with complex data points (in this instance those collected from Finnish weather records) that were usually rendered in other forms.

“To do that,” Middleton says, “I had to validate our work with a perceptual study. The main angle was user experience: the idea that if people heard their data with musical sounds, they might be more engaged, spend more time with it, or have deeper connections and unique perspectives.”

Measuring user experience involved collecting survey data from 72 participants. During listening sessions at a computer, participants were asked to complete tasks such as determining whether the sounds they heard represented sunshine or clouds. Other sonification exercises elicited responses to more complicated sound patterns, including melodic interludes corresponding to wind speeds. Finally, study subjects were asked to evaluate their responses to these musical data points using a variety of engagement criteria.
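The mechanics of such a mapping can be imagined in miniature. The sketch below is purely illustrative — it is not the study’s custom data-to-music algorithm — and simply scales a series of hypothetical weather readings onto the notes of a musical scale, the basic move underlying most parameter-mapping sonification.

```python
# Illustrative sonification sketch (not the study's actual algorithm):
# linearly map numeric data values onto pitches of a musical scale.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, middle C upward

def sonify(values, scale=C_MAJOR):
    """Map each value to a scale degree: min -> lowest note, max -> highest."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # guard against constant data
    notes = []
    for v in values:
        degree = round((v - lo) / span * (len(scale) - 1))
        notes.append(scale[degree])
    return notes

# A week of hypothetical temperatures (degrees C) rendered as pitches:
temps = [3, 5, 8, 12, 10, 6, 4]
print(sonify(temps))  # rising temperatures become a rising melodic contour
```

In a real sonification, further musical traits — rhythm, timbre, dynamics — would carry additional data dimensions, which is precisely where the study’s engagement questions come in.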

Analysis of participants’ responses was led by study co-author Jeffery Culver, an EWU professor of business, and his students back in Cheney. The results, says Middleton, were very promising.

“The paper sets a foundation for others to build sonifications of data with musical characteristics. It provides a path that begins to show which musical characteristics are meaningful within certain engagement factors. The International Community for Auditory Display needs a paper like this to move forward,” Middleton adds, referencing the research organization that promotes research in sonification and related areas.

“There has been a gap between those who think sonification can include musical traits and those who think music is problematic,” he says. “Many prioritized function over form: the functionality is more important than the design. However, many studies show the two are interrelated. Humans see improved functionality in things that are attractive in their design.”

Want to learn more?

You can access the full text of the Frontiers in Big Data article here: https://www.frontiersin.org/articles/10.3389/fdata.2023.1206081/full

Jonathan Middleton provides more details about the nature of his work here: https://www.ewu.edu/cahss/fine-performing-arts/music/sessions/composition/

And you can try your own hand at sonification with this free, interactive tool developed by Middleton and his team: https://musicalgorithms.org/4.1/app/#

Update: Jonathan’s work was featured in Forbes magazine: “Listening To Complex Data Doesn’t Have To Be Boring (The overlap of science and art).”