Music and Vision

EWU’s Jonathan Middleton explores the potential of “data-to-music” algorithms.

By Charles E. Reineke

What if listening allowed you to “see” something otherwise invisible? To hear, for example, a change in the structure of a protein? A replication error in a strand of DNA?

And what if the sound signal you heard — this audible insight into things unseen — was wonderfully musical? A beautiful harmony indicating, say, the health of your submicroscopic life? Or, alternatively, an atonal indication that something was amiss?

Jonathan Middleton, a composer and professor of music theory at EWU, has long been intrigued by the possibilities of what researchers call “data-to-music,” or D2M, technologies — computer algorithms that allow users to turn data sets into musical compositions. Now, he’s among the world’s most influential figures in this small but growing branch of data analytics.

At its core, data-to-music is an application of what researchers call “sonification,” the process of translating numbers into auditory images.
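
To make the idea concrete, here is a minimal sketch of sonification in Python: a generic linear mapping from numbers to MIDI pitches. It is not Middleton’s algorithm; the note range and rounding choices are illustrative assumptions.

# Minimal sonification sketch: map each value in a series onto a MIDI
# pitch, so rising numbers are heard as rising notes. Illustrative only.

def sonify(values, low_note=48, high_note=84):
    """Linearly rescale a numeric series into MIDI note numbers."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # guard against constant data
    return [round(low_note + (v - lo) / span * (high_note - low_note))
            for v in values]

print(sonify([0.2, 0.5, 0.9, 0.4, 0.7]))  # [48, 63, 84, 58, 74]

Each note number could then be written to a MIDI file or played by a synthesizer, turning the series into a melody.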

OK Computer: Middleton works with music technology major Kenia Flavius, 19, in EWU’s recently upgraded Music Technology Studio.

“For decades we’ve been relying on visual displays of data — graphs, pie charts, tables — and that seems to be the go-to,” says Middleton. “It’s pretty much proven that it works, and there are a lot of fancy ways of doing visual displays now. But over the past, say, two decades, what’s emerged has been something called ‘auditory display’ of data, and this allows scientists, people in business and others to actually hear their data, rather than just see it.”

Middleton composes music that not only renders data audible, but also offers interpretive insights. Over the course of his own two decades of exploration, his sonification journeys have included numerous compositions drawn from unlikely data sources, perhaps most memorably the DNA of a downed rosewood tree.

For many of those years, the seemingly esoteric nature of Middleton’s investigations consigned his findings to relative obscurity. That has changed. These days, he’s attracting the attention of a growing cadre of international scientists and entrepreneurs: the scientists see D2M as a means for analyzing and interpreting molecular-level physical phenomena; the entrepreneurs, as a vehicle for making complex industrial processes more apparent and accessible.

The heart of the matter is information management. Thanks to advances in digital processing and storage, it’s now easy to amass vast troves of data. Making sense of these data, however, is much less straightforward.

Faculty researchers at universities, along with corporate scientists in the private sector, typically deploy analytic techniques such as relational databases, machine-learning algorithms and the aforementioned graphic visualizations to interpret and act on information gleaned from their investigations. While effective, these tools aren’t always enough, particularly when applied to large, complex, “unstructured” data caches — think molecular biology, global weather patterns, the birth and death of stars, or even the behavior of shoppers at the mall.

Among the first scientists to recognize the potential of D2M, or more generally, the idea of sonifying big data, was Robert Bywater, a now retired chemical biologist from the Francis Crick Institute in London. Middleton’s early work on musical algorithms caught Bywater’s attention as something that might provide a new way to interpret the notoriously opaque activity of amino acids, the all-important building blocks of proteins.

“It’s like Alan Turing solving the enigma codes,” he told The New York Times in a 2016 article. “You get a message. You do not understand it. You have to convert it to something you do understand.”

That something, for Bywater, was music. And Jonathan Middleton, with his experience in DNA-data compositions, was just the man for the job. In a paper that Bywater and Middleton later published in the journal Heliyon, the two demonstrated that by assigning numerical values to amino acids’ twists and turns within select proteins — then converting these numbers to notes — they could create a kind of “protein song” that allowed listeners to easily distinguish important structures and processes.
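
A loose sketch of that encoding idea might look like the following; the pitch assignments here are invented for illustration, and the paper’s actual mapping is more elaborate.

# Hypothetical "protein song" encoding: give each residue's local
# conformation a pitch, so structural runs and changes become audible.
CONFORMATION_PITCH = {
    "helix": 60,  # C4 (assignments invented for illustration)
    "sheet": 64,  # E4
    "turn":  67,  # G4
    "coil":  72,  # C5
}

def protein_song(conformations):
    """Translate secondary-structure labels into MIDI note numbers."""
    return [CONFORMATION_PITCH[c] for c in conformations]

print(protein_song(["helix", "helix", "turn", "sheet", "coil"]))
# [60, 60, 67, 64, 72]: repeated notes signal a stable helix; a pitch
# jump marks a structural change a listener can pick out by ear.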

More recently, in a paper published by the journal Frontiers in Big Data, Middleton has demonstrated that similar techniques in sonification could be a game-changer for all sorts of data-intensive interpretations, including those that go well beyond the biological.

That study was conducted over three years with researchers from the Human-Computer Interaction Group at Finland’s Tampere University. Middleton, its lead author, says he and his co-investigators were chiefly concerned with showing that a custom-built D2M algorithm could enhance engagement with additional sets of complex data points (in this instance those collected from Finnish weather records) that were usually rendered in other forms.

Pertti Huuskonen, one of Middleton’s co-authors, is a senior research fellow with the TAUCHI Research Center at Tampere University. He and his colleagues at TAUCHI focus on “human-computer interactions,” or HCI, exploring how such interactions can be optimized in real-world settings. They count a number of Finnish businesses as their clients.

“One goal is to find new ways to convey data to users — not just via displays and the occasional beep or buzz from the phone in your pocket,” he says. “Humans are pretty good at hearing sound: Almost everyone with normal hearing can distinguish loudness, pitch or direction. Trained professionals can focus on dozens of aspects of sound simultaneously — think of a symphony orchestra conductor. Because such sounds are a parallel channel to human brains, one different from visuals, this makes it worth studying in HCI.”

Since the Finnish researchers had already pursued several sonification-related projects, Middleton reached out to them with an offer to collaborate. Huuskonen says the TAUCHI team quickly took him up on it, and soon had secured funding from the Finnish government to pursue the investigations that led to the Frontiers findings. Middleton’s background in musical composition made him a particularly attractive collaborator.

“We thought that a professional composer would bring valuable wisdom on how to deliver information through sound, while still having it pleasant to listen to,” Huuskonen says. “And so it happened that Jonathan arrived to work with us for quite some time.”

For his part, Middleton says the aesthetic sensibilities of his Finnish colleagues — their passion for making D2M sonifications “pleasant to listen to” — was a big part of what made the three-year gig so attractive.

“When I arrived in Finland,” Middleton says, “my first question to them was, ‘Don’t you just want to use any sounds?’ Because that really widens the field of possibilities when hearing data. They said, ‘Oh no, no! We want to hear data with music.’

“I thought, ‘Wow, that’s amazing.’ They don’t just want the most efficient way of hearing their data, they want the full thing. That’s when I realized that design and aesthetics were huge to them.”

The work that became the Frontiers paper was undertaken with the participation of five Finnish corporations, each with an interest in potential D2M solutions.

The key themes, says Huuskonen, involved “finding useful techniques for transforming data into musical structures — such as melodies, rhythms, compositions — and applying these techniques with industrial data from companies we were collaborating with.”

The experimental piece, adds Middleton, was essentially an exercise in confirming auditory display’s real-world feasibility. “First,” he says, “I had to validate our work with a perceptual study. The main angle was user experience: the idea that if people heard their data with musical sounds, they might be more engaged, spend more time with it, or have deeper connections and unique perspectives.”

Measuring user experience involved collecting survey data from 72 participants. During listening sessions at a computer, the study subjects were asked to complete tasks such as determining whether the sounds they heard represented sunshine or clouds. Other sonification exercises elicited responses to more complicated sound patterns, including melodic interludes corresponding to wind speeds.
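
A hypothetical version of such a mapping (the thresholds and parameter names here are assumptions, not the study’s published design) could tie cloud cover to musical mode and wind speed to melodic movement:

# Sketch of a weather-to-music mapping: sunshine vs. cloud cover picks
# the mode, wind speed sets the size of melodic leaps. Illustrative only.

MAJOR = [60, 62, 64, 65, 67, 69, 71]  # C major scale
MINOR = [60, 62, 63, 65, 67, 68, 70]  # C natural minor

def weather_phrase(cloud_cover, wind_speed_ms, length=8):
    """Build a short melodic phrase from two weather readings."""
    scale = MAJOR if cloud_cover < 0.5 else MINOR  # sunny: major, cloudy: minor
    step = max(1, int(wind_speed_ms))              # faster wind, wider leaps
    return [scale[(i * step) % len(scale)] for i in range(length)]

print(weather_phrase(cloud_cover=0.2, wind_speed_ms=3))  # bright, stepwise
print(weather_phrase(cloud_cover=0.8, wind_speed_ms=9))  # darker, leaping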

Finally, study subjects were asked to evaluate their responses to these musical data points using a variety of engagement criteria.

Analysis of the responses was led by a study co-author, EWU’s Jeffrey Culver, a professor of business, and his Eastern students back in Cheney. The results, says Middleton, “were very promising.”

“The paper sets a foundation for others to build sonifications of data with musical characteristics,” he says. “It provides a path that begins to show which characteristics are meaningful within certain engagement factors.”

“The International Community for Auditory Display needs a paper like this to move forward,” Middleton adds, referencing the organization that promotes research in sonification and related areas. “There has been a gap between those who think sonification can include musical traits and those who think music is problematic,” he says. 

Middleton, as a composer and artist, is insistent that music is not at all problematic; that instead, in both form and function, it is a perfect means of making auditory display an even more powerful scientific tool.

“We humans,” he says, “see improved functionality in things that are attractive.”