COMMUNICATION

Powerful and flexible sonification for many applications!

STRAUSS is extremely flexible: it can turn data into sound in many different ways, depending on what you want to communicate. To give an overview of some of the approaches you could take to sonify your data, we've put together this showreel. We are astrophysicists, so most of these examples make use of astrophysics data (with a sprinkling of climate data!). However, data is just data, so we are excited to hear from anyone who wants to take their data and use STRAUSS to add sound to communicate their messages! A few of the examples are explained in more detail further down this page.

Examples: Audio-Visual

Climate Stripes

Here we have used STRAUSS to add sound to the famous "warming stripes" visualisation of temperature data (https://showyourstripes.info/). We see and hear how the surface temperature of the Earth has evolved from 1930 to 2018: there is one click per year of data, and the sound pans from left to right (stereo) from the first to the last year.

At each year, the temperature anomaly data are used to control the intensity and pitch of a synthesised sound: pitch and volume both increase with increasing temperature. There is also a filter cut-off applied, so you only hear a limited range of frequencies at once, below some cut-off frequency. This cut-off frequency increases with increasing temperature, resulting in a richer sound. The overall effect is a louder, richer and higher-pitched sound with increasing temperature, with the sounds directly controlled by the data.
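
To make the mapping concrete, here is a minimal sketch in plain Python of how a yearly anomaly value could drive these parameters. This is illustrative only (it is not the STRAUSS interface), and the anomaly, pitch, volume and cut-off ranges are placeholder assumptions rather than the values used in the piece.

```python
# Illustrative sketch (not the STRAUSS API): map one year's temperature
# anomaly to pitch, volume, filter cut-off and stereo pan position.
import numpy as np

def normalise(anomaly, lo=-0.5, hi=1.0):
    """Scale an anomaly (deg C) to [0, 1]; the range here is an assumed placeholder."""
    return float(np.clip((anomaly - lo) / (hi - lo), 0.0, 1.0))

def year_to_parameters(anomaly, year_index, n_years):
    x = normalise(anomaly)
    pitch = 220.0 * (880.0 / 220.0) ** x    # warmer -> higher pitch (Hz range assumed)
    volume = 0.2 + 0.8 * x                  # warmer -> louder
    cutoff = 500.0 * (8000.0 / 500.0) ** x  # warmer -> higher cut-off -> richer sound
    pan = year_index / (n_years - 1)        # 0 = hard left (first year), 1 = hard right (last year)
    return pitch, volume, cutoff, pan
```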

The data are from the Met Office, using the annual global temperature anomaly data (ensemble medians) taken from: https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html

Thanks to Vecteezy.com for the ocean video background and Ed Hawkins for the inspiration of the climate stripes.

Rotating Earth

Figure 1: Diagram demonstrating our data sonification of the Earth’s rotation. Panel a) shows water covering fraction (left axis) versus longitude (top axis) over two Earth rotations, with a world map projection as a grey underlay. These values are used to calculate the low-pass filter cut-off frequency for the sonification (right axis). Panel b) shows the waveform of the sonification as a function of time. Panels c) and d) demonstrate the effect of the filter on the waveform by zooming into 20-millisecond windows around longitudes where the water covering fraction is approximately highest and lowest, respectively (indicated by vertical lines of corresponding colour in Panel a).

We wanted to sonify sunlight bouncing off the spinning Earth through changing timbre as the Sun passes over water (a “brighter” sound) or land (a “darker” sound). For this, we used data on the covering fraction of water as a function of longitude. The data are from the GEBCO 2021 bathymetry, which assigns water or land to each cell of a 15x15 arcsecond grid across the Earth (Fig. 1). To create the sonification, we started with a sustained musical chord, using the notes G flat 3, D flat 4, E 4 and B 4. Each note was created from a set of three sawtooth oscillators, combined at the target pitch and at frequencies 2% above and 2% below it. These choices provide a harmonically rich sound, which is then manipulated by filtering out frequencies (i.e. subtractive synthesis) based on the water covering fraction data.
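
The chord construction itself is straightforward to sketch with numpy and scipy. The note frequencies below are standard equal-temperament values; everything else (sample rate, duration, normalisation) is an illustrative assumption, not the STRAUSS implementation.

```python
# Illustrative sketch: build one note from three sawtooth oscillators,
# detuned 2% below and 2% above the target pitch, then sum notes into a chord.
import numpy as np
from scipy.signal import sawtooth

FS = 44100  # sample rate (Hz), assumed

def detuned_saw_note(freq, duration):
    t = np.arange(int(FS * duration)) / FS
    note = sum(sawtooth(2 * np.pi * freq * d * t) for d in (0.98, 1.0, 1.02))
    return note / 3.0  # normalise the three oscillators

# G flat 3, D flat 4, E 4 and B 4 (equal temperament, A4 = 440 Hz)
chord_freqs = [185.00, 277.18, 329.63, 493.88]
chord = sum(detuned_saw_note(f, 5.0) for f in chord_freqs) / len(chord_freqs)
```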


The longitude and water covering fraction (Fig. 1a, upper and left axes) were mapped directly to the time in the sequence and the filter cut-off frequency of the chord, respectively (bottom and right axes). The cut-off frequency, above which frequencies are attenuated, was calculated from the water covering fraction using a logarithmic scale. A low-pass Butterworth filter (Butterworth, 1930) with a 24 dB roll-off was used. The conversion of water fraction to frequency cut-off can be seen by comparing the left and right y-axes of Fig. 1a. We note the more jagged, harmonically rich waveform representing a longitude over the Pacific Ocean (Fig. 1c), relative to the smoother waveform representing a longitude over Europe and Africa (Fig. 1d). The filtering mainly changes the timbre of the sound, but a secondary effect on volume is achieved in that the land-dominated regions sound the quietest (see the waveform in Fig. 1b).
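
As a hedged sketch of this step, scipy's Butterworth filter can apply the log-scaled cut-off; a 4th-order filter corresponds to a 24 dB per octave roll-off. The cut-off frequency range below is a placeholder assumption, not the range we used.

```python
# Illustrative sketch: log-scale mapping of water covering fraction to a
# low-pass cut-off, applied with a 4th-order (24 dB/octave) Butterworth filter.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sample rate (Hz), assumed

def water_fraction_to_cutoff(frac, cut_lo=200.0, cut_hi=10000.0):
    """frac = 0 (all land) -> cut_lo; frac = 1 (all water) -> cut_hi, log-spaced."""
    return cut_lo * (cut_hi / cut_lo) ** frac

def lowpass(segment, cutoff_hz):
    sos = butter(4, cutoff_hz, btype='low', fs=FS, output='sos')
    return sosfilt(sos, segment)

# e.g. filter the chord segment for one longitude bin (85% water):
# filtered = lowpass(chord_segment, water_fraction_to_cutoff(0.85))
```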

Example: Spatialised Data

Stars appearing in the night sky

Figure 2: Diagram demonstrating our sonification of the ‘stars appearing’. Panel a) shows the mapping of V-band magnitude (top axis) and B-V colour (left axis) to triggering time in the audio sequence (bottom axis) and musical note (right axis), respectively. The aligned Panel b) shows the waveform produced for a stereo setup, and the triggering times of the 10 brightest stars (dotted lines). The right panel shows the stellar sky chart, with point size and colour indicating brightness and B-V colour, respectively. In our sonification the observer faces south, with the left and right audio channels corresponding to the east and west cardinal directions, respectively.

The audience ‘listen’ to stars that appear around them at the European Southern Observatory’s Very Large Telescope (VLT). The data we used are presented in Fig. 2: the magnitudes, colours and coordinates of stars as viewed from the VLT on the 13th September 2019. We only considered stars with V-band magnitudes <6, to roughly correspond to the detection limit of the human eye.


Each star is represented by a single note on a glockenspiel from one of five pitches: D flat 3, G flat 3, A flat 3, E flat 4 or F 4, with the choice of note based on the star’s colour (specifically, the difference in the star’s B and V magnitude). The reddest stars are assigned the lowest notes and the bluest stars the highest notes. During the sequence each star is heard in an order based on its magnitude, with the brightest stars sounding first and the faintest sounding last. This represents how brighter stars appear first to the human eye after sunset.
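
A minimal sketch of this note assignment and triggering order is shown below. The B-V bin edges are placeholder assumptions (the actual bins used in the piece are not given here); the ordering by magnitude follows directly from the description above.

```python
# Illustrative sketch: bin B-V colour into the five glockenspiel notes
# (reddest -> lowest pitch) and order triggering by V magnitude (brightest first).
import numpy as np

NOTES = ['Db3', 'Gb3', 'Ab3', 'Eb4', 'F4']  # lowest to highest pitch
EDGES = np.array([0.3, 0.6, 1.0, 1.5])      # placeholder B-V bin edges (assumed)

def note_for_colour(b_minus_v):
    """Large B-V (red) -> low note; small B-V (blue) -> high note."""
    bin_idx = np.digitize(b_minus_v, EDGES)  # 0 (bluest) .. 4 (reddest)
    return NOTES[len(NOTES) - 1 - bin_idx]

def trigger_order(v_magnitudes):
    """Brightest stars (smallest V magnitude) sound first."""
    return np.argsort(v_magnitudes)
```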


The stars’ positions were used to determine in which speaker(s) they should be heard. For example, stars directly in front of the ‘observer’ sound in the front speakers in the surround-sound version, or equally in the left and right ears in the stereo version.
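
For the stereo version, this positioning reduces to a pan law. Below is a sketch using an equal-power pan derived from a star's azimuth, assuming the convention azimuth 90° = east (hard left) and 270° = west (hard right); the specific pan law is our illustrative assumption, not necessarily what STRAUSS does internally.

```python
# Illustrative sketch: equal-power stereo panning from a star's azimuth.
# The observer faces south; east maps to the left channel, west to the right.
import numpy as np

def stereo_gains(azimuth_deg):
    """90 = east (hard left), 180 = south (centre), 270 = west (hard right)."""
    pan = np.clip((azimuth_deg - 90.0) / 180.0, 0.0, 1.0)  # 0 = left, 1 = right
    theta = pan * np.pi / 2.0
    return np.cos(theta), np.sin(theta)  # (left gain, right gain), equal power

# A star due south: stereo_gains(180.0) -> (~0.707, ~0.707), equal in both ears.
```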

Example: Virtual Reality

Stars Appearing in the Night Sky

Our STRAUSS code can be used to create immersive, full surround-sound experiences for application in Virtual Reality (i.e., using ambisonics). As an example of this, you can see below the Stars Appearing excerpt from Tour of the Solar System (described above), reformatted for Virtual Reality. In this YouTube video, you can explore the hemisphere of sky by using the arrows in the top left. You should notice that the sounds of the stars can be heard in the correct locations as the stars appear (headphones are highly recommended!). If you have a Virtual Reality headset, you can experience this directly through YouTube VR.

Gravitational Waves

Here we present a Virtual Reality experience, with a sonification of gravitational-wave events from the third LIGO/Virgo observing run. You can hear the events at their approximate locations in the sky. The one-minute video covers the period April 2019 to March 2020. Data from: https://gwosc.org/O3/O3a/ and https://gwosc.org/O3/O3b/