A Sonification of a Growing Black Hole
By R.D. Shepherd, C.M. Harrison, and J.W. Trayford
Overview
We present a sonification of time-series data of X-rays and visible light from the black hole system MAXI J1820+070, during a rapid accretion episode in 2018. This sonification is designed to represent a complex physical process to the general public in a way that is intuitive and accessible. Sound is used to represent the different wavelengths simultaneously, through different timbres and pitches of synthesised sounds and through stereo panning. The visible data can be heard to the listener's left, with each of five different observed bands assigned a separate pitch. The X-ray data is heard to the right and is represented by a 'windy' white-noise sound. Both the pitched and the noisy sounds increase in perceived intensity (controlled by varying a filter cutoff frequency) with increasing visible and X-ray brightness, respectively. This sonification is designed as a public engagement piece, with a secondary goal of making an accessible representation of the data for users who need, or prefer, non-visual methods of communication. The sonification is combined with an animation, which is an artistic impression of the same data. In this article, we describe the methods used to create this sonification, as well as the design rationale.
The data files and associated code for this sonification will be released on the Audio Universe project page of data.ncl.ac.uk.
Summary
Data Domain/Topic: Astrophysics
Primary Goal: Public Engagement
Audience/User: General Public
Secondary Goal: Accessibility
Analytical vs. Narrative: Mostly Narrative
Sonification Type: Parameter Mapping
Sound Type(s): Synthesised Mix
Multi-Modalities: Sonification + Animation
Sonification
Figure 1: An audio-visual representation of how the visible light and X-ray light from a black hole accreting matter vary with time. The animation is an artist's impression of the MAXI J1820+070 system, based on observations during a rapid accretion episode in March 2018 (Paice et al. 2019). Playback is approximately 1/10th of the true speed, and the whole dataset is played twice. Graphical representations of the X-ray and visible light time-series data appear during the second playing. The X-ray data is sonified with a white-noise base sound and the visible data with a synthesised musical chord, with one note for each of the five visible bands in the data. When listening in stereo, the visible data is heard on the left and the X-ray data on the right. When the emission becomes brighter, the range of frequencies heard in the corresponding sound increases (controlled by a filter cutoff frequency parameter), with the effect of increasing the perceived intensity of the sound.
Context & design rationale
Black holes are among the most enigmatic and engaging objects in our Universe. They present extreme astrophysical conditions and demonstrate physical processes that could never be recreated on Earth. Their study allows us to test fundamental theories of the Universe on the largest and smallest scales. It is no wonder that exploring these awe-inspiring objects is a captivating way to engage the public with astrophysics. However, it is a challenge to communicate these objects visually. Black holes themselves are impossible to see, as their gravity is so strong that light cannot escape them. It is possible to detect the light emitted by material in the vicinity of these black holes, in a region called the 'accretion disc'. However, imaging these systems is extremely difficult: at their distances, they appear orders of magnitude too small on the sky to resolve. In addition, it is challenging to represent the complex processes that occur around black holes. For instance, the energy produced as X-ray and visible light can vary rapidly as material falls onto a black hole, but because these different wavelengths are produced at different locations within the system, they do not vary synchronously.
For all of the above reasons, John Paice and collaborators created an animation of the black hole system MAXI J1820+070, which is feeding on matter from a nearby star to create a vast disc of material (an accretion disc). This process causes the system to emit rapidly changing levels of radiation, driven by the black hole's gravity and the magnetic field of the infalling material. The animation was based on the time-series data of X-ray and visible light they had obtained and analysed for a scientific publication (Paice et al. 2019). In the original animation, the artistic impression of the system shows the black hole surrounded by an accretion disc, accreting matter from a nearby star (shown in Figure 1). The animation shows the black hole flickering. This effect is created using the real time-series observations of the X-ray and visible light radiation from the system, with purple flashes used to denote X-rays and red, orange, yellow, green, and blue used to represent the visible light. Five different colours are used because the data contains five different bands of visible light (taken with five different filters). The animation runs through twice, with the data being plotted on the screen the second time. The animation is played at 1/10th of the real observation time, as the flashes would otherwise be too fast for the eye to make out, with the fastest flickers lasting only a few milliseconds. The animation was created to demonstrate the complex radiation patterns observed from the black hole system, and how current instruments are helping astronomers to understand more about the material falling onto a black hole, despite the observational challenges. It demonstrates that there is a split-second difference between the brightness peaks of the X-ray and visible radiation, and that there is a relationship between the two: dips in visible light levels are accompanied by a rise in X-ray brightness (and vice versa). The original animation can be found in this press release.
Sonification has excellent potential for representing multidimensional data, and multiple datasets simultaneously, because sound is information-rich and has many different parameters that can be controlled. It is for this reason that we decided to add sonification to the animation described above. The human auditory system is also good at monitoring and identifying patterns, so sonification is well suited to this context of time-series data. In the following paragraphs, we use the terminology set out in the Data Sonification Archive of Lenzi et al. (2020) and the Data Sonification Canvas of Lenzi and Ciuccarelli (2024) to explain the rationale of our sonification design.
Our goal was to create a sonification that demonstrates the complex radiation patterns observed in this accreting system, in a way that is accessible to the public. The intended primary audience is the general public, and the piece was made with the goal of engaging the public with astrophysics research. The sonification was designed with the intention that it would be listened to online, and it is best experienced with headphones or a device with stereo speakers. We do not aim for the listener to be able to identify specific data values; rather, we aim to communicate the main trends of the data, how the different types of radiation interact, and the fact that the data is complex. In this way, the sonification is intended to provide more of a 'narrative' than a fully 'analytical' representation of the data.
We intended to communicate how different types of radiation from this black hole system 'flicker' over time. We cannot capture the sound of the process directly (i.e., make the sonification 'indexical'), nor create an obvious 'iconic' sound that resembles the phenomenon. Therefore, we aimed to produce a sound that is symbolic of the phenomenon, allowing the listener to distinguish between the different types of radiation. The listening experience is intended to be 'semantic', requiring the listener to decode what they hear to interpret the data trends. However, we aimed to simplify this decoding as much as possible through our design choices. The X-ray and visible light data are given different characteristic sounds and are heard in different locations (right vs. left), and the sonification is designed so that higher levels of radiation produce sounds that are perceived as more intense and powerful.
Source data & processing
This sonification uses time-series data of observations of visible and X-ray radiation emitted from the black hole system MAXI J1820+070 during a period of rapid accretion in March 2018. X-ray radiation was observed by the NICER (Neutron Star Interior Composition ExploreR) instrument on the International Space Station. Visible light radiation was observed by HiPERCAM (High PERformance CAMera) on La Palma and was measured in five different visible wavelength bands (u, g, r, i, and z). The data were collected for a multi-wavelength study of MAXI J1820+070, the results of which are presented in Paice et al. (2019). The exact tabulated data values used to make the animation were shared via private communication with John Paice. There is a time-series array for the X-ray data and for each of the u, g, r, i, and z bands. Each of these arrays needed to be appended to itself because the animation, to which we added the sonification, plays through the whole dataset twice. No further processing was required.
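As a concrete illustration of this single processing step, the sketch below doubles each time-series array with NumPy. The file names are hypothetical placeholders; the released data files may be organised differently.

```python
import numpy as np

# Hypothetical file names: one brightness time series per data strand.
bands = ["xray", "u", "g", "r", "i", "z"]
series = {b: np.loadtxt(f"maxi_j1820_{b}.txt") for b in bands}

# The animation plays through the whole dataset twice, so each array
# is appended to itself before sonification.
series_doubled = {b: np.concatenate([arr, arr]) for b, arr in series.items()}
```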
Sonification method
This sonification was created using a parameter mapping approach. We used STRAUSS (Trayford & Harrison 2023), an open-source Python package for sonification. STRAUSS was used to generate base sounds (see below) and then manipulate them based on a set of mapping parameters. The mapping parameter values were controlled by the X-ray and visible light data (see below). The Python notebook we used to produce the sonification, STRAUSS_black_hole_animation.ipynb, is released with this article on data.ncl.ac.uk.
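For orientation, the outline below is a minimal sketch of this workflow for a single data strand, modelled on the STRAUSS documentation's 1D-data example rather than on our released notebook. The class and parameter names ('time_evo', 'cutoff', etc.) reflect our reading of the package at the time of writing and may differ between STRAUSS versions; the input file name is a hypothetical placeholder.

```python
import numpy as np
from strauss.sonification import Sonification
from strauss.sources import Objects
from strauss.score import Score
from strauss.generator import Synth

# One strand of data: brightness values over time (hypothetical file name).
brightness = np.loadtxt("maxi_j1820_z.txt")
time = np.linspace(0., 1., brightness.size)

score = Score(["A2"], 28.)                   # one sustained note, ~28 s long
generator = Synth()                          # STRAUSS's built-in synthesiser
generator.modify_preset({"filter": "on"})    # enable the low-pass filter

# Map the data onto the filter cutoff; 'time_evo' provides the time axis.
data = {"pitch": 1., "time_evo": time, "cutoff": brightness}
sources = Objects(data.keys())
sources.fromdict(data)
sources.apply_mapping_functions()

soni = Sonification(score, sources, generator, "stereo")
soni.render()
soni.save("z_band.wav")
```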
Base sounds
We used a different base sound for each type of data being represented. There were six different strands of data included in the sonification: the X-ray flux, and the flux in each of the five visible bands (u, g, r, i, and z). One of the goals of the sonification was to let the listener hear how the X-ray and visible data interact, so we decided to make the X-ray sound distinct in timbre from the visible sounds. In contrast, the variations between the visible bands are more subtle, and are not the main phenomenon we aimed to communicate with the sonification, so we did not make the visible bands distinct in timbre from each other.
We made sure each base sound was simple because of the large number of data strands being represented simultaneously. For the X-ray, we chose a white-noise base sound. For the visible light, each band was assigned a note, forming a five-note chord. Each note was produced in STRAUSS using its synthesiser generator, which uses three detuned sawtooth oscillators to produce a sound with harmonic richness. This meant each note could be manipulated using a filter cutoff frequency determined by the data for that specific band (see below). The notes chosen for each band are shown in the table below. These were chosen so that the five notes played together sound aesthetically pleasing rather than dissonant, and so that the band with the longest wavelength (corresponding to the lowest light frequency) has the lowest note (z band, A2) and the band with the shortest wavelength has the highest note (u band, A3).
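To illustrate the synthesis idea (this is not the STRAUSS implementation itself), a tone with a similar character can be built by summing three slightly detuned sawtooth oscillators:

```python
import numpy as np

def detuned_saw(f0, duration=5.0, rate=44100, detune_cents=10.0):
    """Sum three sawtooth oscillators detuned by +/- a few cents, giving
    a harmonically rich, gently beating tone. The detune amount here is
    an illustrative choice, not the value used by STRAUSS."""
    t = np.arange(int(duration * rate)) / rate
    out = np.zeros_like(t)
    for cents in (-detune_cents, 0.0, detune_cents):
        f = f0 * 2.0 ** (cents / 1200.0)  # cent offset -> frequency ratio
        phase = (f * t) % 1.0             # rising ramp in [0, 1)
        out += 2.0 * phase - 1.0          # naive sawtooth in [-1, 1)
    return out / 3.0

note_z = detuned_saw(110.0)  # z band: A2 = 110 Hz
note_u = detuned_saw(220.0)  # u band: A3 = 220 Hz
```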
The X-ray and visible band base sounds are represented as spectrograms in Figure 2. Panel (a) shows the X-ray base sound; because it is white noise, it has equal power at all frequencies. Panels (b) to (f) show the five visible light bands. Each has a strong fundamental frequency, corresponding to the pitch of the note, which is visible on the spectrogram as a bright horizontal line. These fundamental frequencies are listed in the table above. Weaker harmonics are also present at higher frequencies.
Figure 2: Spectrograms of the six base sounds used to create the sonification (before any parameter mapping is applied). In each case, a representative 5-second section is shown. Panel (a) is for the X-ray sound, which is white noise and has equal amplitude across all frequencies. Panels (b), (c), (d), (e), and (f) are for the u, g, r, i, and z band sounds, respectively. These are each represented by a different synthesised note, which is labelled in the corresponding panel. The colour is scaled logarithmically such that white represents an amplitude four orders of magnitude greater than dark blue.
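For readers who wish to inspect sounds in this way, spectrograms like these can be computed with standard tools. The sketch below (not the script used for Figure 2) plots one for a 5-second white-noise clip, analogous to panel (a).

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

rate = 44100
noise = np.random.uniform(-1.0, 1.0, 5 * rate)  # 5 s white-noise clip

# Short-time Fourier analysis: frequency content as a function of time.
f, t, Sxx = signal.spectrogram(noise, fs=rate, nperseg=4096)
plt.pcolormesh(t, f, 10.0 * np.log10(Sxx + 1e-12))  # log (dB) colour scale
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```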
Mapping & data relationships
Beyond creating very distinctive timbres, we used stereo panning to separate the different wavelengths spatially and so help the listener further distinguish between the X-ray and visible data. The X-ray sound was panned fully right (so it is only audible in the right ear) and the visible sounds were panned fully left (so they are only audible in the left ear). This orientation was chosen to match where the X-ray and visible light-curve graphs appear on the screen in the animation, helping to make the sonification as intuitive as possible when heard whilst watching the animation.
To produce the perceived effect of the sounds becoming more powerful as the X-ray and visible light get brighter, we made use of the cutoff parameter in STRAUSS. This is one of many ways to apply parameter mapping using the package. The cutoff parameter applies a low-pass filter to the sound, with the cutoff frequency controlled by the source data. By default, STRAUSS uses a Butterworth filter with a 24 dB per octave roll-off.
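A 24 dB per octave roll-off corresponds to a fourth-order Butterworth filter (6 dB per octave per filter order). The sketch below shows one simple way to realise such a data-driven low-pass filter, processing the sound block by block with SciPy; it is a stand-in for the idea, not a reproduction of STRAUSS's internal implementation, and the cutoff range (f_lo, f_hi) is an illustrative choice.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass_map(sound, data, rate=44100, f_lo=200.0, f_hi=8000.0, block=1024):
    """Low-pass `sound` block by block, with the cutoff frequency driven
    by `data`. A 4th-order Butterworth gives a 24 dB/octave roll-off."""
    norm = (data - data.min()) / max(np.ptp(data), 1e-12)  # rescale to [0, 1]
    out = np.zeros_like(sound)
    zi = np.zeros((2, 2))              # filter state, carried across blocks
    n_blocks = len(sound) // block
    for i in range(n_blocks):
        # Sample the (slower) data array at this block's position in time.
        x = norm[int(i * (len(norm) - 1) / max(n_blocks - 1, 1))]
        # Brightness -> cutoff frequency, log-spaced between f_lo and f_hi.
        fc = f_lo * (f_hi / f_lo) ** x
        sos = butter(4, fc, btype="low", fs=rate, output="sos")
        sl = slice(i * block, (i + 1) * block)
        out[sl], zi = sosfilt(sos, sound[sl], zi=zi)
    return out
```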
This parameter mapping was applied to each base sound separately, controlled by each of the six data arrays. The resulting audio clips were then joined together using the audio editing software Audacity.
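For completeness, the same combining step can be expressed in code. The sketch below is an equivalent of what we did in Audacity, not the actual procedure used: it hard-pans two finished mono tracks (built, for example, with the sketches above) and writes a stereo file.

```python
import numpy as np
from scipy.io import wavfile

def combine_stereo(visible, xray, rate=44100, path="sonification.wav"):
    """Hard-pan two equal-length mono tracks: visible light to the left
    channel, X-rays to the right, then write a stereo WAV file."""
    stereo = np.stack([visible, xray], axis=1)
    stereo = stereo / max(np.abs(stereo).max(), 1e-12)  # avoid clipping
    wavfile.write(path, rate, stereo.astype(np.float32))
```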
Figure 3 shows a spectrogram of the sonification for one complete cycle of the data. Overlaid on this, the cutoff frequency values are plotted for the X-ray and visible data. The normalised data that were used to make the sonification are also shown in this figure. It should be noted that the data for the individual visible bands were used to make the sonification; however, in Figure 3 we plot the average cutoff frequency and average brightness level across the five bands to simplify the visualisation, as there is very little variation between the bands. Figure 3 highlights some key characteristics of the sonification, including the following:
The filter cutoff frequency values follow the shape of the input data; however, they are scaled to a smaller range as a result of the minimum and maximum cutoff parameter values, which were chosen for listener comfort and aesthetics.
The five fundamental frequencies of the visible light bands are present throughout the sonification. These are seen as the bright horizontal lines, which are present across the spectrogram in the region of 110–220 Hz. These are the same frequencies present in the base sounds, shown in Figure 2.
The amplitude at higher frequencies increases when the brightness of the X-rays and visible light increases (along with their respective filter cutoff frequencies).
When the X-ray brightness increases, the higher frequencies in the sound have equal amplitude, because of the white-noise base sound used.
When the visible light increases in brightness, additional horizontal lines appear across the spectrogram at higher frequencies; these are harmonics of the base sounds.
Figure 3: The top panel is a spectrogram of the sonification, including both the X-ray and visible light sounds. The spectrogram shows the first 14 seconds, which is one complete cycle of the data. The cutoff frequency parameter is plotted for both the X-ray data (white) and the average of the five visible light bands (cyan). The colour on the spectrogram is scaled logarithmically such that white represents an amplitude four orders of magnitude greater than dark blue. The lower panel shows the normalised data that were used to make the sonification, plotted on the same time axis as the spectrogram (which is approximately 10 times slower than the actual data). The X-ray data is plotted in black and the average of the five visible bands is shown in cyan.
Multi-modality
This sonification is combined with an animation that was created and shared by John Paice, described in detail in the Context & design rationale section. Both the visualisation and the sonification are directly synced to the data, which is played at 1/10th of the real observation time. The sound and visuals were combined using the DaVinci Resolve video editing software.
Bibliography
Lenzi, S., Ciuccarelli, P., Liu, H., and Hua, Y. (2020), Data Sonification Archive. http://www.sonification.design.
Lenzi, S., and Ciuccarelli, P. (2024), Designing tools for designers: The Data Sonification Canvas, in Gray, C., Ciliotta Chehade, E., Hekkert, P., Forlano, L., Ciuccarelli, P., Lloyd, P. (eds.), DRS2024: Boston, 23–28 June, Boston, USA. https://doi.org/10.21606/drs.2024.730
Paice, J.A., Gandhi, P., Shahbaz, T., Uttley, P., Arzoumanian, Z., Charles, P.A., Dhillon, V.S., Gendreau K.C., Littlefair, S.P., Malzac, J., Markoff, S., Marsh, T.R., Misra, R., Russell, D.M., Veledina, A., A black hole X-ray binary at ∼ 100 Hz: multiwavelength timing of MAXI J1820+070 with HiPERCAM and NICER, Monthly Notices of the Royal Astronomical Society: Letters, Volume 490, Issue 1, November 2019, Pages L62–L66, https://doi.org/10.1093/mnrasl/slz148
Trayford, J. and Harrison, C.M. (2023), Introducing STRAUSS: A flexible sonification Python package, Proceedings of the 28th International Conference on Auditory Display (ICAD 2023), pp. 249–256, https://hdl.handle.net/1853/73935
Acknowledgements
We thank John Paice and collaborators for inspiring the sonification and for providing the data and the animation. CMH acknowledges support from a United Kingdom Research and Innovation grant (code: MR/V022830/1). JWT and RDS acknowledge support from the Science and Technology Facilities Council (grant codes ST/X004651/1 and ST/W006790/1, respectively).