Course offered by: Dr. Julia von Thienen
This document serves as documentation for the seminar "Sonic Thinking Seminar - Methods of Working with Sound" at HPI.
The content is structured as follows:
1.) Project title
2.) Team members, affiliations, contact details
3.) The project aim and why this is important
4.) Theoretical embedding, related works
5.) Methods
6.) Work results
7.) Conclusion, discussion, limitations and avenues for future work
8.) Acknowledgements
Reference List
Listen To Air Pollution
Carla Terboven
carla.terboven@student.hpi.de
IT-Systems Engineering
Air pollution is generated by emissions from road traffic, power plants, furnaces or heaters in residential buildings, and many more sources [3]. Most air pollution is produced by human activity, even though it causes critical health and climate problems.
When we breathe in polluted air, particles can get into our lungs and blood, leading to inflammation in the trachea, an increased tendency to thrombosis, or even changes in the nervous system such as altered heart rate variability [3].
Particularly dangerous are the smallest particles in the air. They are grouped under the term particulate matter (PM; German: "Feinstaub"). PM is a mixture of solid and liquid particles. Distinctions are made by particle size, regardless of the chemical elements involved. More detailed information on PM is given in the data preparation section.
Because it leads to health and climate problems, PM also receives more and more attention from artists. Anirudh Sharma motivates people all around the world to think differently about air pollution. He produces Air Ink out of PM2.5 particles [2]. Artists can use this rich, dark black ink for paintings or textile printing. With Air Ink, air pollution is turned into something useful. Most artists use Air Ink to communicate the health and climate problems caused by air pollution in their paintings. I believe that more and more people are aware of these problems. But when we ask ourselves how much particulate matter we breathe in when we go outside in our own neighborhood, we do not have a clue.
Even though many city councils monitor air quality, the measuring stations are installed in fixed positions [5]. But how polluted is the air, right here and right now, in our neighborhood? How can I raise attention to our individual interaction with air and air pollution?
In the Sonic Thinking class on May 17th, Marisol Jimenez introduced the sound sculpture Woodworms by Zimoun [6]. He placed 25 woodworms in a piece of wood and recorded them with an ultrasensitive microphone. The sound of the woodworms is made audible to the viewers with a sound system. I am fascinated by the idea of enabling people to hear something that they usually cannot hear even though it is already there.
With Zimoun's Woodworms as an inspiration in mind, I was able to formulate a goal for my project: "I want to enable people to hear what is around them in the air already. I want people to hear the air pollution around them."
To do that, I first have to think about the sound of air pollution. Zimoun's woodworms produced a natural sound, but what is the sound of air pollution? I am still exploring this question and decided to use different samples that people can directly connect to air pollution or the absence of air pollution. Most samples are self-recorded, and some are taken from https://freesound.org/.
Another strength of Zimoun's sculpture is the great picture it creates. Placing a microphone next to the piece of wood directly communicates what is recorded. I wonder how I can create an image like that. How can I express that I sonify air pollution? I could, for example, design the device with the air pollution sensor in a figurative way.
Regarding the outcome of the seminar project, I am interested in translating data into sound that the user can intuitively understand. This semester, I try to do that with recorded samples whose sound is connected to air pollution.
I aim to raise awareness of air pollution in individual environments. Towards the end of the semester, I envision creating a transportable device with a sensor. While walking, the sensor measures the particulate matter, and this live data is directly sonified and presented to the user via headphones.
As explained in the last section, I decided to sonify air pollution data. Since there are several possible ways to sonify data, I first introduce some theoretical sonification approaches. After that, I present different sonification projects around the topic of air pollution. Some of these projects serve as a source of inspiration for my own project.
Sonification is defined as "the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation" [10].
But why is it interesting for me to inform people about air pollution data via acoustic signals instead of visual plots, given that most people are used to consuming scientific data via plots and tables? The users of my device should be able to explore their neighborhood or parts of the city while using the device. Knowing that they walk around with a visual focus on their surroundings and might have to concentrate on traffic, I have to communicate the data in an intuitive, not too distracting way. Still, I want to achieve an understanding and awareness of the existing air pollution.
Paul Vickers describes in chapter 18 of "The Sonification Handbook" three different modes of monitoring: direct, peripheral, and serendipitous-peripheral [14]. When monitoring information in a direct way, the main focus of attention is claimed. For peripheral monitoring, the "attention is focused on a primary task whilst required information relating to another task or goal is presented on a peripheral display and is monitored indirectly" [14]. And the "attention is focused on a primary task whilst information that is useful but not required is presented on a peripheral display and is monitored indirectly" [14] when using serendipitous-peripheral monitoring. I aim to monitor air pollution data while the users explore the city. The users should not walk heads-down with their eyes on a display but concentrate on their surroundings, like nature or traffic. So I do not want to present air pollution data in an attention-grabbing way. This means that I aim to deliver the data in a peripheral or serendipitous-peripheral way. Since the "human auditory system does not need a directional fix on a sound source in order to perceive its presence" [14], monitoring with audio seems to be an excellent fit.
But how can the user intuitively understand the monitored data? Rauterberg and Styger [12] advise designers to "look for everyday sounds that 'stand for themselves'". And Tractinsky et al. [13] state that aesthetics drives user perception and that there is growing evidence of increased usability in systems designed with an aesthetic focus.
According to Vickers [14], "the embedding of signal sounds inside some carrier sound" also leads to user satisfaction because of less annoyance and distraction. Vickers introduces approaches where user-selected music serves as the carrier sound of the sonification signals. It might be an exciting thing to look at since many people walk around with headphones on, listening to music already.
But apart from aesthetic, user-centered everyday sounds, I also found more theoretical design concepts for sonification.
Kramer [9] makes a distinction between analogic and symbolic representations of the data. An example of an analogic representation is the Geiger counter because it directly maps data points to sound. The listener can understand the immediate one-to-one connection between data and sound signals. This is different from a symbolic representation, where the data is only represented categorically and no direct relationship between data and sound is necessary. Examples are most control notifications in cars. To me, the analogic representation sounds interesting because it can directly transport a lot of the data's meaning. Moreover, the sound of a Geiger counter could communicate the association of poisoned air to the user. A notification-like sound might be interesting when passing certain air pollution thresholds of the EU or WHO; I can imagine a symbolic representation at that point.
Another concept is the semiotic distinction. Here, one can differentiate syntactic methods using, e.g., earcons, semantic methods like auditory icons, and lexical methods such as parameter mapping [7].
Earcons are a very abstract representation of the data, which makes them hard to understand. Because I want the user to understand the data quite intuitively, I now take a closer look at semantic and lexical methods.
Semantic methods like auditory icons map data to everyday sounds. This leads to familiarity and quick understanding as well as direct classification options. But mapping data to auditory icons is complicated, especially because one has to think of a good representation for air pollution data, which has no natural physical sound.
When using parameter mapping, different data dimensions are mapped to acoustic properties like pitch, duration, or loudness. This way, one can listen to multiple data dimensions simultaneously and create a complex sound. But it is quite challenging to balance everything so that the listener can still pick up the meaning of the data [7]. Moreover, such a sound becomes unpleasant quite fast, and one has to balance the alarming content of air pollution against the confidence and well-being of the user. Last semester, Malte Barth and I concentrated on such a musical approach with parameter mapping for air pollution data. Since the resulting sound was hardly intuitive, I concentrate on other approaches this semester.
Thinking more closely about the continuous stream of air pollution data I want to sonify, I hope to find different air pollution levels in Potsdam. Then I do not need attention-grabbing sounds all the time but can think about using them only in situations where the pollution data gets alarmingly high. Looking at specific design recommendations in the literature, McGee advises keeping sound events as short as possible and the spacing between sounds reasonable, to avoid interference and sound masking and to prevent the user from being overwhelmed by too many sound events [11].
Apart from general sonification approaches, I also took a look at past projects that communicated air pollution data with sound.
I find a project by Marc St Pierre and Milena Droumeva particularly interesting [21]. They scale and map pollutants (CO, O3, SO2, and NO2) to individual frequencies using SuperCollider. The sound produced by these pollutants is already very telling, and once the listener knows the mapping, it is possible to understand which pollutant is changing at any time. The sound is quite powerful and vibrant but becomes even more telling and exciting because of a Geiger counter-like clicking sound that somehow complements and competes with the other pollutants' rich sound. The Geiger counter is based on PM2.5 data, which is "measured differently than the other chemicals and therefore receives a different mapping".
Listening to their work on SoundCloud (https://soundcloud.com/marcstpierre, retrieved 2021-03-16), I am fascinated by how the Geiger counter stays interesting for a long time even though the clicking sound changes not in pitch but only in rhythm. I imagine this is caused by the mysterious sound patterns produced by the other pollutants. The combination of the vibrant, rich sound and the clicking Geiger counter is fascinating.
Julián Jaramillo Arango presents AirQ Sonification in a paper [15]. It combines three different projects from 2016 and 2017, all concerned with air pollution: AirQ Jacket, Esmog Data, and Breathe!.
"AirQ Jacket is a piece of clothing with an attached electronic circuit, which measures contamination levels and temperature and transforms this information to visual and acoustic stimuli" [15]. It uses multiple sensors to collect the data and small, lightweight speakers and LEDs to communicate the data to the user. The designers imagined "to create healthy courses through the city" [15] with these jackets. The white jacket perhaps attracted a lot of attention but looks a bit too alien-like to me. Air pollution is happening all around us, so I am excited about communicating my project with a small sensor kit and 'normal' headphones.
Esmog Data is an art installation using audio and motion graphics to achieve a "meaningful listening experience" [15]. Each collected sensor data point (CO, CO2, SO2, and PM10) is connected to multiple synthesizer parameters to generate "more complex musical values" [15].
"BREATHE! is a multi-channel installation with no visuals, where the visitor should be able to identify each one of the measured toxic gases as a different sound source in the space. [...] The installation displays six human breathing sound loops, which shrink and stretch from different points in the space according to toxic levels." [15] Interestingly, the artists considered breathing as some kind of communication that people can understand all over the world. We all breathe the same. I would love to achieve this intuitive communication of the data for my project, not necessarily with breathing sound but with a collection of my own samples.
Next to these scientific papers, I also found compelling sound examples online. For instance, Space F!ght, in collaboration with the Stockholm Environment Institute and NASA's Goddard Institute for Space Studies, wants to communicate ozone levels through art [17]. They perform a combination of parameter mapping, speech, and improvisation by a trumpet player. I find the sound quite mystic as well as concerning and alarming. The improvising trumpet is guided by the ozone data and gives a sensitive touch to the performance. The group states that they chose to work with ozone data because ozone has been proven to directly affect climate and human health.
The auditory display by Kasper Fangel Skov [20] is concerned with climate and human health as well but focuses not on ozone but on dimensions like temperature, light, humidity, and noise. Interestingly, this auditory display of urban environmental data from different cities also uses voice to classify the data into categories like "high" or "medium".
Also interesting is a project by Jon Bellona and John Park [16] [19]. They do not directly sonify air pollution data but the carbon emissions of Twitter feeds. This indirect concern about air pollution is communicated with a physical visualization. The auditory as well as the visual experience aims to connect virtuality and reality. Based on the estimate that one tweet produces 0.02 grams of CO2, gas bubbles are released inside a water tank according to personal Twitter feed data. The use of gas bubbles in water creates a strong picture that I find comparable to the microphone recording the wood in Zimoun's Woodworms sound sculpture [6]. Moreover, the physical visualization is supported by sound, making the feeling transported by the installation even more powerful.
Apart from the sonification projects described above, I got inspiration from the Sonic Kayak [18] and the Sonic Bike [22] [23] [24] projects. Both projects were introduced to me by Kaffe Matthews, who has worked on these topics for some years.
The Sonic Kayak project generates live sound on the kayak, using sensors in the water as well as sensors for air particulate pollution, GPS, and time.
Also concerned with air particulate pollution is the Sonic Bike project. Here, an air pollution sensor with 12 channels gathers live data on a bike. The data is then processed at the back of the bike, using a Raspberry Pi and PD vanilla. Finally, the bike rider can experience the sonified air pollution via two speakers attached to the bicycle handlebar and a subwoofer behind the bike seat.
Initially, I started with Malte Barth, working on an interpretation of the Sonic Bike project. But due to the COVID-19 pandemic, we decided to use recorded CSV data instead of the bikes with live data. I now want to overcome this barrier with a small self-made device that can be used with headphones while walking. I believe that this setup might even allow for a closer inspection of one's own neighborhood, since a biker passes through the surroundings much faster than a pedestrian. During a walk, the air can be experienced more precisely and directly.
The original Sonic Bike project processes the data directly in PD vanilla. I decided to use the programming language Python as much as I could. So I collect the sensor data and compute most processing steps in Python; only the final commands that play the samples are sent to PD (Pure Data). This way, I want to play to my strengths in programming and overcome my lack of experience with PD. A more detailed overview of my current approach can be found in the next section.
On the way from raw air particulate pollution data to a meaningful sonification, I had to overcome multiple challenges.
One challenge is, of course, to get all the hardware parts to work together. On the software side, I had to read and preprocess the data. Using different particle sizes, I compute various parameters to generate a telling sound depending on the data. Finally, I use OSC (Open Sound Control) to send the processed information to PD, where the messages control the speed and timing of recorded samples.
In the following sections, I take a more detailed look at each step required to successfully hear the sonified data.
To build a transportable device that enables users to listen to air pollution, multiple components are connected. I use a PLANTOWER PMS5003 sensor (http://www.plantower.com/en/content/?108.html) to gather the data, a Raspberry Pi 2 to process the data, and headphones to listen to the sonification. Moreover, for a transportable design of the device, I use a rechargeable battery pack to power the Raspberry Pi, and a transparent plastic box to protect the electronic devices.
The rechargeable battery pack as well as the headphones are easily connected to the Raspberry Pi's corresponding plugs. To wire the sensor, the Raspberry Pi's GPIO pins are used. According to the sensor's documentation, three pins are required to read the data on the sensor side: PIN1 (VCC) and PIN2 (GND) for 5V power and ground, and PIN5 (TX) as the serial sending pin. These three pins are connected to the corresponding pins on the Raspberry Pi: sensor PIN1 is wired to the Raspberry Pi's 5V pin, sensor PIN2 to the Raspberry Pi's ground, and sensor PIN5 to the Raspberry Pi's GPIO 15 (UART RX). A picture of the complete wiring is shown below. To finally read the data on the software side, I use the Python package pms5003-python, available on GitHub.
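As an illustration, reading the sensor values could look like the following sketch. It assumes the Pimoroni pms5003 library's API and the default serial device path; the exact calls in my repository may differ slightly.

```python
# Sketch: reading PM values from the PMS5003 over the Raspberry Pi's UART.
# Assumes the Pimoroni pms5003 package (pip install pms5003); the device
# path depends on the Raspberry Pi model and UART configuration.
from pms5003 import PMS5003

sensor = PMS5003(device="/dev/ttyAMA0", baudrate=9600)

reading = sensor.read()            # blocks until one complete data frame arrives
pm1 = reading.pm_ug_per_m3(1.0)    # µg/m³ of particles smaller than 1 µm
pm2_5 = reading.pm_ug_per_m3(2.5)  # µg/m³ of particles smaller than 2.5 µm
pm10 = reading.pm_ug_per_m3(10)    # µg/m³ of particles smaller than 10 µm
print(pm1, pm2_5, pm10)
```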
Apart from pure transportation and protection purposes, I chose the transparent plastic box also for illustration reasons. My self-imposed goal was to create a picture like Zimoun did with his sound sculpture Woodworms [6]. Zimoun was able to intuitively communicate that woodworms were being recorded by placing a microphone next to a piece of wood. The design of my final prototype is not quite as straightforward but still reflects some deliberate design decisions.
First of all, the box is completely transparent, as is the air that is sonified. Moreover, the transparency does not hide any of the components or the not completely polished wiring. No detail, even the most unattractive or unpleasant, is hidden. This should also strengthen the essential trust in my application. Official measuring stations make adjustments to the environment (e.g. to traffic flow) to improve the measurement data [4]. This is not possible with my transparent application.
The sensor is located centrally at the very top of the final prototype. Its significance is further emphasized by the surrounding label, which says Listen To Air Pollution. This is not only the project title; it reminds users that the sounds they hear have meaning. In a broader context, the handwritten message, combined with the obviously handmade assembly of the other parts, points out that the audible air pollution is also human-made.
The device is lightweight and has a size of around 18cm x 16cm x 10cm. In daily life, e.g. during walks, these dimensions allow the device to be held in hand. If it is placed, for example, in a bag, the sensor on top should remain exposed to the air.
Last semester, I worked with PM data gathered in Berlin that was kindly shared with me by Kaffe Matthews. The data consists of seven data sets measured in slightly different weather conditions. The following figure shows raincloud violin plots with the distribution of the PM values for all seven data sets.
The plot shows that the distributions of PM2.5 and PM10 appear nearly identical. This can be explained by the definition of PM ("particulate matter"). PM1 describes the amount in µg/m³ of particles smaller than 1 µm, PM2.5 the amount of particles smaller than 2.5 µm, and PM10 the amount of particles smaller than 10 µm. All particles that are smaller than 1 µm or 2.5 µm are smaller than 10 µm as well. If there are only a few particles with a size between 2.5 and 10 µm, the PM10 values equal the PM2.5 values most of the time.
For the sonification, I want to use the PM data to manipulate different aspects of the auditory representation. If the PM2.5 and PM10 values behave the same most of the time, the sound representation hardly becomes exciting and meaningful to the listener. This is why I decided to subtract the smaller PM values from the bigger ones for each live data point I read from the sensor. I call the resulting values "disjoint PM". For the example data from last semester, the following plot shows that the distributions of the PM values become more distinct for the "disjoint PM".
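As a sketch, the computation for one data point could look like this (the function and key names are illustrative, not the exact code from the repository):

```python
def disjoint_pm(pm1, pm2_5, pm10):
    """Turn cumulative PM readings (µg/m³) into non-overlapping size bins.

    PM2.5 and PM10 are cumulative: they include all smaller particles.
    Subtracting the smaller reading from the larger one isolates each range.
    """
    return {
        "below_1um": pm1,               # particles smaller than 1 µm
        "1um_to_2.5um": pm2_5 - pm1,    # particles between 1 µm and 2.5 µm
        "2.5um_to_10um": pm10 - pm2_5,  # particles between 2.5 µm and 10 µm
    }

# Example: pm1=8, pm2.5=12, pm10=30 yields bins of 8, 4, and 18 µg/m³.
```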
Still, I have to keep in mind that I must use the original PM data when comparing against legal thresholds. There are statutory thresholds set by the EU and stricter recommendations by the WHO to protect human health. All annual average limits are presented in the following table (thresholds according to [3]):
| | PM 1 | PM 2.5 | PM 10 |
|---|---|---|---|
| EU | - | 25 µg/m³ | 40 µg/m³ |
| WHO | - | 10 µg/m³ | 20 µg/m³ |
Knowing how to connect the sensor and what data to expect, I was able to start with the translation from the data to sound.
The src folder in this repository contains all the Python code needed to run my project. Using the sensor.py file, I read data in periods of ten seconds and then sonify the data during the next ten seconds. To read and play data simultaneously and to play multiple sound layers with different audio samples, I decided to use Python's multiprocessing package.
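A minimal sketch of this read-then-play scheme is shown below; the helper names are placeholders, and sensor.py in the repository contains the actual implementation.

```python
# Sketch: sonify the previous 10-second block while recording the next one.
import time
from multiprocessing import Process

BLOCK_SECONDS = 10

def read_sensor():
    """Placeholder for one PMS5003 reading (see the hardware section)."""
    return {"pm1": 0.0, "pm2.5": 0.0, "pm10": 0.0}

def read_block():
    """Collect one reading per second for ten seconds."""
    readings = []
    for _ in range(BLOCK_SECONDS):
        readings.append(read_sensor())
        time.sleep(1)
    return readings

def sonify_block(readings):
    """Placeholder: compute sound parameters and send OSC messages to PD."""
    pass

if __name__ == "__main__":
    block = read_block()
    while True:
        # A separate process plays the last block while we record the next.
        player = Process(target=sonify_block, args=(block,))
        player.start()
        block = read_block()
        player.join()
```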
Whenever I quickly needed to test code or record comparable demos, I used the time_series.py file. Here I read the test data introduced in the last section. For comparable results, I also read this data in blocks of ten seconds. Both methods of data reading, based on sensor or test data, can lead to the same sound because I separated the sound generation into another Python file. This generate_sound.py file consists of multiple functions used individually in both the sensor.py and the time_series.py files.
I wanted to be able to try out and exchange different sounds quickly and easily in order to pursue different ideas. This is why each computation from data to a specific sound is done in an individual function (see the generate_sound.py file). This way, I can easily exchange the sound of a Geiger counter with an asthma inhaler sound by simply exchanging one function call inside the sensor.py or time_series.py file.
To finally hear the sonified data, I use PD (Pure Data). I send the preprocessed information from Python to PD via OSC (Open Sound Control), a network protocol mainly used for the real-time processing of sound data. With my background in computer science, I set up the OSC client on the Python side quite quickly but needed support on the PD side. The tutorials by von Coler [26] and Davison [27] helped, so that messages with the preprocessed data can now be used inside my PD patch.
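On the Python side, sending such a message could look like this sketch, assuming the python-osc package (the address pattern, port, and value are illustrative):

```python
# Sketch: sending preprocessed parameters from Python to Pure Data via OSC.
# Requires the python-osc package (pip install python-osc); PD must listen
# on the matching UDP port inside the patch.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # PD runs on the same Raspberry Pi

# Hypothetical message: play the breathing sample at double speed.
client.send_message("/breathing/speed", 2.0)
```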
These messages trigger PD to play prerecorded samples at a certain speed. The setup is based on tutorials by Kreidler [28] and Brown [25]. I hoped to create an intuitive understanding with samples that sound like air pollution. But since air pollution has no natural sound, I tried multiple approaches using different samples.
This semester, I aimed to realize the sonification with samples. Based on different sound snippets, I developed five concepts to sonify the quality of the air. The advantages and disadvantages of the concepts regarding pleasantness and information content still need to be tested in a uniform study. In the following, each concept is briefly explained; finally, I describe what a series of user tests could look like that would allow comparing the concepts.
Code: https://github.com/carlaterboven/listen_to_air_pollution
Demos: https://github.com/carlaterboven/listen_to_air_pollution/tree/main/demos
The demos are based on the collected test data for better comparability. Most of the concepts were tested for very high particulate matter ("ride 1"), medium particulate matter ("ride 6") and low particulate matter ("ride 4").
The first concept was already presented in class in July. It uses breathing sounds of different speeds to sonify PM 1 values and the sound of a Geiger counter to sonify disjoint PM 10 values if the PM 10 level is higher than the WHO threshold.
Implementation Details:
According to clinical and health psychologist Schmid, a relaxed breath takes around ten seconds [1]. The faster the breathing rate, the more stressful and tense it is perceived to be. I decided to use four breathing levels with breath durations of 10, 5, 2.5, and 1 second. The level is directly determined by the average of the PM 1 values over a period of 10 seconds, with gradations based on the quartiles of the collected test data. For example, the quartile [0; 15] results in a breath duration of 10 seconds (sound played once), the quartile (15; 21] in a duration of 5 seconds (sound played twice), and so forth.
The Geiger counter only clicks when the average PM 10 level over 10 seconds is higher than the WHO threshold. To avoid the clicking becoming too monotonous, I sonify the measured values of every second, with one Geiger click for every 2 µg/m³ of disjoint PM10 pollution.
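Both mappings of concept 1 can be sketched as follows. The names are illustrative, and the third quartile bound is deliberately left as a parameter since only the bounds 15 and 21 are given above:

```python
import bisect

BREATH_SECONDS = [10, 5, 2.5, 1]  # breath duration per level (relaxed -> tense)
WHO_PM10 = 20                     # µg/m³, WHO recommendation (see table above)

def breath_duration(mean_pm1, quartiles):
    """Map the 10 s mean of PM1 to a breath duration via test-data quartiles.

    `quartiles` holds the upper bounds of the first three quartiles,
    e.g. (15, 21, <third bound>), with inclusive upper interval ends.
    """
    return BREATH_SECONDS[bisect.bisect_left(quartiles, mean_pm1)]

def geiger_clicks(disjoint_pm10_per_second, mean_pm10):
    """One click per 2 µg/m³ of disjoint PM10, only above the WHO threshold."""
    if mean_pm10 <= WHO_PM10:
        return [0] * len(disjoint_pm10_per_second)
    return [int(value // 2) for value in disjoint_pm10_per_second]
```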
First Impressions and Feedback:
Many of the test subjects reported that they automatically adjusted their breathing to the speed of the recorded breathing and were thus able to follow the relaxation or tension in the data intuitively. Regarding the Geiger counter, some subjects had problems with interpretation: they could not tell whether faster or slower clicking was desirable.
All users were able to listen to the sonification as a whole but also were able to pay closer attention to one of the sounds. Therefore, the basic idea of using sound samples can be pursued further.
However, all test persons have so far only listened to the concept out of curiosity, and a more extensive survey would certainly lead to even more valuable feedback.
As an iteration of the first concept, I combined the well-perceived breathing sound with two new samples. This time I focused more on the theme of breathing and air to get a better user understanding than I found with the Geiger counter. Moreover, not only PM 1 and PM 10 values are used for this concept but also the measured PM 2.5 data.
Implementation Details:
The sonification of the PM 1 data with the breathing sounds corresponds precisely to the implementation from concept 1. The implementation logic of the PM 10 sonification also remains the same except for using the sound of an asthma inhaler instead of the click of the Geiger counter.
To sonify the PM 2.5 data, the sound of air bubbles is used. The sample was produced with a straw in a glass of water and some filter effects. The higher the particulate matter values, the faster, higher, and more frequent the "popping" of the air bubbles sounds. The gradations are again based on the quartiles of the collected test data.
First Impressions and Feedback: Test persons were also able to interpret the meaning of the data based on the sonification with three different samples. In the long run, however, very similar measured values over an extended period of time sound quite monotonous due to the uniform popping of the air bubbles.
This concept no longer focuses on air pollution and its composition of different PM sizes. Instead, this time I am concerned with the air as a whole, with wind as the sound that many respondents associate with air. Based on the EU and WHO thresholds, I would like the user to understand how good the air quality is or whether certain limit values have already been exceeded. For this, I use three sound samples: wind chimes, rustling leaves, and howling wind.
Implementation Details:
Since I want to assess air quality based on WHO and EU limits, I use only the original, joint values of PM 2.5 and PM 10 for this concept. If both particulate matter sizes are below the EU limits, the soft harmonic sound of a wind chime is heard. As soon as one of the sizes (PM 2.5 or PM 10) is above the EU threshold, the howling wind resounds. This means that either the wind chime or the howling wind is audible.
To achieve a more exciting sound scheme, the sound of rustling leaves is added whenever one of the PM levels is above the stricter WHO thresholds while the values still remain below the EU thresholds.
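Under this reading of the thresholds, the sample selection can be sketched like this (the sample names are illustrative):

```python
EU = {"pm2.5": 25, "pm10": 40}   # µg/m³, EU annual limits (see table above)
WHO = {"pm2.5": 10, "pm10": 20}  # µg/m³, stricter WHO recommendations

def concept3_samples(pm2_5, pm10):
    """Select the wind samples for concept 3 from the joint PM means."""
    if pm2_5 > EU["pm2.5"] or pm10 > EU["pm10"]:
        return {"howling_wind"}             # at least one EU limit is exceeded
    samples = {"wind_chimes"}               # both values are below the EU limits
    if pm2_5 > WHO["pm2.5"] or pm10 > WHO["pm10"]:
        samples.add("rustling_leaves")      # above WHO, but still below EU
    return samples
```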
First Impressions and Feedback: The main problem in understanding the data in this concept was the "melody" already present in one of the samples used. The transitions between the louder and softer parts of the rustling leaves can be misinterpreted as a development in the data. Therefore, later iterations should pay attention to a more uniform soundscape within the short samples.
After talking to a person living in Stuttgart, one of the most polluted cities in Germany, I decided to prototype one concept where good air quality is sonified instead of bad. My interviewee from Stuttgart directly mentioned bees as a sound she always notices as something special whenever she is in an area with more nature. She misses this sound in the traffic of Stuttgart. She connects the absence of bees with air pollution. Next to bees, I also use the sound of chirping birds to mimic the sound of pure nature whenever the PM values are below EU thresholds.
In contrast to concept 3, this time I don't want to abstract entirely from the different PM levels. The aim is to convey an audible difference between good and poor air quality while at the same time keeping it clear to what extent PM 2.5 or PM 10 values influence the quality. Also, in this concept, I focus mainly on highlighting good air quality and work with silence (meaning the absence of bee and bird sounds) at high air pollution levels. This contrasts with the other concepts, where I usually made the sound more intense, faster, or louder when air pollution was higher.
Implementation Details: If the average of the original, joint PM 10 values (considered over 10 seconds) is below the EU limit, I sonify this with a 10-second sound of birds chirping. When the limit is exceeded, the short cawing of a crow sounds as an alarm; for the rest of the 10 seconds, it remains quiet. For PM 2.5, I use the same concept but with buzzing bees as the sound source. When the limit is exceeded, the crow's alarm sounds for 1.7 seconds, followed by 8.3 seconds of silence.
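As a sketch, the resulting 10-second schedule for both data streams could look like this (again with illustrative names):

```python
EU_PM10, EU_PM2_5 = 40, 25  # µg/m³, EU annual limits (see table above)

def concept4_schedule(mean_pm10, mean_pm2_5):
    """Return (sample, seconds) pairs for one 10 s block, one layer per PM size."""
    def layer(mean, limit, good_sample):
        if mean <= limit:
            return [(good_sample, 10.0)]                # good air: nature sound
        return [("crow_alarm", 1.7), ("silence", 8.3)]  # bad air: alarm, then quiet
    return (layer(mean_pm10, EU_PM10, "birds_chirping")
            + layer(mean_pm2_5, EU_PM2_5, "bees_buzzing"))
```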
First Impressions and Feedback:
The separation into bees and birds allows the listener to distinguish the particulate matter sizes easily and reliably. So far, however, all listeners were informed beforehand about the meaning of the sounds and the silence. A user study could find out whether such an interpretation is also possible intuitively.
In particular, I would be interested in the extent to which silence can nevertheless bring users closer to the alarming quality of the air. This is also related to the feedback of one person who perceived the very intense buzzing of bees as stressful and upsetting. Despite the excellent air quality, she found it difficult to relax while the bees were buzzing, which could also be due to the sample used. For later tests, it would be better to use a sample with fewer buzzing bees instead of the intense buzzing of the swarm in the current sample. Fortunately, the current implementation allows exchanging the samples quite easily and quickly.
Since many people walk with headphones on, listening to music, my latest concept tests how to alert them to exceptionally high levels of air pollution during their walks. For this, I add different amounts of white noise to the music.
Implementation Details: Any song can be selected as the music the user listens to during the walk; in my demo, it is Vivaldi's Four Seasons. If the averages of the original, joint PM 10 and PM 2.5 values (considered over 10 seconds) are above the EU thresholds, constant white noise is added to the music for 10 seconds. For values between the WHO and EU limits, I add the white noise as short, pulsating "disturbances" with irregular pauses. If both particulate matter sizes (PM 2.5 and PM 10) are below the WHO thresholds, I do not add noise but only play the music.
First Impressions and Feedback: Initially, I hoped that adding white noise to the music would have an effect similar to a disturbed radio or television signal. Unfortunately, I found that a much more finely tuned amount of noise would be needed to achieve this effect rather than a regular, unexciting annoyance. Moreover, the concept did not work for highly polluted areas, because there I experienced constant background noise next to the music, which was alarming and annoying but not as artistic and informative as I had wished. Compared to the other concepts, I find this approach artistically less exciting and strongly dependent on the music the user plays.
So far, I have developed five different sonification concepts. But more detailed feedback and comparison between the concepts would be desirable. This information can be obtained in a user study, where the product is tested in practice.
The study is only planned this semester; conducting it is part of the future work.
The goal of the user study would be to compare the concepts and improve the user experience. Points of interest are the pleasantness and the informativeness of the sound. Furthermore, I would like to get feedback if an introduction to the sonification concept is needed in the beginning. General advice from the user regarding the project and the device could be collected at the end of the survey.
In general, I would like to ask each participant to walk with the sensor and headphones on for about 5 minutes (or longer). After each walk, the user is asked to answer the questions about the respective concept in a study questionnaire.
To test the intuitiveness of the different sonification concepts, I would like to divide the study into two groups.
The first group hears the concepts as they are. Without any further introduction, they have to interpret entirely freely what they hear.
The second group will receive a short introduction before each concept. This introduction should explain in a condensed form what exactly is sonified and show with short examples how particularly extreme or particularly positive measurements will sound.
Comparing the results of both groups can lead to insights into which parts of my sonification are intuitively understandable.
Each participant is asked to fill in the Study Questionnaire which can be evaluated at the end. Based on the insights gained, I can then further develop the most promising concepts in the future.
My goal in this project was to make people more aware of air pollution in their daily lives by sonifying real-time air pollution data with a hand-held device. In the beginning, I stated: "I want to enable people to hear what is around them in the air already. I want people to hear the air pollution around them." Overall, I can say that with the creation of a small device and the associated development of five sonification concepts, I enable users to hear the air around them. The quality of this listening experience can be viewed in a differentiated way.
Since the musical concepts of last semester's project were difficult for laypeople to understand, this year I focused more on the actual sound of air and air pollution. By connecting the data more directly to intuitive audio samples, I hope to increase the comprehensibility of the sonifications. In addition, I hope for a more interesting sound image due to the more versatile sounds of the sound snippets used. However, whether these objectives have been met can only be judged once the prepared user study has been carried out and evaluated.
On the hardware side, creating the transportable device means not only the compact consolidation of all hardware components used; it also enables the use of live data. The direct conversion of measurement data from the sensor to audio gives the project a much higher relevance than the mere use of already collected test data.
Limitations and resulting paths for future work are mainly based on the user study. Since the basic setup and the questions for the questionnaire have already been worked out, the next step is to test the transportable design outside with multiple test persons. After that, the evaluation of the study will show possible strengths and limitations of the different sonification concepts. With these insights gained, I can continue working on the most promising ideas. Right now, I wonder if I should decide on one sonification concept in the end. Another option could be implementing an interactive setup that lets users work with the audio samples that proved most valuable in the user study.
This interactive sonification is also motivated in the literature. Hermann and Hunt [8] state that most sonification "fails to engage users in the same way as musical instruments" because it lacks physical interaction and naturalness. The naturalness of sound in the real world already got my attention when thinking about an intuitive way of monitoring air pollution. But sonification with physical interaction possibilities would be a new level.
For my sonification, I map data to audio samples in multiple ways, combining several of these mappings within one concept. Hermann and Hunt recommend including interactive controls and input devices in this mapping.
For example, the user could choose the sound of the most alarming data points to be a Geiger counter, smoke detector sound, or heavy breathing. Maybe the preferences even change when being in different locations of the city. I imagine interactive customization of the sonification would increase user satisfaction and usage and maybe consolidate personal associations with air pollution.
The results of the user study will help to improve user satisfaction. But from a developer's perspective, I could already think of a few ideas that could enhance the experience of my project.
Right now, the sonification does not take previously collected data into account. Transitions in the audio could become smoother by adding a memory of what was played before. This would also allow fine-tuning the sonification behavior in different environments, which is especially needed to improve concept 5. Otherwise, the user has to deal with constant white noise without gaining any more specific information from the data.
Such passages, in which little development happens in the data, seem very monotonous and tiring for users of the sonification. The question arises to what extent the data can be brought closer to the user without tiring them, or whether the faithful representation of monotonous data is more important.
I would like to thank Sven Köhler for his input on other air pollution projects and sensors. The literature he shared was of great help when setting up the sensor. Moreover, I thank Malte Barth for his support last semester when we took our first steps with the sonification of air pollution data together. During that time, Kaffe Matthews and Henrik von Coler shared their help and expertise with us. Thank you for providing test data, sharing ideas, and helping to set up the technical framework.
[1] Schmid, N. (2020). Bauchatemtraining. Retrieved from https://www.gesundheitskasse.at/cdscontent/?contentid=10007.826380 on 2021-09-06
[2] Sharma, A., TED@BCG Toronto (2018). Ink made of air pollution. Retrieved from https://www.ted.com/talks/anirudh_sharma_ink_made_of_air_pollution on 2021-03-27
[3] Umweltbundesamt (2021). Feinstaub. Retrieved from https://www.umweltbundesamt.de/themen/luft/luftschadstoffe-im-ueberblick/feinstaub on 2021-03-27
[4] umweltzeitung (2017). Braunschweigs Luftzustand - Kein Grund zum Aufatmen. Retrieved from https://www.umweltzentrum-braunschweig.de/fileadmin/_uwz-pdfs/2017-03/Kein_Grund_zum_Aufatmen.pdf on 2021-08-30
[5] World Air Quality Index Team (started in 2008). The World Air Quality Project: Echtzeit-Luftqualitätsindex (LQI). Retrieved from https://aqicn.org/here/de/ on 2021-03-27
[6] Zimoun (2009). Woodworms. Retrieved from https://www.zimoun.net/sculptures/ and https://vimeo.com/14424815 on 2021-08-23
[7] Barrass, S., & Kramer, G. (1999). Using sonification. Multimedia systems, 7(1), 23-31.
[8] Hermann, T., & Hunt, A. (2005). Guest editors' introduction: An introduction to interactive sonification. IEEE multimedia, 12(2), 20-24.
[9] Kramer, G. (1994). Auditory Display: Sonification, Audification and Auditory Interfaces. SFI Studies in the Sciences of Complexity, Proceedings Volume XVIII. Addison Wesley, Reading, Mass.
[10] Kramer, G., Walker, B. N., Bonebright, T., Cook, P., Flowers, J., Miner, N., et al. (1999). The Sonification Report: Status of the Field and Research Agenda. Report prepared for the National Science Foundation by members of the International Community for Auditory Display. Santa Fe, NM: International Community for Auditory Display (ICAD)
[11] McGee, R. (2009). Auditory displays and sonification: Introduction and overview. University of California, Santa Barbara.
[12] Rauterberg, M., & Styger, E. (1994). Positive effects of sound feedback during the operation of a plant simulator. In International Conference on Human-Computer Interaction (pp. 35-44). Springer, Berlin, Heidelberg.
[13] Tractinsky, N., Katz, A. S., & Ikar, D. (2000). What is beautiful is usable. Interacting with computers, 13(2), 127-145.
[14] Vickers, P. (2011). Sonification for process monitoring. In The sonification handbook (pp. 455-492). Logos Verlag.
[15] Arango, J. J. (2018). AirQ Sonification as a context for mutual contribution between Science and Music. Revista Música Hodie, 18(1).
[16] Bellona, J., Park, J., & Bellona, D. (2014). #Carbonfeed, About. Retrieved from https://carbonfeed.org/ on 2021-03-16
[17] cdm (2013). A Sci-Fi Band and Music Made from Ozone Data: Elektron Drum Machine, Sax Sonification. Retrieved from https://cdm.link/2013/11/sci-fi-electronic-band-music-made-ozone-data-elektron-drum-machine-sonification/ on 2021-03-16
[18] FoAM (2020). Sonic Kayaks. Retrieved from https://fo.am/activities/kayaks/ on 2021-03-16
[19] Harmonic Laboratory (2014). #CarbonFeed - The Weight of Digital Behavior. Retrieved from https://vimeo.com/109211210 on 2021-03-16
[20] Kasper Fangel Skov (2015). Sonification excerpt #4: Rio de Janeiro. Retrieved from https://soundcloud.com/kasper-skov/sonification-excerpt-4-rio-de on 2021-03-16
[21] St Pierre, M., & Droumeva, M. (2016). Sonifying for public engagement: A context-based model for sonifying air pollution data. International Community on Auditory Display. (sound files: https://soundcloud.com/marcstpierre retrieved 2021-03-16)
[22] Bicrophonic Research Institute (2020). Environmental Bike (2020). Retrieved from https://sonicbikes.net/environmental-bike-2020/ on 2021-03-16
[23] Kaffe Matthews (2020). Environmental Bike (2020). Retrieved from https://www.kaffematthews.net/project/environmental-bike-2020 on 2021-03-16
[24] Kaffe Matthews (2020). Sukandar connects the air pollution sensor / Environmental Bike gets real. Retrieved from https://www.kaffematthews.net/category/Lisbon/ on 2021-03-16
[25] QCGInteractiveMusic/Andrew R. Brown (2020). 39. Modifying Audio File Playback with Pure Data. Real-time Music and Sound with Pure Data vanilla. Retrieved from https://www.youtube.com/watch?v=br7Hcx_FLoc on 2021-03-28
[26] Henrik von Coler (2020). Puredata. Retrieved from https://hvc.berlin/puredata/ on 2021-03-16
[27] Patrick Davison (2009). Open Sound Control (OSC). Retrieved from https://archive.flossmanuals.net/pure-data/network-data/osc.html on 2021-03-16
[28] Johannes Kreidler (2009). Programmierung Elektronischer Musik in Pd. Kapitel 3. Audio. Retrieved from http://www.pd-tutorial.com/german/ch03.html on 2021-03-16