Ripples: Composition with Time and Space
An Environment Musification iOS Application for Atlanta Botanical Garden
Fall 2018 - Spring 2019
In this project documentation paper, we introduce "Ripples," an environment musification iOS application for the Atlanta Botanical Garden. "Ripples" composes music with time and space to enhance the user's audio experience while visiting the Atlanta Botanical Garden. The application enhances the user's audio experience in three ways: generating ambient music based on environment data; replaying pre-recorded environmental sound samples when the user revisits certain places and introducing them as part of the music; and binaurally panning the two closest locations to act as audial navigation guidance. The design and implementation details of the application are demonstrated in the paper, and an evaluation is given in the last part.
This master's project is entitled "Ripples," an iOS application that provides an audio-augmented experience for users visiting the Atlanta Botanical Garden. The application generates ambient music based on location, environment, and time to create an album of time and space. "Ripples" is a "soundtrack" for the visuals one experiences in the botanical garden; it is a tool to make people pay attention to nuances they normally take for granted; and it is an audial navigation tool to guide people through the garden. Overall, "Ripples" is an augmentation of the hearing and visiting experience. In this project documentation paper, I will first discuss the motivation and design philosophy of the project. In the second section, related works will be presented. In the third section, the design and implementation details of the application will be demonstrated. In the last section, the evaluation process will be presented and the results will be discussed.
Motivation and Design Philosophy
A “Soundtrack” for the User’s Visuals
The soundtrack is an effective tool for a filmmaker or a video game developer to deliver emotion, push the development of the plot, and more. A well-crafted film or game soundtrack can elevate the work to the next level. People's daily routines, on the other hand, are like a movie or a first-person role-playing game without a soundtrack. The lack of music, especially when people are alone, can make them feel bored. With the development of technology, people can stream music on their smartphones, but this music has no association with the visuals they are experiencing. One could switch songs or play a playlist made earlier to accompany the visuals and tasks at hand, but such music still lacks the association with the visuals that soundtracks designed for a specific scenario have.
What if we could compose music in real time, based on the user's surrounding environment, to accompany the visual scenes and the “diegetic sounds” (environmental sounds) people are experiencing, in order to augment their daily experience? One of the motivations of this application is to achieve exactly this goal: to augment people's daily experience from an audial perspective.
An Album of Time and Space
I treat "Ripples" as an album of time and space. It generates music based on location, environment, and time. The reason I made this a self-generating music application rather than a location-based soundtrack file player is that I want the music played by the app to fit the scenario and environment at any moment, so that the music has a strong association with the visuals one is experiencing. When the user is using "Ripples," the generated music slowly changes as the user walks through the garden, and depending on the weather and the time of year, different kinds of music will play. This iOS application is therefore an album that evolves with time and space.
An Audial Navigation Tool to Guide People in the Garden
“From the side, a whole range; from the end, a single peak; far, near, high, low, no two parts alike.
Why can't I tell the true shape of Lu-shan? Because I myself am in the mountain.” 
— Su Shi, ancient Chinese poet and philosopher
In this poem, Su Shi observes that every person lives in his or her own temporal space and inhabits his or her own surrounding environment. This limited perspective makes it hard for people to see the whole picture of the scenario they are currently experiencing. In short, the player sees less clearly than the bystander.
This happened to me when I first visited the garden: the garden can be divided into two sections, the forest section and the garden collection section, but I completely overlooked the garden collection section on my first visit. Many first-time visitors also report that it is easy to get lost in the forest section. In response, the Atlanta Botanical Garden provides a garden map to guide visitors. I personally find the map helpful for navigating to different points of interest, but I want to enjoy the beautiful scenery during the visit and do not want to continually check the map. With the paper map, it is also easy to miss points of interest because several places are located very close together. Regular visitors, meanwhile, tend to overlook points of interest that have only recently opened. Therefore, I would like "Ripples" to serve as an audial navigation tool that guides people from place to place in the garden and makes them pay attention to nuances they normally take for granted.
Related Works
Over the years, there have been many attempts to represent the environment in music composition. Sonic City is a wearable system that enables users to create electronic music in real time by walking through and interacting with the urban environment. The system collects data from the user's body, such as heart rate, arm motion, and speed, and data from the environment, such as light level, noise level, and temperature, to generate electronic music that is mapped to real-time processing of urban sounds. The Sonic City project focuses on the users themselves and the environment they currently inhabit. However, it does not take location and time data into account, so the sound design will be highly similar wherever locations share the same environmental characteristics. Making a location-specific musification application enables a precise music-to-location mapping; therefore, we chose location and time data as two of our main inputs for music generation. On the other hand, the sound design of Sonic City is entirely electronic. Electronic presentations of the environment sometimes suffer from a non-organic quality: the music bears little resemblance to the environment, making it hard for the user to associate the two.
One approach to this problem is to utilize concrete sounds. UrbanRemix is a collaborative and locative sound project that allows users to record and share sounds in their community through a mobile phone app and later remix them into electroacoustic compositions through a web interface. The aesthetics of UrbanRemix inherit the spirit of soundscape composition, acoustic ecology, and the chance approaches of John Cage. UrbanRemix incorporates an interesting approach to introducing environmental sounds into composition. However, the app only lets users record and tag sounds; composition and remixing were done manually via the web interface. This is because concrete samples are hard to analyze and utilize in music composition without manual manipulation.
While UrbanRemix records environmental sounds, RJDJ processes them in real time. This iOS application listens to the sounds of the environment through a microphone and harmonizes the audio stream to deliver an augmented-reality listening experience. However, RJDJ processes all incoming audio with the same signal-processing chain, rather than customizing its response based on a classification of the input signal, which makes the sound design too generic across different environments.
Our approach combines these projects' ideas and pushes them further in the following ways:
1. We use both space (geographical data) and time to generate music.
2. We combine both synthesized and concrete approaches in multiple layers of music generation. The concrete layers reinforce a tight connection between the environment and music generation, while the synthesized layers afford more precise algorithmic control.
3. We make the sound design decisions carefully for each place in the Atlanta Botanical Garden to ensure the sound is well associated with the place and scenario, while introducing a certain level of randomness and uncontrollability into the music generation.
Tiani, an iOS-based generative music app, was our first attempt at environment musification, created in 2017 in collaboration with Yongliang He and Tejas Rode. The system collects the GPS location, place-type information from the Google Maps API, and the user's walking pace. The place type defines the overall scale type and key of the generated music, and the GPS coordinates slightly modify the scale and synthesizer timbres. In this way, the same place type (e.g., a restaurant) will sound similar but not identical in different locations.
This project served as a conceptual foundation for “Ripples” in terms of collecting environment information with a smartphone and representing that data through a generative music system. However, Tiani's approach suffered from the same problem as the projects described in section 3.1: the synthesized sounds enabled precise control of the musical content, but users found the connection between the generated music and the environment too unclear and abstract. Therefore, we pursued a better approach in “Ripples.”
System Design and Implementation
In this section, we will first discuss the UI/UX design. Then we will discuss the system design and mechanism. The implementation details will also be presented in this part.
UI/UX Design
The original version of the application did not contain any graphical user interface, because I believed that all interaction between the application and the user could happen in the audial domain. However, the first few beta tests showed this was not ideal: users reported that they were not used to a non-graphical application and occasionally still wanted to check their phones for the current status of the application. Therefore, the task of designing a graphical user interface began.
The user interface contains three stages/screens. The first is the app launch screen, which shows the name of the application, its logo, and the place where the application is used (in this case, the Atlanta Botanical Garden). The launch screen displays for two seconds, then automatically switches to the map screen and starts all functions.

In the upper part of the screen is the map view. When the application starts, the map view shows the pre-located points of interest. Because these points of interest are presented as circles of different sizes, we named them “Ripples.” The user can pinch the screen to zoom the map in and out. At the bottom of the screen is a green tag showing the current “Ripple” name. Above the green tag are five white circular icons. The two icons on the left are interactive: one allows the user to mute the music for conversation, and the other mutes or unmutes the notification sound indicating that the user has reached a “Ripple.” The three icons on the right are information icons indicating the season of the year, the time of day, and the current weather. This information is all used in the music generation, so it is also presented in a musical way.

The bottom green tag can be swiped up to expand into a full green sheet card. The green sheet card is the secondary user interface; ideally, the user should be able to navigate the garden without looking at the map view, using only the information provided on this card. A white circle presents the current “Ripple's” name. Underneath the white circle are the three information icons again. Below the information icons, the application shows the names of the two nearest Ripples and the distance from the current location to each. Below that is a description of each place and its sound design ideas. At the very bottom is a mute button just like the one in the map view interface.
System Design and Mechanism
The application keeps track of the following data: the user's location, the user's heading, the season of the year, the time of day, the current weather, and the user's walking speed. Whenever the user's location changes, the system checks whether the current location is inside a ripple. If it is, the application plays the sound design associated with that ripple; information such as time, weather, and walking pace affects the melody and rhythm of the music generation. If the user's current location is not inside a ripple, the system first stops the current sound design track and then finds the two ripples nearest to the current location. Once the two nearest ripples are found, a weight table is applied to them to decide which ripple the user should visit next. The weight table allows the application to guide the user through the garden while avoiding unnecessary repeated visits to the same ripples, and it forms a connected map of ripples. A sketch of this selection logic is shown below.
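The following is a minimal sketch of how this nearest-ripple search and weight-table scoring might look in Swift. The type and property names (Ripple, RippleNavigator, weightTable, visitCounts) and the scoring formula are illustrative assumptions, not the application's actual code.

```swift
import CoreLocation

// Illustrative model of a "Ripple": a circular point of interest.
struct Ripple {
    let id: String
    let center: CLLocation
    let radius: CLLocationDistance   // meters
}

final class RippleNavigator {
    let ripples: [Ripple]
    /// Hand-tuned weights per (from, to) pair; higher means "prefer this transition".
    var weightTable: [String: [String: Double]] = [:]
    /// How many times the user has already entered each ripple.
    var visitCounts: [String: Int] = [:]

    init(ripples: [Ripple]) { self.ripples = ripples }

    /// Returns the ripple containing the user's current location, if any.
    func currentRipple(at location: CLLocation) -> Ripple? {
        ripples.first { $0.center.distance(from: location) <= $0.radius }
    }

    /// Picks the next ripple: take the two nearest, then score each with the
    /// weight table, penalizing ripples the user has already visited.
    func nextRipple(from location: CLLocation, leaving current: Ripple) -> Ripple? {
        let nearest = ripples
            .filter { $0.id != current.id }
            .sorted { $0.center.distance(from: location) < $1.center.distance(from: location) }
            .prefix(2)
        return nearest.max { score(from: current, to: $0) < score(from: current, to: $1) }
    }

    private func score(from: Ripple, to: Ripple) -> Double {
        let weight = weightTable[from.id]?[to.id] ?? 1.0
        let penalty = Double(visitCounts[to.id] ?? 0)
        return weight - penalty   // repeated visits lower a ripple's score
    }
}
```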
Once the next ripple is decided, the system returns the direction and distance to that ripple. The direction and distance are mapped into the binaural audio system, which serves as the audial guidance system, and the sound design of the ripple is then played through it.
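One plausible way to implement this mapping on iOS is with AVAudioEnvironmentNode, which renders a mono source binaurally over headphones. The paper does not specify which spatializer "Ripples" uses, so this is a hedged sketch; the function names and scaling constants are assumptions.

```swift
import AVFoundation

// Sketch: binaural guidance with AVAudioEnvironmentNode.
let engine = AVAudioEngine()
let environment = AVAudioEnvironmentNode()
let player = AVAudioPlayerNode()

func setUpSpatialAudio(monoFormat: AVAudioFormat) throws {
    engine.attach(environment)
    engine.attach(player)
    player.renderingAlgorithm = .HRTF        // binaural rendering for headphones
    // The source must be mono for 3D mixing to take effect.
    engine.connect(player, to: environment, format: monoFormat)
    engine.connect(environment, to: engine.mainMixerNode, format: nil)
    try engine.start()
}

/// Place the next ripple's sound source around the listener.
/// `bearing` is the angle from the user's heading to the ripple, in radians;
/// `distance` is in meters, clamped and scaled so attenuation stays audible.
func updateGuidanceSource(bearing: Double, distance: Double) {
    let scaled = Float(min(distance, 100) / 10)   // clamp + scale: an assumption
    player.position = AVAudio3DPoint(
        x: scaled * Float(sin(bearing)),
        y: 0,                 // the Y axis proved hard to convey (see the evaluation)
        z: -scaled * Float(cos(bearing)))         // -z is in front of the listener
}
```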
Implementation Details
I used AudioKit 4.6 for some of the sound synthesis, OverlayContainer for the Apple Maps-style sheet card UI, the Google Maps API for the map view interface, and Solar for checking the sunrise and sunset times of a given location. I also implemented a granular synthesizer for the ambient layer. For more details about the sound engine, see section 5.
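As a small illustration, the sunrise/sunset lookup with the ceeK/Solar library might look like the following; the coordinate below is an approximation of the Atlanta Botanical Garden's location, not a value taken from the app.

```swift
import CoreLocation
import Solar

// Sunrise/sunset lookup with ceeK/Solar; the coordinate approximates
// the Atlanta Botanical Garden and is an assumption, not app code.
let garden = CLLocationCoordinate2D(latitude: 33.790, longitude: -84.373)
if let solar = Solar(for: Date(), coordinate: garden) {
    let isDay = solar.isDaytime       // drives the "time of day" music input
    print(solar.sunrise ?? "n/a", solar.sunset ?? "n/a", isDay)
}
```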
Compositional Aesthetics and Sound Engine
In this section, we will first discuss the compositional ideas. Then we will discuss the sound engine and sound design.
In this project, we treat the iOS application as an album of space and time. The user needs to explore the Atlanta Botanical Garden in order to experience the full album. When users visit the garden and walk from place to place, the generated sound changes with the change in space and time. Because of the relatively small scale of the botanical garden, this change is subtle. On the other hand, if the user decides to stand still at a certain location, the generative music still changes, because time keeps moving. Since the space and time we work with are rather large (the entire botanical garden and the whole day), the change occurs very slowly but remains noticeable. Texture defines the color of the piece. Therefore, instead of using concrete sounds as the source for the ambient layers of the music, we synthesize an ambient texture that can be controlled more carefully. Since an ambient layer alone might not be sonically interesting, we build several layers of sound on top of it, for example, rhythm patterns from stored concrete samples. To enable a strong connection between the music and the environment, we also introduce many concrete sounds as samples. These samples are either pre-recorded in the garden or drawn from instruments that might be associated with the places.
We have four types of inputs: surrounding environment data collected with the iPhone's GPS sensors and the Google Maps API; time data, comprising the time of day and the season of the year; weather data; and the user's walking speed. Four layers of sound engines converge into the mixer. The ambient layer uses environmental data to select FM synthesizer parameters or to choose different samples to feed into the granular synthesis method. The acoustic instrument layer (piano) uses environment data as parameters for its sequencer. The concrete layer delivers audio through a parameter-driven sequencer that plays samples I previously recorded in the garden. The ambient environmental sound layer feeds the microphone input directly into an EQ balancer and uses environmental data to determine which environmental sounds should be emphasized at the current location and time. Depending on which Ripple (location) the user is currently in, the weather, the time of day, the time of year, and the user's walking pace, different sound design parameters and musical rhythms are chosen to generate the music. A sketch of this layer architecture follows.
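A simplified sketch of the four layers converging into one mixer, written against AudioKit 4.x as the paper uses, might look like the following. The specific node choices, the sample file name, and the parameter mappings are illustrative assumptions, not the app's actual signal graph.

```swift
import Foundation
import AudioKit

// Four layers converging into one mixer (AudioKit 4.x style).
let ambient = AKFMOscillator()               // ambient layer: FM synthesis
let piano = AKMIDISampler()                  // acoustic instrument layer
let kotoURL = Bundle.main.url(forResource: "koto", withExtension: "wav")! // hypothetical sample
let concrete = AKPlayer(url: kotoURL)!       // concrete layer: pre-recorded garden samples
let mic = AKMicrophone()
let emphasis = AKEqualizerFilter(mic)        // environmental sound layer (see next section)

let mixer = AKMixer(ambient, piano, concrete, emphasis)

func startEngine() throws {
    AudioKit.output = mixer                  // AudioKit 4.x global output
    try AudioKit.start()
}

/// One illustrative mapping: weather and walking pace shape the ambient layer.
func applyEnvironment(cloudCover: Double, walkingPace: Double) {
    ambient.modulationIndex = 1 + cloudCover * 4       // denser spectra on overcast days
    ambient.baseFrequency = 110 * pow(2, walkingPace)  // faster walking, higher register
}
```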
Environmental Sound Emphasizing
Our system enhances certain types of environmental sounds based on location and time. For example, in an area of bird habitats, the system emphasizes the frequency bands of bird songs. This function aims to give us artistic control over the “diegetic sound” mentioned in the design philosophy section. If the user is wearing noise-canceling earplugs, the emphasis becomes more prominent and other environmental sounds are pushed to the background. With this function, we can select which environmental sounds are delivered to the user. The emphasis function is achieved by an equalizer implemented with AudioKit 4.6. Depending on the environment data (location, place type, and time), different equalizer presets are loaded to enhance the real-time environmental sounds heard by the user, as sketched below.
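A minimal sketch of such a preset-driven emphasis equalizer, assuming AudioKit 4.x's AKEqualizerFilter. The center frequencies, bandwidths, gains, and place-type names are assumptions rather than the app's tuned presets.

```swift
import AudioKit

// Microphone -> single-band emphasis EQ. All preset values are assumptions.
let mic = AKMicrophone()
let emphasis = AKEqualizerFilter(mic,
                                 centerFrequency: 4_000, // songbirds sit roughly at 2-8 kHz
                                 bandwidth: 2_000,
                                 gain: 8)                // boost (units per AKEqualizerFilter's gain)

/// Load a different emphasis preset depending on the place type.
func loadEmphasisPreset(for placeType: String) {
    switch placeType {
    case "birdHabitat":
        emphasis.centerFrequency = 4_000
        emphasis.gain = 8
    case "waterFeature":                  // hypothetical second preset
        emphasis.centerFrequency = 500
        emphasis.gain = 4
    default:
        emphasis.gain = 1                 // unity gain: pass the environment through
    }
}
```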
Concrete Layer
The concrete layer is central to the sound design aesthetics, as it emphasizes the space/time composition idea and builds a strong connection between the environment and the generative music. The concrete layer uses two types of samples. The first is sounds that I pre-recorded in the garden; here we introduce time as a trigger parameter, so that certain stored, classified samples are replayed when the user visits certain places at certain times, to evoke the memories associated with that place and time. The second type is instrument samples. For example, the Japanese Garden uses a koto sample as the starting point for its piece: koto samples are fed into the granular synthesizer to create an ambient layer that still keeps the koto's sound color, which helps people associate the music with Japanese culture.
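The granular treatment of such samples could look like the following minimal grain scheduler built on AVAudioPlayerNode. This is a sketch under stated assumptions (grain length, file name, a single player node), not the synthesizer actually implemented in the app.

```swift
import AVFoundation

// Replays short segments ("grains") of a sample from random offsets, which
// keeps the koto's timbre while dissolving its attacks into a texture.
final class GrainPlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()
    private let file: AVAudioFile

    init(fileURL: URL) throws {
        file = try AVAudioFile(forReading: fileURL)
        engine.attach(player)
        engine.connect(player, to: engine.mainMixerNode, format: file.processingFormat)
        try engine.start()
        player.play()
    }

    /// Schedule one ~100 ms grain from a random position in the sample.
    /// Note: a single player node queues grains back-to-back; a fuller granular
    /// synth would rotate several player nodes so that grains overlap.
    func scheduleGrain() {
        let grainLength = AVAudioFrameCount(file.processingFormat.sampleRate * 0.1)
        let maxStart = max(0, file.length - AVAudioFramePosition(grainLength))
        let start = AVAudioFramePosition.random(in: 0...maxStart)
        player.scheduleSegment(file, startingFrame: start,
                               frameCount: grainLength, at: nil)
    }
}

// Usage sketch ("koto.wav" is a hypothetical bundled sample):
// let grains = try GrainPlayer(fileURL: Bundle.main.url(forResource: "koto", withExtension: "wav")!)
// Timer.scheduledTimer(withTimeInterval: 0.08, repeats: true) { _ in grains.scheduleGrain() }
```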
Evaluation
In this section, we will first discuss the evaluation study methodology. Then we will discuss the results of the evaluation.
The evaluation study took place in the afternoon. A total of 17 subjects participated. The experiment had two steps: visiting the Atlanta Botanical Garden while using the iOS application, and participating in a focus group. The garden visit took 60 minutes, and the focus group took around 30 minutes. The data are confidential, and we did not collect identifying information such as participants' names or email addresses. Before the experiment, participants were helped to install the music generation iOS application on their smartphones, and an introduction to the application and how to use it was given. When the experiment started, the participants were asked to walk around the botanical garden as they normally would and listen to the sounds and music generated by the iOS application. After the visit, the participants were invited to a focus group to talk about their experience using the application. Audio recordings were made during the focus group (they were later transcribed to text, the recordings were deleted after transcription, and the names of the speakers were not noted). The focus group was aimed at improving the interface and features to create a better version of the application.
Results and Discussion
In this section, I quote some of the focus group transcriptions to discuss the common feedback and suggestions from the participants of the study:
“The sound design adds another color to the environment rather than reflecting the environment”
I think this is a common dilemma of musification and sonification tasks. One of the key sound design approaches I follow here is to allow the inputs (time of day, location, season of the year) to affect the music generation only within a limited range, so that I retain full control over the sound produced by the music generation system. The advantage of this approach is that, because I have a great deal of control over the output, I can make sure it sounds pleasing. On the other hand, because of that control, I unconsciously add more of my own color and understanding of the environment to the music rather than reflecting the environment itself. Some participants who were familiar with my style of music said that they experienced the application in a different way: they treated the environment and the music generated by the application as two separate entities. Although they enjoyed the music and the view, the experience was essentially the same as listening to one of my ambient compositions while enjoying the view, which is contrary to what I intended in the first place. I think a good solution is to keep a balance between randomness and control: that is, to let the inputs control the music generation parameters more, but not to the extent of sounding bad.
“Visual map and navigation system needs to include more features”
These comments mostly came from the staff of the botanical garden, who suggested that the visual map should include more features, like a real map application. This suggestion is interesting, as I had not originally planned to have a visual map interface in the application at all. I think it arises for two reasons. First, the Y position of the binaural audio system does not work very well in this implementation, so the distance from the current location to the next ripple is not well presented in the audial domain. Although the distance to the nearest ripples is shown in the interface, the weak audial presentation led to the request for more features in the map system. Second, it comes from a "game playing" perspective, which I personally also find interesting. One staff member suggested that some people have visited many times but might not have visited all the places (ripples) in the garden. It would be convenient if they could see how many ripples they have visited and select one of the unvisited ripples as a destination; the audial system could then help them navigate to the ripple they want to visit. I think this "gameplay" perspective would make the application more interesting, and it could be implemented in future versions.
“Didn’t take advantage of generative music to balance the environment sounds and generative sounds”
I think this is the most helpful feedback. Although the application is designed so that the music changes slowly with time, it is hard to hear the difference when you visit the same ripple within a short interval. This is because I only allow the inputs to affect the music generation within a limited range, so that I keep full control of the sound. At the same time, I did not take advantage of the core characteristic of generative music: since the music is generated in real time, it should respond sensitively to the environment. Some participants said that in some situations the music covered the natural sounds. It would be great if the application could balance the generated sounds and the environmental sounds by itself, so that they blend together as an integrated piece. A solution is to exploit the generative nature of the music: I can have the microphone analyze the current environment and keep the music generated by the application from masking the frequency range of the environmental sounds, as sketched below. I believe this would give the application a better user experience, since participants wearing bone-conduction earphones reported an overall better experience than participants with noise-canceling earplugs.
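The proposed analysis could start from something as simple as the following: given a magnitude spectrum of the microphone signal (e.g., from an FFT tap), find the environment's dominant band and choose a register for the generative layer that stays out of it. The sample rate, FFT size, thresholds, and register ranges below are all assumptions.

```swift
// A sketch of the proposed environment-aware register selection.
// `spectrum` is a magnitude spectrum of the microphone signal.
func dominantFrequency(spectrum: [Double],
                       sampleRate: Double = 44_100,
                       fftSize: Int = 1_024) -> Double {
    // Bin i of an N-point FFT corresponds to i * sampleRate / N Hz.
    let loudestBin = spectrum.indices.max { spectrum[$0] < spectrum[$1] } ?? 0
    return Double(loudestBin) * sampleRate / Double(fftSize)
}

/// Choose a pitch range for the generative layer that avoids masking the environment.
func nonMaskingRegister(spectrum: [Double]) -> ClosedRange<Double> {
    let dominant = dominantFrequency(spectrum: spectrum)
    // Busy lows (wind, crowds): generate above them; busy highs (bird song):
    // retreat to a lower register so the birds stay audible.
    return dominant < 1_000 ? 1_000...4_000 : 110...880
}
```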
I think the majority of the problems and suggestions for improvement concern not technical issues but the concept and the sound design. To make a better version of the application, I need to take these suggestions into account thoughtfully and reconsider the concept of the application before building the next version.
For the sound design, I will introduce a real-time analysis of the surrounding environmental sound to improve the generative sound design. Since the music coming from the application is generative, we can exploit this so that the generated music does not mask, but instead blends well with, the surrounding environmental sounds. For the navigation system, I can improve the Y position of the binaural audio feature for indicating the distance to the nearest ripple, making the audial navigation more compelling. The visual map in the UI could be more precise and intuitive. There are also other minor tweaks to the application's functions, features, and UI design. I look forward to the next stage of this project, in which the application could be presented to the public as a special event or become an official application of the Atlanta Botanical Garden.
Conclusion
In this documentation paper, we presented "Ripples," an iOS application that provides an audio-augmented experience for users visiting the Atlanta Botanical Garden. The application generates ambient music based on location, environment, and time to create an album of time and space. It enhances the user's audio experience in three ways: generating ambient music based on environment data; replaying pre-recorded environmental sound samples when the user revisits certain places and introducing them as part of the music; and binaurally panning the two closest locations to act as audial navigation guidance. A beta-testing study with 17 subjects was conducted to evaluate the application design, and the application received positive feedback. I am looking forward to the next stage of this application.
Acknowledgments
I would like to acknowledge Professor Jason Freeman for his supervision and support of this project. I would like to thank Wenyu Mao for UI/UX design assistance and consulting. I would also like to acknowledge the contributions of Yongliang He and Tejas Rode to the Tiani project, which is the foundation of this project. Finally, I thank my classmates at Georgia Tech for their support and encouragement.
References
J. Freeman, C. DiSalvo, M. Nitsche, and S. Garrett, “Rediscovering the City with UrbanRemix,” Leonardo, vol. 45, no. 5, pp. 478–479, 2012.
A. Bassoli, C. Cullinan, J. Moore, and S. Agamanolis, “TunA: A Mobile Music Experience to Foster Local Interactions,” in Adjunct Proceedings of UbiComp ’03, Seattle, USA, 2003.
L. Gaye and L. E. Holmquist, “In Duet with Everyday Urban Settings: A User Study of Sonic City,” in Proceedings of New Interfaces for Musical Expression, 2004.
L. Gaye, R. Mazé, and L. E. Holmquist, “Sonic City: The Urban Environment as a Musical Interface,” in Proceedings of New Interfaces for Musical Expression, 2003.
C. Howell, Solar: A Swift micro library for generating sunrise and sunset times (ceeK/Solar), 2019.
C. Roads, Microsound. MIT Press, 2001.
“AudioKit – Powerful audio synthesis, processing, and analysis, without the steep learning curve.” [Online]. Available: https://audiokit.io/. [Accessed: 29-Apr-2019].
J. Cardiff and M. Schaub, Janet Cardiff: The Walk Book. Thyssen-Bornemisza Art Contemporary (www.tba21.org/), 2005.
“applidium/ADOverlayContainer: Non-intrusive iOS UI library to implement overlay based interfaces.” [Online]. Available: https://github.com/applidium/ADOverlayContainer. [Accessed: 29-Apr-2019].
S. M. Kostka, Materials and Techniques of Twentieth-Century Music. Prentice Hall, 1990.
J. Drucker and W. H. Gass, The Dual Muse: The Writer as Artist, the Artist as Writer. Washington University Gallery of Art, 1997.
“Augment. Hear the world your way.” [Online]. Available: http://augment.audio/. [Accessed: 29-Apr-2019].