The Emotional Space | #17 | Core Logic

The logic module (still in the same Pure Data patch) is responsible for the direct control of the soundscape. This comprises translating the processed sensor data into useful variables that can be mapped to parameters and functions in Ableton, ultimately leading to an entertaining and dynamically changing composition that reacts to its listeners’ movements. Over the course of this semester, I wrote down various functions that I thought might add value to the soundscape development – but I soon realized that most of the functions and mappings simply need to be tried out with music before I can grasp how well they work. The core concepts I have currently integrated are the following:

Soft movements

These are great for giving a direct feeling of control over the system, since every movement can produce a sound when mapped to, for example, the volume level of a synth. An example might be a wind sound (swoosh) that gets louder the faster one moves their hand through space.
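
A minimal sketch of such a mapping, assuming the movement speed arrives as an already processed value in arbitrary sensor units (the maximum speed of 2.0 is just a placeholder):

def speed_to_gain(speed, max_speed=2.0):
    """Map a movement speed (0..max_speed) linearly onto a 0..1 gain;
    faster movements make the wind 'swoosh' louder. Values are clamped."""
    return max(0.0, min(1.0, speed / max_speed))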

Hard movements

These are good for mapping percussive sounds; however, depending on one’s movements, the timing is sometimes a little off and therefore distracting. They work well for indirect applications, like counting hard movements to trigger events, etc.

Random trigger

Using hard movements as an input variable, a good way to introduce movement-based but still stochastic variety into the soundscape is to output a trigger by chance with each hard movement. This could mean that with each hard movement, there is a 5% chance that such a trigger is sent. This is a great way to switch a playing MIDI clip or sample to another.
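
A small sketch of this idea in Python (the 5% probability and the clip-switching action are only examples):

import random

def random_trigger(probability=0.05):
    """Called once per detected hard movement; returns True with the given
    probability, so roughly one in twenty hard movements fires a trigger."""
    return random.random() < probability

# on every hard movement:
# if random_trigger():
#     switch_to_next_clip()  # hypothetical action, e.g. launching another clip in Ableton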

Direct rotation mappings

The rotation values of the watch can be mapped very well onto parameters or effects in a direct fashion.
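
As a simple illustration, a normalized rotation value (0..1) could be scaled linearly onto a parameter range, e.g. a filter cutoff (the 200–8000 Hz range is only an example):

def rotation_to_cutoff(rotation, low=200.0, high=8000.0):
    """Map a normalized rotation value (0..1) onto a cutoff frequency in Hz."""
    return low + rotation * (high - low)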

Levels

A tool that I see as very important for a composition that truly has the means to match its listeners’ energy is the introduction of different composition levels, based on the visitors’ activity. How such a level is changed is quite straightforward:

I record all soft-movement activity (whether a soft movement is currently active or not) into a 30-second ring buffer and calculate the percentage of activity within that buffer. If the activity percentage within those 30 seconds crosses a threshold (e.g., around 40%), the next level is reached. If the activity stays below a threshold for long enough, the previous level becomes active again.
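
A sketch of this level logic, assuming the activity flag is sampled about ten times per second (buffer size, sampling rate and both thresholds are placeholder values):

from collections import deque

class ActivityLevels:
    """Ring buffer of soft-movement activity (1 = active, 0 = inactive).
    With 10 samples per second, 300 samples cover roughly 30 seconds."""

    def __init__(self, size=300, up_threshold=0.4, down_threshold=0.2):
        self.buffer = deque(maxlen=size)
        self.up = up_threshold
        self.down = down_threshold
        self.level = 0

    def update(self, soft_movement_active):
        self.buffer.append(1 if soft_movement_active else 0)
        # only evaluate once a full 30-second window has been collected
        if len(self.buffer) == self.buffer.maxlen:
            activity = sum(self.buffer) / len(self.buffer)
            if activity > self.up:
                self.level += 1          # next level reached
                self.buffer.clear()      # start measuring again for the new level
            elif activity < self.down and self.level > 0:
                self.level -= 1          # fall back to the previous level
                self.buffer.clear()
        return self.level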

While using all those mechanisms I could already reach a certain level of complexity in my soundscape, my supervisor inspired me to go a step further. It was true that all the direct mappings – even if modulated over time – risked becoming very monotonous and boring after a while. So he gave me the idea of using particle systems inside my core logic to introduce more variation and create more interesting mapping variables.

Particle System

I found an existing Pure Data library that allows me to work with particle systems, which is based on the Graphics Environment for Multimedia (GEM). This did, however, mean that I had to actually visualize everything I was calculating, but at least for my development phase this turned out to be very helpful, since it is difficult to imagine how a particle movement might look just from a table of numbers.

I set the particles’ three-dimensional spawn area to move depending on the rotation of the three axes of the wristband. At the same time, I move an orbital point (a point that particles are pulled towards and orbit) in the same way. This has the great effect that there is a second velocity system involved that does not depend directly on the acceleration of the wristband (it does, however, depend on how quickly the wristband is rotated). The actual parameters that I use in mappings are all of a statistical nature: I calculate the average X, Y and Z position of all particles, the average velocity of all particles and the dispersion from the average position. Using those averages in mappings introduces an inertia to the movements, which makes the composition’s direct reactions to one’s motion more complex and a bit arbitrary.
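
A sketch of those statistical mapping variables, assuming each particle is represented by a position and a velocity vector (this data layout is just an illustration, not how GEM stores its particles):

import math

def particle_statistics(particles):
    """Compute the average position, average speed and dispersion of a set of
    particles, each given as a dict with 'pos' = (x, y, z) and 'vel' = (vx, vy, vz)."""
    n = len(particles)
    avg_pos = [sum(p['pos'][i] for p in particles) / n for i in range(3)]
    avg_speed = sum(math.sqrt(sum(v * v for v in p['vel'])) for p in particles) / n
    dispersion = sum(
        math.sqrt(sum((p['pos'][i] - avg_pos[i]) ** 2 for i in range(3)))
        for p in particles
    ) / n
    return avg_pos, avg_speed, dispersion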

The Emotional Space | #15 | Software News

Following the signal flow: in Pure Data, the wristband sensor data is read in and processed, and modulators and control variables are created and adjusted. Those variables are then used in Ableton (Session View) to modulate and change the soundscape. This means that during the installation, Ableton is where sound gets played and synthesized, whereas Pure Data handles the sensor input and converts it into logical modulator variables that control the soundscape over the short and long term.

While I separated the different structural parts of the software into their own “modules”, I only have one Pure Data patch with various sub-patches, meaning that the separation into modules was mainly done for organizational reasons.

Data Preparation

The first module’s goal was to clean and process the raw sensor data so that it becomes usable for the further processing steps – something I spent a very substantial amount of time on during this semester. It was a big challenge to work with analogue data that is subject to various physical influences like gravity, electronic inaccuracies and interference, and simple losses or transport problems between the different modules, while at the same time needing quite accurate responses to movements. Additionally, the rotation data includes a jump from 1 to 0 on each full rotation (as if it jumped from 360 to 0 degrees), which also needed to be translated into a smooth signal without jumps (converting to a sine was very helpful here). Another issue was that the bounds (0 and 1) were rarely fully reached, meaning I had to find a way to reliably achieve the same results with different sensors at different times. I developed a principle of tracking the min/max values of the raw data and stretching that range to 0–1. This means that after switching the system on, each sensor needs to be rotated in all directions for the Pure Data patch to “learn” its bounds.
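
A minimal sketch of both ideas – learning the bounds and smoothing out the rotation wrap-around – written in Python rather than as a Pd patch:

import math

class RangeLearner:
    """Track the min/max of a raw sensor value seen so far and stretch it to 0..1.
    This is why each sensor has to be rotated in all directions after start-up."""

    def __init__(self):
        self.lo = float('inf')
        self.hi = float('-inf')

    def normalize(self, raw):
        self.lo = min(self.lo, raw)
        self.hi = max(self.hi, raw)
        if self.hi == self.lo:
            return 0.0
        return (raw - self.lo) / (self.hi - self.lo)

def smooth_rotation(normalized_rotation):
    """Convert a wrapping 0..1 rotation into a sine, which is continuous
    across the 1 -> 0 jump."""
    return math.sin(2 * math.pi * normalized_rotation)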

I don’t calculate how the sensor is currently oriented in space (which I possibly could do, using gravity), and I soon decided that there is no real advantage in using the acceleration values of the individual axes – only the total acceleration (calculated with the Pythagorean theorem) matters. I process this total acceleration value further by detecting ramps, using the Pure Data threshold object. As of now, two different acceleration ramps are detected – one for hard and one for soft movements. I defined a hard movement as the kind of motion one makes when shaking a shaker or hitting a drum with a stick, whereas a soft movement is activated continuously as soon as one’s hand moves through the air at a certain speed.
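
A rough sketch of the total acceleration and the two ramps (the threshold values are placeholders and would need tuning; in the patch this is handled by the threshold object rather than a simple comparison):

import math

def total_acceleration(ax, ay, az):
    """Magnitude of the acceleration vector, i.e. the Pythagorean theorem in 3D."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_movement(total_acc, soft_threshold=0.15, hard_threshold=0.6):
    """Classify a single acceleration sample as a hard movement, a soft movement
    or no movement, using two illustrative thresholds."""
    if total_acc > hard_threshold:
        return 'hard'
    if total_acc > soft_threshold:
        return 'soft'
    return None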

I originally imagined that it should be rather easy to play a percussion sound on such a “hard movement”; however, I realized that such a hand movement is quite complex, and a lot of factors play a role in making the sound feel authentic. The peak in acceleration when stopping one’s hand is mostly bigger than the peak when starting a movement. But even when using the stopping peak to trigger a sound, it doesn’t immediately sound authentic, since the band is mounted on one’s wrist and its acceleration peaks at slightly different moments than one’s fingertip or fist would.

The Emotional Space | #12 | Signal Flow

It takes quite a number of steps to get from a sensor movement to a change in the soundscape. To make the process and my tasks for building a prototype a little more tangible, I created a small sketch that includes some signal flow descriptions and other details laying out the logical sequence of this project’s interactive capabilities. I will use this blog entry to display and explain that sketch.

The complete signal flow sketch