The Emotional Space | #20 | Next Steps

The biggest change I expect next semester is receiving the SOMI-1 kit from Instruments of Things. This new kit will only come out during summer, but it entails exactly what I need: the wristband sensors look and perform much like the ones from the 2.4SINK kit, but the receiver is just a small USB interface, meaning there is no analogue data to clean up anymore. This of course makes the time I spent cleaning that data feel a little redundant, but it also gets rid of the bottleneck, allowing me to read data from more than one sensor simultaneously. That brings me to my next point: I will spend some time planning the multi-sensor mapping – basically defining what influence each sensor has on the soundscape when more than one sensor is active.

Other than that, I want to spend a lot of time on the actual composition. Furthermore, I would like to plan the installation itself, since I want it to be a multi-sensory experience. Either way, I am very much looking forward to spending more time on this project.

https://cdn.shopify.com/s/files/1/0261/8310/6635/files/SOMI-1_ad53ec6d-f1e8-47fe-91c8-b0d7252106c4.png?v=1653412311

The Emotional Space | #19 | Ableton

While the composition should of course always be in the back of my mind while developing the logic and mappings, the part in Ableton is where the music finally comes into play. I have not found a great workflow for creating such a composition yet, since I mostly end up changing mappings around, but I expect that once I know my mappings well, the workflow will become easier and more intuitive.

I think that Ableton is a great choice for exactly this type of application because it was developed for live settings – a category this installation also falls into. Ableton’s Session View is arranged into clips and samples that can be looped and always play in sync, without a fixed end. I mostly play all tracks at once and use MIDI-mapped parameters to turn up the volume of certain tracks at certain points.

Composing for this format reminds me a lot of what I know about game sound design, since the biggest challenge seems to be creating a composition whose parts might play together in ways one can barely anticipate, while still always sounding good and consistent.

For the proof-of-concept composition, I used some stereo atmosphere I recorded myself and adjusted its width depending on the dispersion of the particles. A pad with many voices was always running in the background, with slight pitch adjustments depending on the particles’ average position, as well as a low-pass filter that opens when one’s arm is lifted up and closes when it is pointing downwards. With each soft movement, a kind of whistling sound is played, which is also modulated in pitch depending on the arm’s position (up/down). This whistling sound also gets sent into a delay with high feedback, making it one of the only sounds at that level that reveals the BPM. As soon as the second level is reached, a hi-hat with a phaser effect on it starts playing, bringing a bit more of a beat into the soundscape. The phaser is controlled by the average velocity of the particles. The pattern of the hi-hat changes with a five-percent chance on every hard movement. The third level slightly changes the pad to be more melodic and introduces a kick and different hi-hat patterns. All in all, the proof-of-concept composition was very fun to do and worked a lot better than expected. I am looking forward to spending more time composing next semester.

The Emotional Space | #18 | MIDI

To be able to send MIDI data to Ableton, I need to run a virtual MIDI cable. For this project, I am using the freeware “loopMIDI”. It acts as a MIDI device that I can configure as the MIDI output device in Pure Data’s settings and as a MIDI input device in Ableton.

In Pure Data, I need to prepare the data to be in the right format (0 to 127) to be sent via MIDI and then simply use an object that sends it to the MIDI device, specifying the value, controller, and channel numbers. I am mainly using the “ctlout” object, which sends control change messages via MIDI. I did experiment with the “noteout” object as well, but I have not had a need for sending notes so far, since I would rather adjust keys and notes dynamically in Ableton.
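To illustrate the idea outside of Pure Data, here is a minimal Python sketch (using the mido library, not part of my actual patch) that scales a normalized 0–1 value to the 0–127 MIDI range and sends it as a control change message to the loopMIDI port. The port name, controller number, and channel are assumptions and depend on the local setup.

```python
import mido

# Assumption: the virtual port created by loopMIDI shows up under this name.
PORT_NAME = "loopMIDI Port"
CC_NUMBER = 20        # arbitrary controller number chosen for this example
MIDI_CHANNEL = 0      # corresponds to channel 1 in Ableton

def to_midi(value_0_to_1: float) -> int:
    """Scale a normalized 0..1 value to the 0..127 MIDI range."""
    clamped = max(0.0, min(1.0, value_0_to_1))
    return round(clamped * 127)

with mido.open_output(PORT_NAME) as port:
    # Send one control change message, e.g. derived from a sensor value of 0.42.
    port.send(mido.Message("control_change",
                           channel=MIDI_CHANNEL,
                           control=CC_NUMBER,
                           value=to_midi(0.42)))
```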

There is a mapping mode in Ableton that lets one map almost every changeable parameter to an incoming MIDI control. As soon as I turn on the mapping mode, I can select any parameter that I want to map, switch to Pure Data, click the control that should be mapped to the Ableton parameter, and it will immediately show up in my mapping table with the option to change the mapping range. While this approach is very flexible, it is unfortunately not very well structured, since I cannot read or edit the mapping table in another context (which would be very useful to avoid mistakes while clicking).

The Emotional Space | #17 | Core Logic

The logic module (still in the same Pure Data patch) is responsible for the direct control of the soundscape. This comprises translating the processed sensor data into useful variables that can be mapped to parameters and functions in Ableton, ultimately leading to an entertaining and dynamically changing composition that reacts to its listeners’ movements. Over the course of this semester, I wrote down various functions that I thought might add value to the soundscape development – but I soon realized that most of the functions and mappings just need to be tried out with music for me to be able to grasp how well they work. The core concepts I have currently integrated are the following:

Soft movements

Great for a direct feel of control over the system, since each movement can produce a sound when mapped to, e.g., the level of a synth. An example might be a wind sound (swoosh) that gets louder the quicker one moves their hand through space.
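As a minimal sketch of such a mapping (Python for illustration only; the speed ceiling and curve are assumptions, not values from my patch), the hand speed could simply be clamped and shaped before being used as the swoosh level:

```python
def swoosh_level(hand_speed: float, max_speed: float = 2.0) -> float:
    """Map hand speed to a 0..1 loudness value for the swoosh sound."""
    normalized = max(0.0, min(1.0, hand_speed / max_speed))
    return normalized ** 1.5  # exponent softens slow movements (assumed shaping)
```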

Hard movements

Good for mapping percussive sounds; however, depending on one’s movements, the timing is sometimes a little off and therefore distracting. They work well for indirect applications, like counting hard movements to trigger events.

Random trigger

Using hard movements as an input variable, a good way to introduce movement-based but still stochastic variety to the soundscape is to output a trigger by chance with each hard movement. This could mean that with each hard movement, there is a 5% chance that such a trigger is sent. This is a great way to switch from one playing MIDI clip or sample to another.
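Reduced to its core, the trigger is a single chance check per hard movement – here as a Python sketch (the 5% probability is just the example value from above):

```python
import random

TRIGGER_PROBABILITY = 0.05  # 5% chance per hard movement (example value)

def on_hard_movement() -> bool:
    """Return True when the stochastic trigger fires, e.g. to switch to another clip."""
    return random.random() < TRIGGER_PROBABILITY
```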

Direct rotation mappings

The rotation values of the watch can be mapped very well onto parameters or effects in a direct fashion.

Levels

A tool that I see as very important for a composition that truly has the means to match its listeners’ energy is the introduction of different composition levels, based on the visitors’ activity. How such a level is changed is quite straightforward:

I record all soft movement activity (whether a soft movement is currently active or not) into a 30-second ring buffer and calculate the percentage of activity within the buffer. If the activity percentage within those 30 seconds crosses a threshold (e.g., around 40%), the next level is reached. If the activity stays below a threshold for long enough, the previous level becomes active again.
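A minimal Python sketch of this mechanism, for illustration only – the sampling rate and the lower threshold are assumptions, and the Pure Data patch differs in its details:

```python
from collections import deque

SAMPLE_RATE_HZ = 10          # assumed rate at which the activity flag is sampled
WINDOW_SECONDS = 30          # 30-second ring buffer, as described above
LEVEL_UP_THRESHOLD = 0.40    # ~40% activity raises the level
LEVEL_DOWN_THRESHOLD = 0.15  # assumed lower threshold for dropping back a level

class LevelTracker:
    def __init__(self) -> None:
        self.buffer = deque(maxlen=SAMPLE_RATE_HZ * WINDOW_SECONDS)
        self.level = 1

    def update(self, soft_movement_active: bool) -> int:
        """Record the current activity flag and return the (possibly changed) level."""
        self.buffer.append(1 if soft_movement_active else 0)
        if len(self.buffer) < self.buffer.maxlen:
            return self.level                      # wait until the window is full
        activity = sum(self.buffer) / len(self.buffer)
        if activity > LEVEL_UP_THRESHOLD:
            self.level += 1
            self.buffer.clear()                    # start measuring again on the new level
        elif activity < LEVEL_DOWN_THRESHOLD and self.level > 1:
            self.level -= 1
            self.buffer.clear()
        return self.level
```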

While using all those mechanisms I could already reach a certain level of complexity in my soundscape, my supervisor inspired me to go a step further. It was true that all the direct mappings – even if modulated over time – risked becoming monotonous and boring. So he gave me the idea of using particle systems inside my core logic to introduce more variation and create more interesting mapping variables.

Particle System

I found an existing Pure Data library for working with particle systems, based on the Graphics Environment for Multimedia (GEM). This did mean that I had to actually visualize everything I was calculating, but I found that, at least for the development phase, this was very helpful, since it is difficult to imagine how a particle movement might look based only on a table of numbers. I set the particles’ three-dimensional spawn area to move depending on the rotation of the three axes of the wristband. At the same time, I move an orbital point (a point that particles are pulled towards and orbit) in the same way. This has the nice effect that a second velocity system is involved that does not depend directly on the acceleration of the wristband (it does, however, depend on how quickly the wristband is rotated). The actual parameters that I use in a mapping are all statistical in nature: I calculate the average X, Y and Z position across all particles, the average velocity of all particles, and the dispersion around the average position. Using those averages in mappings introduces an inertia to the movements, which makes the composition’s direct reactions to one’s motion more complex and a bit arbitrary.
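The statistics themselves are simple to express. The following NumPy sketch is purely illustrative (the actual values come from the GEM-based patch) and shows the three mapping variables: average position, average velocity, and dispersion around the average position.

```python
import numpy as np

def particle_statistics(positions: np.ndarray, velocities: np.ndarray):
    """positions and velocities have shape (n_particles, 3)."""
    avg_position = positions.mean(axis=0)                     # average X, Y, Z
    avg_velocity = np.linalg.norm(velocities, axis=1).mean()  # average speed of all particles
    dispersion = np.linalg.norm(positions - avg_position, axis=1).mean()
    return avg_position, avg_velocity, dispersion
```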

The Emotional Space | #16 | Mapping & Communication

Before I write about the core logic part, I would like to lay out what the communication between Pure Data and Ableton looks like. Originally, I had planned to create a VST from my Pure Data patch, meaning that the VST would run within Ableton and there would be no need to open Pure Data itself anymore. This also gave me the idea to create a visualization that provides basic control and monitoring capabilities and lets me easily change thresholds and parameters. I spent some time researching possibilities and found the Camomile package, which wraps Pure Data patches into VSTs, with visualization capabilities based on the JUCE library.

However, it did not take long before I found several issues with my communication concept. First off, the inputs of the Expert Sleepers audio interface need to be used while any other interface’s (or the computer’s sound card’s) outputs are active, which is currently not possible natively in Ableton for Windows. There is a workaround using ASIO4ALL, but that was not a preferred solution. Furthermore, a VST always implies that the audio signal flows through it, which I did not really want – I only needed to modulate parameters in Ableton with my Pure Data patch, not have an audio stream flowing through Pure Data, since that would give me even more audio interface issues. This led me to move away from the VST idea and investigate different communication methods. There were two obvious possibilities: the OSC protocol and MIDI. The choice was made quite quickly, since I figured that the default MIDI resolution of 128 steps was enough for my purpose and MIDI is much more easily integrated into Ableton. With that decision made, it was a lot clearer what exactly the core logic part needs to entail.

The Emotional Space | #15 | Software News

Following the signal flow, the wristband sensor data gets read in and processed in Pure Data, where modulators and control variables are created and adjusted. Those variables are used in Ableton (Session View) to modulate and change the soundscape. This means that during the installation, Ableton is where sound gets played and synthesized, whereas Pure Data handles the sensor input and converts it into logical modulator variables that control the soundscape over the short and long term.

While I separated the different structural parts of the software into their own “modules”, I only have one Pure Data patch with various sub-patches, meaning that the separation into modules was mainly done for organizational reasons.

Data Preparation

The first module’s goal was to clean and process the raw sensor data so that it becomes usable for the subsequent processing steps – something I spent a very substantial amount of time on this semester. It was a big challenge to work with analogue data that is subject to various physical influences like gravity, electronic inaccuracies and interference, and simple losses or transport problems between the different modules, while at the same time needing quite accurate responses to movements. Additionally, the rotation data includes a jump from 1 to 0 on each full rotation (as if it jumped from 360 to 0 degrees), which needed to be translated into a smooth signal without jumps (converting to a sine was very helpful here). Another issue was that the bounds (0 and 1) were rarely fully reached, meaning I had to find a way to reliably achieve the same results with different sensors at different times. I developed a principle of using the min/max values of the raw data and stretching that range to 0 to 1. This means that after switching the system on, each sensor needs to be rotated in all directions for the Pure Data patch to “learn” its bounds.
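Expressed outside of Pure Data, the two ideas look roughly like the following Python sketch – an illustration of the principle, not the actual patch; the class and method names are made up for this example.

```python
import math

class RotationCleaner:
    """Sketch: auto-learn a sensor's raw range and smooth the 1 -> 0 rotation wrap."""

    def __init__(self) -> None:
        self.min_seen = float("inf")
        self.max_seen = float("-inf")

    def normalize(self, raw: float) -> float:
        """Stretch the observed min/max range to 0..1 (the sensor must first be
        rotated in all directions so the bounds can be learned)."""
        self.min_seen = min(self.min_seen, raw)
        self.max_seen = max(self.max_seen, raw)
        span = self.max_seen - self.min_seen
        return 0.0 if span == 0 else (raw - self.min_seen) / span

    @staticmethod
    def unwrap(normalized: float) -> float:
        """Map the wrapping 0..1 rotation onto a sine so the jump at the wrap point disappears."""
        return math.sin(2 * math.pi * normalized)
```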

I don’t calculate how the sensor is currently oriented in space (which I possibly could, using gravity), and I soon decided that there is no real advantage in using the acceleration values of the individual axes, only the total acceleration (using the Pythagorean theorem). I process this total acceleration value further by detecting ramps, using the Pure Data threshold object. As of now, two different acceleration ramps are detected – one for hard and one for soft movements, where a hard movement is like shaking a shaker or hitting a drum with a stick, and a soft movement is activated continuously as soon as one’s hand moves through the air with a certain speed.
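As an illustrative sketch of this step (Python, with the threshold values as placeholder assumptions rather than the values used in the patch):

```python
import math

SOFT_THRESHOLD = 0.08  # placeholder values, not the tuned thresholds from the patch
HARD_THRESHOLD = 0.35

def total_acceleration(ax: float, ay: float, az: float) -> float:
    """Magnitude of the acceleration vector (Pythagorean theorem in three dimensions)."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def classify_movement(ax: float, ay: float, az: float) -> str:
    accel = total_acceleration(ax, ay, az)
    if accel > HARD_THRESHOLD:
        return "hard"  # short, percussive gesture (shaker / drum hit)
    if accel > SOFT_THRESHOLD:
        return "soft"  # continuous movement through the air
    return "none"
```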

I originally imagined that it should be rather easy to have a percussion sound play on such a “hard movement”; however, I realized that such a hand movement is quite complex, and a lot of factors play a role in making such a sound feel authentic. The peak in acceleration when stopping one’s hand is usually bigger than the peak when starting a movement. But even when using the stopping peak to trigger a sound, it does not immediately sound authentic, since the band is mounted on one’s wrist and peaks at slightly different moments in its acceleration than one’s fingertip or fist would.

The Emotional Space | #14 | Hardware Developments

Leaving speakers aside, the hardware requirements mainly consist of wireless wristbands with accelerometers and gyro sensors, a computer that can run Pure Data and Ableton, and everything it takes to get the wristbands and the computer to communicate.

As I had planned last semester, I got to try out the 2.4SINK Kit by Instruments of Things, which is a set of wireless wristband sensors and a receiver. The receiver is made for modular Eurorack setups, meaning that wristband movements are translated into control voltages (CVs) that can in turn be used to modulate synthesizers or effects. This is of course not necessarily optimal for my application, where I need all the wristband data on my laptop anyway. Thankfully, I was not the first one at my university who wanted to use the 2.4SINK Kit in that way: the module was already built into a small Doepfer box, together with an Expert Sleepers USB audio interface, which allowed me to receive up to 10 CV inputs (including the input expansion module) via USB on my laptop.

Wristband sensors

The sensors look very sturdy and of good quality, yet thin and subtle. On the backside, each watch has two metal pins that are used to switch the sensor on (when both pins are touched at the same time) and also serve as a mounting mechanism to fit them onto belt clips or fabric wristbands (“click-mount”). As soon as a sensor is switched on, it goes into calibration mode, which means it is of utmost importance to have the sensor lying still on a flat surface during this process (otherwise the values will drift constantly).

Receiver

The receiver consists of an antenna, a little toggle button that switches between “configuration” and “active” mode, and 16 dynamically configurable outputs. When switched on, the 2.4SINK receiver creates a wireless LAN network, which, when accessed, provides access to the configuration page of the 2.4SINK Kit. On the configuration page, it is possible to change a few global settings, as well as individual settings for all 16 outputs (while the switch on the receiver is set to “configuration” mode). For each output, it is possible to choose which parameter of which sensor (up to seven sensors can be connected) should modulate the output. For each sensor, six parameters can be chosen:

  • Rotation X
  • Rotation Y
  • Rotation Z
  • Acceleration X
  • Acceleration Y
  • Acceleration Z

Furthermore, it is possible to output LFO signals that are modulated by the sensor’s parameters. However, this is much more interesting for a modular setup and not very suitable for my use case.

USB Interface

The Expert Sleepers USB interface (ES-8 plus ES-6 input expansion module) works just as most other USB audio interfaces do. The CVs arrive as normalized input signals ranging from -1 or 0 (depending on whether the input is bipolar or unipolar) to 1.
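To bring both kinds of inputs onto the same footing before further processing, a conversion like the following Python sketch can be used (an illustration of the idea only; in my setup this kind of scaling happens inside the Pure Data patch):

```python
def to_unit_range(sample: float, bipolar: bool) -> float:
    """Convert an interface input sample to the 0..1 range.

    Bipolar inputs arrive as -1..1, unipolar inputs as 0..1.
    """
    value = (sample + 1.0) / 2.0 if bipolar else sample
    return max(0.0, min(1.0, value))
```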

Although only two sensors were available, the 2.4SINK receiver would still have supported reading out all parameters from both sensors (six plus six parameters, with the receiver featuring 16 outputs). The true bottleneck was the Expert Sleepers audio interface, which only has 10 inputs, meaning that not even all parameters from both sensors could be read simultaneously. This led me to the decision that the outcome for this semester would only be a proof-of-concept demonstration with a single sensor.

The Emotional Space | #13 | Artistic Contemplations

Last semester I spent a lot of time conceptualizing the whole installation as such. I had a very holistic approach, where the sound was clearly a main attraction, but all the other details seemed just as important. While developing this concept and contemplating what I want it to convey (and whether I want it to convey something at all), I also came up with the name “The Emotional Space”. As a quick recap – I framed my concept in one sentence as follows: “The Emotional Space describes a room that reacts at least as much to the mood of its visitors as the other way around”. While this might not be very easy to understand immediately, it does convey my artistic approach quite well: I wanted to create an installation that not only emphasizes, but downright lives through, the individual ways a visitor might want to experience it.

When this semester arrived, a lot of feedback and contemplation followed, and I realized that I was not fully standing behind the concept anymore. I got confused comments about the word “emotional” in the title and started questioning how appropriate it is – do I dare to claim to change people’s emotions with my installation? How fitting is such a term for describing a room? With those doubts in mind, my whole concept slightly lost its grounding, and I had to clear my mind and define what I really wanted to spend my time on during this project. But then again, being a sound design student, the choice was clear: I wanted to work with sound.

When I stopped thinking of the project as an installation, I quickly realized that what I wanted to achieve is easy to describe: I would like to create an explorative, interactive composition. And funnily enough, that description does not contradict anything I worked out last semester – it is rather a slightly more focused phrasing. While knowing what exactly I want to work on gave the project its direction back, I was still not happy with the word “emotional” in the title. But over the course of this semester, I warmed to “The Fluid Space”, which has a little more of a neutral standing and might not create too many expectations in its visitors. Some might disregard those topics as superficial, but I have a strong sense that this process was crucial for me to know what exactly I am working towards. What I am now striving to create is a format rather than an installation. Of course, the presentation of this format will still be embedded into an installation that may still follow the holistic values I had in mind last semester. However, I want to further discover the possibilities of the format of explorative, interactive compositions, aided by the sensors I have in place. My modular approach should make it possible to create entirely different kinds of compositions – also in their arrangement and interactivity – while working with the same core logic.

The Emotional Space | #12 | Signal Flow

It takes quite a number of steps to get from a sensor movement to a change in the soundscape. To make the process and my tasks for building a prototype a little more tangible, I created a small sketch that includes some signal flow descriptions and other details that lay out the logical sequence of the interactive capabilities of this project. I will use this blog entry to display and explain the aforementioned sketch.

The complete signal flow sketch

The Emotional Space | #11 | First Tests

With a little delay, I finally managed to get my hands on some 2.4SINK sensors by Instruments of Things – two of them, to be more exact, together with one receiver, the 2.4SINK Eurorack Module. Unlike the SOMI series that is to be released this summer, the 2.4SINK kit natively only works with control voltages (CVs), to be used in a modular setup (or literally any setup that works). The kit that my university kindly provided, which is currently standing on my desk, sits in a Doepfer casing, where it is connected to a power supply and an Expert Sleepers ES-8 USB audio interface. Furthermore, there is an input expansion module in this rack, the Expert Sleepers ES-6, increasing the interface inputs from just four to ten.

Instruments of Things 2.4SINK Eurorack Module in a Doepfer casing attached to an Expert Sleepers audio interface