_Making Sound playable

One could summarize Christopher Ariza’s paper as ‘using a game controller as an interface for live music performance’ – how it works, and what its benefits and limitations are. A controller, in the paper often referred to as a ‘dual-analog gamepad’, is originally designed as a gaming peripheral for consoles and computers. But people figured out early on that all of its inputs can also be interpreted by a computer as MIDI signals, so that sounds or modifiers can be mapped to those inputs – music is then generated by pressing buttons on the controller. This is not even limited to a single instrument or soundscape: since plenty of buttons remain free on the device, some can be used to switch between different instruments, each with its own constraints or a completely different control scheme. And because complex interaction patterns are possible, such as simultaneous button presses, the number of immediately available instruments increases considerably.
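
To make the mapping idea concrete, here is a minimal sketch (my own, not from the paper) that reads gamepad button presses with pygame and turns them into MIDI note messages with mido; the button-to-note mapping and the use of the default MIDI output port are assumptions.

```python
# Illustrative sketch: gamepad buttons -> MIDI notes.
# The note mapping below is hypothetical, not from the paper.
import pygame
import mido

NOTE_MAP = {0: 60, 1: 62, 2: 64, 3: 65}  # button index -> MIDI note (C4, D4, E4, F4)

pygame.init()
pygame.joystick.init()
pad = pygame.joystick.Joystick(0)        # first connected controller
pad.init()
out = mido.open_output()                 # default MIDI output port (needs a MIDI backend)

while True:
    for event in pygame.event.get():
        if event.type == pygame.JOYBUTTONDOWN and event.button in NOTE_MAP:
            out.send(mido.Message('note_on', note=NOTE_MAP[event.button], velocity=100))
        elif event.type == pygame.JOYBUTTONUP and event.button in NOTE_MAP:
            out.send(mido.Message('note_off', note=NOTE_MAP[event.button]))
```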

The underlying concept – repurposing a digital interface and turning its interactions into inputs that a machine uses to generate something – is not new, yet it is still not seen everywhere. In most live musical performances such input methods are rare, although they could greatly enhance the audience’s perception of the artist: the performer would no longer just interact with a laptop via ‘conventional’ input methods like mouse and keyboard. As the paper rightly states, it creates the impression that the artist actually ‘plays an instrument’ and has become proficient in its use.

Coming back to the paper: it mostly explains existing interface mappings for controllers, but its main goal is to promote the use of, and experimentation with, literally ‘playing’ a controller to create new experiences in music making.

What struck me as most interesting is that the paper is now roughly ten years old, and numerous improvements and advances have been made in controller technology since then. If someone were to harness the various sensors, input and feedback methods of a latest-generation controller – like the PlayStation 5 DualSense controller – the possibilities would be mind-boggling.

To reiterate what this little piece of plastic and electronics can do:

  • 16 discrete buttons
  • 2 thumb sticks (essentially joysticks), which can also be pressed
  • Adaptive triggers for haptic feedback (creating varying resistance when pressing the triggers), which can also differentiate how strongly they are pressed
  • A touchpad (also pressable, like a button) that can track up to 2 fingers very precisely and distinguish between press locations, such as left and right
  • Vibration motors for haptic feedback (precise rumble sensations)
  • Acceleration sensor
  • Gyro sensor
  • LED light panel capable of displaying a wide range of colours
  • Built-in speaker
  • Built-in microphone
  • Headphone jack
  • Bluetooth connectivity (optimized out of the box for Apple products)

So, it’s quite a list of things a new-generation controller can do. For example, I thought of switching between instruments by dividing the touchpad into segments: touching a different segment would activate a different instrument. On top of that, the currently selected instrument could be represented by a corresponding colour on the LED panel, and a successful switch could be communicated through a short rumble of the controller, like a little shockwave, to give haptic feedback about the change. Also, since the touchpad detects touch and swipe inputs, an interaction like a DJ’s scratching could be emulated. There is one example where a game uses the touchpad to detect inputs for a guitar-playing minigame – in The Last of Us Part II. You choose a chord (from a radial menu of presets) via the thumb stick, and then strike individual strings or all of them via the touchpad to produce a sound.
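
A rough sketch of how the touchpad-segment idea above could work in software, assuming we receive a normalized touch x position from the controller; the segment count, the program numbers and the LED helper are hypothetical.

```python
import mido

SEGMENTS = 4                     # number of touchpad zones
PROGRAMS = [0, 19, 33, 118]      # segment -> General MIDI program numbers (illustrative)

def segment_for(x_norm):
    """Map a normalized touch x position (0.0-1.0) to a segment index."""
    return min(int(x_norm * SEGMENTS), SEGMENTS - 1)

def on_touch(x_norm, out, state):
    """Send a program change (and LED update) when the touched segment changes."""
    seg = segment_for(x_norm)
    if seg != state.get('segment'):
        state['segment'] = seg
        out.send(mido.Message('program_change', program=PROGRAMS[seg]))
        # set_lightbar_colour(seg)  # hypothetical helper that recolours the LED panel
```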

Staying on the topic of the LED panel: different events or states could be communicated with light, and even sound, to convey the rhythm, while haptic feedback via vibration or the adaptive triggers could indicate the beat and enable precise timing. With the precision vibrations, the current beat could be felt like a little bass drum – or, even wilder, whatever sound is currently being created with the controller could be turned into a vibration pattern, making the newly made music ‘tactile’ and adding an interesting layer of immersion and feedback.

The other sensors open up further input methods: the gyro sensor could map movements to music, similar to a theremin, and the acceleration sensor could trigger events such as a change in tempo or a drop. Using the built-in speaker as an output could also be helpful in a pinch – perhaps just for something small like a metronome – while the headphone jack could come in handy at every opportunity.
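
And a tiny sketch of the theremin-like gyro idea: mapping the controller’s tilt angle to a MIDI pitch-bend value. The ±45° range is an arbitrary assumption.

```python
def tilt_to_pitchbend(tilt_deg, max_tilt=45.0):
    """Map a tilt angle (degrees) to the 14-bit MIDI pitch-bend range (-8192..8191)."""
    t = max(-1.0, min(1.0, tilt_deg / max_tilt))   # clamp to [-1, 1]
    return int(t * 8191)

# e.g. tilt_to_pitchbend(22.5) -> 4095 (a half-way bend upwards)
```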

All in all, utilizing a modern controller like the DualSense could really open up new ways to make and literally ‘play’ music.


_Literature & Resources

NIME: Creativity in Children’s Music Composition

Introduction

I had a look at the paper “Creativity in Children’s Music Composition” by Corey Ford, Nick Bryan-Kinns and Chris Nash, which was published at NIME (https://nime.pubpub.org/pub/ker5w948/release/1) in 2021. The authors conducted a study examining which interactions with Codetta – a LOGO-inspired, block-based music platform – support children’s creativity in music composition. In this context, “LOGO” refers to Papert’s LOGO philosophy of supporting children’s learning through play. Such experiential learning approaches are based on intrinsic motivation and tinkering. The authors stated that there was a lack of empirical research investigating whether a LOGO-inspired approach is conducive to creativity, which is why their aim was to gain a better understanding of how children make music with such technologies.

About the study

The concrete research question of the study was “Which interactions with Codetta best support children’s ability to be creative in composition?”, with the aim of uncovering patterns of creativity within the participants’ first use of the tool. To get a better understanding of how Codetta works, I found this explanation video on YouTube: https://www.youtube.com/watch?v=b7iMPuEaPts. The study was performed with 20 primary school children aged 6 to 11. Due to the COVID situation the study was conducted in an online setting, where the children had to perform two tasks: 1) composing a short piece of music and 2) answering a post-task questionnaire afterwards.

Procedure

Once the children opened Codetta, they were provided with built-in, age-appropriate instructions to get to know the basics of the tool (see Fig. 1). After completing the tutorial, the children were asked to compose a short piece of music. No other motivation was given, to keep the task open-ended. Once finished composing, the children completed a short post-task questionnaire online.

Fig 1: The Final Result from Codetta’s Tutorial

Data collection

The following data was collected: 1) log data of each child’s interactions with Codetta (and consequently their final compositions); 2) questionnaire responses; and 3) expert ratings of each composition.

For visualizing the log data, a color scheme with several interaction categories was developed (see Fig. 2). The logs returned from Codetta were mined and visualised using Python and Anaconda. Once the data was prepared, statistical analysis was conducted using SPSS.
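
As an illustration of that pipeline (not the authors’ actual code), the sketch below counts interactions per category and renders them as a stacked bar chart with matplotlib; the category names and counts are made up.

```python
import matplotlib.pyplot as plt

categories = ['add note', 'raise/lower pitch', 'edit note length', 'playback', 'delete']
counts = {                      # hypothetical interaction counts per child
    'Child 1': [12, 30, 8, 5, 3],
    'Child 2': [20, 14, 11, 9, 6],
}

children = list(counts.keys())
bottoms = [0] * len(children)

fig, ax = plt.subplots()
for i, cat in enumerate(categories):
    values = [counts[c][i] for c in children]
    ax.bar(children, values, bottom=bottoms, label=cat)   # stack each category on top
    bottoms = [b + v for b, v in zip(bottoms, values)]

ax.set_ylabel('Number of interactions')
ax.legend()
plt.show()
```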

The post-task questionnaire consisted of 13 5-point Likert-scale statements, asking the children about their confidence in music lessons, writing music notation, using block-based programs and using computers, as well as the children’s perceptions of Codetta as a whole.

In order to quantify the children’s creativity, six expert judges rated each composition using several scales. Each judge assessed each child’s composition independently and in a random order. Their ratings were then averaged.

Fig 2: Color scheme for children’s interaction logs 

Results

I don’t want to go too much into the empirical data here, but to sum up, the results focus on three subsections: the children’s compositions, interactions and UI perceptions.

Most children composed short pieces (the mean length was 11.95 seconds) with conventional melodies, e.g. arcing up and down in pitch. The logged interaction data was visualised as a stacked bar chart using the color scheme mentioned before (see Fig. 3). The results of the questionnaire showed that the children felt they had to plan before writing their music and that they found it difficult to understand what each block was for.

Fig. 3: Stacked Bar Chart for the Total Percentage of All Interactions

Discussion

Based on the results several conclusions were drawn (shortened): 

  • Note-level interactions (rise/lower pitch, edit note length) best support children’s ability to be creative in music composition
  • First-time novice users should initially be encouraged to focus on learning how to use Codetta’s notation engine (i.e. Introduction workshops)
  • Codetta could automatically introduce blocks sequentially, based on children’s current knowledge and interaction patterns
  • More complex features should be introduced gradually
  • The UI Design could be more intuitive to avoid mistakes and confusion

Lastly, it should be mentioned that the results are limited by the small sample size and the users’ levels of experience. A longitudinal study with a larger number of participants would be needed to thoroughly investigate how interaction styles develop over time.

Own thoughts

I found it super interesting to read about the topic “creativity in children’s music composition” for several reasons: First of all, I spent a huge part of my childhood and teenage years in music schools, playing several instruments and taking music theory courses, but I never got in touch with actually composing music myself – neither in primary/high school nor in music school nor in any of the music-theory courses I took. So I totally agree with the authors’ statement that composing is a neglected area of music education. Moreover, I liked the connection between sound design, interaction design and sociology, since digital music composition combines all of that. It could also be useful as inspiration for finding a master’s thesis topic.

The “Results” section was a bit hard for me to understand, though, because I have only basic experience and knowledge in statistics and would need a deeper dive into that topic to get all the measures and values right. Yet it was nice to engage with research again after not having binge-read studies and empirical work in a while (speaking of the bachelor’s thesis).

Spire Muse: A Virtual Musical Partner for Creative Brainstorming

Every musical creation starts with an idea, which can be a phrase, a sound object or a rhythmic pattern. Musicians are inspired by individual sounds, bringing compositions to life and playing around the idea by improvising and adding parts to increase complexity. In songwriting sessions of music groups, new compositions often emerge through improvisational interactions between the musicians – so-called jams.

Spire Muse is a co-creative agent that supports musical brainstorming. The following is a brief explanation of how it works.

During jam sessions, basic musical ideas emerge, and the more one subsequently interacts with such an idea, the more diverse the musical form becomes. Creative interaction during improvisation is therefore very important for the emergence of further ideas. In general, musical interaction is a strategy built up from iterative phases: improvising musicians decide whether they want to change something in their sequences, which can either be an initiative (new) change or a reaction. The reaction categories are adoption, augmentation and contrast.

Feedback during improvisation is essential; if it is positive, it can reinforce certain ideas; if it is negative, it can extinguish the spark of an idea or lead to new ideas different from the previous one.

Creativity cannot be defined in a uniform way; it is a process of multiple interactions that trigger unpredictable and undefined results. With the emergence of human–computer interaction (HCI) and artificial intelligence (AI), the perception of creativity is changing. AI can influence the creative process in two ways: through systems that support human intelligence, so-called “creativity support tools”, or through systems that generate results which an unbiased observer would classify as creative. The symbiosis of human and computer-assisted idea generation in the creative process is then called co-creativity.

“We have focused on designing a co-creative system that realizes the concept of a virtual jam partner.”

– Notto J. W. Thelle and Philippe Pasquier

Spire Muse aims to create such co-creativity through a virtual jam partner. The strategy behind this is that the human and computer components interact with each other.

In human interaction, action and decision are intertwined; the human is usually the decision maker, a role a computer cannot simply take over. To avoid one side dominating the co-creation, interactive behaviors are analyzed and defined in four categories, classified by their reactive and proactive properties:

  • Shadowing – the system synchronously follows what the user does
  • Mirroring – information or musical content is mirrored back in a novel way
  • Coupling – the system can clearly take the lead
  • Negotiation – the system attempts to achieve its goal through output modification

When switching between the three behaviors – Shadowing, Mirroring and Coupling – Negotiation occurs, triggered either autonomously or by the user.

The Spire Muse agent was built on MASOM (Musical Agent based on Self-Organizing Maps) and implemented in Max.

The learning process of the agent starts, in a first stage, by slicing the audio data in the source folder (the corpus) and then labelling the individual audio slices with feature vectors. This results in 55 dimensions of melodic and harmonic data.

Duration, loudness, fundamental frequency and chroma are used to encode the harmonic dynamics.

The feature vectors are then mapped onto a self-organizing map (SOM), an artificial neural network that employs unsupervised learning, so that the feature vectors can be displayed on a two-dimensional topological grid. The tempo for each song is then derived by a Python script and sent via OSC, to match the original tempo of the piece. The last training step focuses on creating one sequence per song, so that repetitions and interaction dynamics can be represented.
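
As a minimal illustration of that mapping step (not the authors’ MASOM/Max implementation), the MiniSom library can project 55-dimensional feature vectors onto a two-dimensional grid:

```python
# Sketch: map high-dimensional audio-slice features to a 2D self-organizing map.
# Grid size, iteration count and the random data are placeholder assumptions.
import numpy as np
from minisom import MiniSom

features = np.random.rand(500, 55)          # stand-in for the real 55-dim feature vectors

som = MiniSom(16, 16, input_len=55, sigma=1.0, learning_rate=0.5)
som.random_weights_init(features)
som.train_random(features, num_iteration=5000)

# Each audio slice now has a cell on the 16x16 grid; nearby cells sound similar.
grid_position = som.winner(features[0])     # (row, column) of the best matching unit
```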

The four influence parameters are rhythmic, spectral, melodic and harmonic.

The influences can be adjusted with sliders, so any combination of relative influences is possible.

Interactive modes

  • Shadow Mode: the agent is reactive; it plays the most appropriate audio slice in the corpus for each registered onset in the input.
  • Mirroring Mode: the agent interacts reflexively, responding with similar phrases after prolonged listening to single phrases.
  • Coupling Mode: a song is automatically selected from the corpus with respect to two criteria: harmonic dynamics on the meso time scale and tempo similarity.

The program starts in Shadow Mode, which is both the initial mode and the fallback (evasive) mode: the program returns to it if the requirements for activating Mirroring or Coupling Mode are not met.

In addition to the automated behaviors of the agent, there are also buttons with the functions Back, Pause/Continue, Change and Thumbs Up.

Thumbs Up indicates to the system that the current interaction is good, and the current state is then maintained for the next 30 seconds. All buttons are operated with foot pedals.

“Throwback” is supportive of call-and-response interactions, but can become unpredictable in some SOM regions. These interfaces allow manipulation of behaviors and provide sufficient room for automated operations.

→ Spire Muse is not primarily about the agent’s musical performance, but about getting the user to create ideas with a sense of shared exploration. As the authors put it: “Ultimately, we believe that the most promising feature of Spire Muse is not the agent’s musical performance per se, but rather its ability to get users to explore ideas with a sense of shared ownership.”

In the future, Spire Muse is expected to be extended with further learning algorithms that reduce unpredictability through repeated use. By observing multiple sessions, the agent should be able to build a profile that recognizes behavior and thereby play different responses depending on the situation.

Personal Comment:

Encouraging creativity is always a great approach; just by engaging with this new interface, artists can already break out of their usual environment and be inspired. Such systems can be very useful in artistic development, especially when there are blockages or a lack of creativity. Nevertheless, as an artist, one should not rely solely on digital systems, but also be able to work with, and be inspired by, analog means.

I would find a stronger integration of human–human interaction very exciting, in order to bring more emotion into the generated ideas. Maybe implementing a jam-with-friends feature would be a good approach to bring several artists in different places together with computer support and to create a kind of community.

Source: Spire Muse: A Virtual Musical Partner for Creative Brainstorming – by Notto J. W. Thelle and Philippe Pasquier (https://nime.pubpub.org/pub/wcj8sjee/release/1)

NIME: TouchGrid – Combining Touch Interaction with Musical Grid Interfaces

Background

Touch interfaces for musical applications were first introduced in the early 90s by Schneiderman, initially for controlling vocal synthesis. Since then, the Monome grid has developed into the standard grid interface for musical equipment over the last 15 years.

The article I read explains their effort to provide an alternative solution: a grid device with capacitive touch technology that uses swipe gestures as a menu system, expanding the interface capabilities of today’s popular Launchpads.

One reason the Launchpad has kept its simplicity and popularity may be the fact that most instruments rely on haptic feedback, while almost none rely on visual feedback.

These two grid controllers, the Tenori-On and the Monome grid, pushed their interfaces into the music industry, and those interfaces have stayed the same for 15 years now.

Monome grid
Tenori-On

In more recent times the layout has been adopted by the more popular and professional standards, the various Launchpads.

Launchpad X

The great thing about the interface – and allegedly why it has been adopted – is its generic layout, which allows any artist to customize it and play however they want.

Added to the grid layout are the buttons around the “core”, allowing the Launchpad to control different hardware interfaces with specific software.

Their project

TouchGrid was developed to address the limited resolution and restricted space of grid controllers by using touch interaction. It keeps the generic button layout of the Launchpad, but uses capacitive touch to extend its possibilities.

1st iteration

Their first solution was a capacitive touch surface consisting of a 16×8 grid of touch areas. By using time-based data they recognized different interactions such as swipes and bezel interactions.
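
A hedged sketch of how such time-based recognition might look, using lists of (column, row, timestamp) touch samples; the thresholds are assumptions, not the authors’ values.

```python
def is_horizontal_swipe(samples, min_cols=4, max_duration=0.5):
    """samples: chronological list of (column, row, timestamp) touch points."""
    if len(samples) < 2:
        return False
    c0, r0, t0 = samples[0]
    c1, r1, t1 = samples[-1]
    return (abs(c1 - c0) >= min_cols        # travelled far enough horizontally
            and abs(r1 - r0) <= 1           # stayed on roughly one row
            and (t1 - t0) <= max_duration)  # fast enough to count as a swipe

def is_drag_from_offscreen(samples, edge_col=0):
    """True if the gesture starts on the outermost (bezel) column and moves inward."""
    return len(samples) >= 2 and samples[0][0] == edge_col and samples[-1][0] > edge_col
```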

Problem: due to the high processing requirements, the system ran at a lower maximum sample rate than preferred.

TouchGrid – 1st prototype

2nd prototype

For their next prototype they used an Adafruit NeoTrellis M4, a ready-made DIY kit with LEDs and silicone buttons, to expand their Launchpad.

From a wide range of possible interactions they chose “drag from off-screen” and “horizontal swipe”, as these are well known from the smart devices most people use today.

With a more restricted but better hardware layout they managed to fix the sampling-rate problem, and performance improved accordingly.

Touchgrid – 2nd prototype

In an innovative move, the group let touch interactions replace buttons from the Launchpad, leaving more space for music and less for navigation. The buttons the Launchpad allocates to menu-changing actions were replaced with the swiping motions we learned on our phones.

Their new menu layout is arranged spatially to reduce the learning curve and the mental workload of learning the device. Using the swiping interactions mentioned earlier, they create an intuitive mapping, as shown below.

Spatial layout

In the end they gathered user insights from 26 people with prior knowledge of similar instruments. The answers from their survey revealed that the touch patterns are recognized and that their mapping works. There are, however, worries that the suggested interaction system might add complexity and interfere with an already working product.

Conclusion

As they say themselves, they argue for expanding the capabilities of tangible devices instead of making touch screens more tangible – meaning they take what was learned from touch-screen interactions and implement it on grids and tangible devices for a haptic screen feel.

As an interaction designer, it is refreshing to see a well-developed tangible interaction system stand its ground against more “modern” touch-screen-based ones. In a very human way, this project combines what we already know and have adopted with another interaction dear to us from a different system.

Nerve Sensors in Inclusive Musical Performance

Methods and findings of a multi-day performance research lab that evaluated the efficacy of a novel nerve sensor in the context of a physically inclusive performance practice.

by Lloyd May and Peter Larsson

Making musical performance more accessible is something that many artists, such as Atau Tanaka and Laetitia Sonami, as well as scientists, have been aiming at for a while, and many efforts go in the “nerve sensor” direction. With this kind of approach, signals from nerve firing are detected rather than skeletal muscle movement, so performers with physical disabilities have more control over the sensors.

Even though the variety of gestures was not as broad as other gestural instruments offer, the sensor was better at communicating gestural effort, as shown in explorations of different sound practices such as free improvisation and the development of a piece called Frustentions.

Thanks to electromyography – a technique that measures the electrical activity of skeletal muscles and nerves through non-invasive sensors placed directly on the skin – more and more people with muscle atrophy or compromised volitional control of skeletal muscles have gained access to technologies, for example in gaming. But, as usually happens, the broader the accessibility, the more potentially harmful lenses can come with it. It is therefore important to keep in mind that every individual is unique, and to be aware of the invisible boundaries that technology can set around the people it is supposed to serve.

The more people with different physical and mental abilities get involved in these sound-making explorations, the better and more openly accessible the design of the interfaces will be.

For this specific exploration, four parameters were investigated: sensor position, gesture types, minimal-movement gestures, and various sound-mapping parameters. The lab was structured into several sessions, each concluding with a performative exploration, as well as a structured public showcase and discussion at the end of the lab. Other research lines, like minimal-movement “neural” gestures, were also investigated, but not much data could be gathered. The outcome of the sessions was the aforementioned composed piece, Frustentions, a fixed-media composition developed during the workshop.

Three groups of gestures were determined during the sessions in order to record the needed data: effort gestures, which were particularly suited to audio effects that work well with swelling, such as distortion, delay, or reverb; adjustment gestures, which often required full focus and were not necessarily accessible at all times during a performance; and trigger gestures.

The nerve sensor was compared with other interfaces like the MiMu glove, the Gestrument, and Soundbeam. Even though these instruments recognized a larger number of gestures with better accuracy, they were more challenging to use with limited fine-motor capabilities. In addition, the wearable sensor afforded the performer greater opportunities to connect visually with ensemble members or the audience, as there was no immediate requirement to view or interact directly with a screen.

Conclusions

Research aimed at making musical performance accessible to everyone is something that has multiple benefits, clearly on a physical level, but above all on a neural and psychological level. It is surprising how many things associated with leisure are out of reach for many people, simply because their physical condition does not meet the standards for which they are designed. The possibility that all these people can access activities of enjoyment represents a clear increase in the quality of life for them and for the people around them.

Nerve sensors are just one example, and thanks to this exploratory initiative we can get to know them and compare data with other instruments on the market. In more advanced stages of research, I would like to imagine that these interfaces are also used medically, to alleviate the effects of some diseases, improve physical conditions, and even reduce motor damage that originates in the brain by promoting nerve and muscle movement. Music is obviously a means of enjoyment, but together with science, it can be a means of healing.

NIME: Exploring Identity Through Design: A Focus on the Cultural Body Via Nami

Sara Sithi-Amnuai

In this article, the author explores the themes of personal identity and the cultural body. Identity is closely linked to the culture into which we are born; this culture, in turn, is inventive and full of art, music, and dance. The author points out that the design and application of gesture controllers is not widely discussed from this angle. Her goal is to embrace the cultural body, incorporate it into existing gesture controller design, and show how cultural design techniques can expand musical and social affiliations and traditions in technological development. Sara Sithi-Amnuai’s article discusses the design of Nami, a custom-made gesture controller, and its applicability to extending the cultural body. We develop freedom of action by perceiving the world in terms of our self-identity and collective identity. According to the references the author cites, “the mind is inseparable from our bodily, situational and physical nature”; she also notes that, taken together, all of this is called consciousness. Our bodies absorb movement and experience through the senses, vision, and sensations, which influence how we relate to our environment and how we behave.
What is this cultural body? The author says that it is a body subject, “marked by culture” and “talking” about cultural practice, itself, and history. Dancers, for example, often notice that their body is intimately tied to their identity and vision of themselves; often, after their career ends, they no longer understand themselves and find it very difficult to embark on the path of recovery.
The author also describes many practices related to design. The design process often takes place in four steps:
1. Sketching phase includes an input (“data”), functionality (“model”), and material/form.
2. Concept phase includes training data, ML model (training engine), and data/form relationship.
3. Critical Thinking phase includes purpose, intentions, culture, and material/form exploration.
4. Reflections phase includes input, functionality, and final materials/form.

One device that allows such a musical and cultural experience is Nami. Nami is a glove interface designed for live electro-acoustic musical performance, primarily as an augmented instrument. The goal of Nami was to explore and develop a new sign language beyond effective trumpet gestures and to integrate it with the author’s own experience and her cultural body. The trumpet was used with additional sensors, which provided additional sound control options. The trumpet leaves the left hand free to move while the right hand supports the instrument. The fundamental connection between the musician and the trumpet exists between the musician’s lips and the mouthpiece and then extends to the fingers. This scheme allows the performer to access multiple controls at the same time: the performer can play the instrument with the right hand and, with the left hand, operate multiple controls in real time, exploring, expanding, and amplifying the sound of the trumpet.

The author also pays a lot of attention to the materials and techniques used to create the glove. The first thing Sara Sithi-Amnuai notes is that the shape of the glove reflects the essence of the culture for which it was designed. Materials were chosen for affordability – only materials that could be obtained easily on a small budget and that are used in a wide range of sports or casual activities. In the third iteration, the glove was designed to fit every hand size: a wrist strap allows the user to lock the glove and sensors in place, while the fingerless design allows for flexible sensor placement depending on hand size.
In conclusion, I want to say that the article opens up a new understanding of manipulating music based on the individual experience of the performer, which in turn leads to more refined and culturally rich performances.

Kids and Interaction (X): Exhibition spaces for children (measures)

It is very important to bear in mind that a children’s space must be suitable for its users.
Children’s facilities are often the most complex to accommodate, as they need to be accessible to accompanying adults as well. In addition, as in any other facility, it is necessary to take into account people with reduced mobility, adapting heights and sizes.

Remember that all the following details are determined for the age range of 6 to 8 years old – not only because this is the target audience of this project, but also because at this age it is necessary to limit the range, as it is a time when physical and personal changes occur rapidly.

Taking this range into account, it is necessary to know the approximate height of our audience. In this case it is very similar for both sexes, between 115 cm and 127 cm. This means that any table, chair, device or sign should be within the range of vision and accessibility of a person of that height.

Knowing this, an analysis of the correct heights and spaces can be carried out. Reference is made to a guide for Glasgow museum exhibitions and a standard accessibility guide for exhibitions.

These guidelines determine that for ages 5-12 years, seating should have a minimum height of 32.5 cm and a maximum height of 45 cm; while standing desks should be between 52 cm and 82.5 cm. The knee space under these tables should be 61 cm high, 61 cm deep, 76 cm wide.

In addition, a child’s viewing height is between 101 cm and 147.4 cm when the child is standing; and 85.6 cm and 95 cm when the child is sitting. This allows a reach radius of between 54.5 cm and 88 cm when standing and 41 cm and 70.5 cm when seated.

There are also recommended widths between tables, walls or shelves: a minimum of 183 cm to allow space for two wheelchairs, with 223.5 cm recommended for areas specifically for children.

A summary table of all these concepts is included below.

References

García, I. (2021, May 7). Pesos y estaturas en niños recomendadas por la OMS. Todo Papás. https://www.todopapas.com/ninos/desarrollo-infantil/pesos-y-estaturas-en-ninos-recomendadas-por-la-oms-10165
Glasgow City Council. (n.d.). A Practical Guide for Exhibitions. https://www.britishcouncil.in/sites/default/files/guidelines_for_museum_display.pdf
Ingenium. (2018). Accessibility Standards for Exhibitions. https://accessibilitycanada.ca/wp-content/uploads/2019/07/Accessibility-Standards-for-Exhibitions.pdf

NIME: Hyper-hybrid Flute: Simulating and Augmenting How Breath Affects Octave and Microtone – An electronic wind instrument with MIDI output.

by Daniel Chin, Ian Zhang, and Gus Xia

Breath control 🤧🥱🥅

Breathing is becoming increasingly important for stress relief. However, it is not only good for controlling the body, but also for controlling wind instruments, for example the flute.

With the development of the Hyper-hybrid Flute, an attempt was made to integrate the profound role of breath control into a digital flute – and, to give away the result in advance, it was successful. In principle, musicians can control not only the volume, but also articulation, octave, micro-tones, etc. through breathing techniques on wind instruments. However, most existing digital versions do not capture the various effects of breathing the way the acoustic counterpart does; instead, they rely on additional interface elements. Here, an interface was developed that converts real-time breath data into MIDI controls. The Hyper-hybrid Flute can be switched between an electronic and an acoustic mode. In acoustic mode, the interface is identical to a regular six-hole recorder; in electronic mode, the interface recognizes the player’s fingering and breathing speed and converts them into MIDI commands.

SIDE NOTE: MIDI stands for Musical Instrument Digital Interface. It is a language that allows computers, musical instruments and other hardware to communicate with each other. The MIDI protocol includes the interface, the language in which the MIDI data is transmitted, and the connections needed for the hardware to communicate.

The Hyper-hybrid Flute interface makes three contributions in particular:

  • It simulates the acoustic property of the flute whereby a higher breathing speed leads to higher octaves and more micro-tonal pitch bending.
  • By exaggerating the parameters, the interface is expanded into a hyper instrument.
  • A simple toggle supports the change between electronic and acoustic mode.

To detect whether a hole is covered by a finger while playing, a ring-shaped capacitive sensor is placed around each of the six holes, and the breathing speed is measured by a BMP085 air pressure sensor.

Changing state

To enter the electronic mode, the musician inserts the air pressure sensor into the mouthpiece outlet. This mutes the recorder and simultaneously exposes the sensor to the air pressure in the recorder, from which the breathing speed is calculated. To enter acoustic mode, the player releases the air pressure sensor from the exit port, so that the playing of the interface is acoustic and the air pressure sensor is not triggered. The picture below shows the prototype with the attached sensors.

Controlling Octave and Micro-tone via Breath

The influence of the breath on the micro-tone and the octave can be modeled as follows:

  • Harder blowing at a pitch leads to an upward micro-tonal pitch bend.
  • When the breathing speed exceeds a specific threshold, the pitch jumps up an octave.

These breathing-speed thresholds increase with rising pitch, as the picture above shows: the higher the pitch being held (for example D#), the higher the breath velocity needed to jump to the next octave.

Measuring the relationship between pitch bend and breath pressure on an acoustic recorder yields a pitch-bend coefficient of 0.055. The micro-tone lets the musician perceive their position relative to the thresholds; this interactive feedback allows them to calibrate their breathing speed and avoid unexpected octave jumps. With a bend coefficient greater than 0.055 the interface becomes a hyper instrument, and the micro-tone as a musical device offers an additional dimension of expressiveness.
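
To make the described relationship concrete, here is a toy model; the exact formula, units and threshold values are my own assumptions, only the overall behaviour (linear bend, per-pitch octave threshold, coefficient 0.055) follows the text above.

```python
# Toy model: upward micro-tonal bend proportional to breath velocity,
# plus an octave jump once a per-pitch threshold is exceeded.
BEND_COEFF = 0.055        # measured coefficient; exaggerating it -> hyper instrument

def breath_to_pitch(base_midi_note, breath_velocity, octave_threshold, coeff=BEND_COEFF):
    if breath_velocity >= octave_threshold:
        base_midi_note += 12                      # jump up one octave
        breath_velocity -= octave_threshold       # remaining breath bends the new octave
    return base_midi_note + coeff * breath_velocity   # fractional MIDI pitch

# e.g. breath_to_pitch(63, breath_velocity=10, octave_threshold=50) -> 63.55
```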

How does it become a MIDI controller?

Knowing what pitch the instrument should produce at any given time does not make it a MIDI controller per se, because MIDI requires a discrete stream of Note On and Note Off events. So the interface must be stateful.

The breath velocity is compared to a threshold to determine whether the instrument should be at rest or producing a note. A rising edge in that signal marks the excitation of the instrument, which fires a Note On event. Meanwhile, a differentiator listens to the pitch and fires its output line when the pitch changes value. The differentiator output, conditioned on whether the instrument is at rest, also fires a Note On event.
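
A small sketch of how that stateful logic could look, assuming a simple breath threshold and mido for the MIDI messages; the threshold value and the retriggering behaviour are assumptions, not the authors’ implementation.

```python
import mido

BREATH_ON = 0.1          # assumed threshold above which the instrument is "excited"

class FluteNoteTracker:
    """Turns continuous breath/fingering readings into discrete MIDI events."""

    def __init__(self, out):
        self.out = out           # a mido output port
        self.sounding = False
        self.note = None

    def update(self, breath_velocity, fingered_note):
        excited = breath_velocity > BREATH_ON
        if excited and not self.sounding:
            self._note_on(fingered_note)        # rising edge -> Note On
        elif not excited and self.sounding:
            self._note_off()                    # falling edge -> Note Off
        elif excited and fingered_note != self.note:
            self._note_off()                    # fingering changed while sounding:
            self._note_on(fingered_note)        # retrigger on the new pitch
        self.sounding = excited

    def _note_on(self, note):
        self.out.send(mido.Message('note_on', note=note, velocity=100))
        self.note = note

    def _note_off(self):
        self.out.send(mido.Message('note_off', note=self.note))
```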

What tools are used?

The interface is wireless. All sensors are connected to an Arduino Nano, which communicates with a Processing 3 sketch via Bluetooth. The sketch uses The MidiBus library for MIDI messaging. The recorder body is modeled in Fusion 360 and fabricated with MJF 3D printing.

Reflections

The results of this research make it very clear that there is still room for innovation in the field of wind controllers. With the ability to measure octaves, a multi-modal music teaching system could be expanded to include breathing technique in the learning outcomes. The MIDI interface is accurate and allows for precise communication through the musician’s breathing.

The Hyper-hybrid Flute therefore presents an interesting step on the path to digitizing wind instruments, as well as towards new didactic concepts and more immersive learning. Besides teaching, I especially see this flute applied in arts and performance contexts, but also in commercial productions where simulations of wind instruments might be useful. Moreover, I want to mention the importance of interfaces as bridges from the analogue to the digital world, which this flute also represents. It is of high interest to combine these worlds to create even better and more comprehensive solutions and experiences. Analog and digital: these opposites both have their justification, are to a certain extent dependent on each other, and can definitely profit from each other’s strengths.

I want to close this post with Adrian Belew’s words: “Digital for storage and quickness. Analog for fatness and warmth.”

Source: https://nime.pubpub.org/pub/eshr/release/1

NIME: Creating an Online Ensemble for Home Based Disabled Musicians: why disabled people must be at the heart of developing technology.

by Amble Skuse, Shelly Knotts

Generally, the article addresses the use of universal design for software products that are accessible to musicians with various disabilities. Although I am not specifically involved with music interfaces myself, either privately or professionally, it was very interesting for me as an interaction designer to gain more insight into the field. Even though the article is about specific software tools and how they could be made more accessible, there is a lot of information and input that can be applied to any other area or digital product.

Disabilities

Within the first paragraphs of the text, the authors tackle the term “disability”. Rather than seeing disabled people as a minority group who cannot act the way non-disabled people can, they want to create a framework in which all people and their individual needs work together and are equally included, without one dominant group. This is also a recurring theme throughout the whole article: working with the knowledge and experience of disabled people instead of assuming or trying to “solve” problems for them. One key finding of their research is to bring disabled persons into the process, begin with an equitable approach, and make technology more flexible, robust and inclusive.

Universal Design

As previously mentioned, the approach of designing for disabled musicians focuses on Universal Design, especially on the first principle – “Equitable Use” – which is summed up in the following four points:

1a. „Provide the same means of use for all users: identical whenever possible; equivalent when not.

1b. Avoid segregating or stigmatizing any users.

1c. Provisions for privacy, security, and safety should be equally available to all users.

1d. Make the design appealing to all users.“ [1]

Research Goals

The overall goal of this article is to inspire other designers and spread awareness that there is a lot of potential to make music technology systems accessible by providing information and support. As the title of the paper suggests, the project focuses on home-based disabled musicians, in order to give them access to collaborating with each other and performing live, both online and at physical events. Particularly important in this project was that it is “disabled-led”, putting disabled people in the foreground and actually starting with their input instead of sprinkling it on top at the end.

Interviews

The first stage of the project was an interview phase with 15 home-based disabled musicians from all over the world. They had a diverse range of disabled identities, e.g. mobility issues, d/Deaf, Autism, etc.; however, the interviewees were not as demographically diverse as the authors had wished. For me it was very interesting to see how they handled this by simply communicating it openly and honestly. The interviews included the following categories of questions:

  • the approach of making music
  • their personal requirements from music making applications (setup, handling, …)
  • their personal requirements for learning (concentration span, explanation, …)
  • their personal requirements for performance (real time or pre-recorded, duration of performance, …)

Analysis

At first the project focused on live coding, because it does not require additional hardware like MIDI controllers and can be controlled with various assistive technologies such as eye-gaze or head-mouse controllers. Furthermore, the bandwidth requirements are lower than for audio transmission. However, the workshop with the target group showed that they are not really into live coding and would prefer using their existing hardware, which is why the authors decided to shift the focus to audio streaming platforms. The following software tools were analyzed in the paper: Estuary, a live coding interface; Icecast, an audio streaming software; and LiveLab, an open-source browser-based interface for sharing video and audio.

Findings

Besides some technical issues with the software, there were major political issues in the project. Overall, the companies felt that making their products accessible does not fully pay off, so they wanted to limit the accessibility work to the time and money they had available. One of the main approaches was to make an easy version of the software, which would never be a real part of the main program and therefore not adapted or updated over time. This, of course, did not match the findings of the interviews at all. There, the great concern was to address the whole structure and the working process itself rather than small, surface-level adaptations. Specifically, the musicians wished for a flexible layout, a quick response time, well-documented help, captions in videos, robustness with assistive hardware, accessibility as part of the main software, and the inclusion of disabled people in the design process. Another main finding was that the experience of being in the community generates expert knowledge of accessibility, which should always be considered and used in this context.

Conclusio

Personally, I felt that the major issue here was definitely a political one. Companies would rather not make their products fully accessible for financial reasons, and since it is not regulated by law or state-funded, they don’t feel obligated to adapt their products. “Half accessibility is no accessibility” was definitely a key statement for me in this article. To end my post on a positive note: I liked how the article stressed the importance of including a broad span of needs in any design work and prioritizing workflows and flexibility in order to be accessible to all.

Sources

Amble H C Skuse and Shelly Knotts. 2020. Creating an Online Ensemble for Home Based Disabled Musicians: Disabled Access and Universal Design – why disabled people must be at the heart of developing technology. Proceedings of the International Conference on New Interfaces for Musical Expression, Birmingham City University, pp. 115–120.

[1] National Disability Authority: What is Universal Design. The 7 Principles. https://universaldesign.ie/what-is-universal-design/the-7-principles/#p1 (last accessed 4 June 2022)

NIME: Speculātor — Visual soundscape augmentation of natural environments

by Nathan Villicaña-Shaw, Dale A. Carnegie, Jim Murphy, and Mo Zareei – Apr 29, 2021

In the second half of the 20th century, a new format for multi-sensory artistic expression emerged by integrating auditory elements into visual art practices. These sonic art installations, by definition, incorporate acoustic elements into their overall presentation and into the realization of the work’s artistic statement. Another component of sonic art that is important for later sections is the tradition of exhibiting outside of typical art gallery venues.

Following in the footsteps of early sonic artworks such as Max Neuhaus’ Listen: Field Trips Through Found Sound Environments (1966) and John Cage’s 4’33” (1952), the Speculātor project explores the implications of augmenting soundscapes without adding sounds or manipulating how visitors physically hear those sounds in situ.

Behind the project is a careful and deep study of what is meant by ‘soundscape’ and when the term was born, particularly delving into the relationship between natural soundscapes and music technology. The interaction between these two players can take place in two different and opposite ways: bringing nature into a technological environment, or conversely bringing technology into a natural environment, facilitating in-situ sonic art and musical performances. In this way, the juxtaposition of electronic devices in natural settings can be exploited aesthetically and artistically, obtaining results that cannot be achieved inside indoor galleries.

It is in this context that Speculātor was born. It is a small, battery-powered, environmentally reactive soundscape augmentation artefact that provides audio-reactive LED feedback.

Close up of Speculātor v3 unit with an unsanded and unsealed enclosure.
Speculātor hardware from the side.
Speculātor hardware from the top.

Personally, I found the level of engineering behind this “artwork” extremely interesting. A large number of parameters were taken into account in its design to make it suitable for every situation: it is extremely transportable and modular, and to survive in outdoor, fully exposed locations, Speculātor uses data from a combined temperature and humidity sensor to shut the system down when the enclosure gets too hot inside or is compromised by liquid ingress.

All this is made possible by complex electronics developed in great detail. At the heart is a Teensy 3.2, to which input and output modules are connected: microphones, NeoPixel RGB LEDs, temperature, humidity, and light sensors, and an autonomous battery. All of this is then encased in a spherical acrylic shell, making it waterproof, buoyant, and transparent.

The final effect is a kind of Christmas tree bauble, which can easily be hung thanks to a specially created hook on the enclosure, and which needs nothing but itself.

Speculātor units installed in Kaitoke Regional Park in February 2020.
Speculātor units installed in Kaitoke Regional Park in March 2020.
Close-up of a frosted unit displaying song feedback.
Close up of same frosted unit displaying click feedback.

Speculātor is thus placed in natural locations with a prominent sound background, since it is sound that makes it come alive. It is placed near waterways and cicadas, making nature the real user of Speculātor.

I found the connections to the work of Bruce Munro, particularly his piece “Field of Light,” intriguing. In the latter, the artist brings technology into the natural environment, creating a new connection that welcomes the audience into a different exploration of their surroundings. Again, technology reflects nature rather than going against it, and this is perhaps what makes this approach speculative, even though it should not be.

Speculātor explored non-aural approaches to the exhibition of sonic artwork which leveraged visitors’ visual sense to prioritize listening. By focusing on listening instead of speaking, visual soundscape augmentation techniques potentially serve as a promising method for realizing sonic installation art whose artistic focus is the in-situ sonic environment.

Speculātor installed in Grand Canyon, Arizona.
Speculātor installed in Donner’s Pass, California.
Speculātor installed on Route 66, Arizona.

Sources: