Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback


The introduction of digital musical instruments (DMIs) has removed the need for a physically resonating body in order to create music, leaving the practice of sound-making often decoupled from the resulting sound. The inclination towards smooth and seamless interaction in the creation of new DMIs has led to musical instruments and interfaces that require no significant transfer of energy to play. Other than structural boundaries, such systems usually lack any form of physical resistance, whereas the production of sound through traditional instruments happens precisely at the meeting of the performer’s body with the instrument’s resistance: “When the intentions of a musician meet with a body that resists them, friction between the two bodies causes sound to emerge.” Haptic controllers offer the ability to engage with digital music in a tangible way.

Using basis functions to generate haptic terrains for the NovInt Falcon.


Dynamic, ongoing relationships exist between the performer, the instrument, and the sounds produced when playing musical instruments. These exchanges depend upon the sensory feedback provided by the instrument in the forms of auditory, visual, and haptic feedback. Because digital interfaces based around an ergonomic HCI model are generally designed to eliminate friction altogether, the tactile experience of creating a sound is reduced. Even though digital interfaces are material tools, the feeling of pressing a button or moving a slider does not provide the performer with much physical resistance, whereas the engagement required to play an acoustic instrument provides musicians with a wider range of haptic feedback involving both cutaneous and proprioceptive information, as well as information about the quality of an occurring sound. This issue is recognized in Claude Cadoz’s work on his concept of ergoticity as the physical exchange of energy between performer, instrument, and environment. A possible solution to these issues is the use of haptic controllers. As has been previously noted, “we are no longer dealing with the physical vibrations of strings, tubes and solid bodies as the sound source, but rather with the impalpable numerical streams of digital signal processing”. In physically realizing the immaterial, the design of the force profile is crucial because it determines the overall characteristics of the instrument.
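To make the idea of generating a haptic terrain from basis functions more concrete, here is a minimal, hypothetical sketch (the function names, bump parameters, and stiffness value are my own, not the authors’): a 1-D terrain is built as a weighted sum of Gaussian bumps, and the force sent to a device such as the NovInt Falcon is taken as the negative slope of the terrain at the cursor position.

```python
import math

def terrain_height(x, bumps):
    """Height of the terrain at x; bumps is a list of
    (center, width, weight) tuples for Gaussian basis functions."""
    return sum(w * math.exp(-((x - c) ** 2) / (2 * s ** 2))
               for c, s, w in bumps)

def terrain_force(x, bumps, stiffness=1.0, eps=1e-5):
    """Restoring force opposing uphill motion:
    proportional to the negative numerical gradient."""
    slope = (terrain_height(x + eps, bumps)
             - terrain_height(x - eps, bumps)) / (2 * eps)
    return -stiffness * slope

# Two example bumps: a sharp one at 0.3 and a broad, weaker one at 0.7.
bumps = [(0.3, 0.05, 1.0), (0.7, 0.1, 0.5)]
print(terrain_force(0.25, bumps))  # negative: pushed back from the bump at 0.3
```

Approaching a bump from the left yields a negative force (pushing the cursor back), and passing the peak flips its sign, which is the kind of resistance profile the paper argues is missing from frictionless interfaces.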

My Conclusion

Haptic feedback of cross-modal terrains in combination with sound is a very interesting way to experience the feel of a terrain. Simulating the fine structure of surface imperfections would be the next step to make an impact and improve the experience of cross-modal terrains.

Furthermore, haptic feedback through ultrasonic waves has already been achieved: for example, the company Ultraleap with STRATOS Inspire, where sound waves are concentrated into one point. This would be an interesting platform for testing the haptic feedback with modern technology and might give a more detailed response.

The way they used Max 8 for their testing was a good choice, as it connects well to devices and allows monitoring of values, and the terrain is provided as a greyscale 2D image. This makes it very simple to see and detect where we are on the surface and how intense the feedback should feel.
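The greyscale-image idea can be illustrated with a small sketch (my own hypothetical code, not the authors’ Max 8 patch): the brightness of the pixel under the cursor is read out and scaled into a feedback intensity.

```python
def sample_intensity(image, x, y):
    """image: 2-D list of greyscale values in 0..255;
    x, y: normalized cursor coordinates in [0, 1].
    Returns feedback intensity in [0, 1]."""
    rows, cols = len(image), len(image[0])
    col = min(int(x * cols), cols - 1)
    row = min(int(y * rows), rows - 1)
    return image[row][col] / 255.0  # 0.0 = no feedback, 1.0 = maximum

# A tiny 3x3 "terrain": bright center, dark edges.
image = [[0,   64,  0],
         [64, 255, 64],
         [0,   64,  0]]
print(sample_intensity(image, 0.5, 0.5))  # center pixel -> 1.0
```

In this toy version, moving the cursor from an edge toward the center would ramp the haptic intensity up, mirroring how a bright region in the terrain image corresponds to a stronger feel.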

Isaac, Gabriella; Hayes, Lauren; Ingalls, Todd (2017). Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback.

NIME: Creativity in Children’s Music Composition


I had a look at the paper “Creativity in Children’s Music Composition” written by Corey Ford, Nick Bryan-Kinns and Chris Nash, which was published at NIME in 2021. The authors conducted a study examining which interactions with Codetta – a LOGO-inspired, block-based music platform – support children’s creativity in music composition. In this context, “LOGO” refers to Papert’s LOGO philosophy of supporting children’s learning through play. Such experiential learning approaches are based on intrinsic motivation and tinkering. The authors stated that there was a lack of empirical research investigating whether a LOGO-inspired approach is conducive to creativity, which is why their aim was to get a better understanding of how children make music with such technologies.

About the study

The concrete research question of the study was “Which interactions with Codetta best support children’s ability to be creative in composition?”, with the aim of uncovering patterns of creativity within the participants’ first use of the tool. To get a better understanding of how Codetta works, I found an explanation video on YouTube. The study was performed with 20 primary school children aged 6-11. Due to the COVID situation, the study was conducted in an online setting where the children had to perform two tasks: 1) composing a short piece of music and 2) answering a post-task questionnaire afterwards.


Once the children opened Codetta, they were provided with built-in, age-appropriate instructions to get to know the basics of the tool (see Fig. 1). After completing the tutorial, the children were asked to compose a short piece of music. No other prompt was given, to keep the task open-ended. Once finished composing, the children completed a short post-task questionnaire online.

Fig 1: The Final Result from Codetta’s Tutorial

Data collection

The following data was collected: 1) log data of each child’s interactions with Codetta (and consequently their final compositions); 2) questionnaire responses; and 3) expert ratings of each composition.

For visualising the collected log data, a color scheme with several interaction categories was developed (see Fig. 2). The returned logs from Codetta were mined and visualised using Python and Anaconda. Once the data was prepared, statistical analysis was conducted using SPSS.
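As a rough illustration of the kind of log mining described above (the category names here are invented for the example, not the paper’s actual coding scheme): each child’s logged interactions are counted per category and converted to percentages, which is the shape of data that would feed a stacked bar chart like the paper’s Fig. 3.

```python
from collections import Counter

# Mock interaction log for one child; the category labels are hypothetical.
log = ["raise_pitch", "edit_length", "raise_pitch", "play",
       "add_block", "raise_pitch", "play", "edit_length"]

counts = Counter(log)                       # tally per category
total = sum(counts.values())
percentages = {cat: 100 * n / total for cat, n in counts.items()}
print(percentages)  # raise_pitch accounts for 37.5% of interactions
```

Stacking one such percentage breakdown per child then gives exactly the per-participant bars of the chart.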

The post-task questionnaire consisted of 13 five-point Likert-scale statements, asking the children about their confidence in music lessons, writing music notation, using block-based programs and using computers, as well as about the children’s perceptions of Codetta as a whole.

In order to quantify the children’s creativity, six expert judges rated each composition using several scales. Each judge assessed each child’s composition independently and in a random order. Their ratings were then averaged.

Fig 2: Color scheme for children’s interaction logs 


I don’t want to go too deep into the empirical data here, but to sum up, the results focused on three subsections: the children’s compositions, interactions and UI perceptions.

Most children composed short pieces (mean length was 11.950 seconds) with conventional melodic contours, such as arcing up and down in pitch. The logged interaction data was visualised as a stacked bar chart using the color scheme mentioned before (see Fig. 3). The results of the questionnaire showed that the children felt they had to plan before writing their music and that they found it difficult to understand what each block was for.

Fig. 3: Stacked Bar Chart for the Total Percentage of All Interactions


Based on the results several conclusions were drawn (shortened): 

  • Note-level interactions (rise/lower pitch, edit note length) best support children’s ability to be creative in music composition
  • First-time novice users should initially be encouraged to focus on learning how to use Codetta’s notation engine (i.e. Introduction workshops)
  • Codetta could automatically introduce blocks sequentially, based on children’s current knowledge and interaction patterns
  • More complex features should be introduced gradually
  • The UI Design could be more intuitive to avoid mistakes and confusion

Lastly, it should be mentioned that the results are limited by the small sample size and the users’ levels of experience. A longitudinal study with a larger number of participants would be needed to thoroughly investigate how interaction styles develop over time.

Own thoughts

I found it super interesting to read about the topic “creativity in children’s music composition” for several reasons: First of all, I spent a huge part of my childhood and teenage years in music schools, playing several instruments and taking music theory courses, but I never got in touch with actually composing music myself – neither in primary/high school nor in music school or in any of the music-theory courses I took. So I fully agree with the authors’ statement that composing is a neglected area of music education. Moreover, I liked the connection between sound design, interaction design and sociology, since digital music composition combines all of that. It could also be useful as inspiration for finding a master’s thesis topic.

It was a bit hard for me to understand the “Results” section, though, because I only have basic experience and knowledge in statistics and would need a deeper dive into that topic to get all the measures and values right. Yet it was nice to engage with research again after not having binge-read studies and empirical work for a while (speaking of my bachelor’s thesis).

NIME: Speculātor — Visual soundscape augmentation of natural environments

by Nathan Villicaña-Shaw, Dale A. Carnegie, Jim Murphy, and Mo Zareei – Apr 29, 2021

In the second half of the 20th century, a new format for multi-sensory artistic expression emerged by integrating auditory elements into visual art practices. These sonic art installations, by definition, incorporate acoustic elements into their overall presentation and into the realization of the work’s artistic statement. Another component of sonic arts that is of importance to later sections is the tradition of exhibition outside of typical art gallery venues.

Following in the footsteps of early sonic artworks such as Max Neuhaus’ Listen: Field Trips Through Found Sound Environments (1966) and John Cage’s 4’33” (1952), the Speculātor project explores the implications of augmenting soundscapes without adding sounds or manipulating how visitors physically hear those sounds in-situ.

Behind the project lies a careful and deep study of what is meant by “soundscape” and when the term was born, particularly delving into the relationship between natural soundscapes and music technology. The interaction between these two can take place in two opposite ways: bringing nature into a technological environment, or conversely bringing technology into a natural environment, facilitating in-situ sonic art and musical performances. In this way, the juxtaposition of electronic devices in natural settings can be exploited aesthetically and artistically, obtaining results that cannot be achieved inside indoor galleries.

It is in this context that Speculātor was born. It is a small, battery-powered, environmentally reactive soundscape augmentation artefact that provides audio-reactive LED feedback.

Close up of Speculātor v3 unit with an unsanded and unsealed enclosure.
Speculātor hardware from the side.
Speculātor hardware from the top.

Personally, I found the level of engineering behind this “artwork” extremely interesting. A large number of parameters were taken into account in its design to make it suitable for many situations: it is extremely transportable and modular, and to survive in outdoor, fully-exposed locations, Speculātor uses data collected from a combined temperature and humidity sensor to shut down the system when the enclosure is too hot inside or compromised by liquid ingress.

All this is made possible by complex electronics developed with extreme detail. At the heart is a Teensy 3.2, and connected to it are input and output modules such as microphones, NeoPixel RGB, temperature, humidity, and light sensors, and an autonomous battery. This is then encased in a spherical acrylic shell, making it waterproof, buoyant, and transparent.
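The environmental shutdown behaviour described above can be sketched as simple threshold logic. This is a hypothetical illustration only: the thresholds below are invented, and the real Speculātor firmware runs on the Teensy (in C/C++), not in Python.

```python
MAX_TEMP_C = 60.0        # assumed upper limit inside the enclosure
MAX_HUMIDITY_PCT = 90.0  # assumed level suggesting liquid ingress

def should_shut_down(temp_c, humidity_pct):
    """Return True when the enclosure is too hot inside
    or likely compromised by liquid ingress."""
    return temp_c > MAX_TEMP_C or humidity_pct > MAX_HUMIDITY_PCT

print(should_shut_down(45.0, 50.0))  # normal conditions -> False
print(should_shut_down(70.0, 50.0))  # overheating -> True
```

The point of such a check is simply self-preservation: rather than requiring maintenance visits, the unit decides on its own when conditions inside the sealed sphere have become unsafe.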


The final effect is a kind of Christmas tree ball, which can be easily hung thanks to a specially created hook on the wrapper, and which needs nothing but itself.

Speculātor units installed in Kaitoke Regional Park in February 2020.
Speculātor units installed in Kaitoke Regional Park in March 2020.
Close-up of a frosted unit displaying song feedback.
Close up of same frosted unit displaying click feedback.

Speculātor is thus placed in natural locations with a prominent sonic backdrop, since it is sound that makes it come alive. Indeed, it is placed near waterways and cicadas, making nature the real user of Speculātor.

I found the connections to the work of Bruce Munro, particularly his work “Field of Light,” intriguing. For in the latter, the artist brings technology into the natural environment, creating a new connection that welcomes the audience into a different exploration of their surroundings. Again, technology reflects nature, rather than going against it, and this is perhaps what makes this approach speculative, even though it should not be. 

Speculātor explored non-aural approaches to the exhibition of sonic artwork which leveraged visitors’ visual sense to prioritize listening. By focusing on listening instead of speaking, visual soundscape augmentation techniques potentially serve as a promising method for realizing sonic installation art whose artistic focus is the in-situ sonic environment.

Speculātor installed in Grand Canyon, Arizona.
Speculātor installed in Donner’s Pass, California.
Speculātor installed on Route 66, Arizona.


(NIME) Yixiao Zhang, Gus Xia, Mark Levy, and Simon Dixon. 2021. COSMIC: A Conversational Interface for Human-AI Music Co-Creation. 

Reference to article:

There are more and more forms of AI in different fields: from assistants on websites, to online bots (e.g. Cleverbot), to the famous voice assistants on phones (Alexa, Siri).

I have always found the use of these interesting, not only for the amount of information they contain, but also for discovering the funniest answers (like the ones in the images below). That’s why I find this paper so interesting. Applying all this AI knowledge to the world of music can be complex and at the same time super interesting.

In this paper they talk about COSMIC, a COnverSational Interface for Human-AI MusIc Co-Creation. This bot not only responds appropriately to the user’s questions and comments, but is also capable of generating a melody from what the user asks for. In the following video you can see an example (referenced in the paper itself).

The complexity of these projects is always in the back of my mind. Knowing how devices are capable of reacting like humans seems to me to be a great advance that at the same time can be a bit alarming.

Still, this case opens up a lot of new opportunities. Using this same system, a new method of learning could even be created. After all, the bot essentially edits parameters of a melody (speed, pitch, …) to resemble different emotions. One could therefore learn how different emotions tend to imply different sounds, speeds, and many other details.
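The idea of mapping an emotion onto melody parameters can be sketched as a toy example. To be clear, this is an invented illustration, not COSMIC’s actual model: the emotion labels, tempo factors, and pitch shifts below are my own placeholders.

```python
# Toy emotion-to-parameter table: tempo factor scales the BPM,
# pitch shift transposes the melody in semitones.
EMOTION_PARAMS = {
    "happy": {"tempo_factor": 1.2, "pitch_shift": +2},
    "sad":   {"tempo_factor": 0.8, "pitch_shift": -2},
    "calm":  {"tempo_factor": 0.9, "pitch_shift": 0},
}

def adapt_melody(notes, bpm, emotion):
    """notes: list of MIDI pitches; returns (shifted notes, new bpm)."""
    p = EMOTION_PARAMS[emotion]
    return [n + p["pitch_shift"] for n in notes], bpm * p["tempo_factor"]

notes, bpm = adapt_melody([60, 62, 64], 100, "sad")
print(notes, bpm)  # a slower, lower version of the melody
```

Even this crude mapping shows the learning angle mentioned above: by toggling the emotion label and hearing the result, one could build an intuition for how tempo and pitch relate to perceived mood.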

Living Sounds

Thoughts on a NIME Paper | Living Sound

In this blog post I will share my thoughts on an interesting paper published at the 2021 NIME (New Interfaces for Musical Expression) conference. The paper carries the title Living Sounds: Live Nature Sound as Online Performance Space and was written by Gershon Dublon and Xin Liu.

One of the remarkable aspects of the experience was that small, otherwise inconsequential events, such as a bumblebee flying around a microphone, could bring distant, isolated listeners together in the continuously unfolding story.

(Dublon & Liu, 2021)

Living Sounds describes itself as “an internet radio station and online venue hosted by nature” (Dublon & Liu, 2021). It essentially streams, around the clock, the sound of microphones installed in a wetland wildlife sanctuary on the project’s homepage. Furthermore, the project regularly invites artists to perform on the stream, using the live sound from the microphones via a self-built online interface that provides the artists with the raw multichannel audio stream to work with.