Reference works for Extended Guitar Performance #3

During my research on Google Scholar, I came upon a journal article called “The Electric Guitar: An Augmented Instrument and a Tool for Musical Composition”. I found its content highly inspiring since it deals with the history of the sonic augmentation of the electric guitar and is thus highly relevant for my project (which, of course, also seeks to extend the range of sound possibilities of an electric guitar).

According to Lähdeoja, Navarret, Quintans and Sedes, the electric guitar can be considered a highly augmented instrument in itself, since it is essentially an acoustic guitar whose sonic possibilities were, and still are, extended by electromechanical and digital means. At the origin of the electric guitar stands the acoustic guitar: as far as the basic physical qualities are concerned, an electric guitar is very much based upon an acoustic one. Features such as metallic strings, a glued or screwed-on neck strengthened with a truss rod, tone woods (maple or mahogany necks with a maple, rosewood or ebony fingerboard), metallic frets and a chromatic scale were all transferred from the acoustic to the electric guitar. With the invention of the solid body, however, a new era began, introducing a major extension of sonic capabilities through electric amplification technologies. In an electric guitar, the actual sound source of the acoustic guitar becomes part of an electro-acoustic chain that additionally comprises a range of analog and/or digital modules necessary for producing sound, thus giving the electric guitar a modular character. Of course, the electrification of the guitar sparked a whole new way of thinking about and creating sounds, leading to the development of new playing techniques.

While “the electrification of the guitar is probably the most important modification the instrument has undergone in the twentieth century”, the instrument and its sound possibilities continued to be intensely advanced in the following decades, with the developments driven by guitarists, instrument makers and the requirements of different musical styles. The aforementioned modularity of the electric guitar, which includes the selection of tone woods, pick-ups, amplifiers, effect chains, etc., provides the guitarist with a great deal of freedom as far as customisability is concerned. However, as with every instrument, along with the increase in sonic possibilities and corresponding control interfaces, playing the instrument becomes more and more complex as well, potentially overburdening the musician’s (physical) capabilities. To avoid the latter, most control interfaces for electric guitar have been conceived with simplicity in mind, featuring relatively simple controls and traditionally being placed conveniently within the guitarist’s reach (e.g. an effect pedal that is activated or deactivated by stepping on it). While this approach serves its purpose of not overburdening the player, Lähdeoja, Navarret, Quintans and Sedes observe that it limits the possibilities for dynamic, real-time interaction with the controls, which leads to a state they describe as the “sonic stasis common in electric guitar playing: the player chooses a specific sound for a musical part with “on/off” effect switches, playing with the same timbre until the next “monolithic” modification”.

I find this latter notion very interesting because it justifies my quest to identify ways of extending the sonic range of an electric guitar using only the natural movements of the player as an input source for effects. While the article states that research is currently being done on this topic, I think that I can still add to it with my project and its outcomes.

Source:

Lähdeoja O., Navarret B., Quintans S., Sedes A. (2010). The Electric Guitar: An Augmented Instrument and a Tool for Musical Composition. Journal of Interdisciplinary Music Studies, 4(2), 37-54. https://doi.org/10.4407/jims.2010.11.003

Idea Update

Good News: Conducting thorough research on extended guitar sound possibilities during the month of November allowed me to further specify my project idea “Extended Guitar Performance”. By first discerning what has already been done in this area of musical research, I identified several commercial products (see the MIDI rings in Blog #6) as well as custom-made products (e.g. Blog #9 + Blog #10) that all deal with movement- and/or gesture-controlled extension of guitar sounds. However, I also found that the bulk of these “gadgets” require the guitar player to move his body and hands in specific ways to trigger the sensors built into the gadget. Consequently, the guitarist often needs to interrupt his natural playing movements, for example in order to wave his hand. I saw this for myself when I tried to keep playing guitar in the usual way while incorporating special hand movements to trigger hypothetical sensors (having not built anything yet): it proved almost impossible to maintain a typical rock groove, much less play lead.

That is why I thought it would be even better if one could extend a guitar’s sonic possibilities without compromising the usual way people play guitar. Based on this notion, I decided to try to find a non-invasive approach, restricting my idea to using only the natural movements of a guitarist to trigger sensors that extend the guitar’s sound possibilities.

Consequently, some changes to my previous idea will be made:

I will abandon the idea of attaching (accelerometer) sensors to the guitar body to pick up its movements, because deliberately moving the guitar in a certain way is, again, not part of a guitarist’s natural movements.

Instead, I will focus on the “Solo Mode” effect described in Blog #8, and on effects based on the same concept, since this kind of system allows the guitar player to move up and down the neck with effects adapting automatically to the fret position. During this week’s consultation session, I brought this idea before my supervisor, and he recommended looking into ultrasonic or infrared sensors to locate the left hand along the neck. This will, of course, be done. Furthermore, my supervisor also saw potential for using this system beyond switching from rhythm to lead sound: one could define certain areas/positions on the fretboard that each give a slightly different sound, allowing the guitarist to change his sound according to fret position. A rough sketch of how such a distance measurement could be translated into a fret position follows below.
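
As a first rough sketch of how a distance reading could be mapped to a fret position, the following Python snippet takes a reading from a hypothetical ultrasonic sensor mounted at the nut and finds the nearest fret using the standard equal-temperament fret-spacing formula. The scale length, the maximum fret and the example reading are my own assumptions, not measurements.

```python
# Hypothetical sketch: mapping a distance reading (e.g. from an ultrasonic sensor
# mounted at the nut, measuring towards the bridge) to the nearest fret.
# SCALE_LENGTH_MM and max_fret are assumptions; the sensor readout itself is omitted.

SCALE_LENGTH_MM = 648.0   # typical 25.5" scale length

def fret_distance_from_nut(fret: int) -> float:
    """Distance from the nut to a given fret in mm (12-tone equal temperament)."""
    return SCALE_LENGTH_MM - SCALE_LENGTH_MM / (2 ** (fret / 12))

def nearest_fret(distance_mm: float, max_fret: int = 22) -> int:
    """Return the fret whose position is closest to the measured hand distance."""
    return min(range(max_fret + 1),
               key=lambda f: abs(fret_distance_from_nut(f) - distance_mm))

# e.g. a reading of about 330 mm from the nut lands around the 12th fret
print(nearest_fret(330.0))
```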

I will also further pursue the idea of placing an IMU-type sensor on the pick to capture the natural strumming and picking movements of the guitarist. Here, it would be practical to resort to one of the MIDI rings, which are, however, quite expensive. At the moment, a custom setup is therefore favoured.

Naturally, I will also continue to analyse a guitarist’s natural body and hand movements in order to come up with additional ways to harness them as an input source for guitar effects.

Lastly, upon the recommendation of my supervisor, I will not yet fully discard the possibility of using MIDI pick-ups. During my research I stumbled upon the Fishman Triple Play, a hexaphonic pick-up, capable of picking up the pitch of each string separately and converting it to MIDI data. Albeit coming with a huge price tag (300€ on Thomann), the pick-up constitutes another possibility of adding to a guitar’s sound potential without compromising the player’s usual movements. This will be given further thought.

Sources:

https://www.thomann.de/at/fishman_triple_play.htm

Reference works for Extended Guitar Performance #2

Albeit having used Google Scholar extensively to find scientific articles while writing my Bachelor’s thesis, I never considered using it for my Extended Guitar Performance project – until today. It was a good decision, for I discovered some great articles that deal with electric guitars and possibilities to further extend or evolve their sonic capabilities. One of these articles is briefly summarised below.

MIDI Pick

The first article I found documents the development of the so-called MIDI Pick. This special pick serves a dual purpose: on the one hand, it can be used as a conventional pick to pluck the strings of an electric guitar; on the other hand, it functions as a pressure trigger, interpreting finger pressure exerted on it as analog or digital values. The pick itself is made of wood, rubber and double-sided tape with a force-sensing resistor mounted on it. The sensor is connected to an Arduino microcontroller, and a Bluetooth module is used to transmit the data wirelessly; these two components are attached to a strap worn around the wrist. As already mentioned, the MIDI Pick needs to be squeezed, and the harder the pressure, the higher the value that is output. The output is received by a Max/MSP patch that relays the data to other patches. Furthermore, the MIDI Pick can operate in serial or switch mode, with the mode being controlled by a switch on the wrist. In serial mode, values between 0 and 127 are transmitted. In switch mode, the pick sends a 1 when the pressure exceeds a certain threshold and a 0 the next time the threshold is exceeded, essentially making the pick a toggle switch. In a live performance test, the developer successfully used the MIDI Pick as a controller for a white-noise-generating patch. In this context, the developer also noted that using the MIDI Pick adequately required time and practice. The article also mentions plans for an updated version of the MIDI Pick; however, I did not find another article documenting its further development.
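
To clarify how I understand the two modes, here is a small Python sketch of just the mapping logic. The actual MIDI Pick runs on an Arduino with a Bluetooth link, which is stubbed out here; the ADC range and the squeeze threshold are my own assumptions.

```python
# Sketch of the two MIDI Pick modes as I understand them from the article.
# Only the mapping logic is shown; sensor readout and the Bluetooth link are
# stubbed out, and ADC_MAX and THRESHOLD are assumptions.

ADC_MAX = 1023     # assumed 10-bit analog reading from the force-sensing resistor
THRESHOLD = 600    # assumed squeeze threshold for switch mode

def serial_mode(raw_pressure: int) -> int:
    """Scale a raw pressure reading to a 0-127 value (serial mode)."""
    return min(127, raw_pressure * 128 // (ADC_MAX + 1))

class SwitchMode:
    """Toggle between 1 and 0 each time the pressure crosses the threshold."""

    def __init__(self) -> None:
        self.state = 0
        self.was_pressed = False

    def update(self, raw_pressure: int) -> int:
        pressed = raw_pressure > THRESHOLD
        if pressed and not self.was_pressed:   # rising edge = a new squeeze
            self.state = 1 - self.state
        self.was_pressed = pressed
        return self.state
```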

Personal thoughts

This article is definitely interesting for my project because the latter will also involve using the pick in one way or another to add to the sonic capabilities of the guitar. In fact, the notion of placing a pressure sensor on the pick opened up a whole new world of possibilities for me as far as sensors are concerned. Let me explain: until now, I had only thought about mounting an accelerometer/gyroscope/IMU kind of sensor on the pick or on the back of the hand in order to register e.g. the strumming movements of the hand. However, I now see that I need not restrict my thinking to these sensors alone. While the idea of using a pressure sensor is evidently taken (XD), I immediately thought of a touch sensor, more precisely a capacitive touch sensor. A capacitive touch sensor registers touch based on the electrical disturbance caused by a change in capacitance, not based on the pressure applied (in contrast to a resistive touch sensor). As far as applications in a guitar context are concerned, such a touch sensor could be used to trigger or activate an effect by double-tapping on the pick, for example. Admittedly, double-tapping would not be possible with a conventional pick that needs to be held between thumb and forefinger all the time. However, by using a so-called thumb pick, a pick that is strapped to the thumb, the forefinger would be free to tap on the underside of the pick in order to trigger a certain value (a small sketch of such double-tap detection follows below). This idea will certainly find its way into my final project concept. Beyond that, the article also shows that it is possible to place a sensor on a pick without compromising playability.
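
As a quick sanity check of the idea, here is a hedged Python sketch of how such a double tap could be recognised, assuming the capacitive sensor simply reports touched/not-touched values together with timestamps. The tap window is an arbitrary assumption.

```python
# Hedged sketch of double-tap detection on a capacitive pad, assuming the sensor
# reports touched / not-touched readings together with timestamps.
# DOUBLE_TAP_WINDOW is an arbitrary assumption.

DOUBLE_TAP_WINDOW = 0.4   # maximum time in seconds between the two taps

class DoubleTapDetector:
    def __init__(self) -> None:
        self.last_tap_time = None
        self.was_touched = False

    def update(self, touched: bool, now: float) -> bool:
        """Return True exactly once when a double tap is recognised."""
        fired = False
        if touched and not self.was_touched:          # rising edge = one tap
            if (self.last_tap_time is not None
                    and now - self.last_tap_time <= DOUBLE_TAP_WINDOW):
                fired = True                          # second tap in time
                self.last_tap_time = None
            else:
                self.last_tap_time = now              # remember the first tap
        self.was_touched = touched
        return fired
```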

Sources:

Vanegas R. (2007, June 6-10). The MIDI Pick – Trigger Serial Data, Samples, and MIDI from a Guitar Pick. Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME07), New York, NY, USA. https://dl.acm.org/doi/10.1145/1279740.1279812

http://roy.vanegas.org/itp/nime/the_midi_pick/

Reference works for Extended Guitar Performance #1

I dedicated this weekend to a first round of finding similar reference works and publications since this is also one of the tasks due for the Exposé.

Imogen Heap’s Mi.Mu gloves

One of the artists I stumbled upon during my research who makes use of hand movements and gestures to perform and compose her music is Imogen Heap. Considered a pioneer of pop and electropop music, she is a co-developer of the so-called Mi.Mu gloves: gesture controllers in glove form that Heap uses to control and manipulate recorded music and/or her musical equipment during (live) performances.

As she explained in an interview with Dezeen, Heap found the conventional way of playing keyboards or computers on stage very restrictive since most of her actions like pressing buttons or moving a fader were hidden from the audience and thus not very expressive, even though they may constitute a musically important act. Her goal was to find a way to play her instruments and technology in a way that better represents the qualities of the sounds produced and allows the audience to understand what is going on on stage.

Inspired by a similar MIT project in 2010, the gloves underwent eight years of research and development, with the development team consisting of NASA and MIT scientists alongside Heap. While Heap has used prototypes during her live performances for several years now, other artists have occasionally tried them out as well. Ariana Grande, for example, used the gloves on her 2015 tour. In July 2019, the Mi.Mu gloves became commercially available for the first time, promising to be “the world’s most advanced wearable musical instrument, for expressive creation, composition and performance”.

The Mi.Mu gloves contain several sensors including:

  • an accelerometer, a magnetometer, and a gyroscope in the form of an IMU motion tracker, located at the wrist, that gives information regarding the hand’s position, rotation and speed
  • a flex sensor over the knuckles to identify the hand’s posture in order to interpret certain gestures
  • a haptic motor that provides the “glove wielder” with haptic feedback: it vibrates for example if a certain note sequence is played

To send signals to the computer, the gloves use WLAN and Open Sound Control (OSC) data instead of MIDI data. The gloves themselves are made from e-textiles, electrically conductive fabrics that carry the sensor signals. Furthermore, the gloves come with the company’s own Glover software, which lets users map their own custom movements and gestures and can be integrated into DAWs such as Ableton Live or Logic Pro X.

Unfortunately, the Mi.Mu gloves still cost about £2,500 (roughly €3,000) and are, on top of that, currently sold out due to the worldwide chip shortage. A limited number of gloves is expected to become available in early 2022.

Key take-aways for own project

First of all, Heap’s Mi.Mu gloves serve to confirm the feasibility of my project since the technology involved is quite similar. The gloves also use an IMU sensor, which is also my current go-to sensor for tracking the movements of a guitar player’s hands. Although Heap mostly uses the gloves to manipulate her voice, I found a video that shows her playing the keyboard in between as well. This shows that wearing the sensors on one’s hands does not necessarily interfere with the playability of an instrument, which is a very important requirement for my project.

Interestingly, the gloves rely on WLAN and OSC instead of MIDI, which is definitely a factor that calls for additional research on my side. OSC comes with some advantages over MIDI, especially as far as latency and data resolution are concerned, which makes it attractive for real-time musical performances. Furthermore, the data is conveyed over LAN or WLAN, which eliminates the need for cables. Moreover, OSC is supported by open-source software such as SuperCollider or Pure Data, which could make it even more attractive for my project. A minimal example of what sending sensor data via OSC could look like is sketched below.
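
As a first step in that research, here is a minimal sketch of sending a sensor value as an OSC message, using the python-osc package. The address pattern, IP and port are assumptions; a Pure Data or SuperCollider patch would have to listen on the same port.

```python
# Minimal sketch of sending a sensor value as an OSC message over the network,
# using the python-osc package (pip install python-osc). The address pattern,
# IP and port are assumptions.

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.0.42", 9000)   # receiving machine (assumed)

# e.g. forward a normalised hand-rotation value (0.0 - 1.0) from an IMU
client.send_message("/guitar/right_hand/rotation", 0.73)
```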

Finally, I want to use Imogen Heap and her glove-supported performances as a source of inspiration in order to come up with playing techniques or effect possibilities for my own project.

Sources:

https://www.dezeen.com/2014/03/20/imogen-heap-funding-drive-for-gloves-that-turn-gestures-into-music/

https://www.mimugloves.com/gloves/

https://www.engadget.com/2019-04-26-mi-mu-imogen-heap-musical-gloves-price-launch-date.html?guccounter=1&guce_referrer=aHR0cHM6Ly9lbi53aWtpcGVkaWEub3JnLw&guce_referrer_sig=AQAAABq203VmIuqq3D8e81XRlsg9lu1bLGt7Zf8fnxd6554YvV1nBE0XW87WoYfLl5DWNMybFLUsgSz3rlthBtL1ZvEsXv7Szdyv8hIAVr64tKPltPEApCyqtPQvmqWLaQDUfbX1_LIp7oLR6PbzavY3NeWb0NBv2rfC6A1MyUCkG0LZ

https://www.popularmechanics.com/technology/gadgets/reviews/a10461/power-glove-makes-music-with-the-wave-of-a-hand-16686382/

https://en.wikipedia.org/wiki/Imogen_Heap

https://www.uni-weimar.de/kunst-und-gestaltung/wiki/OSC

https://opensoundcontrol.stanford.edu/

The Bowed Tube: a Virtual Violin

In the following blog post, a paper is analysed in the course of the subject Project Work 1 with Dr. Gründler.

The chosen paper documents the development of a virtual violin usable for live performance. It consists of two components: a spectral model of a violin and a control interface that registers the movements of the player. The control interface consists of a violin bow and a tube with strings drawn on it. The system uses two motion trackers to capture the gestures, whose parameters are then sent to the spectral model of the violin. This model is able to predict the spectral envelopes of the sound corresponding to given bowing parameters. Finally, an additive synthesizer uses these envelopes to produce the final violin sound. Max/MSP serves as the software framework, and three external Max/MSP objects were developed specifically for the system.
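
To illustrate the final stage of that chain, here is a rough Python sketch of additive synthesis from a given set of harmonic amplitudes. It is my own simplification, not the authors' implementation; the sample rate, number of partials and envelope values are assumptions.

```python
# Rough sketch of the additive-synthesis stage only, assuming the spectral model
# has already delivered one amplitude per harmonic. Sample rate and the number
# of partials are assumptions.

import numpy as np

def additive_frame(f0: float, harmonic_amps, duration: float, sr: int = 44100):
    """Sum sinusoidal partials at multiples of f0, weighted by the envelope."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for k, amp in enumerate(harmonic_amps, start=1):
        out += amp * np.sin(2 * np.pi * k * f0 * t)
    return out

# e.g. an A4 with a crudely decaying envelope over 10 partials
signal = additive_frame(440.0, [1.0 / k for k in range(1, 11)], duration=0.5)
```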

I chose this article because I am working on a similar project myself that aims to extend the sounds of an electric guitar using sensor data. That is why I find the above-mentioned system pretty ingenious, especially from a technical point of view. However, although the article mentions a video that shows how the system works, I would have been interested in the feedback of real violin players regarding the Bowed Tube. In my opinion, it would have been great if the authors had included a kind of survey in their article, asking violin players to test the Bowed Tube and then using their collected feedback to gain insights into its actual playability and use as well as possible improvements. Finally, I also have to admit that I do not see a lot of use cases for the Bowed Tube. In fact, the article itself is very vague about which real-life problem the Bowed Tube violin is meant to solve. It is definitely a stunning project from a technological and scientific point of view. Maybe I am too practically oriented a person, but I cannot help asking myself: why not use a real violin?

Sources:

Carillo A. P. & Bonada J. (2010, June 15-18). The Bowed Tube: a Virtual Violin. Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia.

The Emotional Space | #3 | Another Concept

My first meeting with my supervisor left me very inspired and motivated: I had the feeling that I could explore almost any direction I wanted while still receiving full support. I communicated that the walking soundscape was definitely my favorite concept and in turn received some further input and keywords to look into.

One of those keywords was RjDj. The company Reality Jockey Ltd., founded in 2008, coined the term reactive music, which could be described as music that reacts to its listener or the environment in real time (Barnard et al., 2009). RjDj released a number of apps on iOS which all seemed technically able to do exactly what I wanted to achieve with my walking soundscape – just in a much broader sense. They built a framework based on Pure Data that allowed people to create music that reacts to a phone’s sensors. Basically, anybody could create music (or “scenes”, as they called them) for those apps with the help of RjDj’s open-source library rjlib. Unfortunately, neither the RjDj website nor its apps are available anymore.

Flocking Algorithms

… Using computers, these patterns can be simulated by creating simple rules and combining them. This is known as emergent behavior, and can be used in games to simulate chaotic or life-like group movement. …

https://gamedevelopment.tutsplus.com/tutorials/3-simple-rules-of-flocking-behaviors-alignment-cohesion-and-separation--gamedev-3444

In 1986, Craig Reynolds took a revolutionary step in AI animation. He created many individual objects that interacted with one another; he called these objects “boids”. The goal of the project was to simulate the behaviour of a flock of birds.

In their simplest form, these boids followed three basic rules:

  1. Separation: choose a direction that avoids crowding with nearby boids
  2. Alignment: choose a direction that matches the average heading of neighbouring boids
  3. Cohesion: choose a direction that moves towards the average position of neighbouring boids

Image: neighbourhood diagram

Each boid has direct access to the entire geometric information of the scene, but the flocking algorithm requires it to react only to boids within a certain small neighbourhood around itself. The neighbourhood is characterised by a distance (measured from the centre of the boid) and an angle measured from the boid’s direction of flight. Boids outside this local neighbourhood are ignored. The neighbourhood could be regarded as a model of limited perception (as with fish in murky water), but it is probably more accurate to think of it as the region in which flockmates influence a boid’s steering.
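
To make the three rules and the neighbourhood idea more concrete, here is a minimal two-dimensional sketch in Python (my own illustration, not Reynolds’s code). The neighbourhood radius, the rule weights and the omission of the viewing angle are all simplifying assumptions.

```python
# Compact two-dimensional sketch of the three boid rules described above.
# Neighbourhood radius and rule weights are arbitrary assumptions, and the
# viewing angle of the neighbourhood is omitted for simplicity.

import numpy as np

NEIGHBOUR_RADIUS = 50.0
W_SEPARATION, W_ALIGNMENT, W_COHESION = 1.5, 1.0, 1.0

def step(positions, velocities, dt=1.0):
    """One update of all boids; positions and velocities are (N, 2) arrays."""
    new_vel = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < NEIGHBOUR_RADIUS)   # local neighbourhood only
        if not mask.any():
            continue
        separation = -offsets[mask].mean(axis=0)                    # steer apart
        alignment = velocities[mask].mean(axis=0) - velocities[i]   # match heading
        cohesion = positions[mask].mean(axis=0) - positions[i]      # move to centre
        new_vel[i] += (W_SEPARATION * separation
                       + W_ALIGNMENT * alignment
                       + W_COHESION * cohesion) * 0.01
    return positions + new_vel * dt, new_vel

# e.g. thirty boids with random starting positions and velocities
rng = np.random.default_rng(1)
pos = rng.uniform(0, 200, size=(30, 2))
vel = rng.uniform(-1, 1, size=(30, 2))
pos, vel = step(pos, vel)
```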

In collaboration with some coworkers at the Symbolics Graphics Division and Whitney/Demos Productions, we made an animated short film using the boids model, Stanley and Stella in: “Breaking the Ice”. This film was first shown in the Electronic Theater at SIGGRAPH ’87. A technical paper on boids was published at the same conference, and the course notes for SIGGRAPH ’88 contained an informal paper on obstacle avoidance.

Since 1987 there have been many further applications of the boids model in the field of behavioural animation. The first was the 1992 Tim Burton film Batman Returns, which contained computer-simulated swarms of bats and flocks of penguins created with modified versions of the original boids software developed at Symbolics. Andy Kopra (then at VIFX, which later merged with Rhythm & Hues) produced realistic images of bat swarms, while Andrea Losch (then at Boss Films) and Paul Ashdown created an animation of an “army” of penguins marching through the streets of Gotham City. [1]


This can be seen above all in these two film clips:

For the object-oriented programming language Processing, there is also a very representative demonstration of boids and how to program them.*

1 http://www.red3d.com/cwr/boids/

https://cs.stanford.edu/people/eroberts/courses/soco/projects/2008-09/modeling-natural-systems/boids.html

Image:

1 http://www.red3d.com/cwr/boids/

*https://processing.org/examples/flocking.html

Aeolis – A Virtual Instrument Producing Pitched Tones With Soundscape Timbres

Arbel, L. (2021). Aeolis: A Virtual Instrument Producing Pitched Tones With Soundscape Timbres. In NIME 2021. https://doi.org/10.21428/92fbeb44.64f66047

Until the 1970s, only a few sound artists and musicians paid attention to the sounds that are constantly produced by nature. Thanks to the work of R. Murray Schafer (the World Soundscape Project, 1969), artists started to use sounds made by our environment for experimental and artistic purposes. Intentionally made sounds and melodies produced by humans (anthropophony) are not the only sounds that affect people’s feelings; the wide variety of sounds that nature provides also alters emotions. Sounds can be made by a singing bird (biophony) or simply by the wind moving some leaves across the street (geophony). I believe that sound gets its relevance from the relationship between itself and the space it is embedded in. Sounds can be either limited by time or completely separated from it, but they can somehow never be separated from a physical space.

By recording soundscapes, sound artists unintentionally also take a snapshot of the space. By listening to the soundscape, people try to imagine the place where the sounds might have been recorded; as a soundscape artist, you carry your listeners off to another, alternate place. People can be amazed by the pure form of the recorded soundscape, and it is up to the artist how, and how much, the audio is processed for artistic purposes. More processing does not necessarily mean more pleasure for the human ear. In my opinion, the way the filters for Aeolis’ subtractive synthesis are designed is very interesting and unique. Even though the real-time visuals are part of the work, Aeolis takes its listeners into a different sphere by altering the recorded sounds in its own way. The filters are designed in such a way that listeners are still able to hear parts of the original, unprocessed audio: this is where the magic happens. In the future, I would like to hear which soundscapes, in combination with this kind of subtractive synthesis, sound the most musical. A rough sketch of the general filtering idea, as I understand it, follows below.
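
For my own understanding, here is a crude Python illustration of the general principle as I read it, explicitly not Arbel’s actual filter design: a noisy soundscape is passed through narrow band-pass filters centred on the harmonics of a target pitch, so the result is pitched while retaining some of the source’s timbre. All parameter values are assumptions.

```python
# Crude illustration only (NOT the Aeolis implementation): narrow band-pass
# filters at harmonics of a target pitch turn a noisy source into a pitched
# tone that keeps some of the source's timbre. All values are assumptions.

import numpy as np
from scipy.signal import butter, lfilter

SR = 44100

def pitched_from_soundscape(source, f0=220.0, n_harmonics=6, bandwidth=20.0):
    """Sum of narrow band-pass filtered copies of the source at harmonics of f0."""
    out = np.zeros_like(source)
    for k in range(1, n_harmonics + 1):
        low = (k * f0 - bandwidth / 2) / (SR / 2)
        high = (k * f0 + bandwidth / 2) / (SR / 2)
        b, a = butter(2, [low, high], btype="band")
        out += lfilter(b, a, source)
    return out

# stand-in for a recorded soundscape: two seconds of white noise
soundscape = np.random.randn(2 * SR)
pitched = pitched_from_soundscape(soundscape)
```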

https://nime.pubpub.org/pub/c3w33wya/release/1

Suitable effects for Idea #2

As the title suggests, this blog entry is concerned with suitable effects for the “Extended Guitar Performance” project. Admittedly, one could argue that any effect can be triggered by a guitar player’s hand movements. However, I want to single out certain effects that are especially suitable for this kind of expression control and that sound good when modulated by the natural hand gestures of a guitarist.

There is also a decision to be made whether the effects should be switched on and off by certain hand gestures/movements or whether the intensity of the effects (like a dry/wet knob) should be modulated by them. Of course, implementing both options is also possible.

Additionally, as my idea involves placing a sensor on each of the guitar player’s hands, there are effects that will be triggered by the movement of the picking hand and others that will be triggered by the movement of the fretting hand.

Automatic Solo-Mode

The first idea for an effect is based on the following observations:

  1. A typical guitar solo takes place higher up the neck (fret-wise) than the rhythm guitar part.
  2. Rhythm guitar tone and lead guitar tone almost always differ a little and often differ a lot.
  3. It is stressful to activate all lead tone effect pedals when your solo comes up in a live situation.
  4. It is stressful to deactivate all lead tone effect pedals when the solo ends and the rhythm guitar part must be continued.

The effect I came up with makes use of these observations. The idea is to automatically activate an assortment of effects typical for a lead guitar tone when the guitarist reaches for higher frets up the neck, and to subsequently deactivate the lead tone effects as soon as the guitarist’s fretting hand comes back to the lower frets.

This “Solo Mode” effect is achieved with an accelerometer (for details check blog XY), placed on the back of the guitarist’s hand, that interprets the length of the neck as its X-axis. Depending on the position of the guitarist’s hand along the neck (and thereby along the sensor’s X-axis), the “Solo Mode” effect is turned on or off. A certain fret must be defined as the reference point for switching the Solo Mode on or off; a small sketch of this switching logic follows below.
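
Here is a minimal Python sketch of the intended switching logic, assuming that another part of the system already estimates the fretting hand’s position as a fret number. The reference fret and the small hysteresis band are my own assumptions, meant to keep the mode from flickering when the hand hovers around the boundary fret.

```python
# Minimal sketch of the intended Solo Mode switch. SOLO_FRET and HYSTERESIS
# are assumptions; the fret position is assumed to come from elsewhere.

SOLO_FRET = 12     # reference fret above which the lead tone kicks in
HYSTERESIS = 1     # frets of slack before switching back to the rhythm tone

class SoloMode:
    def __init__(self) -> None:
        self.active = False

    def update(self, fret_position: int) -> bool:
        if not self.active and fret_position >= SOLO_FRET:
            self.active = True     # hand moved up the neck: lead tone on
        elif self.active and fret_position < SOLO_FRET - HYSTERESIS:
            self.active = False    # hand back down: rhythm tone again
        return self.active
```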

I really hope that this works the way I picture it XD.

Individual effects that the “Solo Mode” could include are:

  • Overdrive/Distortion/Fuzz/Booster
  • Delay
  • Chorus
  • Reverb
  • Octaver
  • Harmonizer
  • (Different) equalizer settings (compared to rhythm tone)
  • (Higher) volume settings (compared to rhythm tone)

Neck position as altering factor

IF the system works, the fundamental idea of using the fretting hand’s position along the neck to modulate sound could also be extended to other effects: the intensity of an effect could be increased, for instance, as the fretting hand moves up or down the neck (see the sketch after the following list).

Possible examples include:

  • Faster/slower delay times according to neck position
  • “more of” some effect/more intense settings
  • High-pass filter sweep
  • Low-pass filter sweep
  • Bandpass filter sweep
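
A hedged sketch of this continuous variant in Python: the estimated fret position is linearly rescaled onto an effect parameter. The delay-time and cutoff ranges below are placeholder values, not measured or tested settings.

```python
# Sketch of mapping fret position onto continuous effect parameters.
# MAX_FRET and the output ranges are placeholder assumptions.

MAX_FRET = 22

def map_range(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly rescale value from one range to another, clamped to the input range."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def delay_time_ms(fret: int) -> float:
    return map_range(fret, 0, MAX_FRET, 120.0, 480.0)    # longer delays up the neck

def lowpass_cutoff_hz(fret: int) -> float:
    return map_range(fret, 0, MAX_FRET, 400.0, 8000.0)   # filter opens up the neck
```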

The Emotional Space | #2 | Walking Soundscape

In my last post I wrote about the first two of the three concepts I came up with for my upcoming project. This post is solely dedicated to the third concept, which set the direction for the further course of my project development.

I play the drums and other percussion instruments, and as soon as I have any kind of a beat in my ear, I struggle to keep my body completely still. Sometimes my fingers drum against my thighs, at other times I just wiggle my toes within my shoes. Another one of those situations is when I am walking with my headphones on and align my steps to the beat. This sparked an idea in my head. What if it was the other way around? What if the music would align itself to the steps I take?

(Image: Walking. Photo by Clem Onojeghuo on Unsplash)