I used the Christmas holidays to compile the research done in November and December and subsequently wrote my Exposé. I am excited (and also a little bit anxious) to see how it will be received by the staff of the FH JOANNEUM and the KUG. This week, I will also start to work on my final presentation – here the greatest challenge will probably be to reduce the whole project idea to a mere five minutes of presentation time.
Initially, I also wanted to start with some practical work and thus rented an Arduino board and bought some ultrasonic sensors. Unfortunately, January is pretty much packed with tests and final submissions, so I had to put experimenting with the Arduino on the back burner. Nevertheless, I still have the Arduino until the end of the semester, so maybe I will still get a chance to try it out…
While for my Walking Soundscape concept (that I wrote about here) it was almost too easy to find existing reference works, for my Emotional Space sound installation this is proving to be quite the challenge. But drawing inspiration from and building upon the previous works of others is such a valuable asset that this step should clearly not fall short. I managed to gather a collection of reference works that I associate with different aspects of what I want The Emotional Space to become. While in this post I will focus on installations that I found through various resources, I will dedicate my next post to the same topic, but present the findings that were approached in a more scientific way and resulted in a published paper. (This categorization is made purely for reading convenience and definitely does not aim to assert that any of the works below are unscientific.)
[…] an arrangement is created in which visitors take on an active influence. Rhythm and variance, like in music, are essential components of the installation […]
The Vibrato Scanner is an electromechanical device that produces vibrato and chorus effects in old Hammond organs. Unlike newer technologies that modulate the signal at its source, the Scanner system modifies the sound on its way from the keyboard to the amplifier.
The hardware consists of two main elements: a phase shift line comprising a series of passive all-pass filter stages, and a scanner, a single-pole 16-throw air-dielectric capacitor switch that connects taps on the delay line to the output.
Each filter stage of the delay line is shifted in phase relative to the previous one, resulting in an increasing time delay between every successive step of the line (about 50μs per filter stage). Nine of these steps are connected to the sixteen inputs of the scanner, allowing it to sample back and forth along the line. For example, labeling each output of the delay line with a number from 1 to 9 would result in the following 16 digit sequence: [1, 2, 3, 4, 5, 6, 7, 8, 9, 8, 7, 6, 5, 4, 3, 2].
As the scanner gradually transitions between the taps, it varies the phase shift applied to the sound, causing slight variations in pitch and resulting in a vibrato effect. The depth of the vibrato depends on how much of the delay line is fed into the scanner: scanning about one-third of the line produces a lighter vibrato, while scanning the whole line significantly increases its depth. The chorus effect is achieved by simply mixing the dry input signal with multiple outputs of the delay line.
In effect, scanning back and forth along the delay line is like moving toward and away from a sound source. This causes a change in frequency due to Doppler shift.
(Vorkoetter, 2009)
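To make this behavior more tangible, here is a minimal C++ sketch of a purely digital analogy (it is not the analog circuit; the test tone, buffer size and print statements are only illustrative): a delay line whose read position is swept back and forth at about 7 Hz, just as the scanner sweeps across the taps of the phase shift line. Sweeping the taps sweeps the delay, and a changing delay shifts the pitch, which is exactly the Doppler-like mechanism described above.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Digital analogy of the scanner vibrato (not the analog circuit itself):
// a delay line whose read position is swept back and forth, just as the
// scanner sweeps across the nine taps of the phase shift line.
int main() {
    const double kPi        = 3.14159265358979323846;
    const double sampleRate = 48000.0;
    const double stageDelay = 50e-6;   // roughly 50 microseconds per filter stage
    const int    numTaps    = 9;       // taps 1..9 on the delay line
    const double scanRate   = 7.0;     // scanner rotor speed, about 7 Hz

    std::vector<double> delayLine(4096, 0.0);
    size_t writeIndex = 0;

    for (long n = 0; n < 48000; ++n) {                             // one second
        double in = std::sin(2.0 * kPi * 440.0 * n / sampleRate);  // test tone
        delayLine[writeIndex] = in;

        // Triangle scan 0..1..0 is equivalent to the tap sequence 1..9..2.
        double phase = std::fmod(n * scanRate / sampleRate, 1.0);
        double tri   = phase < 0.5 ? 2.0 * phase : 2.0 * (1.0 - phase);

        // A time-varying delay shifts the pitch slightly (Doppler-like effect).
        double delaySamples = tri * (numTaps - 1) * stageDelay * sampleRate;
        double readPos = static_cast<double>(writeIndex) - delaySamples;
        if (readPos < 0.0) readPos += delayLine.size();

        // Linear interpolation between neighbouring samples.
        size_t i0 = static_cast<size_t>(readPos) % delayLine.size();
        size_t i1 = (i0 + 1) % delayLine.size();
        double frac = readPos - std::floor(readPos);
        double vibrato = delayLine[i0] * (1.0 - frac) + delayLine[i1] * frac;

        // Chorus setting: mix the dry signal with the scanned (delayed) signal.
        double chorus = 0.5 * (in + vibrato);
        if (n % 4800 == 0) std::printf("%ld\t%f\t%f\n", n, vibrato, chorus);

        writeIndex = (writeIndex + 1) % delayLine.size();
    }
    return 0;
}
```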
Originally, the delay line is part of the Hammond Vibrato Scanner unit. It incorporates a series of second-order audio filter stages, each shifted in phase relative to the previous one, which results in an increasing time delay between successive steps of the line (about 50μs per filter stage). This is then fed into a scanner, a single-pole 16-throw air-dielectric capacitor switch that connects nine selected taps from the line to the output. As the scanner gradually transitions between these taps, it varies the phase shift applied to the sound, causing slight variations in pitch that result in a vibrato effect.
A notable limitation of this original design is the fixed gear ratio tied to the Generator Run Motor, which spins the rotor continuously at about 7 Hz. The depth of the vibrato depends on how much of the delay line is fed into the scanner: scanning about one-third of the line produces a lighter vibrato, while scanning the whole line significantly increases its depth. In addition to the vibrato, a chorus effect can be achieved by mixing the dry input signal with multiple outputs of the delay line, or a chorus-vibrato by mixing the dry signal with the output of the vibrato.
The iconic chorus and vibrato effect is the result of Hammond’s extensive search for expression and emotional intensity. The Scanner Vibrato unit succeeded in delivering a unique sonic character by adding depth and movement to the sound of the Hammond organ. As the popularity of modulation effects grew within pop music, so did that of the Hammond vibrato unit. Since the 1960s, its unique sound has made its way into a wide range of genres, from blues and rock to hip hop and downtempo. Notable artists include Booker T. & The MG’s, Yes, the Beastie Boys and Portishead.
Sources:
Vorkoetter, S., 2009. Overhauling and Improving the Hammond M-100 Series Vibrato System. [Blog] stefanv.com. Available at: <http://www.stefanv.com/electronics/hammond_vibrato_mod.html> [Accessed 7 January 2022].
Benton Electronics. n.d. Service Manual – The Hammond Vibrato – Benton Electronics. [online] Available at: <https://bentonelectronics.com/service-manual-the-hammond-vibrato/> [Accessed 7 January 2022].
After a couple of meetings with my supervisor and many hours of research and deliberation, my project idea has finally begun to take shape. My initial plan was to build a Mellotron from used cassette player parts, in which all effects would also be created with magnetic tape. I soon realized that a lot of work has already been done in this domain, leaving me little space for innovation, so I had to reconsider the direction of my project.
My new proposal is the creation of a guitar effect chain that combines different types of obsolete time-based audio effect technologies with the capabilities of modern microcontrollers. The foundation of this idea first came to me after discovering the Scanner Vibrato & Reverb guitar effect by Analog Outfitters, a device built entirely from refurbished Hammond organ parts. Since then I have managed to acquire one of the core elements of this device, the so-called phase shift line, and to build an experimental vibrato effect prototype by combining it with an Arduino Uno microcontroller.
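The code of my prototype is not reproduced here, but to illustrate the basic principle, the following purely hypothetical Arduino sketch steps through eight taps of the line in a back-and-forth pattern. It assumes the taps are wired to a CD4051 analog multiplexer whose address pins sit on digital pins 2–4 and that a potentiometer on A0 sets the scan rate; all of these hardware details are assumptions for the sake of the example, not a description of the actual prototype.

```cpp
// Hypothetical sketch only: assumes eight taps of the phase shift line are
// wired to a CD4051 analog multiplexer whose address lines sit on Arduino
// pins 2, 3 and 4, and a potentiometer on A0 sets the scan rate.
const int ADDR_PINS[3] = {2, 3, 4};
const int NUM_TAPS = 8;

int tap = 0;
int direction = 1;

void setup() {
  for (int i = 0; i < 3; ++i) pinMode(ADDR_PINS[i], OUTPUT);
}

void selectTap(int t) {
  // Write the tap number to the multiplexer address lines bit by bit.
  for (int bit = 0; bit < 3; ++bit) {
    digitalWrite(ADDR_PINS[bit], (t >> bit) & 1);
  }
}

void loop() {
  selectTap(tap);

  // Sweep back and forth along the line, like the mechanical scanner.
  tap += direction;
  if (tap == NUM_TAPS - 1 || tap == 0) direction = -direction;

  // Pot on A0 maps to a step period; a shorter period means a faster vibrato.
  int pot = analogRead(A0);                  // 0..1023
  int stepDelayMs = map(pot, 0, 1023, 2, 40);
  delay(stepDelayMs);
}
```

Hard-switching between taps like this would introduce audible steps; the original scanner crossfades continuously between neighbouring taps, so a serious prototype would likely need some form of crossfading or smoothing.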
Fortunately, my previous research on magnetic tape was not in vain, as it shaped the development of my new project idea. Since the initial project would also have included audio effects based on magnetic tape technology, I came across several solutions that make it possible to convert portable cassette decks into delay effects. The combination of a cassette tape delay with the aforementioned phase shift line led to the idea of a multi-effect, and thus the concept of an analog delay chain was born. These two components could provide modulation effects such as vibrato, chorus and various types of delays, but by introducing a spring reverb tank, even more color could be added to the chain.
In order to expand the capabilities of this delay chain, I would add a microcontroller that is responsible for all control processes. This could even enable manipulation via Bluetooth or WLAN and therefore shrink the physical interface on the device itself. It would then be enough to include only a few rotary encoders to control basic operations, such as volume or rate, and an LCD module to display these values. But more on these details in my upcoming blog entries.
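As a sketch of what that control layer could look like, the following hypothetical Arduino snippet reads a single rotary encoder to adjust a "rate" value and shows it on a standard HD44780 16x2 LCD via the LiquidCrystal library. The pin assignments and the parameter itself are placeholders, not part of the actual design.

```cpp
#include <LiquidCrystal.h>

// Hypothetical control sketch: one rotary encoder adjusting a "rate" value,
// shown on a 16x2 HD44780 LCD. All pin assignments are placeholders.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);   // RS, EN, D4..D7

const int ENC_A = 6;
const int ENC_B = 7;

int rate = 50;          // arbitrary 0..100 "rate" parameter
int lastA = HIGH;

void setup() {
  pinMode(ENC_A, INPUT_PULLUP);
  pinMode(ENC_B, INPUT_PULLUP);
  lcd.begin(16, 2);
  lcd.print("Rate:");
}

void loop() {
  int a = digitalRead(ENC_A);
  // On a falling edge of channel A, channel B tells us the turning direction.
  if (lastA == HIGH && a == LOW) {
    if (digitalRead(ENC_B) == HIGH) rate = min(rate + 1, 100);
    else                            rate = max(rate - 1, 0);
    lcd.setCursor(6, 0);
    lcd.print("   ");        // clear the old value
    lcd.setCursor(6, 0);
    lcd.print(rate);
  }
  lastA = a;
}
```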
Since my work will be implemented in a Eurorack system, I attempted to find existing modules that do something similar. As expected, I found no modules that implement Music Information Retrieval, but there are modules that serve as related building blocks: pitch followers, envelope followers, and combinations of the two.
Doepfer A-196 PLL:
The A-196 PLL is a phase-locked loop (PLL) module. PLL circuits are commonly used in pitch-tracking devices; they are comparator circuits that compare two oscillating signals in their relative phase. The A-196 is more of a weird oscillator than a modulation source, but it consists of three parts, one of which is a PLL circuit.
These are quite ‘simple’ envelope followers, which take the amplitude of a signal over time and translate it into an envelope. Each module is its own interpretation, with controllable parameters like threshold, attack, release or internal triggers (Buchla). As you might recognize, the 230e is not in Eurorack format, but since there are not many examples, I included a Buchla module as well.
XAOC Devices – Sevastopol 2:
Also an envelope follower, but with a twist: the module offers several functions, one of which is an envelope follower, another a comparator between two signals.
Analogue Systems RS-35N:
Here the envelope follower is combined with a pitch tracker and a trigger output, which together provide the basic values needed to play a synthesizer voice, be it percussive or tonal. It is also equipped with its own set of adjustable parameters to control the inputs and outputs of the signal.
Expert Sleepers Disting mk4:
The Disting mk4 is a digital signal processor that provides many algorithms for modular synthesis. One of these algorithms is a pitch and envelope tracker.
Erica Synths Black Input:
This is not an existing module; it is a concept at an unclear stage of development. The functions it may provide are the following:
1. Balanced inputs with XLR, 6.3mm TRS and 3.5mm TRS
2. Input preamp with adjustable gain and level indicator
3. Envelope follower with adjustable threshold and rate
4. Gate and Trigger outputs
5. Accurate, low latency monophonic pitch tracker
6. Continuous and quantized (by semitones) CV outputs
For the musician–synthesizer interface it is important to translate pitch, amplitude envelope and note length. These are the basic values of translating music into the air: the pitch and the relative length are defined, for example, by sheet music, and the envelope by the characteristics of the instrument played. The most common envelope found in synthesizers is the ADSR shape, standing for ‘attack’ (duration of the rising ramp of the signal), ‘decay’ (duration of the falling ramp that starts once the attack ramp reaches its peak value), ‘sustain’ (the level at which the signal is held as long as the gate is open) and ‘release’ (duration of the signal falling from its last value to zero after the gate is closed). This is also one of the simplest ways to portray many acoustic instruments in terms of their amplitude envelopes.
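To make those four stages concrete, here is a minimal linear ADSR sketch in C++; the times and levels are arbitrary example values, and the simplification assumes the gate stays open past the decay stage.

```cpp
#include <cstdio>

// Minimal linear ADSR sketch: attack and decay are rising/falling ramps,
// sustain holds a level while the gate is open, release ramps to zero
// after the gate closes. Times and levels are arbitrary example values.
struct ADSR {
    double attackTime, decayTime, sustainLevel, releaseTime;
};

double adsrValue(const ADSR& e, double t, double gateLength) {
    if (t < 0.0) return 0.0;
    if (t <= gateLength) {                         // gate open
        if (t < e.attackTime)
            return t / e.attackTime;               // rising ramp to 1.0
        if (t < e.attackTime + e.decayTime) {      // falling ramp to sustain
            double d = (t - e.attackTime) / e.decayTime;
            return 1.0 + d * (e.sustainLevel - 1.0);
        }
        return e.sustainLevel;                     // held while gate is open
    }
    double r = (t - gateLength) / e.releaseTime;   // gate closed: fall to zero
    return r < 1.0 ? e.sustainLevel * (1.0 - r) : 0.0;
}

int main() {
    ADSR env{0.01, 0.2, 0.6, 0.5};                 // 10 ms / 200 ms / 0.6 / 500 ms
    for (double t = 0.0; t < 1.5; t += 0.1)
        std::printf("t=%.1f  env=%.3f\n", t, adsrValue(env, t, 0.8));
    return 0;
}
```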
But the timbral structure of a sound is mostly not described by its amplitude envelope alone. Many musical instruments are defined by variations in pitch and in the color of the sound. So the simple amplitude picked up by an envelope follower is only a very basic tool for defining the sound of a musician; furthermore, it only draws conclusions about the basic values a musician puts into their instrument. To capture a musician more fully, their expression has to play a big role in the interpretation of control voltages.
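For comparison, a basic envelope follower in code is little more than rectification plus smoothing, which illustrates just how little of a performance it actually captures. The coefficients and the decaying test tone below are arbitrary choices for the sake of the example.

```cpp
#include <cmath>
#include <cstdio>

// Basic envelope follower sketch: full-wave rectify the input, then smooth it
// with separate attack and release coefficients. The coefficient values are
// arbitrary examples, not tuned for any particular instrument.
int main() {
    const double sampleRate  = 48000.0;
    const double attackCoef  = std::exp(-1.0 / (0.005 * sampleRate));  // ~5 ms
    const double releaseCoef = std::exp(-1.0 / (0.100 * sampleRate));  // ~100 ms

    double envelope = 0.0;
    for (long n = 0; n < 48000; ++n) {
        // Test input: a 220 Hz tone that decays over one second.
        double x = std::sin(2.0 * 3.14159265358979 * 220.0 * n / sampleRate)
                 * std::exp(-3.0 * n / sampleRate);
        double rectified = std::fabs(x);

        // Rise quickly towards peaks, fall slowly when the signal drops.
        double coef = rectified > envelope ? attackCoef : releaseCoef;
        envelope = coef * envelope + (1.0 - coef) * rectified;

        if (n % 4800 == 0) std::printf("%ld\t%f\n", n, envelope);
    }
    return 0;
}
```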
So how can we define musical expression? As said before, in most notation of Western music pitch and relative length are written down, while things like tempo, dynamics or directions for technique are written down in words or abbreviations. But the finer points of a performance, which are largely inherent to each musician’s individuality, are to be found nowhere except in the playing of the musician. The common Italian expressions for tempo are widely known and run roughly from slow to fast: adagissimo, adagio, lento, andante, andantino, allegretto, allegro, presto, prestissimo. (Britannica)
As for dynamics, roughly from quiet to loud: pianissimo, piano, mezzo piano, mezzo forte, forte, fortissimo; and some changes in dynamics: fortepiano (loud, then soft), sforzando (sudden accent), crescendo (gradually louder), diminuendo (gradually softer).
These are all the definitions a composer uses nowadays to translate their musical thoughts to the performer. But it wasn’t always like this.
„…[I]n much 17th- and 18th-century music, the composer notated only the main structural notes of the solo part, leaving the performer to improvise ornamental figuration.“
https://www.britannica.com/art/musical-expression
Those figurations or ornamentations gave musicians the freedom to express themselves and to influence the tradition of the then-current music.
Excerpt from a sonata by Arcangelo Corelli da Fusignano, Opera Quinta
Here you can see that the bottom two lines are the composer’s structure of the piece, while the top line shows the individual ornaments an artist put over the piece.
In modern MIDI keyboards, there are several ways to record expression. The most widespread feature is velocity control. This parameter is controlled by the velocity with which one hits the keys and can thus be easily added by keyboard performers in their playing, just as on an acoustic instrument. With the synthesizer, and the possibility of shaping the sound of the instrument individually for each performance, also came the possibility of controlling parameters of a sound that on acoustic or electro-acoustic keyboard instruments were only really accessible through keys and pedals. The pitch and mod wheels were introduced to make such changes possible. The first is a spring-loaded wheel or stick mostly used to modulate the pitch, like a pitch bend on a guitar; the other is an adjustable wheel with which one can send fixed values or modulate them manually. The fourth modulation source developed for keyboard synthesizers is aftertouch. As the name suggests, it is applied by altering the pressure after the key is depressed, and it can be applied mono- or polyphonically. All of these controls added to the expressivity of synthesizer performances. Only one of them, velocity, is determined before or as the tone is generated; the others are applied over the course of the sound. So these are four control values that have proven to add expressivity in performance.
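For reference, these four sources map onto well-defined message types in the MIDI protocol itself. The following small C++ sketch only decodes the standard status bytes for note-on velocity, pitch bend, the mod wheel (CC 1) and the two forms of aftertouch; the example values fed in at the end are made up.

```cpp
#include <cstdint>
#include <cstdio>

// Decoding the expression sources from raw MIDI bytes:
// note-on velocity (0x90), pitch bend (0xE0), mod wheel (CC 1 on 0xB0),
// channel aftertouch (0xD0) and polyphonic aftertouch (0xA0).
void decode(uint8_t status, uint8_t data1, uint8_t data2) {
    switch (status & 0xF0) {
        case 0x90:  // note on: data1 = note number, data2 = velocity
            std::printf("note %d, velocity %d\n", data1, data2);
            break;
        case 0xE0: {  // pitch bend: 14-bit value, centred at 8192
            int bend = ((data2 << 7) | data1) - 8192;
            std::printf("pitch bend %d\n", bend);
            break;
        }
        case 0xB0:  // control change: CC 1 is the modulation wheel
            if (data1 == 1) std::printf("mod wheel %d\n", data2);
            break;
        case 0xD0:  // channel (monophonic) aftertouch: data1 = pressure
            std::printf("channel aftertouch %d\n", data1);
            break;
        case 0xA0:  // polyphonic aftertouch: per-note pressure
            std::printf("poly aftertouch: note %d, pressure %d\n", data1, data2);
            break;
    }
}

int main() {
    decode(0x90, 60, 100);    // middle C, fairly hard key strike
    decode(0xE0, 0x00, 0x50); // pitch bend above centre
    decode(0xB0, 1, 64);      // mod wheel at half travel
    decode(0xD0, 90, 0);      // channel aftertouch
    return 0;
}
```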
Of course, these weren’t the only tools developed for very expressive performances, although they are the most common ones. There is a multitude of MIDI controllers for adding expression to an electronic music performance: the Expressive E ‘Touché’ or ‘Osmose’, Buchla and Serge capacitive keyboards, joystick controllers on synths like the EMS Synthi or Korg devices such as the Sigma or the Delta, and controller modules for Eurorack, 5U, Buchla and Serge systems.
Other Concepts
Then there are control surfaces that take an entirely different approach to the concept of the keyboard. These concepts often, but not always, go hand in hand with a synthesizer engine.
HAKEN Continuum
The Haken Continuum, for instance, is a synthesizer with a control surface that can detect movement along three axes.
The Haken Continuum Fingerboard is an instrument born to be as expressive and as rewarding to play as an acoustic instrument. The uniquely designed sensitive playing surface has been symbiotically merged with its powerful synthesis sound engine to produce a truly unique playing experience. The Continuum is a holistic electronic instrument that puts its player at the heart of a uniquely fluent, gestural and intuitive musical playing experience.
The ROLI SEA technology, which is implemented in ROLI’s Seaboard controllers, is, as ROLI puts it:
“Sensory, Elastic and Adaptive. Highly precise, information-rich, and pressure-sensitive. It enables seamless transitions between discrete and continuous input, and captures three-dimensional gestures while simultaneously providing the user with tactile feedback.”
www.roli.com
Roli Seaboard Rise 49
Linnstrument
The LinnStrument is a control surface developed by the renowned instrument designer Roger Linn. Interesting here is the decision not to use a piano-style keyboard but rather a grid layout reminiscent of the tonal layout of string instruments such as the guitar. The LinnStrument also records a release velocity, which places it even further into guitar territory, where pull-offs (rapidly pulling a finger off a string to excite it and make it sound) are a standard technique.
Few, if any, of the control surfaces examined here offer more than four modulatable values. Four would therefore be a minimum for a module that should be able to translate the expression of an instrumentalist into control voltages.
Sarah Belle Reid is a trumpet player and synthesist who takes the sound of her brass instruments and puts it through her modular systems, be they Buchla, Serge or Eurorack. She has developed a device called MIGSI with which she translates her trumpet playing into CV and/or MIDI messages.
MIGSI was developed in large part to enable her to carry all of the techniques she has developed on her instrument over to more than ‘just’ the trumpet, and to open the instrument up to the possibilities of electronic music making.
MIGSI
MIGSI stands for Minimally Invasive Gesture Sensing Interface; she also calls it an ‘electronically augmented trumpet’. The device was co-developed by her and Ryan Gaston around 2014. Together they also founded ‘Gradient’, a joint venture in which they develop “handmade sound objects that combine elements of the natural world with electronic augmentation.” (cf. Gradientinstruments.com)
MIGSI is a sensor-based interface with three types of sensors and eight streams of data: pressure sensors around the valves which read the grip force, an accelerometer that senses the movement of the trumpet, and optical sensors which read the movement of the valves.
The hardware is then read by the MIGSI app, which is a Max/MSP patch. The app is used to process the audio signal of the trumpet, modulate external equipment with the sensor input, or modulate a synth engine inside the MIGSI app.
While presenting my ideas and explaining how and why I ranked my interest in them, I could give a clear statement of intention that programming and electronics should be a vital part of the final product.
Supervision
From the faculty of the KUG, Prof. Marco Ciciliani has chosen to work with me on my project. The project ‘Kidnapping the Sound of an Instrumentalist’ was my least favorite, but only because I would have done it outside of university anyway. His reasoning for choosing me was that he works with modular synthesis too.
Project
‘Kidnapping the Sound of an Instrumentalist’
The main focus is that the forthcoming device should be very performable. This means that I have to find a working surface which, for one, is familiar to me and, secondly, gives me enough room to develop in multiple directions. The performance aspect means that the instrumentalist has to be able to convey their expression to the device, and I have to be able to pick it up and use it for further modulation of my setup. Below is the chain of thoughts that stood at the very beginning of the project and concludes in a module for a modular synthesizer.
The idea of developing a musician interface module was well received by Prof. Ciciliani, with the remark that on the technical side I have to be self-sufficient for the largest part.
1st Thoughts
EXPRESSION OF A MUSICIAN LIES VERY MUCH IN THE SONIC COLORATION OF THE SOUND – FFT ANALYSIS COULD WELL BE THE TOOL TO EXTRACT PARAMETERS FOR THE SONIC COLOR OF A SOUND – BREATH CONTROLLERS RECORD EXPRESSION PARAMETERS TOO – COULD GRANULAR SYNTHESIS BE A GOOD WAY TO CAPTURE SONIC COLOR OF A SOUND – IS GRANULAR SYNTHESIS ONLY AN EFFECT OR DOES IT MAKE THE SOUND ITS OWN – HOW MANY PARAMETERS DOES EXPRESSION HAVE – IS THERE EVEN A NUMBER – ARE THERE DIFFERENCES BETWEEN INSTRUMENTS – ARE THERE ANY SIMILARITIES –
A MODULE: THE MUSICIAN INTERFACE
Input
For the analysis of the instrument, Music Information Retrieval (MIR) was suggested. Music Information Retrieval is the interdisciplinary science of retrieving information from music. MIR is a small but growing field of research with many real-world applications. Those involved in MIR may have a background in musicology, psychoacoustics, psychology, academic music study, signal processing, informatics, machine learning, optical music recognition, computational intelligence, or some combination of these.
Machine analysis and human hearing often correlate in unexpected ways. High frequencies, for example, have fewer audible harmonics than lower frequencies, yet they are perceived very differently by the human ear in terms of expression or sonic coloration. So it will take many experiments to find the right algorithms and workflow for translating the expression of the musician.
MIR is inherently digital, so the module will probably be driven by some kind of DSP. The question is whether there is a programmable DSP chip with the right peripherals to build a module around: something like a Raspberry Pi, Bela board, Arduino, Daisy, Teensy, …
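To get a first feeling for what such a chip would have to do, here is a minimal (and deliberately naive) autocorrelation pitch estimator in plain C++. A usable module would need windowing, thresholds and octave-error handling, but the principle stays the same regardless of whether it runs on a Bela, Daisy or Teensy; the test tone and the 1 V/octave reference of C0 at the end are just example choices.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Naive autocorrelation pitch estimator: find the lag at which the signal
// best correlates with itself and convert that lag to a frequency.
// Real-world trackers need windowing, thresholds and octave-error handling.
double estimatePitch(const std::vector<double>& x, double sampleRate) {
    const int minLag = static_cast<int>(sampleRate / 1000.0);  // up to 1 kHz
    const int maxLag = static_cast<int>(sampleRate / 50.0);    // down to 50 Hz
    double bestCorr = 0.0;
    int bestLag = 0;
    for (int lag = minLag; lag <= maxLag; ++lag) {
        double corr = 0.0;
        for (size_t n = 0; n + lag < x.size(); ++n)
            corr += x[n] * x[n + lag];
        if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
    }
    return bestLag > 0 ? sampleRate / bestLag : 0.0;
}

int main() {
    const double sampleRate = 48000.0;
    std::vector<double> block(2048);
    for (size_t n = 0; n < block.size(); ++n)      // 196 Hz test tone (G3)
        block[n] = std::sin(2.0 * 3.14159265358979 * 196.0 * n / sampleRate);

    double hz = estimatePitch(block, sampleRate);
    // A pitch CV in the common 1 V/octave standard, relative to C0 (~16.35 Hz).
    double volts = std::log2(hz / 16.3516);
    std::printf("estimated %.1f Hz -> %.2f V (1 V/oct)\n", hz, volts);
    return 0;
}
```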
To choose a topic for our semester project work, we should develop three ideas. One of these ideas could form our master’s thesis in the 4th semester, so ideally one of them should be a topic that can carry through the next three semesters.
My emphasis in collecting my thoughts for these ideas was to support my interest in topics I want to learn about in the next two years. My primary interest is sound synthesis and composition. Regarding the latter, my approach has always been performance-based; therefore, developing an instrument of some sort was a logical decision.