Sarah Belle Reid is a trumpet player and synthesist who runs the sound of her brass instruments through modular systems such as Buchla, Serge, or Eurorack. She has developed a device called MIGSI that translates her trumpet playing into CV and/or MIDI messages.
MIGSI was developed in large part to enable her to carry all of the techniques she has developed on her instrument beyond 'just' the trumpet, opening the instrument's horizon to the possibilities of electronic music-making.
MIGSI
MIGSI stands for Minimally Invasive Gesture Sensing Interface; she also calls it an 'electronically augmented trumpet'. The device was co-developed by her and Ryan Gaston around 2014. The two also founded 'Gradient', a joint venture in which they develop "handmade sound objects that combine elements of the natural world with electronic augmentation." (cf. Gradientinstruments.com).
MIGSI is a sensor-based interface with three types of sensors and eight streams of data: pressure sensors around the valves that read grip force, an accelerometer that senses the movement of the trumpet, and optical sensors that read the movement of the valves.
The hardware is then read by the MIGSI app, a Max/MSP patch. The app is used to process the audio signal of the trumpet, modulate external equipment with the sensor input, or modulate a synth engine inside the MIGSI app itself.
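To make the idea of "eight streams of data" more concrete: in principle, each sensor reading only needs to be clamped and scaled into controller data before it can modulate anything. The following Python sketch is purely illustrative and assumes made-up sensor names, ranges, and CC numbers; it is not MIGSI's actual Max/MSP implementation.

```python
# Illustrative sketch only: map hypothetical sensor readings (pressure, accelerometer,
# valve position) to 0-127 MIDI CC values. Names, ranges, and CC numbers are assumptions,
# not MIGSI's actual implementation.

def scale_to_cc(value, lo, hi):
    """Clamp a raw sensor reading to [lo, hi] and scale it to the MIDI CC range 0-127."""
    value = min(max(value, lo), hi)
    return int(round((value - lo) / (hi - lo) * 127))

# One frame of fake sensor data (8 streams: 3 pressure, 2 accelerometer axes, 3 valves)
frame = {
    "pressure_1": 0.42, "pressure_2": 0.10, "pressure_3": 0.77,   # grip force, 0.0-1.0
    "accel_x": -0.3, "accel_y": 0.8,                              # tilt, -1.0-1.0
    "valve_1": 0.0, "valve_2": 0.5, "valve_3": 1.0,               # valve travel, 0.0-1.0
}

cc_map = {"pressure_1": 20, "pressure_2": 21, "pressure_3": 22,
          "accel_x": 23, "accel_y": 24, "valve_1": 25, "valve_2": 26, "valve_3": 27}

for name, raw in frame.items():
    lo, hi = (-1.0, 1.0) if name.startswith("accel") else (0.0, 1.0)
    print(f"CC {cc_map[name]}: {scale_to_cc(raw, lo, hi)}")
```

On real hardware the printed values would be sent out as MIDI control-change messages (for example with a library such as mido) or converted to CV.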
While presenting my ideas and my reasoning for how and why I ordered them by interest, I was able to give a clear statement of intention: programming and electronics should be a vital part of the final product.
Supervision
From the faculty of KUG, Prof. Marco Ciciliani chose to work with me on my project. The project 'Kidnapping the Sound of an Instrumentalist' was my least favorite, but only because I would have done it outside of university anyway. His reasoning for choosing me was that he works with modular synthesis too.
Project
‘Kidnapping the Sound of an Instrumentalist’
The main focus is that the forthcoming device should be very performable. This means I have to find a working surface that, for one, is familiar to me and, secondly, gives me enough room to develop in multiple directions. The performance aspect means that the instrumentalist has to be able to convey their expression to the device, and I have to be able to pick that expression up and use it for further modulation of my setup. Below is the chain of thoughts that stood at the very beginning of the project and that concludes in a module for a modular synthesizer.
The idea of developing a Musician Interface module was well received by Prof. Ciciliani, with the remark that on the technical side I would have to be self-sufficient for the largest part.
1st Thoughts
EXPRESSION OF A MUSICIAN LIES VERY MUCH IN THE SONIC COLORATION OF THE SOUND –
FFT ANALYSIS COULD WELL BE THE TOOL TO EXTRACT PARAMETERS FOR THE SONIC COLOR OF A SOUND –
BREATH CONTROLLERS RECORD EXPRESSION PARAMETERS TOO –
COULD GRANULAR SYNTHESIS BE A GOOD WAY TO CAPTURE SONIC COLOR OF A SOUND (SEE THE SKETCH BELOW) –
IS GRANULAR SYNTHESIS ONLY AN EFFECT OR DOES IT MAKE THE SOUND ITS OWN –
HOW MANY PARAMETERS DOES EXPRESSION HAVE – IS THERE EVEN A NUMBER –
ARE THERE DIFFERENCES BETWEEN INSTRUMENTS – ARE THERE ANY SIMILARITIES –
A MODULE: THE MUSICIAN INTERFACE
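One of the open questions in the chain of thoughts above is whether granular synthesis is only an effect or whether it makes the sound its own. As a minimal sketch of what the technique actually involves, the following Python/NumPy example scatters short windowed grains from a source buffer into a new output buffer; the synthetic sine input, grain size, and density are arbitrary illustrative choices, not a proposal for the final instrument.

```python
import numpy as np

# Minimal granular-synthesis sketch: scatter short Hann-windowed grains from a source
# buffer into an output buffer. A synthetic sine stands in for recorded trumpet audio.
rng = np.random.default_rng(0)
sr = 44100
source = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 s test tone

grain_len = 2048                    # ~46 ms grains
window = np.hanning(grain_len)
out = np.zeros(2 * sr)              # 2 s of output

for _ in range(400):                # grain density: 400 grains over 2 s
    src_pos = rng.integers(0, len(source) - grain_len)
    dst_pos = rng.integers(0, len(out) - grain_len)
    out[dst_pos:dst_pos + grain_len] += window * source[src_pos:src_pos + grain_len]

out /= np.max(np.abs(out))          # normalize to avoid clipping
print(f"rendered {len(out) / sr:.1f} s of granular texture")
```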
Input
For the analysis of the instrument, Music Information Retrieval (MIR) was suggested. MIR is the interdisciplinary science of retrieving information from music; it is a small but growing field of research with many real-world applications. Those involved in MIR may have a background in musicology, psychoacoustics, psychology, academic music study, signal processing, informatics, machine learning, optical music recognition, computational intelligence, or some combination of these.
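As a first impression of what MIR looks like in practice, even a single low-level feature such as the spectral centroid already says something about the 'brightness' of a sound. The sketch below uses plain NumPy and a synthetic test tone instead of real trumpet audio; it is a minimal illustration, not the analysis pipeline of the project.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid (Hz) of one windowed frame: the 'center of mass' of the
    magnitude spectrum, a common MIR correlate of perceived brightness."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    if spectrum.sum() == 0:
        return 0.0
    return float((freqs * spectrum).sum() / spectrum.sum())

# Toy signals: a dull and a brighter tone as placeholders for trumpet audio
sr = 44100
t = np.arange(2048) / sr
dull = np.sin(2 * np.pi * 220 * t)
bright = dull + 0.5 * np.sin(2 * np.pi * 3520 * t)
print(spectral_centroid(dull, sr), spectral_centroid(bright, sr))
```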
Machine analysis and human hearing often correlate in unexpected ways. High frequencies, for example, carry a narrower audible harmonic spectrum than lower frequencies, yet the ear receives them very differently in terms of expression and sonic coloration. Many experiments will therefore be needed to find the right algorithms and workflow for translating the musician's expression.
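One concrete example of this mismatch is loudness: equal spectral energy at different frequencies is not perceived as equally loud. The standard A-weighting curve quantifies this correction; the sketch below (plain NumPy, rounded values) shows how strongly a naive energy measure would have to be adjusted to approximate the ear.

```python
import numpy as np

def a_weighting_db(f):
    """Approximate A-weighting (IEC 61672) in dB for frequency f in Hz."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * np.log10(ra) + 2.0

for freq in [100, 1000, 4000, 10000]:
    print(f"{freq} Hz: {a_weighting_db(freq):+.1f} dB")
```

At 100 Hz the correction is roughly -19 dB, while between 1 and 4 kHz it is close to zero, which illustrates why raw FFT magnitudes alone are a poor proxy for perceived expression.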
MIR is inherently digital, so the module will probably be driven by some kind of DSP. The question, then, is whether there is a programmable DSP chip with the right peripherals to build a module around, something like a 'DSP Raspberry Pi': a Bela board, Arduino, Daisy, Teensy, …
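Whichever board ends up driving the module, the firmware structure is usually the same: a small render callback that receives a block of samples and turns it into a control value. The following Python sketch only mimics that structure offline, with an RMS envelope follower standing in for whatever feature the module will eventually extract; on a Bela, Daisy, or Teensy the same logic would live in a C/C++ audio callback and write to a DAC or CV output.

```python
import numpy as np

BLOCK_SIZE = 128   # samples per render call, a typical size on embedded audio platforms
SR = 44100

class EnvelopeFollower:
    """One-pole smoothed RMS follower: the kind of cheap per-block feature
    a DSP module could turn into a CV signal."""
    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing
        self.envelope = 0.0

    def render(self, block):
        # Plays the role of the per-block audio callback on the embedded platform.
        rms = float(np.sqrt(np.mean(block ** 2)))
        self.envelope = self.smoothing * self.envelope + (1 - self.smoothing) * rms
        return self.envelope  # on real hardware this would go to a CV/DAC output

# Offline test with a synthetic decaying tone instead of a live trumpet input
t = np.arange(SR) / SR
signal = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
follower = EnvelopeFollower()
for start in range(0, len(signal), BLOCK_SIZE):
    cv = follower.render(signal[start:start + BLOCK_SIZE])
print(f"final envelope value: {cv:.4f}")
```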
To choose a topic for our semester project work, we were asked to develop three ideas. One of these ideas could form our master's thesis in the 4th semester, so one of them should ideally be a topic that can carry the next three semesters.
My emphasis in collecting my thoughts for these ideas was to support my interest in topics I want to learn about over the next two years. My primary interest is sound synthesis and composition. On the second point, my approach has always been performance-based; therefore, developing an instrument of some sort was a logical decision.