The introduction of digital musical instruments (DMIs) has removed the need for a physically resonating body in order to create music, leaving the practice of sound-making often decoupled from the resulting sound. The inclination towards smooth and seamless interaction in the creation of new DMIs has led to musical instruments and interfaces that require no significant transfer of energy to play. Other than structural boundaries, such systems usually lack any form of physical resistance, whereas the production of sound through traditional instruments happens precisely at the meeting of the performer’s body with the instrument’s resistance: “When the intentions of a musician meet with a body that resists them, friction between the two bodies causes sound to emerge.” Haptic controllers offer the ability to engage with digital music in a tangible way.
Background
Dynamic, ongoing relationships exist between the performer, the instrument, and the sounds produced when playing musical instruments. These exchanges depend on the sensory feedback provided by the instrument in the form of auditory, visual, and haptic feedback. Because digital interfaces based on an ergonomic HCI model are generally designed to eliminate friction altogether, the tactile experience of creating a sound is reduced. Even though digital interfaces are material tools, the feeling of pressing a button or moving a slider does not provide the performer with much physical resistance, whereas the engagement required to play an acoustic instrument provides musicians with a wider range of haptic feedback, involving both cutaneous and proprioceptive information as well as information about the quality of an occurring sound. This issue is recognized in Claude Cadoz’s concept of ergoticity, the physical exchange of energy between performer, instrument, and environment. A possible solution to these issues is the use of haptic controllers. As has been previously noted, “we are no longer dealing with the physical vibrations of strings, tubes and solid bodies as the sound source, but rather with the impalpable numerical streams of digital signal processing”. In physically realizing the immaterial, the design of the force profile is crucial because it determines the overall characteristics of the instrument.
My Conclusion
Haptic feedback of cross-modal terrains in combination with sonic feedback is a very interesting way to experience the feel of a terrain. Conveying the structure of surface imperfections would be the next step to make an impact and improve the Cross-Modal Terrains experience.
Furthermore, technology has already achieved contactless haptic feedback through ultrasonic waves: for example, the company Ultraleap with STRATOS Inspire, where sound waves are concentrated into one point. This would be an interesting approach for testing haptic feedback with modern technology and might give a more detailed response.
Their use of Max 8 for testing was a good choice, since it connects well with devices and allows monitoring of values, and the terrain is provided as a greyscale 2D image. This makes it very simple to see and detect where we are on the surface and how intense it should feel.
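As a rough illustration of this greyscale-to-haptics idea, here is a minimal sketch assuming the terrain is a 2D array of brightness values and that darker pixels mean higher resistance; the function name and the mapping direction are my own assumptions, not the paper's implementation:

```python
import numpy as np

def haptic_intensity(terrain, x, y, max_force=1.0):
    """Map a position on a greyscale terrain to a haptic force level.

    terrain: 2D array of brightness values in [0, 255]
    x, y:    normalized cursor position in [0, 1]
    Darker pixels are treated as rougher terrain, i.e. higher
    resistance (an assumption made for this sketch).
    """
    h, w = terrain.shape
    # clamp the normalized position into the image bounds
    col = min(int(x * w), w - 1)
    row = min(int(y * h), h - 1)
    brightness = terrain[row, col] / 255.0
    return max_force * (1.0 - brightness)

# a tiny 2x2 terrain: black (rough) top-left, white (smooth) bottom-right
terrain = np.array([[0, 128],
                    [128, 255]])
print(haptic_intensity(terrain, 0.0, 0.0))  # 1.0 -> full resistance
print(haptic_intensity(terrain, 0.9, 0.9))  # 0.0 -> no resistance
```

In the actual system this lookup would run per control-loop tick, with the force value streamed to the haptic controller.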
Isaac, Gabriella; Hayes, Lauren; Ingalls, Todd (2017). Cross-Modal Terrains: Navigating Sonic Space through Haptic Feedback. https://zenodo.org/record/1176163
One could summarize the paper by Christopher Ariza as ‘using a game controller as an interface for live music performances’: how it works and what benefits and limitations there are. A controller, in the paper often referred to as a ‘dual-analog gamepad’, is originally designed as a gaming peripheral/interface for consoles and computers. But some people figured out back then that all its inputs could also be interpreted by a computer as MIDI signals and subsequently used to map certain sounds or modifiers to those inputs, thus generating music by pressing the buttons on the controller. This is not even limited to one instrument or soundscape alone: since various buttons remain free on the device, some can be used to alternate between different instruments, which either have different constraints on their use or are controlled completely differently from one another. Since there is also the possibility of creating complex interaction patterns, such as two simultaneous button presses, the number of immediately available instruments vastly increases.
This whole approach of repurposing digital interfaces, transferring their various interactions into inputs for a machine, isn’t the newest invention, yet it isn’t seen everywhere. In most live musical performances these ‘input methods’ are very rare, although they could greatly enhance the audience’s perception of the artist: the artist no longer only interacts with a laptop using ‘conventional’ input methods like mouse and keyboard. As the paper rightly states, it creates the impression that the artist actually ‘plays an instrument’ and is proficient in its use.
Coming back to the paper, it mostly focuses on explaining existing interface mappings for controllers, but its main goal is to promote the use of, and experimentation with, literally ‘playing’ a controller to create new experiences in music making.
What struck me as most interesting: since the paper is now roughly ten years old, numerous improvements and advances have been made in controller technology. If someone were now to harness the various sensors, input and feedback methods of a newest-generation controller – like the PlayStation 5 DualSense controller – the possibilities would be mind-boggling.
To reiterate what this little piece of plastic and electronics can do:
16 discrete buttons
2 thumb sticks (essentially joysticks), which can also be pressed
Adaptive triggers for haptic feedback (creating various resistance experiences when pressing the triggers), which can also differentiate various strengths of trigger presses
Touchpad (also pressable, like a button) that can track up to 2 fingers very precisely and differentiate between press locations, like left and right
Vibration motors for haptic feedback (precision rumble sensations)
Acceleration sensor
Gyro sensor
LED light panel capable of displaying a lot of colours
Built in Speaker
Built in Microphone
Headphone Jack
Bluetooth connectivity (even optimized out of the box for Apple products)
So, it’s quite a list of things a new-generation controller can do. For example, I thought of changing instruments by dividing the touchpad into segments: touching different segments of the touchpad would activate different instruments. Adding to that, the current instrument selection could be represented by a corresponding colour on the LED panel, and on top of that, a successful switch to another instrument could be communicated through a short rumble of the controller, like a little shockwave, to give more haptic feedback on the change of instruments. Also, since the touchpad can detect touch/swipe inputs, an interaction like a DJ’s scratching could be emulated. There is one example where a game uses the touchpad to detect inputs for a guitar-playing minigame, in The Last of Us Part II: you choose a chord (from a radial menu of presets) via the thumb stick, and then strike individual strings, or all of them, via the touchpad to produce a sound.
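The touchpad-segment idea above can be sketched in a few lines of plain mapping logic; the segment layout, colour table, and instrument list here are purely illustrative, and this is not actual DualSense API code:

```python
# Hypothetical sketch: divide the touchpad's x-axis into equal segments,
# each selecting a different instrument. The instrument and colour lists
# are my own illustration.

INSTRUMENTS = ["Piano", "Guitar", "Drums", "Synth"]

def segment_for_touch(x, width=1920, n_segments=len(INSTRUMENTS)):
    """Return the instrument index for a touch at pixel column x.
    1920 is the DualSense touchpad's horizontal resolution."""
    x = max(0, min(x, width - 1))          # clamp to the touch surface
    return x * n_segments // width

def feedback_for_switch(index):
    """Pair the instrument switch with an LED colour and a short rumble
    burst, as suggested above (values are illustrative)."""
    colours = ["blue", "red", "green", "purple"]
    return {"instrument": INSTRUMENTS[index],
            "led": colours[index],
            "rumble_ms": 120}

print(feedback_for_switch(segment_for_touch(500)))
```

A real implementation would feed the touch coordinates from the controller driver into `segment_for_touch` and send the resulting LED/rumble commands back to the device.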
Staying on the topic of the LED panel: communicating different events or states with light, and even sound, could be used to convey the rhythm, or the haptic feedback from vibration or the adaptive triggers could be used to indicate rhythm and enable precise timing. Coming back to the various forms of haptic feedback: with the precision vibrations or rumbles, the current beat timing could be felt like a little bass drum, or, even wilder, the vibration pattern of whatever sound is currently created with the controller could make the newly made music ‘tactile’ and add an interesting layer of immersion and feedback.
Other input options take advantage of the different sensors: the gyro sensor could map movements to music, similar to a theremin, and the acceleration sensor could map events like a change in tempo, a drop, etc. Using the built-in speaker as output could also be helpful in extreme situations, perhaps just for something small like a metronome, while the controller’s headphone capability could come in handy at every opportunity.
All in all, utilizing a modern controller like the DualSense could really open up new ways to make and literally ‘play’ music.
Über die Bedeutung von Formen und Ornamenten haben wir uns in einem vorherigen Blogeintrag bereits auseinandergesetzt. Das folgende Experiment befasst sich mit der Bildung von Buchstaben, die aus dem Altkyrillischen und der Optik von Stick-Ornamenten beruht. Das Ziel war es die Anfänge der Entstehung zu dokumentieren und eine zeitgenössische Lösung zu finden diese Schrift visuell darzustellen.
Das Altkyrillische Alphabet ist im 9. oder 10. Jahrhundert im Ersten Bulgarischen Reich entstanden, um das Altkirchenslawisch zu modernisieren. Der Vorgänger, das glagolitische Alphabet, dass um 863 vom Mönch Kyrill und seinem Bruder Methodius gestaltet worden ist, wurde schnell von Kyrills Schüler in den 890er Jahren in der preslawischen Literaturschule als geeignetere Schrift für Kirchenbücher geschaffen. Die Schrift basiert auf den griechischen Unzialen, jedoch wurden die glagolitischen Buchstaben für die es im Griechischen nicht vorhandene Laute gab, beibehalten. Eine andere Hypothese besagt, dass die Schrift in den Grenzregionen der griechischen Missionierung der Slawen entstanden ist, bevor sie von einem Systematiker unter den Slawen kodifiziert und angepasst wurde. Als inspiration für dieses Experiment wurde das rumänische kyrillische Alphabet herangezogen.
Eine sehr geometrische Sans Schrift wurde als Basis für die Größe und der Form der Buchstaben herangezogen. Dadurch, dass in Ornamenten höchste Präzision gefordert ist, sollten die Buchstaben mit einem Kreuzstich gebildet werden, indem diese Formen entstehen lassen und somit Assoziation.
Nachdem einige Buchstaben bereits geformt und schon einzelne Wörter daraus gebildet wurden, habe ich die Wirkung der Schrift in verschiedenen Kompostionen auf Papier und Textil getestet.
Obwohl die Form der Schrift ganz klar an Stickereien erinnert, kann sie sehr gut als Dekorationselement für bestimmte Zwecke verwendet werden. Interessant zu beobachten ist die Zusammensetzung aus bereits einfachen Kompositionen in Kombination mit dieser Schrift.
I had a look at the paper “Creativity in Children’s Music Composition” written by Corey Ford, Nick Bryan-Kinns and Chris Nash, which was published at NIME (https://nime.pubpub.org/pub/ker5w948/release/1) in 2021. The authors conducted a study examining which interactions with Codetta – a LOGO-inspired, block-based music platform – support children’s creativity in music composition. In this context, “LOGO” refers to Papert’s LOGO philosophy of supporting children’s learning through play. Such experiential learning approaches are based on intrinsic motivation and tinkering. The authors stated that there was a lack of empirical research investigating whether a LOGO-inspired approach is conducive to creativity, which is why their aim was to get a better understanding of how children make music with such technologies.
About the study
The concrete research question of the study was “Which interactions with Codetta best support children’s ability to be creative in composition?”, with the aim of uncovering patterns of creativity within the participants’ first use of the tool. To get a better understanding of how Codetta works, I found this explanation video on YouTube: https://www.youtube.com/watch?v=b7iMPuEaPts. The study was performed with 20 primary-school children aged 6–11. Due to the COVID situation, the study was conducted in an online setting where the children had to perform two tasks: 1) composing a short piece of music and 2) answering a post-task questionnaire afterwards.
Procedure
Once the children opened Codetta, they were provided with built-in, age-appropriate instructions to get to know the basics of the tool (see Fig. 1). After completing the tutorial, the children were asked to compose a short piece of music. No other motivation was given, to keep the task open-ended. Once finished composing, the children completed a short post-task questionnaire online.
Data collection
The following data was collected: 1) log data of each child’s interactions with Codetta (and consequently their final compositions); 2) questionnaire responses; and 3) expert ratings of each composition.
To visualize the collected log data, a colour scheme with several interaction categories was developed (see Fig. 2). The logs returned from Codetta were mined and visualised using Python and Anaconda. Once the data was prepared, statistical analysis was conducted using SPSS.
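To illustrate the kind of mining behind such a visualisation, here is a minimal sketch that tallies hypothetical log events per interaction category; the event and category names are my own, not Codetta's actual log format, and each tally would form one bar of a stacked bar chart:

```python
from collections import Counter

# Hypothetical log of one child's Codetta interactions; the category
# names loosely follow the colour-scheme idea but are my own labels.
log = [
    "add_block", "raise_pitch", "raise_pitch", "edit_length",
    "play", "lower_pitch", "add_block", "play",
]

CATEGORIES = {
    "add_block": "structure",
    "raise_pitch": "note-level",
    "lower_pitch": "note-level",
    "edit_length": "note-level",
    "play": "playback",
}

# tally interactions per category -- the per-child counts that would
# be stacked into one bar of the chart
counts = Counter(CATEGORIES[event] for event in log)
print(dict(counts))  # {'structure': 2, 'note-level': 4, 'playback': 2}
```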
The post-task questionnaire consisted of 13 five-point Likert-scale statements, asking the children about their confidence in music lessons, writing music notation, using block-based programs and using computers, as well as their perceptions of Codetta as a whole.
In order to quantify the children’s creativity, six expert judges rated each composition using several scales. Each judge assessed each child’s composition independently and in a random order. Their ratings were then averaged.
Results
I don’t want to go too deep into the empirical data here, but to sum up the results: they focus on three subsections, the children’s compositions, interactions and UI perceptions.
Most children composed short pieces (mean length 11.95 seconds) with conventional melodies, such as arcing up and down in pitch. The logged interaction data was visualised as a stacked bar chart using the colour scheme mentioned before (see Fig. 3). The questionnaire results showed that the children felt they had to plan before writing their music and that they found it difficult to understand what each block was for.
Discussion
Based on the results several conclusions were drawn (shortened):
Note-level interactions (rise/lower pitch, edit note length) best support children’s ability to be creative in music composition
First-time novice users should initially be encouraged to focus on learning how to use Codetta’s notation engine (e.g. introduction workshops)
Codetta could automatically introduce blocks sequentially, based on children’s current knowledge and interaction patterns
More complex features should be introduced gradually
The UI Design could be more intuitive to avoid mistakes and confusion
Lastly, it should be mentioned that the results are limited by the small sample size and the users’ levels of experience. A longitudinal study with a larger number of participants would be needed to thoroughly investigate how interaction styles develop over time.
Own thoughts
I found it super interesting to read about the topic “creativity in children’s music composition” for several reasons: First of all, I spent a huge part of my childhood and teenage years in music schools, playing several instruments and taking music theory courses, but I never got in touch with actually composing music myself – neither in primary/high school nor in music school or in any of the music-theory courses I took. So I totally agree with the authors’ statement that composing is a neglected area of music education. Moreover, I liked the connection between sound design, interaction design and sociology, since digital music composition combines all of that. It could also be useful as inspiration for finding a master’s thesis topic.
It was a bit hard for me to understand the “Results” section, though, because I have only basic experience and knowledge of statistics and would need a deeper dive into that topic to get all the measures and values right. Yet it was nice to engage with research again after not binge-reading studies and empirical material for a while (speaking of my bachelor’s thesis).
Every musical creation starts with an idea, which can be a phrase, a sound object or a rhythmic pattern. Musicians are inspired by individual sounds, bringing compositions to life and playing around the idea by improvising and adding parts to increase complexity. In songwriting sessions of music groups, new compositions often emerge through improvisational interactions between the musicians; these are called jams.
Spire Muse is a co-creative agent that supports musical brainstorming. The following is a brief explanation of how it works.
During jam sessions, basic musical ideas emerge, and the more one subsequently interacts with an idea, the more diverse the musical form becomes. Creative interaction during improvisation is therefore very important for the emergence of further ideas. In general, a musical interaction is a strategy built up from iterative phases: improvising musicians decide whether they want to change something in their sequences, which can either be an initiative (new) change or a reaction. Reaction categories are adoption, augmentation and contrast.
Feedback during improvisation is essential; if it is positive, it can reinforce certain ideas; if it is negative, it can extinguish the spark of an idea or lead to new ideas different from the previous one.
Creativity cannot be defined in a uniform way; it is a process of multiple interactions that trigger unpredictable and undefined results. With the emergence of human-computer interaction (HCI) and artificial intelligence (AI), the perception of creativity is changing. AI can affect the creative process in two ways: through systems that support human intelligence, so-called “creativity support tools”, or through systems that generate results an unbiased observer would classify as creative. The symbiosis of human and computer-assisted idea generation is then called co-creativity.
The authors set out to design a co-creative system that realizes the concept of a virtual jam partner. The strategy behind Spire Muse is that the human and computer components interact with each other.
In human interaction, action and decision interact, so the human is often the decision maker, which a computer cannot be. To avoid a dominant side in a co-creation, interactive behaviors are analyzed and defined in four categories, classified by reactive and proactive properties:
Shadowing – the system follows the user synchronously
Mirroring – information or musical content is mirrored back in a novel way
Coupling – the system can clearly take the lead
Negotiation – attempts to achieve a goal through output modification
When switching between the three behaviors – Shadowing, Mirroring, Coupling – Negotiation occurs, either autonomously or by the user.
The Spire Muse agent was built on MASOM (Musical Agent based on Self-Organising Maps) in Max.
The agent’s learning process starts by slicing the audio data in the source folder (the corpus) and then labelling the individual audio slices with feature vectors. This results in 55 dimensions of melodic and harmonic data.
Duration, loudness, fundamental frequency and chroma are used to encode the harmonic dynamics.
The feature vectors are then mapped onto a self-organizing map (SOM), an artificial neural network that employs unsupervised learning, so that the feature vectors can be displayed in a two-dimensional topological grid. The tempo of each song is then sent from a Python script via OSC to match the original tempo of the piece. The last training part focuses on creating one sequence per song, so that repetitions and interaction dynamics can be shown.
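To make the SOM step concrete, here is a minimal self-organizing map in plain NumPy. It is a generic textbook SOM on toy data, not MASOM's actual 55-dimensional pipeline; the grid size and training schedule are arbitrary choices:

```python
import numpy as np

def train_som(data, grid_h=4, grid_w=4, epochs=300, lr=0.5, seed=0):
    """Train a tiny self-organizing map: each grid cell holds a weight
    vector; the best-matching cell and its grid neighbours are pulled
    toward each input sample."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # grid coordinates, used to compute neighbourhood distances
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    sigma0 = max(grid_h, grid_w) / 2.0
    for t in range(epochs):
        frac = t / epochs
        sigma = sigma0 * (1 - frac) + 0.5   # shrinking neighbourhood
        alpha = lr * (1 - frac)             # decaying learning rate
        x = data[rng.integers(len(data))]
        # best-matching unit: cell whose weights are closest to x
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighbourhood around the BMU on the grid
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
        weights += alpha * h * (x - weights)
    return weights

def bmu_of(weights, x):
    """Project a feature vector onto its 2D grid position."""
    dists = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(dists), dists.shape)

# two well-separated clusters of toy "feature vectors"
data = np.vstack([np.zeros((20, 3)), np.ones((20, 3))])
w = train_som(data)
print(bmu_of(w, np.zeros(3)), bmu_of(w, np.ones(3)))
```

After training, similar feature vectors land on nearby grid cells, which is what makes the two-dimensional topological display possible.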
The four influence parameters are rhythmic, spectral, melodic and harmonic.
The influences can be adjusted with sliders, so any combination of relative influences is possible.
Interactive modes
Shadowing mode: the agent is reactive here. It plays the most appropriate audio slice in the corpus for each registered onset in the input.
Mirroring mode: the agent interacts reflexively, responding with similar phrases after prolonged listening to single phrases.
Coupling mode: a song is automatically selected from the corpus with respect to two criteria: harmonic dynamics on the meso-time scale, and tempo similarity.
The program starts in Shadowing mode, which is the initial as well as the fallback mode: the program returns to it if the requirements for activating Mirroring or Coupling mode are not met.
In addition to the automated behaviors of the agent, there are also buttons with the functions Back, Pause/Continue, Change and Thumbs Up.
Thumbs Up indicates to the system that the current interaction is good and maintains the current state for the next 30 seconds. All buttons are operated with foot pedals.
“Throwback” is supportive of call-and-response interactions, but can become unpredictable in some SOM regions. These interfaces allow manipulation of behaviors and provide sufficient room for automated operations.
→ Ultimately, the authors believe that the most promising feature of Spire Muse is not the agent’s musical performance per se, but rather its ability to get users to explore ideas with a sense of shared ownership and exploration.
Spire Muse is expected to incorporate more learning algorithms in the future to reduce unpredictability through repeated use. By observing multiple sessions, the agent should be able to build a profile that recognizes behavior and thereby play different responses depending on the situation.
Personal Comment:
Encouraging creativity is always a great approach; just by engaging with this new interface, artists can already break out of their usual environment and be inspired. Such systems can be very useful in artistic development, especially when there are blockages or a lack of creativity. Nevertheless, as an artist, one should not rely only on digital systems, but should also remain able to work with, and be inspired by, analog means.
I would find a stronger integration of human-human interaction very exciting, in order to bring more emotion into the generated ideas. Perhaps implementing a jam-with-friends feature would be a good approach, bringing several artists in different places together with computer support and creating a kind of community.
Source: Spire Muse: A Virtual Musical Partner for Creative Brainstorming – by Notto J. W. Thelle and Philippe Pasquier (https://nime.pubpub.org/pub/wcj8sjee/release/1)
This blog post is devoted to the topic of “superimposition”. The basis for the experiment is layering two exposures, i.e. two photos, on top of each other. The Fujifilm X-T3 mirrorless camera has a double-exposure mode: two photos are shot in succession and superimposed. This step is done directly in the camera, so no software such as Photoshop is needed. The double-exposed photo is then saved as a JPG. Another setting the Fujifilm brand is known for is the film simulation profile. Various presets for colour, brightness, colour temperature, etc. can likewise be set in the camera. When taking a photo, two file formats are saved: the RAW (film profile and colour settings are not applied) and the JPG (film profile and colour settings are applied). Since I use the “Classic Chrome” film profile on the X-T3, all JPGs are essentially already slightly edited, because the “Classic Chrome” profile is applied to them. It follows that a double-exposed photo likewise consists of two edited photos. The “Classic Chrome” profile stands for bright shadows, soft transitions and edges, and gentle contrasts.
Approach
For the digital double exposures, patterns found in nature or in public space were used. The first image consists of grass and cobblestones. The second combines a photo of a tree with asphalt, which already resembles a grain texture. The third consists of a photo of small tiles and a building.
After taking the double exposures, the JPGs were further edited in Photoshop. Here, too, the theme of overlapping was taken up. A colour layer whose hue matched the image’s average colour was placed over the photo. Different blend modes were then applied to this colour layer:
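The average-colour overlay described here can be sketched with simple pixel math. This assumes float RGB images in [0, 1] and uses the standard formulas for the “multiply” and “screen” blend modes; it is a generic illustration, not Photoshop's internals:

```python
import numpy as np

def average_colour_overlay(img, mode="multiply"):
    """Overlay a solid layer of the image's own average colour.

    img: float RGB array of shape (h, w, 3) with values in [0, 1].
    """
    avg = img.reshape(-1, 3).mean(axis=0)          # average colour
    layer = np.broadcast_to(avg, img.shape)
    if mode == "multiply":
        return img * layer                         # darkens the image
    if mode == "screen":
        return 1 - (1 - img) * (1 - layer)         # lightens the image
    raise ValueError(mode)

# a 1x2 "photo": one black and one white pixel -> average is mid-grey
img = np.array([[[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]])
print(average_colour_overlay(img, "multiply"))
```

Running the same image through several modes reproduces the experiment digitally: identical input, identical steps, yet visibly different results per blend mode.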
This experimental approach produced unpredictable photos, all created through the same steps. The double-exposure technique was carried out digitally here with the mirrorless camera, but it originates in analog photography. Analog double exposure will be the topic of the next blog posts. After further experiments, conclusions can be drawn and the results compared with one another.
Touch interfaces for musical applications were first introduced in the early 90s by Schneiderman, initially for controlling vocal synthesis. Since then, the Monome Grid has developed into the standard grid interface for musical equipment over the last 15 years.
The article I read explains their effort to provide another solution: a grid device with capacitive touch technology that uses swipe gestures as a menu system, expanding the interface capabilities of today’s popular Launchpads.
One reason the Launchpad may have kept its simplicity and popularity is that most instruments rely on haptic feedback, while almost none rely on visual feedback.
These two grid controllers, the Tenori-On and the Monome Grid, pushed their interfaces into the music industry, where they have remained the same for 15 years now.
In more recent times, the layout has been adopted by the more popular and professional standards of the different Launchpads.
The great thing about the interface, and allegedly why it has been adopted, is its generic layout, which allows any artist to customize it and play however they want.
Added functionality to the grid layout comes from the buttons around the “core”, allowing the Launchpad to control different hardware interfaces with specific software.
Their project
Touchgrid was developed to solve the problems of limited resolution and restricted space on grid interfaces. It keeps the generic button layout of the Launchpad but uses capacitive touch to extend its possibilities.
1st iteration
Their first solution was a capacitive touch surface consisting of a 16×8 grid of touch areas. Using time-based data, they recognized different interactions such as swipes and bezel interactions.
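To illustrate how time-based touch data can be turned into gestures, here is a toy classifier over (time, x, y) samples on such a grid; the thresholds and category names are my own assumptions, not the authors' detection logic:

```python
def classify_gesture(samples, min_dx=3, max_dy=1):
    """Classify a sequence of (t, x, y) touch samples on a 16x8 grid.

    min_dx: minimum horizontal travel (in cells) to count as a swipe
    max_dy: maximum vertical drift tolerated for a horizontal swipe
    """
    (_, x0, y0), (_, x1, y1) = samples[0], samples[-1]
    dx, dy = x1 - x0, y1 - y0
    if x0 == 0 and dx > 0:
        # touch began on the left edge and moved inward
        return "drag-from-left-bezel"
    if abs(dx) >= min_dx and abs(dy) <= max_dy:
        return "horizontal-swipe"
    return "tap-or-hold"

# a right-to-left swipe across one row of the grid
swipe = [(0.00, 12, 4), (0.05, 9, 4), (0.10, 5, 4)]
print(classify_gesture(swipe))  # horizontal-swipe
```

A real detector would also use the timestamps (e.g. to reject slow drags), which is where the processing cost mentioned next comes from.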
Problem: due to the large processing requirements, the system ran at a lower maximum sample rate than preferred.
2nd prototype
For their next prototype they used an Adafruit NeoTrellis M4, a ready-made DIY kit with LEDs and silicone buttons, to expand their Launchpad.
Choosing from a wide range of possible interactions, they picked “drag from off-screen” and “horizontal swipe”, as both are well known from the smart devices the general public uses today.
With a more restricted but better hardware layout, they managed to fix the sampling-rate problem, and performance improved.
In an innovative solution, this group made interactions replace buttons from the Launchpad, allowing more space for music and less for navigation. The buttons allocated to menu-changing actions on the Launchpads were replaced with the swiping motions we learned using our phones.
Their new menu layout is arranged spatially to reduce the learning curve and the mental workload of learning the device. Using the swipe interactions mentioned earlier, they create an intuitive mapping, as shown below.
In the end, they gathered user insights from 26 people with prior knowledge of similar instruments. Answers to their survey revealed that the touch patterns are recognized and the mapping works. There are, however, worries that the suggested interaction system might add complexity and interfere with an already working product.
Conclusion
As the authors say themselves, they argue for expanding the capabilities of tangible devices instead of making touch screens more tangible: taking what has been learned from touch-screen interactions and implementing it on grids and tangible devices for a haptic screen feel.
As an interaction designer, it’s refreshing to see a well-developed tangible interaction system stand its ground against more “modern” touch-screen ones. In a very human way, this project combines what we already know and have adopted with another interaction dear to us from a different system.
Thrillers and, above all, horror films rely on several aspects of human psychology that can instil feelings of fear and disgust in the viewer. The combination of fictional states of fear and the body’s own production of adrenaline and endorphins creates the so-called pleasurable fear (“Angstlust”), which also feeds the fascination with horror. When done well, these films are a perfect interplay of psychology, dramaturgy and entertainment. Even the most brutal subgenres, such as slasher and splatter, employ dramaturgical devices that use knowledge of human psychology to let the audience experience the horror.
My original fascination with the works of Stephen King and Alfred Hitchcock has been reinforced and broadened by my research: John Carpenter’s “Halloween” series let me feel horror in a whole new way. While watching, the thrill made me flinch more often than I would have liked. From this I conclude that even when you engage intensively with the subject of horror, you can still be surprised. And that is something I greatly appreciate about this genre.
I also find it extraordinarily interesting how many subgenres exist within horror. Those I have described so far are only a small part. Below this blog post I am sharing screenshots from the table of contents of the book “Horror Films by Subgenre: A Viewer’s Guide” by Fernandez-Vander Kaay and Kathleen Vander Kaay.
Furthermore, I found it especially interesting to learn the differences between showing more brutal films (rated 16/18) on TV versus in the cinema, and that not only the FSK tries to ensure the protection of minors but the FSF plays a role as well. The “tricks” private broadcasters use to be able to air such films anyway also astonished me. Since I rarely watch TV myself, the censorship of films had not really caught my attention before.
I would like to dig deeper into how horror comes about and feels from the actors’ perspective, what film roles can do to them psychologically, a comparison of dramaturgical devices in horror versus family films, and how the marketing of the Stephen King “empire” works.
To follow up on my last article, I conducted a small survey on Instagram to find out whether we actually know what gets thrown away where, and I deliberately chose two tricky examples. Important for the interpretation and "evaluation" of the experiment: this is not an actual scientific result and must therefore be treated purely as an "experiment". Furthermore, as stated on the screenshot, all participants are personal acquaintances and live in Vienna. For the "correct" answers I therefore refer to the so-called "Mist-ABC" of the MA48.
Question 1: Where do you throw pizza boxes…
the residual waste or the waste paper bin?
The survey result, with 15 votes to 12 in favor of residual waste, is actually a good representation of the actual regulation. According to the MA48, boxes with light grease stains can be disposed of in the waste paper without any problem; however, if there are food residues in or on the box, it must go into the residual waste.1 The question was thus deliberately phrased in a "tricky" or "mean" way, yet only a single person wrote the correct answer as a follow-up comment.
Question 2: Eggshells belong in the…
residual waste or organic waste?
Here a clear tendency is already noticeable, with 16 votes to 9 in favor of organic waste, which I again find very interesting, since it naturally makes me wonder whether these people always use the same bin, or whether they also differentiate under certain circumstances, as with the pizza boxes. The Mist-ABC, however, makes it clear: contrary to the answers shown here, eggshells should go into the residual waste. I then asked myself why, since Vienna is in fact the only federal state in Austria that points to disposal in residual waste.2 Unfortunately, I could not find any reliable sources on the reasons, since in all other federal states, and also in private households, eggshells are disposed of via the organic waste bin or the compost. Some websites claim that eggshells pose a salmonella risk, but this is not really verifiable: a risk would only exist if the concentration were massively elevated, which can hardly be the case with the normal disposal quantities of private households. Many also mention the lime content of the shells, which in large quantities can impair the quality of the compost, but no real danger arises from this either. Eggshells can be disposed of in the organic waste (it is by no means forbidden), but official recommendations point to disposal via the residual waste.
Conclusion
I am of course aware that these questions cannot replace a "real" survey or yield results that are actually representative. Nevertheless, I find the numbers very interesting and could imagine conducting further (perhaps larger) surveys that, while still not representative of the population, might be representative of a specific local area, for example an apartment building, a housing complex deliberately compared with one in the countryside, a public square, a kindergarten, etc. I would also find it exciting to address further controversial types of waste, since I chose these two examples deliberately quickly and rather spontaneously, because I did not know the answers myself. During my research for the evaluation I then came across many things that surprised me and that made me curious "who actually knows this for sure". For example, for correct disposal the lid of a yogurt cup must be separated from the cup, yet an envelope with a plastic window can remain in the waste paper bin and be recycled without any problem.
1: Stadt Wien: Das Mist-ABC – Müll richtig entsorgen. Online at: https://www.wien.gv.at/umwelt/ma48/beratung/muelltrennung/mistabc.html (accessed 09.06.2022). Vienna.
Methods and findings of a multi-day performance research lab that evaluated the efficacy of a novel nerve sensor in the context of a physically inclusive performance practice.
Making musical performance more accessible is something that many artists, such as Atau Tanaka and Laetitia Sonami, as well as scientists, have been aiming at for a while. Many efforts therefore go in the "nerve-sensor" direction. With this kind of approach, the sensor is more likely to detect signals from nerve firing than from skeletal muscle movement, so performers with physical conditions have more control over the sensors.
Even though the variety of gestures wasn't as broad as that offered by other gestural instruments, the sensor communicated gestural effort better, as demonstrated in explorations of different sound practices such as free improvisation and the development of a piece called Frustentions.
Thanks to electromyography, a technique that measures the electrical activity of skeletal muscles and nerves through non-invasive sensors placed directly on the skin, more and more people with muscle atrophy or compromised volitional control of skeletal muscles are gaining access to technologies, for example in gaming. But, as often happens, the broader the accessibility, the more potentially harmful lenses can come with it. It is therefore important to keep in mind that every individual is unique and to be aware of the invisible boundaries that technology can set around the very people it is supposed to serve.
The more people with different physical and mental abilities get involved in these sound-making explorations, the better and more openly accessible the design of the interfaces will become.
This specific exploration investigated four parameters: sensor position, gesture types, minimal-movement gestures, and various sound-mapping parameters. The lab was structured into several sessions, each concluding with a performative exploration, and ended with a structured public showcase and discussion. Other research lines, such as minimal-movement "neural" gestures, were also investigated, but not much data could be gathered. The outcome of the sessions was the previously mentioned composed piece, Frustentions, a fixed-media composition developed during the workshop.
Three groups of gestures were identified during the sessions in order to record the needed data: effort gestures, which were particularly suited to audio effects that respond well to swelling, such as distortion, delay, or reverb; adjustment gestures, which often required full focus and were not necessarily accessible at all times during a performance; and trigger gestures.
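As a rough illustration of how such gesture groups might be wired to sound control, the following Python sketch classifies a normalized sensor envelope into gesture types and maps them to effect parameters. This is not the lab's actual software; all function names, thresholds, and parameter names are hypothetical, assuming a 0.0–1.0 signal envelope where short spikes act as triggers and sustained levels act as effort.

```python
# Hypothetical sketch: mapping gesture groups to sound-control actions.
# Thresholds and parameter names are illustrative assumptions, not the
# lab's real configuration.

def classify_gesture(envelope, spike_threshold=0.8, effort_threshold=0.2):
    """Classify one normalized envelope sample (0.0 to 1.0)."""
    if envelope >= spike_threshold:
        return "trigger"   # short spike: fire a one-shot event
    if envelope >= effort_threshold:
        return "effort"    # sustained level: swell an effect
    return "idle"          # below the effort floor: do nothing

def map_to_sound(gesture, envelope):
    """Map a classified gesture to a (parameter, value) pair."""
    if gesture == "effort":
        # Effort gestures suit "swelling" effects such as reverb, delay,
        # or distortion: scale the envelope into a wet/dry mix amount.
        return ("reverb_mix", round(envelope, 2))
    if gesture == "trigger":
        return ("sample_trigger", 1.0)
    return ("none", 0.0)

for level in (0.1, 0.5, 0.9):
    print(level, map_to_sound(classify_gesture(level), level))
```

Adjustment gestures are omitted here, since in the lab they required dedicated focus rather than continuous mapping; in a real system they would likely set configuration values between phrases rather than run inside the audio control loop.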
The nerve sensor was compared with other interfaces such as the MiMu glove, the Gestrument, and the Soundbeam. Even though these instruments recognized a larger number of gestures with better accuracy, they were more challenging to use with limited fine-motor capabilities. In addition, the wearable sensor gave the performer greater opportunity to connect visually with ensemble members or the audience, as there was no immediate need to view or interact directly with a screen.
Conclusions
Research aimed at making musical performance accessible to everyone has multiple benefits, clearly on a physical level, but above all on a neural and psychological level. It is surprising how many activities associated with leisure are out of reach for many people, simply because their physical condition does not meet the standards for which those activities are designed. The possibility for all these people to access enjoyable activities represents a clear increase in quality of life for them and for the people around them.
Nerve sensors are just one example, and thanks to this exploratory initiative we can get to know them and compare data with other instruments on the market. In more advanced stages of research, I would like to imagine these interfaces also being used medically: to alleviate the effects of some diseases, improve physical condition, and even reduce motor damage originating in the brain by promoting nerve and muscle movement. Music is obviously a means of enjoyment, but together with science it can be a means of healing.