Musical grid interfaces have been used for around 15 years as a method of producing and performing music. The authors of this article set out to adapt the grid interface and extend it with another layer of interaction: touch.
Touch interactions can be divided into three different groups:
Time-based gestures (bezel interactions, swiping, etc.)
Static gestures (hand postures, etc.)
Expanded input vocabulary (e.g., using finger orientation relative to the device surface)
During their experiments, they mainly focused on time-based gestures and how these can be implemented in grid interfaces.
First Prototype
Their first prototype was built from a 16×8 grid PCB with 128 touch areas instead of buttons. This interface could record hand movements at a low resolution in order to detect static and time-based gestures. However, it had problems detecting fast hand movements, and these could not be solved without a major change to the hardware.
TouchGrid
For their second prototype, they used an Adafruit NeoTrellis M4, an 8×4 grid of LED buttons that can give RGB feedback.
They managed to incorporate two time-based interactions: dragging from off-screen (to access infrequently used features like menus) and horizontal swiping (to switch between linearly arranged content). To help users understand the different relationships and features without overwhelming them, they also incorporated animations.
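To make this more concrete, here is a minimal Python sketch of how a horizontal swipe could be recognized from a stream of grid touch events. It is my own illustration, not the authors’ implementation; the event format and all thresholds are assumptions:

```python
def detect_horizontal_swipe(touches, min_cells=3, max_duration=0.5):
    """Classify a sequence of grid touches as a horizontal swipe.

    touches: list of (col, row, timestamp_s) tuples, oldest first,
    e.g. the cells a finger crossed on the 8x4 NeoTrellis grid.
    Returns 'left', 'right', or None. All thresholds are illustrative.
    """
    if len(touches) < min_cells:
        return None
    col0, row0, t0 = touches[0]
    col1, row1, t1 = touches[-1]
    if t1 - t0 > max_duration:   # too slow to count as a time-based gesture
        return None
    if row0 != row1:             # a horizontal swipe stays on one row
        return None
    if col1 - col0 >= min_cells - 1:
        return 'right'
    if col0 - col1 >= min_cells - 1:
        return 'left'
    return None

# e.g. three cells crossed left-to-right in 0.2 s on row 1:
print(detect_horizontal_swipe([(2, 1, 0.0), (3, 1, 0.1), (4, 1, 0.2)]))
# -> 'right'
```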
Take a look at the video to get a better understanding of the functionalities:
Evaluation
To evaluate their concept, they ran an online survey with 26 participants, to whom they showed the video above. Most participants stated that they were already familiar with touch interactions and could imagine using this interface. They even came up with a few more ideas for touch interactions, such as zooming into a sequence of data. When asked about their concerns, they said, for example, that the interface might become too complex, and they feared malfunctions and interference with the existing button interactions.
Conclusion
With their concept, the authors took a different approach than many others: instead of aiming to make touchscreens more tangible, they took already familiar touch interactions and combined them with tangible grid interfaces. With their TouchGrid, they try to take the best of both worlds. As of now, they are still in the concept phase, focusing on the technical proof of concept, and an expert group is helping them evaluate the concept. For the future, they hope to work further on the “[…] development of touch resolution, with which more interactions can be detected and thus more expressive possibilities for manipulating and interacting with sounds in real time are available. Furthermore, combinations of touch interaction with simultaneous button presses are conceivable, opening up currently unconsidered interaction possibilities for future grid applications.”
There are more and more forms of AI in different fields: from assistants on websites, to online bots (Cleverbot), to the famous voice assistants on phones (Alexa, Siri).
I have always found these interesting, not only for the amount of information they contain, but also for discovering the funniest answers (like the ones in the images below). That’s why I find this paper so interesting: applying all this AI knowledge to the world of music can be complex and, at the same time, super interesting.
This paper presents COSMIC (a COnverSational Interface for Human-AI MusIc Co-Creation). This bot not only responds appropriately to the user’s questions and comments, but is also capable of generating a melody from what the user asks for. In the following video you can see an example (referenced in the paper itself).
The complexity of these projects is always in the back of my mind. Seeing how devices are capable of reacting like humans seems to me a great advance that can, at the same time, be a bit alarming.
Still, this case opens up a lot of new opportunities. Using this same system, a new method of learning could even be created. After all, this bot simply edits parameters of a melody (speed, pitch…) to resemble different emotions. One could therefore learn how different emotions tend to imply different sounds, speeds, and many other details.
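As a toy illustration of that idea, here is a minimal Python sketch that edits two melody parameters, tempo and pitch, according to a requested emotion. The emotion-to-parameter table is my own invention, not taken from the COSMIC paper:

```python
# Hypothetical emotion -> (tempo factor, pitch shift in semitones).
# These values are my own invention, not from the COSMIC paper.
EMOTION_PARAMS = {
    "happy": (1.2, +2),
    "sad":   (0.8, -3),
    "tense": (1.4, +1),
    "calm":  (0.9,  0),
}

def apply_emotion(notes, bpm, emotion):
    """Transpose a melody and scale its tempo to suggest an emotion.

    notes: list of MIDI note numbers; bpm: original tempo.
    Returns (new_notes, new_bpm).
    """
    tempo_factor, pitch_shift = EMOTION_PARAMS[emotion]
    return [n + pitch_shift for n in notes], bpm * tempo_factor

# e.g. make a short C-major phrase sound "sad": slower and lower.
print(apply_emotion([60, 64, 67, 72], 120, "sad"))
# -> ([57, 61, 64, 69], 96.0)
```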
Using text in designs for 6-8 year olds can be tricky, especially because you generally try to avoid showing a lot of information at first. Even so, it is important that the little text that is shown be very understandable and pleasant.
When children learn to read or even learn what letters are, they start by recognising each character one by one. This is a very slow process and can be very boring and frustrating.
This is why, in children’s books, the typeface usually has a warm and friendly look, with simple letterforms. The aperture of the letters should be rounded and open, not angular or rectangular.
To facilitate legibility, it is not only necessary to use adapted language, but also to be aware that condensed typefaces, italics, or the exclusive use of capital letters can be a problem. All these details make typefaces complex and difficult to understand for people who are still in the learning phase.
Apart from avoiding decorative or complex typefaces (realistic letterforms should be adopted), there are other properties of typefaces that should be taken into account: line spacing, size, x-height, and the single-story “a” and “g”.
To easily understand these details, here is an image from Material Design, a site that contains information on all kinds of design elements. In this image you can clearly see the different parts of a typeface.
Firstly, it is recommended to use typefaces at a size between 14pt and 24pt (depending on the age). Related to this, the line spacing is recommended to be between 4pt and 6pt larger than the type size.
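As a tiny worked example of these numbers, here is a hypothetical Python helper that picks a type size in the recommended 14pt–24pt range based on the reader’s age and derives the line spacing from it (the exact age brackets are my own assumption):

```python
def childrens_text_style(age: int) -> dict:
    """Suggest type size and line spacing for young readers.

    Follows the guidelines above: 14pt-24pt type depending on age,
    with line spacing 4pt-6pt larger than the type size.
    The age brackets themselves are illustrative assumptions.
    """
    if age <= 6:
        size = 24       # beginning readers get the largest type
    elif age <= 8:
        size = 18
    else:
        size = 14       # more fluent readers can handle smaller type
    return {"font_size_pt": size, "line_spacing_pt": size + 5}

print(childrens_text_style(7))
# -> {'font_size_pt': 18, 'line_spacing_pt': 23}
```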
Regarding the x-height, it is important to know that typefaces with larger x-heights are usually easier to read than those with short x-heights, especially for children.
Not only that: the x-height is also very important when creating typeface pairs. If the x-heights are similar, the pairing will be more harmonious. To better illustrate this concept, Ricardo Magalhães gives the typefaces Gill Sans and Fira Sans as an example in his article.
Although both appear to be the same size judging by their first (capital) letters, you can see that the x-height (marked by the red line) of the second typeface is larger than that of the first, so the harmony might not be good.
Finally, for very young readers, texts should use typefaces that have the single-story “a” and “g” (also called children’s characters), as these are the lowercase forms that pre-school and school-age children learn to write. This concept refers to the way the two letters are written.
Double-story letters can be reminiscent of older typefaces, while single-story letters look more modern and simplified. For this reason they are more suitable for a child audience, as they are undecorated, simple and straightforward.
International Conference on New Interfaces for Musical Expression (NIME): The paper “Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance” caught my attention because it explores the body as a musical interface. Since I am working on a face-tracking project that uses face gestures as a musical interface, the paper is of great relevance for my current work.
The paper is about the multi-user instrument “Vrengt”, which was developed for music-dance performance and in which dancer and musician interact co-creatively and co-dependently with their bodies and machines. “Vrengt” is based on the idea of enabling a partnership between a dancer and a musician by offering an instrument for interactive co-performance. The guiding question is to what extent the dancer can adopt musical intentions, and whether the musician can give up control of performing while still playing together. The focus is on exploring the boundaries between standstill vs. motion and silence vs. sound. In the process, sonification was used as a tool for exploring a dancer’s bodily expressions, with a focus on sonic microinteraction.

To capture the dancer’s muscle activity during a performance, two Myo gesture control armbands are placed on the dancer’s left arm and right leg. Moreover, the dancer’s breathing sounds are captured with a wireless headset microphone. With the aim of creating a body-machine instrument in which the dancer interacts with her/his body and the musician with a set of physical controllers, the project members started by capturing the dancer’s muscle signals and breathing. In this context, EMG plays a big role: “Electromyogram (EMG) is a complex signal that represents the electrical currents generated during neuromuscular activities. It is able to report little or non-visible inputs (intentions), which may not always result in overt body movements. EMG is therefore highly relevant for exploring involuntary micro motion.”
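To give an idea of what working with such a signal can involve, here is a minimal Python sketch of a moving-RMS envelope, a common first step before mapping muscle activity to a sound parameter. This is my own illustration, not the project’s actual pipeline; only the 200 Hz rate matches the Myo’s EMG stream.

```python
import numpy as np

def emg_envelope(signal, fs=200, window_ms=100):
    """Moving-RMS envelope of a raw EMG signal.

    fs=200 matches the Myo armband's EMG sample rate; the window
    length and the mapping to sound are illustrative assumptions.
    """
    n = max(1, int(fs * window_ms / 1000))
    squared = np.asarray(signal, dtype=float) ** 2
    # Pad at the front so the output has one value per input sample.
    padded = np.pad(squared, (n - 1, 0), mode="edge")
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(padded, kernel, mode="valid"))

# Example: turn the envelope into a 0..1 amplitude control signal.
raw = np.random.randn(1000)   # stand-in for a real EMG stream
env = emg_envelope(raw)
amplitude = env / env.max()
```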
The instrument offers great freedom in collectively exploring sonic interactions, and the outcome/performance is structured in three parts:
– Breath (embodied sounds of the dancer, modulated and controlled by the musician)
– Standstill (even though the dancer is barely moving, the audience can hear the dancer’s neural commands causing muscle contraction)
– Musicking (active process of music-making)
To create sound objects that approximate responsive physical behavior and are appropriate for continuous physical interactions, the Sound Design Toolkit (SDT) in Max was used. The chosen sounds spark the imagination, inviting associations between body movements and everyday sounds. The following figure displays the sonic imagery.
What I found especially interesting about the paper was the inclusion of the subjective evaluations of the dancer and the musician in the discussion. From the musician’s point of view, it requires stepping out of one’s comfort zone: the familiar instrumental circumstances are exchanged for the athletic and artistic environment of a dancer. For the musician, it is important to understand the dancer’s feelings and to develop a common language. Even though the dancer is in charge of the main gestural input, the musician decides on the sound objects, scaling, and mix levels. For a dancer, performing with interactive sonification is very different from dancing to music. The dancer describes listening as the main basis for decision making, while physical play and exploration happen intuitively while moving along. The dancer describes her experience as “not knowing where to, and how to, still with a clear sense of direction”; the focus shifts from the body to the sound.
In my view, this project offers interesting insights on developing a new way of communicating and creating art through a new type of body language and a new physical language. I think it has great potential to be part of art installations in an experimental context. It offers a great opportunity to open new ways of feeling one’s own body and hearing the consequences of one’s moves. “Vrengt” enables an individual music-dance performance as well as a creative collaboration between dancer and musician. As was clear from the text, through the shared control, musician and dancer both feel like “owners” of the final outcome, generating a feeling of being part of something bigger. For me, it was inspiring that the project turned the usual music-dance performance upside down: the dancer does not move to a given sound but creates the sound through movement. Nevertheless, it would be interesting to see the experiment with a more even bodily contribution from dancer and musician, because in the described setup, compared to the full-body experience of the dancer, the musician only uses his hands to operate the computer.
If you are curious what “Vrengt” looks like in action, you can watch the following video of a live performance I stumbled across while researching the topic a bit further: https://www.youtube.com/watch?v=hpECGAkaBp0
REFERENCE
Cagri Erdem, Katja Henriksen Schia, Alexander Refsum Jensenius. Vrengt: A Shared Body-Machine Instrument for Music-Dance Performance. https://doi.org/10.48550/arXiv.2010.03779
This project aims to improve physical learning, in this case Tai-Chi, by implementing audio feedback alongside visual feedback. As you do your Tai-Chi exercises, music composed of bell, chime, and wind sounds plays in the background, and two parameters are measured: tempo and the alignment of your joints compared to a reference video. If you drift from the reference in either parameter, you are given an audio cue that you are getting off course: when drifting in alignment, a low-pass filter is applied to the soundscape, cutting off some of the ethereal richness of the music, and if you drift in tempo, the tempo of the music changes.

I think this sort of feedback is very appropriate for physical exercises, considering that you often have to use your vision for balance or coordination. The way they implemented the feedback in such an unobtrusive, non-annoying way fits the meditative act of Tai-Chi very well, but I think it is a great way to approach any sonic project. They did not mention it in the paper, but I guess you correct your posture by looking at a visual. However, I wonder if it would be possible to eliminate the visual component by adding more parameters. For an activity like Tai-Chi this might not be feasible because of all the factors at play, but for a simpler activity, for example billiards, it might be possible.
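To make the mapping more tangible, here is a small Python sketch of how the two measured drift parameters could drive the audio feedback. The value ranges are my own assumptions, not numbers from the paper:

```python
def feedback_params(alignment_error, tempo_ratio,
                    max_error=1.0, base_cutoff=18000.0, min_cutoff=500.0):
    """Map performance drift to the two audio cues described above.

    alignment_error: 0.0 (matches the reference pose) up to max_error,
    e.g. a normalized joint-position distance to the reference video.
    tempo_ratio: performer tempo divided by reference tempo.
    Returns (lowpass_cutoff_hz, playback_tempo_factor).
    All numeric ranges are illustrative assumptions.
    """
    drift = min(alignment_error / max_error, 1.0)
    # More misalignment -> lower cutoff -> the soundscape loses its
    # ethereal high end, signalling that posture is off.
    cutoff = base_cutoff - drift * (base_cutoff - min_cutoff)
    # The music simply follows the performer's tempo drift.
    return cutoff, tempo_ratio

# e.g. slightly misaligned and slightly too fast:
print(feedback_params(0.3, 1.1))
# -> (12750.0, 1.1)
```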
Radiology is the discipline responsible for deciphering the information contained in images of our body, in order to diagnose, or to inform other specialists, and even the patients themselves, of how their pathologies are developing.
Radiologists are exposed to a workload of approximately 200 cases on a normal working day. 200 cases equal 200 people, with different illnesses and different stories. Under this workload, the person’s story is normally relegated to the background, but we all know that human closeness in a context like this is a necessary asset, and it should always remain in the foreground.
There are different software tools, as well as imaging standards, that radiologists use to do their work. Among them is DICOM (Digital Imaging and Communications in Medicine), the standard for communicating image-related information. Regarding software, there is PACSonWEB, a portal where hospital specialists, doctors in private practice, and patients themselves can easily access their image repositories, avoiding the bureaucracy and long waits involved in transferring images from one source to another.
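To get a feel for what DICOM looks like from a developer’s point of view, here is a minimal Python sketch using the open-source pydicom library to read one image and its header (the file path is hypothetical, and which header fields are present depends on the individual file):

```python
# Requires: pip install pydicom numpy
import pydicom

# Read one DICOM file and inspect a few standard header fields.
ds = pydicom.dcmread("studies/chest/image_0001.dcm")  # hypothetical path
print(ds.Modality)                     # e.g. 'CR', 'CT', 'MR'
print(ds.get("StudyDescription", ""))  # free-text study description
pixels = ds.pixel_array                # the image itself, as a NumPy array
print(pixels.shape, pixels.dtype)
```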
Companies from all over the world are dedicated to developing and improving these systems, trying to reduce as much as possible the time specialists spend examining each image, so that they can dedicate that time to their own professional well-being or to dealing directly with patients, securing that necessary human closeness for longer.
How can design help in this specific specialty? Through interfaces and the constant study of professionals’ behavior in their work context. The interfaces that doctors interact with daily should be clear and accessible; the most used tools should be quickly identifiable; and, above all, it should be possible to combine series of images in whatever way is necessary to obtain a 360-degree view of the pathology being analyzed. It must also be taken into account that these people spend the day in a dark room; therefore, the interfaces must be designed so as not to overexpose the professionals’ eyes. Likewise, it is at that very moment of image analysis in the dark room that the diagnosis is made, not afterward. Typing would be a waste of time, so the design and development of dictation tools and good speech recognition are highly necessary.
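On the dictation side, it is worth noting how little code an off-the-shelf speech recognition prototype needs nowadays. The sketch below uses the open-source SpeechRecognition library for Python; it is a generic example, not the software actually used in radiology departments:

```python
# Requires: pip install SpeechRecognition pyaudio
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to the room
    print("Dictate your finding...")
    audio = recognizer.listen(source)

# Send the recording to Google's free web API for transcription.
try:
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the dictation.")
```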
To sum up, continuous interaction with radiology professionals to understand their needs and how they work, the design and conception of interfaces whose navigation is totally intuitive, and the reinforcement of tools that facilitate voice interaction within the software are the steps that UX and UI design must urgently address in order to improve the quality of life of radiologists in their work environment.
In the course of my visit to Spain during the Easter holidays, I had the opportunity to have a short interview with one of the heads of the cardiology department of the Salamanca general hospital.
After listening to what the doctor told me, three things were clear to me:
The time spent on correcting minor technical errors or sharing information between specialists should be reduced as much as possible, so that it can be used for dealing directly with patients.
Advances in technology are very effective, but we cannot forget that the main customers of health systems are elderly people, and this will remain the tendency at least in the near future.
Pedagogy is key to making patients stop seeing the hospital as a hostile environment. Time and tools are needed to provide this inclusion prior to treatment and, if possible, the availability of real people for those who do not feel comfortable with virtual assistants.
Here is the full interview.
*This interview was recorded in Spanish and afterward translated and transcribed into this document.
Me: Hello! Thank you for taking some time for me, I know you probably have a very packed day.
C: Yes, indeed. But no problem at all, thank you for coming.
Me: My pleasure. Just a little information before we jump into the interview itself: This interview is for educational use. I want my master’s thesis to be related to healthcare, more specifically speaking I want to find out what are the design challenges healthcare is facing right now. Since healthcare is just a big umbrella that brings together many different disciplines, I have decided to delve into radiology. I think it is a field where many different types of software and hardware are used, and also there is an ongoing relationship with other specialists and patients if I am not mistaken. All of this makes the field very attractive for a designer, and that’s why I’m here.
The information will be shared on a blog that students and teachers from my department can access, and it will also be used to shape the final version of my master’s thesis. Only if you consent will I use some of the comments made during this session as quotes within the thesis, and only then, using the proper form of citation. And here ends the bureaucracy.
C: Hahaha! No problem, of course you can use the information for your master’s thesis, and I’ll be happy to be quoted in it. So go ahead!
Me: Thank you so much. Let’s start then.
Me: How long have you been working here?
C: 15 years already!
Me: That is enough time to gather many different experiences. Tell me a bit about your daily routine. How many patients do you see on a normal day?
C: 15 to 30. Depending on the pathologies. Sometimes I meet on the same day with patients who are going through similar pathologies because it is easier for me to do a bit of pedagogy with them and also to easily access the repositories. There is a huge archive where all heart diseases are collected, and they are labeled in alphabetical order by the name of the pathology. Therefore, I spend less time looking for a specific record if I move around the same area all the time. But of course, I always give priority to urgent issues, and you never know when a new one is going to show up during your day.
Me: Besides meeting your patients, do you have other important meetings that you have to attend in your day?
C: Other cardiologists. Residents. Specialists from many different fields, like, for example, radiologists. We compare diagnoses, we talk about further steps within a patient’s treatment… Nurses, and sometimes administrative staff.
Me: Between all of those meetings, do you have time to take a break every now and then?
C: Hmmm… I guess I do have breaks, but not really as many as I’d need. Sometimes I don’t have time to eat properly or I can’t take the time to clear my mind after giving bad news to a patient. Although we are used to dealing with bad scenarios and informing patients about them, in the end we are all human beings, so it is also a human need to have a space to digest that you “hurt” someone’s feelings before going on with your day.
Me: I understand. The breaks are not only to have a physical rest but also to recompose yourself psychologically.
Me: Is there something that you think would give you more of that time you are seeking?
C: There are many times when meetings could be held faster. Some administrative procedures could be done faster. The transfer of data from doctor to doctor for a subsequent diagnosis should be faster. There are even occasions when we have to repeat procedures because of a lack of instructions given to the patient, or some loss of information. These are things that should not be happening in a big hospital like this one, because if there is something we seek, actually, that everyone seeks, it is time.
Me: Let’s move on to the analysis of some of the devices and procedures that you use daily.
Me: What kinds of devices do you use most often?
C: Cardiac ablation catheters, cardiovascular angioplasty devices, cardiac pacemakers, implantable cardioverter defibrillators, prosthetic (artificial) heart valves, stents, ventricular assist devices, “domestic” monitor devices, and web and mobile applications or portals.
Me: Can we focus on the last ones? As I understand, patients can get more involved in those.
C: Definitely.
Me: Regarding the use of “domestic” devices, or those that patients take home, is the data collected sufficient to provide a complete diagnosis?
C: Yes and no. For many patients, it is enough because we only look for abnormalities. So we see them or we don’t. But answering your question, no. It is not enough to completely diagnose a pathology. If we find an abnormality through the monitor, we will have to perform further procedures to find the best way to proceed with that specific patient. And many of those procedures will need to be done inside the hospital.
Me: How many times do you need to repeat monitoring procedures, on average per year?
C: Around 15% of the whole amount of procedures.
Me: What do you think is the main cause?
C: Technical problems, many times. But there are also times when patients don’t understand exactly what we need from them, so we have to explain again how to interact with the device, what they can do, what they can’t do, and redo the procedure.
Me: Is there a specific demographic group where the repetitions occur more often?
C: Elderly people. In reality, most cardiology patients are older people. Sometimes we have children or young people who have heart problems, but… you know, we are a very old society. And it is expected to get even older in the coming years… Medicine has to see the elderly as its main client and adapt procedures to them. And that takes time, exactly what we don’t have here.
Me: These elderly people, is there something they complain about when they are required to wear this device on them?
C: They feel insecure. They don’t see why they need this machine if I’m here; I’m the doctor, so what is a machine going to do better than me? They have a lot of questions. Many don’t even understand the most basic things, so… they feel uncomfortable, even though they only have to wear the device for one or two days.
Me: Do you think these people feel involved when dealing with these devices? Do they feel like an active part of the procedure or a secondary actor?
C: Honestly, I think they feel totally left aside. They don’t know what is going on. They follow instructions. Everything is cold and aseptic. It is as if they don’t have a voice. But they do have a voice; it is just that many times we don’t have the time to listen to them, because we use all our available time discovering what is going on in there.
Me: What do you think could be done in order to improve patient engagement and involvement?
C: Definitely avoiding all of those technical errors that are eating up the time we have for the face-to-face relationship with our patients and the pedagogy we could do with them. More time = more pedagogy = less repetition of procedures = a higher rate of success in finding diseases in their early stages.
Me: What do you think has been already done to improve this?
C: Many of our applications and portals, as well as AIs, provide training to our patients prior to their procedures. But we come back to the problem of getting elderly people to use them: they would rather speak with you than navigate an application looking for the information they need, or listen to instructions from a non-human character. This has been widely discussed among the medical staff, so the awareness is there, which is a good start, and I guess some steps will be taken to address this.
Me: Thank you so much for your time. I think I have all I need. Is there maybe something I didn’t ask you about that you would like to share with me?
C: I don’t think I have any further information I can share with you. I already talked too much! But if more questions come to mind, don’t hesitate to contact me. Thank you for the interview!
Kid A Mnesia Exhibition is a virtual Radiohead art museum featuring music from the two albums Kid A and Amnesiac. The game was released in 2021, and it can be downloaded for free from Epic Games for macOS, Windows, and PlayStation 5. Although the exploration game doesn’t use virtual or augmented reality, I want to write about it because I think it is a great example of a virtual exhibition.
The journey starts in a forest. You don’t get any instructions about where you are or where to go, so you just wander through the forest until you find the entrance to the exhibition. When you enter, the style of the game changes. From then on, you explore the different rooms of the exhibition, with their artworks and installations, while listening to Radiohead’s music. For example, there is one room with lots of televisions on the wall whose displays are constantly changing. In this room, you hear an unnerving yet familiar version of “The National Anthem”. In another room, paper covers the floor, then starts to fly away and reconstruct the room. There are also several hidden interactions, which can be discovered, for example, when you step on a specific object. Also, you are not the only visitor to the exhibition: other creatures walk through the exhibition as well, and on many of them you can see the smile with the pointy teeth from Radiohead’s logo.
Although you can only walk around and zoom in and out, it is very entertaining. There are little details hidden in the exhibition, and you have to take a closer look to encounter them. The rooms, installations, and artworks change over time, so it would not be a good idea to rush through the rooms. Although the exhibition is a bit creepy, you can find beauty in every room. The game is definitely worth a try, even for people who aren’t the biggest Radiohead fans.
The Treachery of Sanctuary is an experience of birth, death, and transfiguration, and of the creative process. It projects the transformation of the participants’ shadows onto white panels. It is an older project, created by Chris Milk 10 years ago, but it is still simple and impressive.
The work consists of three 30-foot-high white panel frames suspended from the ceiling, onto which digitally captured shadows are reprojected. A shallow reflecting pool sits between the viewers and the screens. In the background, an openFrameworks application utilizes the Microsoft Kinect SDK for Windows and infrared sensors. It talks to a front end running Unity3D, in which articulated 3D models of birds interact with the shadows captured by three hidden Kinects.
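The core trick, turning a depth camera image into a projected “shadow”, can be sketched in a few lines. The original runs on openFrameworks and Unity3D; this Python/NumPy version is only my illustration of the idea, with made-up depth thresholds:

```python
import numpy as np

def silhouette_mask(depth_mm, near_mm=500, far_mm=3000):
    """Turn a Kinect depth frame into a white-on-black 'shadow'.

    depth_mm: 2D array of per-pixel distances in millimetres.
    Pixels inside the near/far band are treated as the participant's
    body; the thresholds are made up for illustration.
    """
    mask = (depth_mm > near_mm) & (depth_mm < far_mm)
    return mask.astype(np.uint8) * 255

# A fake 4x4 depth frame: one "body" pixel at 1.5 m, rest far away.
frame = np.full((4, 4), 4000)
frame[1, 2] = 1500
print(silhouette_mask(frame))
```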
In the first panel, the body disintegrates into birds: the moment of birth, or the moment you have an idea. The second panel represents the critical response, either from yourself or from an outside force. This is what it feels like to have your purest expression picked apart by a thousand angry beaks; it represents death. The third panel, where you sprout giant wings, represents the transfiguration, when you overcome death and the idea is transformed into something larger.
Thoughts
You can really sense the artist behind this project. Overall, it’s a great illusion to transcend into a bird; it is pretty magical to see your shadow transforming. Not that immersive, but perception-wise a very nice project.
Aura is a multimedia installation inside the Notre-Dame Basilica in Montreal. It uses light, sound and video mapping to showcase the basilica in a completely new way. The story has three acts: The Birth of Light, The Obstacles and The Open Sky. The story contains religious themes and highlights the gothic structure.
To use the church’s interior as a canvas, the team did a complete 3D scan of the space, ensuring that all details would be perfectly matched to the projections. Due to the multifaceted nature of the project, it took over a year of work, including sound recording and visual imagery to move the installation from idea to reality.
The complexity of the architecture made it more difficult to plan ahead. They had to run tests with the projectors on a weekly basis. Oftentimes it was trial and error: when something didn’t work, they had to come up with a different solution and test the new idea.
Moment Factory, who created the installation, used 21 projectors, 140 lights, and 20 mirrors for the lighting and projections. Marc Bell and Gabriel Thibaudeau used 30 musicians and 20 choristers for the music. The result is an incredible transformation throughout the performance, as light, color, and sound are used to create a unique mood.
Thoughts
Aura is an incredible immersive experience, and the projection is well done. The only thing that would further enhance the experience is live music; the acoustics in churches are incredible.