WS #13: Prototyping phygital experiences to bring the lab closer to the street

During the international week, I attended the workshop “Prototyping phygital experiences to bring the lab closer to the street” by Carla Molins. Overall, the workshop was about how we can use interaction design to communicate scientific topics. Our task was to create a prototype that explains chromatin. Chromatin is the structure that allows the long DNA strand to fit, untangled and organized, into the chromosome.

Day 1 | Research and Ideation

On the first day, Carla gave us an explanation of what chromatin is and why it matters. After that, we jumped right into the topic and had a brainstorming session together. We asked ourselves the question “Can we co-create different experiences to explain chromatin to non-scientists?”, wrote everything that came to mind on post-it notes and put them on the wall. Afterwards, we tried to bring some order into it and grouped the notes into six different categories: “What?”, “Where?”, “When?”, “How?”, “For whom?” and “Why?”. Then we split into groups of two to work on our concepts. In our group, consisting of Fridtjof and me, we started by trying to answer the question together again. That was kind of hard to do, because we felt we did not yet know enough about chromatin to come up with a good concept.

Day 2 | Ideation and Prototyping

On this day, Carla was unfortunately sick and stayed at home, but we had Zoom calls with her during the day to keep her up to date on our progress. We started by researching the topic further to get a deeper understanding of it and make it easier to create a concept. We even called Fridtjof’s mother, who is a biologist, so she could also give us an explanation. Then we all went in different directions with our prototypes and tried to explain different parts of the topic. Our concept aimed to explain the two kinds of chromatin: euchromatin and heterochromatin. Euchromatin is a “loose” structure in which the information can be easily accessed, in contrast to heterochromatin. We decided to have users build the structure themselves to get a better understanding of it. To showcase the euchromatin, we made a “T-RNA Scanner”, which reads the information stored in the euchromatin to build protein, which the user received in the form of a chocolate. During the day, we also pitched our concepts to each other and evaluated them together, which was really helpful. Then we already started building the prototype, using wool, pipe cleaners and Styrofoam. In addition, we created posters with instructions and explanations for our prototype. Creating the concept and prototype was an iterative process: we ran a few small tests early on and adapted the prototype and concept according to the outcomes.

Day 3 | Testing and Enhancing

On this day, we ran around five tests with different participants. The main outcome was that we should rethink our posters: we had a lot of different posters with no clear hierarchy, so users got a bit confused or did not notice them at all. So we spent a lot of the time figuring out which information is important and should be on the posters, what we can leave out, and what the overall structure should be. Once we were happy with our poster, we enhanced our prototype with supplies we bought at the hardware store. We also managed to go to the photo studio to get some pictures of our prototype, and we spent a lot of the day testing the other groups’ projects.

Conclusion

Overall, I really enjoyed the workshop, and I think it is a really interesting topic that I would like to work on in the future as well. Carla was also a really great teacher: she could answer a lot of our questions and really helped us improve our projects. The workshop was a great way to get some insight into the topic of science communication through interaction design. However, I also think the timeframe was a bit too short: we did not have much time to do proper research and really understand what chromatin is, so our projects would need some further iterations to really work. But in general, it was really nice to have the opportunity to meet people from different universities and dive into different fields for a week. It was also very interesting and inspiring to see what the other workshops created during the week.

An Overview of the Relationship between Artificial Intelligence and Design

For the second half of the semester, I decided to change my topic. Researching Augmented Reality Storytelling was interesting, but I do not feel that I want to continue my research in that direction. Thinking of another topic was also hard for me, because I always have a hard time making decisions, but I think I have found a new, interesting field that I would like to focus on during the course: the relationship between Artificial Intelligence and Design. I am not quite sure where exactly this path will lead me, but I have really enjoyed learning and thinking about its possibilities.

What is Artificial Intelligence?

There are several definitions of AI. The computer scientist John McCarthy, for example, defines AI as follows:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

In general, Artificial Intelligence is the simulation by machines of intelligence processes associated with humans. Examples of these processes are the ability to reason, discover the meaning of something, generalize, or learn from past experience. As of now, no machine is able to match human flexibility across wider domains or in basic everyday knowledge. But there are already systems able to perform certain specific tasks at the level of experts, or even above it. Today, we already have a lot of touchpoints with AI. Sometimes it is obvious, for example when we use systems like Siri (Apple) or Alexa (Amazon). But we also deal with it through search engines and recommendation algorithms (like those of YouTube, Amazon or Spotify).

There are two different types of Artificial Intelligence:

Weak AI (or Narrow AI)
Weak AI is created and trained to perform a specific task and cannot operate beyond it. It is the type of AI we deal with in everyday life, for example in speech recognition systems.

Strong AI
Strong AI comprises Artificial General Intelligence (a theoretical form of AI whose intelligence matches human intelligence, including self-aware consciousness) and Artificial Super Intelligence (which would even surpass human intelligence). Both are, for now, purely theoretical, without any practical examples.

Can Artificial Intelligence be creative?

During my research, this question popped up a lot. Creativity is often perceived as something connected to people, certain methods or tools, not to computers. But do computers also have the ability to be creative? Well, they do, to a certain extent. For example, IBM (International Business Machines Corporation, an American multinational technology corporation) was asked by 20th Century Fox to create a trailer for their horror movie “Morgan” using their AI “Watson”. Watson analysed the visuals, sound and composition of other horror movie trailers and, with that knowledge, managed to create a trailer. You can see the outcome and a more detailed explanation in this video:

This shows that AI has the ability to be creative, to a certain extent. As of now, AI is only able to show creativity with the help of humans. Watson could only create a trailer by analysing other trailers and using deep learning to find patterns and understand how a horror movie trailer works. Only with that knowledge was Watson able to produce something similar.

“It’s easy for AI to come up with something novel just randomly. But it’s very hard to come up with something that is novel and unexpected and useful.”
 – John Smith, Manager of Multimedia and Vision at IBM Research

How can we co-create with Artificial Intelligence?

To answer this question, I took a look at various projects to find examples of how AI is already being used to co-create. I realised that most of the projects I saw could be categorized either as AI that we, as designers, can profit from while creating, or as AI that we can include in our projects so that the user can profit from it, or both. Here are some selected examples I found especially interesting:

Airbnb – Sketching Interfaces

Airbnb created a tool that transforms low-fidelity prototypes into code by using AI. You just have to scan your hand-drawn wireframes and the AI transforms them into a prototype. This could really speed up the process of prototyping and visualizing ideas in general.

Website: https://airbnb.design/sketching-interfaces/

Image (and Asset) Creation

There are many AI tools that can be used to create images or other assets. One of these experiments is described in this article on medium.com: an AI used to turn photographs of real objects into abstract illustrations, created as part of Google’s Artists and Machine Intelligence initiative.

Website: https://medium.com/artists-and-machine-intelligence/perception-engines-8a46bc598d57

Oi – AI used in Graphic Design

The Berlin-based studio Onformative created a tool that transforms the logo of Oi, a Brazilian telecommunications brand, into fluid shapes through the use of sound input. The outcome is a sound-reactive, interactive logo.

Website: https://onformative.com/work/oi/

Scribbling Speech

The interaction and product designer Xinyue Yang created a tool that uses speech to create animations. You just say what you would like to see in a scene, and the program creates an animation out of that information.

Website: https://experiments.withgoogle.com/scribbling-speech

For more interesting projects and experiments with AI, I highly recommend taking a look at these two websites:

Google AI Experiments: https://experiments.withgoogle.com/collection/ai
Algorithm-Driven Design: https://algorithms.design/

NIME: TouchGrid – Combining Touch Interaction with Musical Grid Interfaces

By Beat Rossmy, Sebastian Unger, Alexander Wiethoff

https://nime.pubpub.org/pub/touchgrid/release/1

Musical grid interfaces have been used for around 15 years as a method to produce and perform music. The authors of this article set out to adapt this grid interface and extend it with another layer of interaction: touch.

Touch interactions can be divided into three different groups:

  • Time-based Gestures (bezel interactions, swiping, etc.)
  • Static Gestures (hand postures, gestures, etc.)
  • Expanded Input Vocabulary (using finger orientation towards the device surface)

During their experiments, they mainly focused on time-based gestures and how they can be implemented in grid interfaces.

First Prototype

Their first prototype was built from a 16×8 PCB grid with 128 touch areas instead of buttons. This interface was able to record hand movements at a low resolution in order to detect static and time-based gestures. However, it had problems detecting fast hand movements, which could not be solved without major changes to the hardware.

TouchGrid

For their second prototype, they used an Adafruit NeoTrellis M4, consisting of an 8×4 grid of LED buttons that can give RGB feedback.

They managed to incorporate two time-based interactions: dragging from off-screen (to access infrequently used features like menus) and horizontal swiping (to switch between linearly arranged content). To help users understand the different relationships and features without overwhelming them, they also incorporated animations.
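To make the idea of a time-based gesture on a button grid more concrete, here is a small sketch of how a horizontal swipe might be classified from a sequence of touched grid cells. This is purely my own illustration under simplified assumptions (names, thresholds and the trace format are invented), not the authors’ actual implementation:

```python
# Hedged sketch: classifying a horizontal swipe from a trace of
# (column, row) cells touched on a button grid, in order of touch.
# All names and thresholds are illustrative assumptions, not TouchGrid code.

def detect_horizontal_swipe(touches, min_columns=3):
    """Return 'swipe_right', 'swipe_left', or None for a touch trace.

    A swipe is assumed to stay (roughly) on one row and move through
    at least `min_columns` columns in one consistent direction.
    """
    if len(touches) < min_columns:
        return None
    cols = [c for c, _ in touches]
    rows = [r for _, r in touches]
    # Reject traces that wander too far vertically.
    if max(rows) - min(rows) > 1:
        return None
    deltas = [b - a for a, b in zip(cols, cols[1:])]
    if all(d > 0 for d in deltas) and cols[-1] - cols[0] >= min_columns - 1:
        return "swipe_right"
    if all(d < 0 for d in deltas) and cols[0] - cols[-1] >= min_columns - 1:
        return "swipe_left"
    return None

# A left-to-right trace along one row reads as a right swipe.
print(detect_horizontal_swipe([(0, 2), (1, 2), (2, 2), (3, 2)]))  # swipe_right
```

A real grid controller would of course have to deal with noisy, fast movements — exactly the problem the authors describe with their first prototype.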

Take a look at the video to get a better understanding of the functionalities:

Evaluation

To evaluate their concept, they ran an online survey with 26 participants, to whom they showed the video above. Most of them stated that they were already familiar with touch interactions and that they could imagine using this interface. They even came up with a few more ideas for touch interactions, like zooming into a sequence of data. When asked about their concerns, they said, for example, that the interface might get a bit too complex, and they feared malfunctions and interference with the existing button interactions.

Conclusion

With their concept, the authors took a different approach than many others: instead of aiming to make touchscreens more tangible, they took already known touch interactions and combined them with tangible grid interfaces, trying to get the best of both worlds with their TouchGrid. As for now, they are still in the concept phase, focusing on the technical proof of concept, and are getting help from an expert group in evaluating their concept. For the future, they hope to further work on the “[…] development of touch resolution, with which more interactions can be detected and thus more expressive possibilities for manipulating and interacting with sounds in real time are available. Furthermore, combinations of touch interaction with simultaneous button presses are conceivable, opening up currently unconsidered interaction possibilities for future grid applications.”

Virtual Exhibitions

Radiohead: Kid A Mnesia Exhibition

Kid A Mnesia Exhibition is a virtual Radiohead art museum featuring music from the two albums Kid A and Amnesiac. The game was released in 2021 and can be downloaded for free on Epic Games for macOS, Windows and PlayStation 5. Although the exploration game doesn’t use virtual or augmented reality, I want to write about it because I think it is a great example of a virtual exhibition.

The journey starts in a forest. You don’t get any instructions about where you are or where to go, so you just wander through the forest until you find the entrance to the exhibition. When you enter, the style of the game changes. From then on, you can explore the different rooms of the exhibition, with their artworks and installations, while listening to Radiohead’s music. For example, there is one room with lots of televisions on the wall whose displays are constantly changing. In this room, you hear an unnerving yet familiar version of “The National Anthem”. In another room, there is paper all over the floor, which starts to fly away and reconstruct the room. There are also several hidden interactions, which can be discovered, for example, when you step on a specific object. Also, you are not the only visitor: other creatures walk through the exhibition as well, many of them wearing the pointy-toothed smile from Radiohead’s logo.

Although you can only walk around and zoom in and out, it is very entertaining. There are little details hidden in the exhibition, and you have to take a closer look to encounter them. The rooms, installations and artworks change over time, so it would not be a good idea to rush through. Although the exhibition is a bit creepy, you can find beauty in every room. The game is definitely worth a try, even for people who aren’t the biggest Radiohead fans.

Augmented and Virtual Reality Exhibitions

Museums and exhibitions aim to bring their collections to life. With the ongoing development of augmented and virtual reality technologies, it seems obvious to integrate them into classical exhibitions. Through AR and VR technologies, museums can add a virtual layer to their exhibitions and create immersive experiences. Areas of application could, for example, be allowing users to explore Egyptian burial chambers, meet historical characters, or learn more about an artist by virtually visiting their hometown.

As part of a study, the Research Centre of Excellence in Cyprus (RISE) interviewed 15 global museums about their experience with including AR and VR technologies in their exhibitions. Around 50% of them stated that they used these technologies to create augmented spaces for visitors to experience the exhibition, for example in the form of virtual time travel. They integrated VR and AR experiences as an extension of the classic exhibitions rather than a replacement for them.

Another way to create a virtual exhibition is to scan exhibits and arrange them in a virtual space. This way, exhibitions can be made accessible from all around the world. It could also enable a larger audience, for example people with disabilities, to visit exhibitions they could not visit in real life.

Examples

Mona Lisa: Beyond Glass

Source: https://www.viveport.com/18d91af1-9fa5-4ec2-959b-4f8161064796

The Virtual Reality experience “Mona Lisa: Beyond Glass” was part of the Leonardo da Vinci blockbuster exhibition that took place at the Louvre in Paris in October 2019. Through the use of animated images, interactive design and sound, it allowed users to explore the painting’s details, the texture of its wood panel, and how it has changed over time.

National Museum of Finland: Virtual Time Travel

Source: https://www.gmw3.com/2018/02/national-museum-of-finland-offers-virtual-time-travel/

The National Museum of Finland offered its visitors virtual time travel back to the year 1863 by letting them walk inside the painting “The Opening of the Diet 1863 by Alexander II” by R. W. Ekman. In this VR experience, visitors could speak with the emperor and representatives of the different social classes, or visit historical places.


Storytelling with Augmented Reality | Part 2

In the last post, I gave an overview of the technical aspects of Augmented Reality Storytelling and its three main components. In this post, I want to focus more on the story itself. I want to give an insight into Interactive Storytelling, which can be combined with Augmented Reality to create an immersive experience for the user.

Interactive Storytelling

Interactive Stories are stories that can and need to be influenced by the user. Throughout the story, the user needs to make decisions in order to continue. These decisions influence the further course of the story. The user is no longer a passive observer of a linear story, but can be an active part of it.

An interactive story is usually divided into different parts. At the end of a storyline, the user is asked to make a decision, choosing from several provided options. After making the decision, the user is forwarded to another storyline.

The term is sometimes used synonymously with digital and transmedia storytelling (storytelling through the use of digital media), but this is not always accurate. Interactive Storytelling can also be applied in, for example, books: at some point in the story, the reader has to make a decision, with several different choices to pick from. Depending on the decision, the reader turns to a certain page where the story continues.

Use of Interactive Storytelling

Interactive Storytelling often finds its use in marketing. There are several campaigns that make use of Interactive Storytelling to promote products. But Interactive Storytelling can also be used to communicate social and otherwise difficult topics. One example is a campaign from “Wiener Linien”, which created an interactive campaign to educate about civil courage. Another example is from the “International Red Cross”, which made a spot to raise awareness of its work in crisis areas.

Common Structures

There are several different options to structure an interactive story. These are some of the most common structures:

Branching Narrative

A relatively classic narrative structure in which viewers make increasingly far-reaching decisions about the course of the action. The narrative branches into different endings depending on the choices made. Depending on how many branches the narrative contains, this type of structure can get very complex very quickly.

Fishbone Narrative

This is a traditional linear structure that allows viewers to explore the sub-stories of a narrative but keeps bringing them back to its main theme. This structure still gives the author a lot of control over the route viewers take through the project.

Parallel Narrative

With this structure, viewers are offered choices in the story on the one hand, but on the other hand are repeatedly returned to the main thread of the narrative for its decisive moments.

Threaded Narrative

The perfect structure for telling a story from multiple angles. The story consists of a number of different threads that develop largely independently of one another; topics can be linked to one another or remain completely separate.

Concentric Narrative

In this structure, there are different storylines that orbit around a shared central point. Viewers are provided with different entry points and get to choose one. No matter which entry point a viewer chooses, in the end they will always return to the core.


Storytelling with Augmented Reality | Part 1

Augmented Reality opens up new possibilities for storytelling. With Augmented Reality, you are not just watching a story being told. You are immersed in the experience and become part of the story.

“We witness stories our entire lives. All the storytelling mediums we know and love are ones where an author recounts a tale and we bear witness to that tale. What gets me so excited about these immersive mediums is it feels like we’re crossing the threshold from stories we witnessed to stories we live as our own.”
– CEO of the VR tech and entertainment company, Within

You experience the story as one of its characters: you can interact with other characters, they interact with you, and you have the ability to influence the story. You walk away with the memory of your own story, not just of media you have consumed.

Three main components of Augmented Reality Stories

In most AR scenes, you need to focus on three main aspects.

1. Assets

Assets are all the elements of an AR story, like 3D or 2D models, audio files or videos. They help you tell your story. 3D models, especially when combined with audio, can create an immersive experience by taking the user into the world of the story. 2D assets can also be an important part, for example by providing information via text.

Something else to keep in mind is which device the user will be experiencing your AR story on. Not every user has the latest device, so you need to pay attention to the size of your assets.

2. Interactions

While creating an AR story, you have to consider how you want the user to be able to interact with it. These could be really simple interactions, like letting the user rotate assets, take a closer look at some of them, or view the scene from a distance. Or they could be more complex ones, for example interacting with characters, speaking to them and thereby influencing the story.

3. Environment

Augmented Reality takes place in the real world, so you need to consider where the story happens and how that influences the role of the user. Does it take place in a room, for example on the surface of a table, where the user is in the middle of the story? Or does it take place outside, where the assets are far away and the user takes the role of an observer?

Example: Between Worlds by Skip Brittenham

A great example of storytelling with Augmented Reality is Skip Brittenham’s book “Between Worlds”. Through the use of Augmented Reality technologies, the fantasy world comes alive in interactive 3D.

Interactive Print: Augmented reality in print media

There are several methods for creating interactive paper, for example the use of barcodes, QR codes, social media codes or short links. But most of all, the term “interactive print” is used to describe the combination of Augmented Reality with traditional print media.

Through the development of mobile technologies, Augmented Reality is becoming more and more accessible, which opens up new opportunities for its use. At the beginning of the development of Augmented Reality, it was much more difficult to create interactive print: printed images needed markers in order to work with AR content. Nowadays, this is no longer the case. Any high-quality print can be used to trigger an AR application.

There are several apps that make it easy for users to experience Augmented Reality. The most well-known of them is probably the free app “Artivive”.

“Artivive is an AR tool that allows artists to create new dimensions of art by linking classical with digital art.“ – Artivive

With this mobile app, the user just needs to point the camera at the print object. An animation layer is then overlaid, so that the print object appears to start moving. By using several layers, it is also possible to create a 3D environment.

Interactive print in education

Interactive print has great potential in education. By combining already familiar paper-based activities with Augmented Reality, it is possible to create immersive learning experiences. It can be integrated into textbooks to showcase complex and abstract concepts, for example through interactive animations. Interacting with the content may help people get a better understanding of it. Another effect is that it increases the user’s motivation to study. Schools are not the only ones that could profit from the use of Augmented Reality for educational purposes: in exhibitions, too, interactive print can be used to provide information about specific topics.

Questions

  • What tools can be used to create interactive print?
  • How can all the senses be included?
  • For what topics can interactive print be useful?
  • What is the best balance between Augmented Reality and traditional print media?
  • In which fields can it be used?