After analysing a few examples of interactive children’s exhibits and looking at the results obtained from the database, we went on to read and research the 125 Universal Principles of Design.
To this end, we decided to record the progress made along the way, as the book is very comprehensive and detailed.
After reading the first 30 principles, we found some details that we consider particularly important for exhibitions aimed at children.
These principles and the reasons why they might be interesting are listed below.
Accessibility: This is a very important concept in design, but especially in design for children, as it is necessary to adapt the devices so that children can access and understand them. That is why, within this principle, operability (everyone should be able to use the design) and simplicity (everyone should be able to understand the design) stand out.
Advance organiser: This principle is very important and is somewhat related to the simplicity mentioned in the previous section. It stresses the importance of explaining concepts so that everyone is able to understand them. To do this, words that children already understand are used as a starting point, from which the main concept is built up and explained.
Biophilia effect: Spaces reminiscent of nature reduce stress and increase concentration. When planning an interactive exhibition for children, it is important to remember that children need to stay as focused as possible to carry out the activities, so trying to create a natural environment can help.
Chunking: This concept also relates to the way in which information is displayed. Content should be divided into small units so as not to present too much content in too little time.
Colour: Colour is obviously a very important point, which was already analysed previously. In the case of children, more saturated colours should be used to add excitement and dynamism.
Contour bias: This principle concerns the preference for rounded edges, which make the user feel closer to the object. Straighter edges can feel aggressive, although they certainly attract the user’s attention; in my opinion, though, they are not something that needs to be used with children.
Constraint and control: I place both concepts together as they are related. They address how much control the user should have: constraint relates to the limitations that should be placed on the user, and together the two restrict actions where needed while leaving the user the necessary freedom.
Obviously there are many more important concepts, but those listed above are, in my opinion, the most important for children. The next step is to finish the list of principles and add details about them to the databases, and to continue analysing interactive exhibits in order to understand the best use of resources for creating the most impactful exhibits for children.
REFERENCES
Lidwell, W., Holden, K., Butler, J., & Elam, K. (2010). Universal Principles of Design, Revised and Updated: 125 Ways to Enhance Usability, Influence Perception, Increase Appeal, Make Better Design Decisions, and Teach Through Design. Rockport Publishers. https://books.google.at/books?id=3RFyaF7jCZsC
| design challenges and principles from the car navigation system developer TomTom
As stated in my earlier blog entry, one of the current cockpit design trends is the multiplication of screens in cars. This growing display real estate challenges automotive UX designers to create an effective driver experience rather than displaying as much beautiful information as possible and, as a result, distracting the driver.
The navigation system and mapmaking company TomTom also discusses this topic in a blog post with their Principal UX Interaction Designer Drew Meehan, with insightful content about the design principles to consider.
Finding balance in information overload
The key phrase for building an interface with informational balance is “action plus overview”. Across several screens, the displayed information should be clustered to provide hints for the next actions and also give an overview of the car’s journey. This is achieved by sorting the information across the separate screens so that they complement each other.
An example would be a car equipped with a head-up display (HUD), a cluster behind the steering wheel and a central display. The HUD would show only current status information, about the “here and now”. The cluster would show information about upcoming actions in the near future. The central display would have the job of giving the complete overview of the journey, arrival time and complementary info such as refueling/recharging possibilities.
This structure creates a flow of eye movement, which helps the driver understand the placement of information easily and know where to look for specific interests.
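One way to picture this split is as a simple mapping from each display to the time horizon and the kind of information it owns. The sketch below is my own illustration of the idea; the screen names and content categories are assumptions, not TomTom’s terminology.

```python
# Illustrative "action plus overview" split across three displays.
# Screen names and information categories are assumed for this sketch.
DISPLAY_ROLES = {
    "hud": {
        "horizon": "here and now",
        "content": ["current speed", "active manoeuvre", "speed limit"],
    },
    "cluster": {
        "horizon": "next actions",
        "content": ["upcoming turn", "lane guidance", "distance to manoeuvre"],
    },
    "central_display": {
        "horizon": "whole journey",
        "content": ["route overview", "arrival time", "charging/refuelling stops"],
    },
}

def screen_for(info_category):
    """Find which display is responsible for a given piece of information."""
    for screen, role in DISPLAY_ROLES.items():
        if info_category in role["content"]:
            return screen
    return "central_display"  # the overview screen acts as the fallback

print(screen_for("upcoming turn"))  # -> cluster
```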
Challenges in automotive interface design
There are some aspects and strategies that need to be considered when designing in-car interfaces:
Responsive and scalable content according to screen size: complying with different screen sizes in different vehicle models of a brand
Adaptive content: displaying only the information needed for the current driving situation. This requires prioritizing information according to the driver’s needs. If the fuel level or battery charge is critical, the nearest stations should be displayed; if the tank or battery is full, the screens can focus on less data. If no immediate route change is necessary, e.g. on a straight highway for 50 km, data from other driver assistance systems could be shown (e.g. lane keeping). In the city, with intense navigation needs, it may be best to show prompt actions on the HUD, closest to the driver’s eyeline, for easy help. (A small sketch of this prioritization logic follows after this list.)
Creating one interface ecosystem: all screens should be connected and not segregated. The screens and the shown information should create continuity and complement each other.
Customization options: despite good information balance, some people could be overloaded and stressed by multiple screens. They should be allowed to change screen views and positions of content.
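As a rough illustration of the adaptive-content point above, the following sketch picks what to emphasise from a simple description of the driving situation. The thresholds and category names are my own assumptions, not TomTom’s logic.

```python
def prioritize_content(fuel_level, km_to_next_manoeuvre, in_city):
    """Pick what the screens should focus on for the current situation.

    Thresholds and categories are illustrative assumptions only.
    """
    if fuel_level < 0.15:                 # critical fuel/charge
        return ["nearest fuel/charging stations", "range estimate"]
    if in_city:                           # dense navigation needs
        return ["prompt manoeuvre on HUD", "lane guidance"]
    if km_to_next_manoeuvre > 50:         # e.g. a long straight highway
        return ["driver assistance status (lane keeping)", "arrival time"]
    return ["next manoeuvre", "arrival time"]

print(prioritize_content(fuel_level=0.8, km_to_next_manoeuvre=80, in_city=False))
```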
TomTom’s UX department has done user research with varied screen info content. They found that “users want easy, glanceable and actionable information”, which reduces cognitive load and stress.
In summary, the UI design has to support the driver’s actions by showing essential, easily digestible information. It should be placed where the driver most expects the content to be and have just the right amount of detail for the current driving situation.
In this short blog post I want to analyze in detail one example of a deceptive design pattern* that I stumbled across during my research.
On every apartment detail page on Airbnb there is a small overview of booking dates and prices (left image). The whole container is basically divided into two parts: a summary with a CTA and the calculation underneath. The hierarchy within this module is clear, as the price per night is highlighted in a big, bold font. They use this number as the most representative value even though additional fees are added later on and it is not possible to book the apartment at that price. So to get the “real” price per night, the user has to manually divide the overall price for the stay, including the service fee, by the number of nights. The CTA is placed above the calculation, so some users might click the pink button before they read about additional fees. Furthermore, the weekly discount is displayed twice and highlighted, whereas the fee is set in default text style. My suggestion to correct this deceptive design pattern* is to use the correct price per night including all fees, add a plus sign to the service fee amount and move the CTA to the bottom (right image).
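To make the arithmetic behind the complaint explicit: the “real” nightly price only appears once the service fee and any discount are folded into the total. A tiny sketch with made-up numbers (not Airbnb’s actual figures):

```python
def effective_price_per_night(nightly_rate, nights, service_fee, weekly_discount=0.0):
    """Total cost divided by nights, i.e. what a night actually costs."""
    total = nightly_rate * nights + service_fee - weekly_discount
    return total / nights

# Hypothetical listing: advertised at 80 per night for 7 nights,
# 62 service fee, 56 weekly discount -> about 80.86 per night, not 80.
print(round(effective_price_per_night(80, 7, 62, 56), 2))
```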
| short summary of a cockpit design trends report, published early 2021
According to a 2020 report looking at new car models and concept cars released in recent years, the following major directions of intelligent automotive cockpit design trends can be summarized:
Richer versatility: New products are being introduced as automotive electronics develop, such as driver monitoring systems, driving recorders, and rear-row and co-pilot entertainment displays. Additionally, intelligent surfaces allow further versatility: window or sunroof glass can become displays, and intelligent seat materials can become interfaces as well.
Multi-channel, fused human-vehicle interaction: New modes besides touch and voice control include active voice assistants, gesture control, fingerprint readers, sound localization, face recognition and holographic imaging. These multi-channel interaction modes can contribute to safer use and driving as well as deliver an extended user experience.
3D and multi-screen cockpit displays: We see dual-, triple- and quint-screen as well as A-pillar display implementations for delivering control, co-pilot and rear-row interactions.
“User experience”-centricity and scenario-based interaction: In-vehicle scenario modes are coming into focus; the car interior should serve as an intelligent, connected, flexible and comfortable personal space, e.g. for driving, resting, working or even shopping. As a UX-centered implementation example, the Mercedes-Benz S-Class ambient lighting system was named, which uses 263 LEDs to adapt to driving situations (warnings) or give real-time feedback on interactions with the onboard computer.
Interaction with every surface via intelligent materials: New surface materials are being introduced in concept cars to explore touch control possibilities, such as displaying functional buttons in new ways.
Touch feedback as a key technology for a higher level of safety: Besides Tier 1 suppliers, several start-ups are also developing touch feedback technologies to support less distraction and more effective driver-car interaction.
Software systems will be key differentiators: The introduction of Android to in-car entertainment systems was a big step. The need for personalization, simultaneous software and hardware iteration, and 3D vision pose new challenges for operating system development in realizing intelligent cockpit systems.
Museums and exhibitions aim to bring their collections to life. With the ongoing development of augmented and virtual reality technologies, it seems obvious to integrate them into classical exhibitions. Through the use of AR and VR technologies, museums can add a virtual layer to their exhibitions and create immersive experiences. Possible areas of application include allowing users to explore Egyptian burial chambers, meet historical characters or learn more about an artist by virtually visiting their hometown.
As part of a study, the Research Centre of Excellence in Cyprus (RISE) interviewed 15 museums worldwide about their experience with including AR and VR technologies in their exhibitions. Around 50% of them stated that they made use of these technologies to create augmented spaces for visitors to experience the exhibition, for example in the form of a virtual time travel. They integrated VR and AR experiences into their exhibitions as an extension of the classic exhibitions, rather than a replacement for them.
Another way to create a virtual exhibition is to scan exhibits and arrange them in a virtual space. In this way, exhibitions become accessible from all around the world. It could also enable a larger audience, for example people with disabilities, to visit exhibitions they could not visit in real life.
Examples
Mona Lisa: Beyond Glass
The virtual reality experience “Mona Lisa: Beyond Glass” was part of the Leonardo da Vinci blockbuster exhibition that took place at the Louvre in Paris in October 2019. Through the use of animated images, interactive design and sound, it allowed users to explore the painting’s details, the texture of the wood panel and how it has changed over time.
The National Museum of Finland offered its visitors a virtual time travel back to the year 1863 by letting them walk inside the painting “The Opening of the Diet 1863 by Alexander II” by R. W. Ekman. In this VR experience the visitors could speak with the emperor and representatives of the different social classes or visit historical places.
References
Walczak, K., & Cellary, W.: Virtual Museum Exhibitions.
Walczak, K., Cellary, W., et al.: Building Virtual and Augmented Reality Museum Exhibitions.
Finally, I was able to take advantage of the Christmas holidays to draw on my close circle and conduct a few interviews with end users. Perhaps it is not the best methodology, since we have no specific product to develop, but I approached it as a continuation, to deepen and test the initial results of the survey-type test that I showed in the previous post. I hope the results of the research can be extended in this way.
Since the audience is any internet user, I tried to establish two key parameters when selecting different profiles for the interviews:
– Level of use
Here I separated people who design, program, or work with sensitive data, in general those who are not only users but also part of the structure that handles data, from those who are only passive users of this type of service.
– Time of use
For this one I made a simple division between people who are highly exposed to the internet and spend more time per day in front of a screen, so they are accustomed to reading and understanding the architecture of a website and reacting quickly to pop-ups and visual inputs, and those who spend less time than average per day or week at a desk or with a mobile device.
This division seemed to me more accurate and efficient than a demographic categorization by age, gender or education, since I think these usage factors relate more directly to the user’s response to the cookie window.
With this division understood, and choosing at least one person per quadrant to cover all combinations, I planned the interview script as follows:
1. Introduction, in which the person presents themselves and describes their living situation: studies and career.
Examples of questions: How old are you? What are you doing with your life? Do you study? Do you work? In what field?
2. Relationship to technology, time of use.
Examples of questions: Do you get along with computers? Do you work with computers? Do you use computers in your day-to-day life? Do you think you are dependent on your mobile phone? Do you think you could stop using it if you wanted to?
3. Feelings/beliefs regarding the issue of data security and cookies
Examples of questions: Do you feel safe leaving your data on the net? Are you afraid of what they might know about you or do with your data? How do you feel when an ad appears about something you have mentioned or previously searched for? Do you do anything to avoid giving out your data? Do you mind being asked about your data or consent?
4. Specific question about the design of the cookie window and/or your rights as a user.
Examples of questions: What do you choose when the cookie question appears? Do you know if you can ask a platform to delete your data?
Conclusions
I do not plan to publish the literal transcript of each interview, only to highlight the differences or surprises I found when analyzing and comparing the users’ answers to each question. In general, all the answers were in line with the majority percentages of the earlier survey: users worry about their data being used for purposes they do not know about, yet they do not hesitate to accept without reviewing each website’s data protection conditions.
There were some comments that were repeated and that had not been covered in the previous questionnaire, such as the recognition by the average user that their data is given as payment for accessing internet content free of charge. For example, YouTube ads in videos annoy them, but since they are given the option to pay to access the content without advertising interruptions, they accept watching the ads more willingly. In the user’s mind, handing over data so that their profile can be targeted with advertising seems to work the same way: as a transaction in which the user, the advertisers and the owners of the website all participate. Thus, they give their data because they believe the product they access (the website) cannot be provided unless they give something in return.
Another interesting behavior is that most users understand that their data can be used to target advertising at their profile; many recognize that after searching for a product online, ads for that same product appear. However, they consider that this does not affect them: the belief that they know how it works makes them assume the ad will not achieve its purpose because they are not susceptible to it.
This seems a dangerous belief and also a false one: knowing how something works does not exempt you from being affected by it. In fact, it makes you more likely to be influenced without noticing, as you reflect less on it (related to the Dunning-Kruger effect). It is also a false way of thinking because these users only believe they know how it works, relating it to a known and partial subset of the uses their data can be put to. This does not seem to apply to users in the top-right quadrant, that is, those who work with data, who were also the only ones who correctly answered the questions about their data rights. This type of user understands that they are permanently exposed to this kind of manipulation and tends to be more careful about which websites access their data.
As a final nugget of information, some users claim to try to “confuse” the algorithm by subscribing to product websites they would never normally look for, or by searching for items they do not need, so that the profile built of them does not resemble reality.
What do you think of this experiment?
Thank you very much for reading, and see you in the next post, where we will look at negative examples (dark patterns) of cookie windows.
Various analytical tools are used to analyze data in the medical field, making it easier to make decisions based on facts. These methods later help in planning, measuring, designing, and educating. Today the global health service is suffering from shortages among the doctors and nurses who provide primary care. As a result, already overworked specialists have to perform their duties even faster. Unfortunately, the situation is predicted to become even more difficult over the next few years; here the obvious solution is to analyze the data and design a system that will make the process easier.
The benefits of analyzing medical data include faster delivery of results, making lasting changes, and later designing a new and better process that reduces risk and the number of errors. The first step may be to introduce appropriate software and artificial intelligence into the health system, which in the future may take on some of the responsibilities. These tools can absorb huge amounts of information and learn from many different types of data.
A heart rate monitor (HRM) is a personal monitoring device that measures heart rate in real time or records it for later study. It is commonly used to collect heart rate data while the patient performs the various types of activity that are part of their day-to-day life. The portable medical version is referred to as a Holter monitor, which is designed for everyday use and does not use wires to connect to external equipment.
Modern heart rate monitors commonly use one of two different methods to record heart signals: electrical and optical. Both types of signals can provide the same basic heart rate data, using fully automated algorithms to measure heart rate.
_ Electrical Devices: The electrical monitors consist of two elements: a monitor/transmitter, which is worn on a chest strap, and a receiver. When a heartbeat is detected a radio signal is transmitted, which the receiver uses to display/determine the current heart rate. This signal can be a simple radio pulse or a unique coded signal from the chest strap.
_ Optical Devices: More recent devices use optics to measure heart rate by shining light from an LED through the skin and measuring how it scatters off blood vessels. Smartwatches and cell phones can be included in this category, but their use for medical purposes is limited, even though their accuracy in detecting several diseases has increased significantly in recent years. Many professionals nevertheless recommend their use as a support in data collection processes.
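Whichever sensing method is used, the underlying calculation is the same: detect individual beats and convert the beat-to-beat intervals into beats per minute. A minimal illustrative sketch (not any vendor’s actual algorithm; the timestamps are made up):

```python
from statistics import mean

def bpm_from_beat_times(beat_times_s):
    """Estimate heart rate from a list of detected beat timestamps (seconds).

    Works the same whether the beats were detected electrically (R-peaks)
    or optically (pulse peaks): rate = 60 / mean beat-to-beat interval.
    """
    if len(beat_times_s) < 2:
        raise ValueError("need at least two beats to estimate a rate")
    intervals = [t2 - t1 for t1, t2 in zip(beat_times_s, beat_times_s[1:])]
    return 60.0 / mean(intervals)

# Example: beats roughly every 0.8 s -> about 75 bpm
print(round(bpm_from_beat_times([0.0, 0.81, 1.60, 2.42, 3.21]), 1))
```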
# Holter Monitor
A Holter monitor is a small, wearable device that keeps track of your heart rhythm. The doctor may want the patient to wear a Holter monitor for one to two days. During that time, the device records all heartbeats. This procedure can be repeated several times if the medical practitioner requires it to accomplish the goal of the overall study.
A Holter monitor test may be done if a traditional electrocardiogram (ECG) doesn’t deliver enough information about the heart’s condition. A Holter monitor may be able to spot occasional abnormal heart rhythms that an ECG missed due to the short time the patient is hooked up to the machine. The medical practitioner uses the information captured by the Holter monitor to figure out whether the patient has a heart rhythm problem or a heart condition that increases their risk of an abnormal heart rhythm.
# What Patients say:
After interviewing two patients who had to wear the Holter monitor due to different heart conditions, plus the information collected in different clinical studies, these would be the most significant insights referring to the patient experience:
_ Most of the patients found the device uncomfortable. Sometimes, some of their daily activities were difficult to carry out due to the device.
_ Some patients had difficulty putting the device back on after showering. Due to this, the collected data wasn’t accurate.
_ Even though the majority of patients expressed full trust in their doctors, many did not feel involved in the process. They simply had to “follow orders” but did not understand what kind of data was being measured.
_ Some patients experienced a feeling of vulnerability when they gave the device back to the medical facility. Although the device was uncomfortable to wear, it made them feel safe, assuming that their heart was being monitored. Once they removed it, that feeling of security disappeared.
_ Some patients experienced skin allergies due to the adhesive tape on the device.
# What Doctors say:
After interviewing a cardiology physician and a general practitioner, I was able to gather the following insights:
_ It is difficult to ensure successful measurements with elderly patients due to the technical requirements of the device. The majority of patients using this device are older than 60 and sometimes they need to repeat the procedure more than once.
_ Although the device collects valuable information, some conditions, which might be symptoms of more serious heart problems, go unnoticed (e.g. certain types of arrhythmias).
_ They get frustrated because they cannot find the time to explain all the details of the procedure to their patients and they feel patients’ dissatisfaction and uncertainty.
_ “It takes too long to get the data that we need.” Patients must return the device to the medical center, then technicians extract the data by connecting it to their system, and then the data are included in the patient’s records and saved in databases. Only then are doctors able to have a clear view of what is going on with their patients.
# Compared with other Devices for Heartbeat Monitoring:
Although the Holter monitor is still one of the most widely used devices for measuring the heartbeat, for a few years now other heartbeat monitors have been coming onto the market offering significant improvements for patients. This is the case of the Zio patch from iRhythm Technologies.
But even though this new monitoring option is more manageable, less cumbersome, and has a higher level of data accuracy than the original Holter Monitor, it requires longer use than the Holter Monitor to detect the same number of conditions. This makes procedures a bit harder to perform with both children and the elderly.
As mentioned before, many smartwatches and mobile phones can provide a service similar to heartbeat monitors but are not capable of perceiving certain abnormalities that may indicate heart problems. They are a very good aid for home use because they measure many different parameters such as oxygen saturation, thanks to the use of sensors including accelerometers, gyroscopes, and GPS.
# Conclusions and Challenges
Finding the ideal device is not easy. Both patients and medical professionals have needs that should be met in order to succeed when going through these procedures, and it seems that if you try to improve one part, the other will get worse.
For me, the main challenge is to rethink the design of the device. It is clear to me that it needs to be usable without technical assistance. Ease of use must be a basic requirement, given the characteristics of the patients who generally use this type of device.
On the other hand, it should offer some kind of interface that could deliver clear information to patients, making them understand what data is being collected. This is already offered on smartphones or smartwatches.
And last but not least, data collection: the more accurate the data collected and the shorter the time needed to collect it, the better and faster the diagnoses medical professionals will be able to deliver. In this way, they will also be able to attend to the psychological needs of patients before, during, and after the procedures. In my opinion, finding a way to shorten the process from data collection until professionals can access and evaluate the data should be a priority.
Experts conducted a usability review following a user test with three potential users corresponding to the target group. The usability review considers the users’ responses about how they feel, what they perceive, and how they achieve their goals on the tasks. It takes different factors into account to establish the analysis: 7 usability attributes, 10 heuristics, 20 UX laws, and the experts’ experience.
Before analyzing the results:
In this user test, the three users have to perform these specific tasks:
Share meeting details to invite a person who’s not a part of the current meeting
Add a person named “Austin Brandon” to the call
Take some notes during the meeting which would be 50 characters long.
Share a particular screen along with the computer audio.
Stop sharing your screen after you’ve shared your screen.
Send a personal message to one of the participants of the video call.
The aim is to obtain an analysis using the seven usability attributes:
Effectiveness: Can the user accomplish specific tasks through this system? (ISO 9241-11)
Efficiency: How does the product help the user to perform tasks with the least amount of resources? (ISO 9241-11)
Safety: How does the system avoid undesirable situations so that the user feels safe?
Utility: Does the product provide the right functionality to accomplish tasks easily?
Learnability: Can the user learn the system as quickly as possible and without too much effort?
Memorability: How easily can the user remember how to use a product?
Satisfaction: Is the individual satisfied by using this product? (ISO 9241-11)
Analysis of the results:
Effectiveness: The user can perform the most basic tasks during meetings. But we noticed that users can make several mistakes by clicking on the wrong options, which points to some UX laws being broken.
Efficiency: Users can complete tasks in no more than 15 to 20 seconds. But some tasks are not easy to perform for those who have not used the platform often, so Teams made some changes to the interface, although these in turn were not easy for experienced users to get used to.
Safety: The user makes many errors while navigating the system, most of which are due to inconsistencies in the design language, high latency during interactions with the system, and some transgressions of UX laws (Fitts’s Law, Jakob’s Law, Miller’s Law).
Utility: The necessary tasks are feasible, but the problems lie in the time and effort to do them.
Learnability: It is sometimes difficult for a novice user to get to grips with the system, which differs from other virtual meeting software. The possibilities are more than enough for the user to learn, but the most basic uses are learned with some delay, because additional features add layers of complexity.
Memorability: Once the system has been learned, it is hard to forget how to navigate it or to repeat the same mistakes. The system has a uniform design language that allows for a better mental representation of the system and easy navigation. It also provides tips to make navigation easier, error messages are clear, and automatic suggestions help users recover from them.
Satisfaction: The tool is an alternative means for professionals and academics to maintain continuity of work through video conferencing, instant chat, and file exchange. But some learnability and safety details need to be fine-tuned to improve user satisfaction; for now, the user has to build a mental model of the system to improve its ease of use.
Conclusion
As a whole, MS Teams meets the essential needs of the user. Let’s remember that students and professionals had to be equipped with technological means to pursue their activities. The software developed by Microsoft answers the call by proposing functionalities that satisfy most active users. That said, the adoption of this software was largely forced by the pandemic, and the majority of novice users were apprehensive about this new system, which was unusual compared to other videoconferencing software. This apprehension is mainly due to the vast possibilities in terms of content and functionality, which push the novice user onto a necessary learning curve. We noticed in this analysis that the user can learn the basic functionalities of the software and thus form a mental model of the system. However, we also noticed errors, mainly due to high latency during interactions and transgressions of the UX laws, which nevertheless allowed Microsoft to make some modifications. Finally, the learning curve will always lie in the time and effort required to remember the system.
However, the software has advanced features that have not yet been explored here and that sit alongside the basic features we presented previously, for example direct access to other software for productivity or other purposes.
After having dissected this software in more detail, I would like to set up a survey at the institute of design and communication to get an overall impression of the software, carry out some user testing with experienced and novice users, and compare the results.
_Back in early 2020, Mark Cerny, the lead system architect for Sony’s PlayStation, held a talk about the soon-to-be-released PlayStation 5 console and its technological aspects and achievements. He explained how they ventured into a new audio technique, the 3D audio engine TEMPEST. It makes it possible for users to hear in-game sounds as if they were happening around them, using some clever tricks to outsmart our brain and the way it locates sounds. Put briefly, the brain measures the time between a sound arriving at one ear and arriving at the other, which is determined by the distance between our ears, something the brain knows inherently. So, by timing sounds with just the right delay between the left and right headphone speakers, the illusion of the sound happening in real life can be achieved.
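To make the timing trick concrete, here is a minimal sketch of how such an interaural time difference could be approximated and applied as a per-channel sample delay. This is my own illustration, not Sony’s TEMPEST implementation; the head width, sample rate and the simple geometric model are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air (approximate)
HEAD_WIDTH = 0.18        # assumed ear-to-ear distance in metres
SAMPLE_RATE = 48_000     # audio samples per second

def interaural_delay_samples(azimuth_deg):
    """Rough interaural time difference for a source at the given azimuth.

    0 degrees = straight ahead, 90 = fully to the right. Very simplified
    model: delay = (head width * sin(azimuth)) / speed of sound.
    """
    itd_seconds = HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
    return int(round(itd_seconds * SAMPLE_RATE))

def pan_mono_to_stereo(samples, azimuth_deg):
    """Return (left, right) channels with the far ear delayed by the ITD."""
    delay = interaural_delay_samples(azimuth_deg)
    pad = [0.0] * abs(delay)
    if delay >= 0:            # source on the right: delay the left ear
        return pad + samples, samples + pad
    return samples + pad, pad + samples

# A source 45 degrees to the right arrives ~0.37 ms earlier at the right ear,
# i.e. about 18 samples at 48 kHz.
print(interaural_delay_samples(45))
```

A real engine would also model level differences and the frequency filtering captured by the HRTF discussed below, but the timing offset alone already shifts the perceived direction of a sound.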
_To get this done, they scanned the hearing data of several hundred people, known as HRTFs (Head Related Transfer Functions), and handpicked some of the most common ones to put into the TEMPEST engine. He also explains that, given how complex and individual this perception is, and the fact that not every user can be scanned to obtain their personal HRTF, maybe not everyone will perceive the 3D audio correctly; for some it may only sound like slightly better stereo.
So, in the end one could say future technologies can open doors for some, but slam them shut for others. Although they will try to synthesize HRTF data in the future to make the experience available to a wider range of people, it may remain locked away for some forever.
_On a side note, MINECRAFT (MOJANG) also developed a highly sophisticated system for full 3D spatialization of in-game sounds as an optional feature, meaning you could determine the location of a noise emitter quite accurately by hearing alone. In the end, this system was turned down several notches (pun intended) to only determining whether an offscreen sound comes more from the right or the left side, indicated by an arrow appended to the subtitles pointing in one of these two directions. They scrapped the idea of fully spatial sound effects because they realized that in competitive play this feature could, if turned on, give players an unfair advantage over those who chose not to play with it.