We recently started a small game called DECAY in Unity, a horror-exploration game set in an abandoned bunker facility.
While we were hard at work building the game, we thought about implementing some fairly simple accessibility options in the future: making collectibles easier to see and find, adding an optional item counter to help you keep track of the collectibles you still need, and maybe some kind of colorblind mode (though we weren’t sure how to implement such a feature in a game like this in a way that makes sense). The easiest and nicest solution we came up with, besides a difficulty setting that would influence how much time passes before the game results in a game over in certain moments, was the idea of optionally removing enemy encounters from the game entirely, so players can explore to their heart’s content; after all, we poured a lot of love and detail into the level itself, with little micro-narratives and so on. Finally, since the music can get creepy at times (as it should in a game like this), we considered whether it should be possible to influence the game music, e.g., by adjusting the volume or turning it off completely. We are still on the fence about these topics, as we are trying to decide what the game needs to function at its core and what can be made more easily accessible.
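The options discussed above could be grouped into a single settings object that the game reads at runtime. The following is a minimal sketch of my own (field names and defaults are invented for illustration; the actual game would implement this in its engine’s language):

```python
# Sketch of the accessibility options discussed above as one settings object.
# All names and defaults are hypothetical, not taken from the actual game.
from dataclasses import dataclass

@dataclass
class AccessibilityOptions:
    highlight_collectibles: bool = False  # make collectibles easier to spot
    show_item_counter: bool = False       # optional counter for needed collectibles
    colorblind_mode: bool = False
    enemies_enabled: bool = True          # False = pure exploration mode
    game_over_timer_scale: float = 1.0    # >1.0 grants more time before a game over
    music_volume: float = 1.0             # 0.0 turns the music off entirely

# A player who wants to explore without enemies and with quieter music:
explorer = AccessibilityOptions(enemies_enabled=False, music_volume=0.3)
```

Keeping every option in one place like this makes it easy to expose them all on a single settings screen and to test combinations of them.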
When designing for medicine, the designer must be aware of various aspects of this field. They must analyze how processes run in a particular medical institution, and the first thing they must pay attention to is the medical aspects of the problem they are trying to solve. In a short time, the designer or design team must delve into the medical aspects of the problem at hand and draw on three sources of information:
Visual medical information
Written medical information
Scientific medical information.
Visual medical information must be collected during the procedure the designer is investigating; such information can indirectly stimulate motivation and inspiration for superior product design. Written medical information is found mainly in brochures through which the patient is informed about the treatment. The Internet is also an important source of information.
Scientific medical information provides state-of-the-art research, for example in the form of congress reports. The designer must understand that the researcher has unspoken wishes that need to be uncovered; these should be part of an open discussion with all interested parties. The designer or design team must extract all medical information to determine the requirements and wishes of all medical parties involved in the treatment, and only then arrive at the design of the product. Optimization should always be a topic of discussion in order to reach the best product design. In the fuzzy early phase of designing with medical information, communication between the designer and the specialist requires special attention; otherwise, opportunities for an optimized product design may be missed. Knowledge must be brought to a shared level of understanding, with an exchange of medical and technical information. Shared information must be known to all parties involved for the project to succeed. Projects with medical science as their starting point require a new approach to design development.
While writing the last blog entry, I came across an issue that may turn out to be relevant to answering my research question. This blog entry therefore serves as an “interim” post to discuss and contextualize my thoughts and, in the best case, gather scientific input.
The starting point was the following sentence from the article What You’re Getting Wrong About Inclusive Design, which I read for blog post #4: “Inclusivity. It’s one of the biggest buzzwords inside corporations right now. But the person who brought the practice of inclusive design to Microsoft, Kat Holmes, isn’t so sure that companies really get the idea yet.”
The statement above immediately triggered an inner discourse.
‘Trains of thought’:
“For socially relevant topics, specific terms develop in the media that come to stand for the ‘whole issue’. In principle, this is a good idea, since it is easier for people to grasp something complex when it is presented in simplified form. However, this observation seems extremely ambivalent to me. As much as umbrella terms emerge to take up important social issues, summarize them, and make them accessible, they also abstract away the substance behind the actual issue. This is furthered by the inflationary use of those terms. Polarization sets in very quickly: either everyone wants to adorn themselves with the term, or they are reluctant to be associated with it.
This can be seen very well with the term sustainability. Everyone knows very well that sustainability is important. Due to the inflationary use of the word, one reads about sustainability everywhere. Two groupings have developed from this: those who have integrated sustainability into their lives and are convinced of its importance, and those who exploit sustainability by using it as a marketing strategy (keyword: greenwashing within companies). It becomes intangible to the average consumer when everything is sustainable without it being communicated exactly how and why. As a result, people can no longer relate to the topic because it has become too general, and it may cause anger that ‘suddenly everything has to be sustainable’ and that ‘if I am not as sustainable as possible, I am worth less’, whereby fear of change also plays a role. A similar problem occurs with inclusive design with regard to this kind of issue-addressing in society and the media: everything has to be inclusive, because it is socially important to think inclusively / to be diverse.”
Trains Of Thought — Mood
These ‘trains of thought’ were the starting point for my further research on buzzwords.
Buzzwords
In fact, the problems of buzzwords are discussed in many areas, and in many cases there is an undeniable political and economic factor. My online research shows that the discussion around buzzwords mostly took place 5–10 years ago, and only few current reports or statements can be found. However, the content is still in keeping with the times.
In a dossier by Thomas Niehr, published in 2010 on the website of the German Federal Agency for Civic Education (bpb), it is demonstrated that buzzwords are an excellent linguistic instrument for implementing strategies, because they can be used to influence people’s thoughts and feelings.
Thus, the language strategy usually consists of strengthening one’s own position while devaluing that of the opponent. To get an approximate idea of the strategic impact of buzzwords, one needs to know who has used a given buzzword in public discourse and in what way. This is especially true for buzzwords that refer to controversial political ideas; in public debate in particular, buzzwords are used to propagate certain demands and programs.
On the one hand, when a buzzword hits the zeitgeist and the program it carries finds many supporters, the so-called “battle for words” sets in: different groups will try to pass the buzzword off as their own, to “occupy” it for themselves. On the other hand, buzzwords are evaluated very differently depending on one’s views. For some, the association may be very positive; for others, it is a synonym for the exact opposite. A distinction is therefore made between positive buzzwords and fighting or stigma words. The latter are used to discredit the ideas of the opponent, as is often the case in a political context.
Peter Josef Harr — Bedrohtes Menschsein.
What makes a word a buzzword?
It can be stated that no word is a buzzword per se. Words require certain environmental conditions in order to be used as buzzwords at all. The existence of a public sphere in which buzzwords can be used and received is essential. If a demand or a program becomes contentious in such a public sphere and is represented by a group, a buzzword can emerge. In retrospect, it becomes clear that such a buzzword has emerged in public discourse when it is suddenly used very frequently. This is true, for example, of “environmental protection” from the 1970s onward.
Some buzzwords acquire international significance and therefore also circulate in different language communities, sometimes with a time lag. It therefore makes sense to analyze structures elsewhere as well (note: shape user research internationally).
In my opinion, anything you implement not for its own sake but for the sake of an image cannot work.
This leads to my conclusion that the establishment of UX cannot be driven by a specific term or buzzword like “Establishing UX”, but rather by an unconscious process combined with political support; otherwise, change will not be possible.
The questions I’m asking myself now are: Do I need buzzwords to establish a new approach to UX? Do I even need to get rid of them to make it work? And if I use some buzzwords, how dangerous or essential could they become to the process of establishment?
Buzzwords that came to my mind during my research and that I have recently encountered in everyday life:
Big Data, Covid-19, Commitment, Diversity, Foreigners, Feminist, Innovation, Sustainability (derived from the buzzword “environmental protection”)
Sources:
https://www.bpb.de/politik/grundfragen/sprache-und-politik/42720/schlagwoerter?p=0
Thomas Niehr (1993): Schlagwörter im politisch-kulturellen Kontext. Zum öffentlichen Diskurs in der BRD von 1966 bis 1974. Wiesbaden.
Thomas Niehr (2007): “Schlagwort”. In: Ueding, Gert (ed.): Historisches Wörterbuch der Rhetorik, Bd. 8. Tübingen, Sp. 496-502.
https://www.creativejeffrey.com/creative/buzzwordproblem.php?topic=creativity
https://en.wikipedia.org/wiki/Buzzword
https://www.inc.com/jeff-haden/40-buzzwords-that-make-smart-people-sound-stupid-most-overused-corporate-jargon.html
Harr, Peter Josef: Bedrohtes Menschsein. Eine kritische Analyse unserer Gesellschaft unter dem Aspekt der Liebe. Lit Verlag, Berlin, 2009, S. 87.
Dov Greenbaum and Mark Gerstein: “Moving beyond buzzwords”, an article on two books reflecting on society’s semantic situation with “innovation”. Both books describe innovation as a term that has been reduced to a buzzword, and both are valuable in forcing us to appreciate what is truly valuable to society. https://www.science.org/doi/10.1126/science.abd9805
So there is a multitude of values to extract in order to capture a musician’s expression in performance. If the music is written down, some of it can be read from the sheet music. Some of it, however, is the individual expression of the musician, which is far more abstract in character and much more difficult to capture, because it cannot be predefined or calculated. So we have to quantify expression somehow directly from the performance. In his opinion article, Clemens Wöllner suggests quantifying artistic expression with averaging procedures.
A big part of expression is raising the attractiveness of the musical piece one is playing, to the point of making it one’s own in performance. Individuality is highly valued in a performer’s expression. Cognitive psychology studies teach us that averaged stimuli in visual and auditory modalities are perceived as more attractive, and averaging procedures typically produce very smooth results in pictures and sound. Yet listeners typically expect more from a concert or a recording than an even performance; as said, individuality is highly appreciated in music.
In classical genres, expression is often added through subtle timing perturbations and fluctuations in dynamic intensity: unexpected delays or changes in intensity that differ from the listener’s typical expectations can cause surprise and other emotional reactions, and thus contribute to the individual performer’s musical expression. In earlier decades of the 20th century, for instance, musicians typically employed large rubati, i.e., deviations in note length, mostly in the melody voice. This is not as common anymore; the changes in note length are far smaller today. Research along these lines has long studied expressive timing deviations from a non-expressive, metronomic version. These timing deviations constitute an individual expressive microstructure, since performers are not able to render a perfectly mechanical, metronomically exact performance. Quantifying those timing variations against such a so-called deadpan rendition as the average, however, cannot be a valid indicator of individuality.
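The timing-deviation measurement described above can be sketched in a few lines. This is my own minimal illustration, not code from Wöllner’s article; all onset times and durations are invented:

```python
# Quantifying expressive timing as per-note deviation from a metronomic
# "deadpan" rendition. Onsets are in seconds, score durations in beats.

def timing_deviations(onsets, score_beats, tempo_bpm):
    """Return per-note deviations (seconds) from a metronomic rendition."""
    beat_sec = 60.0 / tempo_bpm
    # Metronomic onset of each note: cumulative score time at the given tempo.
    deadpan, t = [], 0.0
    for dur in score_beats:
        deadpan.append(t)
        t += dur * beat_sec
    return [p - d for p, d in zip(onsets, deadpan)]

# Four quarter notes at 60 BPM; the performer delays the third note noticeably.
performed = [0.00, 1.02, 2.15, 3.05]
deviations = timing_deviations(performed, [1, 1, 1, 1], 60)
```

The resulting deviation profile (here roughly 0 s, 0.02 s, 0.15 s, 0.05 s) is exactly the kind of expressive microstructure the research described above analyzes.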
So musical performances can be averaged along the main quantifiable dimensions of duration, dynamic intensity, and pitch. As for the average performance, seminal studies by Repp (1997) suggested that attractiveness is raised by not deviating from the average, expected performance, but a performance is also considered dull if it shows no individuality by straying from the average.
Averaged deviations from the notated pitch in equal temperament could also be analyzed. The sharpening or flattening of tones may reveal certain expressive intentions of individual performers. Musicians are also able to shape the timbre of certain instruments to some extent, which adds to their expression.
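The averaging procedure itself, and the idea of scoring a performance’s individuality as its distance from the average, can be sketched as follows (my own hypothetical example, not from Repp’s or Wöllner’s work; the per-note durations are invented):

```python
# Averaging several performances along one quantifiable dimension (here:
# per-note duration) and scoring a performance's "individuality" as its
# mean absolute distance from the averaged performance.

def average_performance(performances):
    """Element-wise mean over a list of per-note value lists."""
    n = len(performances)
    return [sum(vals) / n for vals in zip(*performances)]

def individuality(performance, average):
    """Mean absolute deviation from the averaged performance."""
    return sum(abs(p - a) for p, a in zip(performance, average)) / len(average)

# Per-note durations (seconds) of three hypothetical renditions.
perfs = [[0.50, 0.48, 0.55],
         [0.52, 0.50, 0.49],
         [0.51, 0.49, 0.52]]
avg = average_performance(perfs)
scores = [individuality(p, avg) for p in perfs]
```

In this toy example the third rendition coincides with the average, so its individuality score is zero: by this measure it would be the least individual, and, following Repp, possibly the dullest.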
Which hardware microcontrollers and DSP chips are readily available to power the interface module? That is a central question when starting to work on ways to implement MIR algorithms in a module. The second question is which programming languages are compatible with these chips and how one can implement them.
These questions are examined in a paper presented at the International Conference on New Interfaces for Musical Expression (NIME), “A streamlined workflow from Max/gen~ to modular hardware” by Graham Wakefield (2021), which focuses on the oopsy workflow that streamlines getting digital sound processing algorithms to work in the modular synthesizer environment.
As microcontrollers such as the Arduino and Teensy get more powerful by the day, they become more and more useful for musicians and luthiers to use in music and musical instruments. The appeal of making electronic music live, without a laptop running a DAW, is a strong motivation for musicians to get into coding and learn to develop equipment that provides the few tools a DAW offers them for live performances.
For DSP chips to run code programmed in a visual language like Pure Data or Max/MSP, the patch usually has to be compiled into C++. Within Max, for instance, there is the [gen~] object, which is capable of doing so. To implement such a patch well on the hardware, ‘oopsy’ was developed; it streamlines the workflow of getting an algorithm onto hardware, with targeted firmware generation that is optimized for CPU usage, low memory footprint, and program size, requiring minimal input.
Electrosmith Daisy:
Processor: ARM Cortex-M7 STM32H750 MCU with 64MB of SDRAM and 8MB of flash memory
IO: stereo audio IO, 31 configurable GPIO pins, 12x 16-bit ADCs, 2x 12-bit DACs, SD card interface, PWM outputs, micro USB port (power and data)
Daisy Seed size: 51×18 mm
It is a common microcontroller in modular synth gear today. With its maximum clock of 480 MHz, the MCU is quite capable, and the AK4556 codec has AC-coupled converters that internally run at 32-bit floating point. Daisy firmware can be developed using Arduino, FAUST, and Pure Data via Heavy, as well as Max/gen~ using the Oopsy software. Internal latency can go down to 10 microseconds.
Bela Beaglebone:
Bela is an open-source platform for live audio based on the BeagleBone single-board computer design. It is compatible with SuperCollider, Pure Data, and C++. It is optimized for ultra-low latency: at 0.5 ms it outperforms desktop, cellphone, Arduino, and Raspberry Pi solutions.
IO Eurorack module: 2 audio inputs, 2 audio outputs, 5 CV inputs, 1 gate/trigger in, 1 gate/trigger out, 1 USB Type B connector
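To put these latency figures in perspective, the minimum buffering latency of an audio system can be estimated as block size divided by sample rate. This is my own back-of-the-envelope sketch; the block sizes are assumptions for illustration, not measured values from the paper:

```python
# Minimum buffering latency = frames per block / sample rate.
# Real round-trip latency also includes converter and driver overhead.

def buffer_latency_ms(block_size_frames, sample_rate_hz):
    return 1000.0 * block_size_frames / sample_rate_hz

# Assumed for illustration: a very small block on an embedded platform
# versus a typical laptop DAW buffer, both at 44.1 kHz.
embedded = buffer_latency_ms(2, 44100)
laptop = buffer_latency_ms(256, 44100)
```

With these assumed numbers, the embedded block contributes well under 0.1 ms while the laptop buffer alone already adds close to 6 ms, which is why dedicated hardware like Bela or the Daisy can reach sub-millisecond figures that a general-purpose computer cannot.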
References
Graham Wakefield. 2021. A streamlined workflow from Max/gen~ to modular hardware. Proceedings of the International Conference on New Interfaces for Musical Expression. http://doi.org/10.21428/92fbeb44.e32fde90.
This week I found another “good” example of a deceptive design pattern* to analyze.
Within the checkout process on Lieferando.at, they provide a short summary of the order and give feedback on filling out all the data relevant to placing an order. It seems as if they list ALL costs and sum them up, but on closer look the total is bigger than the sum of the listed products. The user has to click the button “Weitere anzeigen” (“Show more”) to see that additional delivery costs are added. As there would be enough space within the viewport height to make the delivery fee visible from the start, it is clear that they want to hide it on purpose. Apart from the additional costs, the expanded view also offers the options to edit the order or add notes for specific dishes. Consequently, it would improve the usability of the site to also change the wording from “Weitere anzeigen” to “Bestellung bearbeiten” (“Edit order”). On the right-hand side I added a quick-fix design proposal that removes this deceptive design pattern* and enhances usability.
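The hidden-fee pattern described above boils down to a simple consistency check: a checkout summary is transparent only if the visibly listed line items add up to the displayed total. A small hypothetical sketch (item names and prices are invented, not taken from Lieferando):

```python
# Hypothetical sketch of the pattern: the gap between the displayed total
# and the sum of the visible line items is the cost hidden from the user.

def hidden_amount(visible_items, displayed_total):
    """Return the cost not accounted for by the visible line items."""
    return round(displayed_total - sum(price for _, price in visible_items), 2)

visible = [("Pizza Margherita", 9.50), ("Cola 0.5l", 2.90)]
total = 15.39  # includes a delivery fee only revealed after "Weitere anzeigen"

gap = hidden_amount(visible, total)  # the concealed delivery fee
```

A transparent summary would make this function return zero; any positive gap is exactly the amount the interface conceals behind the extra click.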
An Attempt to Survey the Diversity of Typefaces
In the last three posts I took a close look at the historical development of typography, primarily in the German-speaking world. One thing became clear: the style of a typeface can hardly be considered in isolation from the context in which it emerged. Social and technological developments influence graphic designers and typographers, and thus the creation and use of type. As long as letters were cut from wood, truly exact forms were impossible. With the invention of lead type, typesetters could refine the hairlines and work out the serifs more clearly. Besides technology, it was and is also the zeitgeist that has shaped the appearance of typefaces over the centuries and still does today. Didot typefaces, for example, are characterized by strongly emphasized main strokes and extremely fine hairlines. They are elegant and reflect the strict, intellectual, yet refined style of Classicism with its Greek and Roman models. In contrast, the Egyptienne types (slab serifs) became popular in the 19th century with the Industrial Revolution; their strong, angular serifs and robust forms testify to the power and functionality of machines (cf. Gautier 2009:50).
Why a Typeface Classification?
When you start to engage more closely with type and typography, the sheer scope of the subject seems almost overwhelming. Just finding your way through the endless number of available typefaces feels like a mammoth task. In view of this, and presumably to make the nature of typography more tangible, two typographers created classifications that group typefaces into archetypes: the Thibaudeau classification and the Vox-ATypI classification. The latter was adopted by the Association Typographique Internationale (ATypI). Today’s typeface classification of the German Institute for Standardization, DIN 16518, is also based on this classification. Subsequently, other typographers repeatedly engaged with the classification of typefaces, among them Hans Peter Willberg, who proposed a further development of the DIN standard.
From Thibaudeau to Vox
In 1921, Francis Thibaudeau proposed a typeface classification based on serifs, comprising four classes:
Class 1: Elzèvirs
Class 2: Didots
Class 3: Égyptiennes
Class 4: Sans-serif typefaces
However, this classification did not seem to capture the diversity of typefaces, which is why Maximilien Vox proposed a division into eleven classes in 1952. His classification is based on criteria that were mostly typical of a particular era: the nature of the main strokes and hairlines, the inclination of the letter axis, and the serif form.
Many typefaces show characteristics of two or more of the classes presented below. For this reason, many graphic designers consider typeface classification controversial or obsolete. For me, the advantage of a classification lies above all in the possibility of gaining an overview of the complex diversity of typefaces. Even though I have found in my own daily work with typography that a clear assignment is often difficult, I consider it valuable to know about the individual classes and their historical origins. For this reason, I would now like to present the classification of DIN 16518 as well as H.P. Willberg’s approach in more detail.
Typeface Classification According to DIN 16518
The DIN standard defines eleven typeface classes.
1 Venetian Renaissance-Antiqua (from 1450 onward)
This chronologically first Antiqua class is characterized by the following features:
strong serifs
letter axis (stress) tilted to the left
relatively large ascenders and descenders
slanted crossbar of the e
2 French Renaissance-Antiqua (16th century)
Characteristics:
letter axis likewise tilted to the left
rounded serifs, i.e., a stronger rounding from main stroke to serif
crossbar of the e partly horizontal
ascenders of the lowercase letters usually slightly taller than the capitals
3 Baroque Antiqua (Baroque period / from the end of the 16th century)
The Baroque Antiqua is also called transitional or pre-classicist Antiqua, as it forms a link between the Renaissance Antiqua typefaces and the highly constructed classicist Antiqua forms.
Characteristics:
the contrast between main strokes and hairlines increases, as the invention of copperplate engraving made even finer, more precise forms possible
letter axis nearly, and in part completely, vertical
finer and flatter serifs; the rounding from serifs to main strokes decreases
horizontal crossbar of the e
Despite its name, the Baroque Antiqua does not have an opulent, baroque appearance; rather, it calms the overall look of the type.
4 Classicist Antiqua (“Didots”) (around 1800)
Characteristics:
particularly pronounced contrast between main strokes and hairlines: strongly emphasized main strokes and extremely fine hairlines
vertical letter axis
hardly any rounding between serifs and main strokes
planned, carefully constructed typefaces; the letterforms reveal models from Greek and Roman architecture
5 Slab-Serif Linear-Antiqua (Slab Serif / Egyptienne) (from the beginning of the 19th century)
Characteristics:
strong, conspicuous emphasis on the serifs
robust letterforms: main strokes and hairlines have nearly the same weight
no rounding between serifs and main strokes
With their striking, strong forms, slab-serif typefaces reflect the beginning of the industrial age and the power of machines.
6 Sans-Serif Linear-Antiqua (Grotesque / Sans Serif) (from the beginning of the 19th century)
Characteristics:
no serifs
often uniform stroke weight, i.e., little to no contrast between main strokes and hairlines
horizontal and vertical straight lines
The grotesque was originally developed as a striking, “poster-like” typeface for jobbing and advertising purposes. Today this class comprises a great many different typefaces, which in turn would require a sub-classification. Some grotesques are based on the classicist Antiqua (e.g., Akzidenz, Univers), others on the Renaissance Antiqua (e.g., Lucida Sans, Syntax). In parallel, the American grotesque (e.g., Franklin Gothic) emerged in the USA. From the 20th century onward, the constructed grotesques appeared, which have very geometric forms (e.g., Futura).
7 Antiqua Variants
This class includes all Antiqua typefaces that cannot be assigned to the stroke styles of the other classes. They are often characterized by features of several classes, or have particular stroke styles or peculiarities that break the rules of the previous categories.
8 Scripts
Characteristics:
letters connected to one another
in principle, all typefaces that imitate the appearance of modern handwriting
Scripts already existed in the days of lead type, but they became popular above all through the use of fonts on the computer, to give digital print products a handwritten touch.
9 Handwritten Antiqua
This class comprises all typefaces that show handwritten traits but do not produce a connected script; that is, the letters are not joined to one another as they are in scripts.
10 Blackletter (Broken Scripts)
Blackletter typefaces are characterized by fully or partially broken curves in the letters, imitating an abrupt change of direction in handwritten strokes. Another peculiarity is the long s, which was used primarily in the German language. Blackletter was mainly widespread in the German-speaking world. In the middle of the 12th century, the Gothic style developed in Europe, which showed in architecture as the transition from round Romanesque arches to broken Gothic pointed arches. This break was then imitated in the minuscule book hand as well, turning the round Carolingian minuscule into the broken Gothic minuscule.
The DIN standard divides blackletter typefaces into five subcategories:
Gothic (Textura): originally a book hand for manuscripts, later a typesetting face
Rotunda (round Gothic): likewise first a book hand for manuscripts, later a typesetting face
Schwabacher: typesetting face
Fraktur: typesetting face
Fraktur variants: typesetting face(s)
With the Normalschrifterlass (normal type decree) of 1941, blackletter typefaces were banned from curricula and official use.
11 Non-Latin (Foreign) Scripts
According to the German DIN standard, foreign scripts are those that do not use the Latin alphabet. Examples: Chinese, Korean, Cyrillic, Arabic, Greek, Hebrew.
To these eleven classes, Damien and Claire Gautier add the classes of fantasy typefaces and versatile typefaces. Among the fantasy typefaces, the Gautiers count all typefaces with very different, sometimes extreme characteristics. Fantasy typefaces often arise from technical experiments and are intended primarily for use in headlines or poster-like texts. They call “versatile” those typefaces that were drawn on the basis of the same underlying form but can be assigned to different classes: typefaces that exist as both serif and sans-serif variants, but also sans-serif typefaces whose stroke weights appear sometimes uniform and sometimes with contrast (cf. Gautier 2009:51). Since their letterforms can usually be assigned to one of the classes already mentioned, monospaced typefaces do not constitute a class of their own. For the sake of completeness I would nevertheless like to mention them at this point, as they form a special variant of all the typefaces mentioned above: in monospaced typefaces, as the name suggests, all characters have exactly the same set width. Whether H or i, W or comma, the width of the character is always the same. For this reason, monospaced typefaces look as if they had been typed on a typewriter.
Criticism of the DIN 16518 Typeface Classification
As already mentioned, many typefaces today can no longer be clearly assigned within this historically based classification. Daniel Perraudin, typographer, graphic designer, and lecturer, states in his typography lecture at FH Joanneum that roughly ninety percent of newly developed typefaces would have to be assigned to the grotesque class, even though their appearance differs considerably.
Typeface Classification According to Hans Peter Willberg
H.P. Willberg was a German typographer, graphic designer, illustrator, and university lecturer. His approach to design still shapes graphic design education today, especially typography. To him, too, the Vox classification and the resulting DIN standard seemed inadequate, which is why he attempted to extend it by one dimension. He supplements the letterform, which forms the basis of the DIN classification, with the dimension of style. According to Willberg, this style can be dynamic, static, geometric, decorative, or provocative.
H.P. Willberg’s typeface classification supplements letterform with the dimension of style. Image (c) Daniel Perraudin
Criticism of Willberg’s Typeface Classification
However, Willberg’s classification can also be criticized: whether a typeface is “decorative” or even “provocative” lies very much in the eye of the beholder, which makes an objective assignment practically impossible. Moreover, especially with scripts it seems particularly difficult to identify a clear style.
Summa summarum
After examining the classifications described above, I would like to note once more: even if not all typefaces, and especially not newly developed ones, can be fully assigned to a single class, the classifications presented offer a way to find one’s bearings in the diversity of typefaces and to examine a given typeface for its essential character. As graphic designers, we are frequently required to select one or more typefaces for a project. Knowing the background of a typeface and the context in which its archetype emerged, as well as being aware of which groups there are to choose from in the first place, gives (at least me) a feeling of control in this decision-making process. Although typography cannot do without gut feeling, an aesthetic eye, and certainly not without experience, reason must also play a role. Being able to argue for the choice of a typeface objectively is essential. For this objectivity, history and style matter alongside legibility: type designer Tré Seals is convinced that typefaces tell stories, many of them political. In 2015, the American designer with African-American roots found that only about three percent of American designers were Black, while 85 percent were white. Until not so long ago, this majority was also predominantly male, according to Seals. For him, this was the reason for the uniformity of websites: everything looked (and still looks) typographically the same. On his website he writes: “If you’re a woman or if you’re of African, Asian, or Latin dissent, and you see an advertisement that you feel does not accurately represent your race, ethnicity, and/or gender, this is why.” So Seals founded his type foundry Vocal Type, which now has eight typefaces in its catalog.
All of them are linked to the history of minorities, for example to the civil rights movement in the USA, to the women’s suffrage movement, or to the Stonewall riots, the birth of Gay Pride (cf. Dohmann 2021:68). Seals’s work is an example of how typefaces can be not only elegant or witty, minimalist or opulent, but also socially critical and (socio-)political. As graphic designers who want to show attitude and responsibility through their work, we should always be aware of this.
Literature
Dohmann, Antje. “Types that matter”, in Günder, Gabriele (ed.), Page 03.21.
Gautier, Damien and Claire. Gestaltung, Typografie etc. Ein Handbuch. Salenstein: Niggli, 2009.
The last entry was about Universal Design, a design approach / paradigm / strategy whose overall goal is to create environments that can be used by everyone to the greatest extent possible.
This entry will be about Inclusive Design, which in the literature is often used synonymously with either universal design or accessibility. Therefore, one of the goals for this entry is to deeply understand the meaning of inclusive design and to illustrate the difference (if there actually is one) from universal design and other design methods.
I started by googling the term inclusive design and came across a wide variety of definitions. Sometimes it is equated with universal design, sometimes it is described as not quite the same but very similar, and sometimes it is explained in contrast to accessibility.
In the course of my research I came across Kat Holmes, who worked for Microsoft and established inclusive design within the company's structure, especially in its design process.
In her article "What You're Getting Wrong About Inclusive Design" she makes a major point about what inclusive design really is and is not: it is a design process, not a design result.
Thus inclusive design is not just about aiming for an outcome that is accessible, usable, and experienceable for everyone. It is a methodology for approaching design so that the result can be used by a diverse group of people. In the digital realm, the inclusive design process starts by identifying situations in which people are excluded from using particular technologies. Recognizing that exclusion can happen to anyone, depending on the circumstances, is a key element of the inclusive design methodology.
inclusive = not excluding someone or something
Inclusive design and accessible design both focus on the idea that disabilities happen at the intersection where people and their environments interact. Inclusive design, in particular, recognizes that solutions that work for people with a disability are likely to also work well for people in diverse circumstances.
The terms inclusive design, accessible design, and universal design are often mixed up or confused because they share many similarities. They are connected by the common goal of creating digital products that can be used by the widest possible group of people, regardless of their current circumstances. To make their interrelation clearer, I would place Universal Design at the top of the hierarchy as a paradigm above inclusive and accessible design, since it represents the intersection of their commonalities, and describe Inclusive Design as the process-oriented strategy and Accessibility as a benchmark for a design outcome.
In conclusion: when designers pay attention to the people who actually use the products they develop, and to how those people adapt when something doesn't work well for them, they can apply inclusive design principles to create user-friendly products that work for most people and meet accessibility guidelines in the process.
Sidenote:
In the course of researching inclusive design, I came across the term "discursive design", which refers to design that encourages discourse. This term adds an interesting facet to the field of UX and has therefore been added to the Glossary, where you are welcome to read about it.
People are realizing more and more that design in medicine is very important. Previously, companies producing medical devices focused strongly on the engineering aspect, but in recent years significant progress has been made on the design side. The aim of cooperating with designers is to increase the satisfaction of patients and medical staff, i.e. the direct users. Design is directly oriented towards improving the relationship between users and devices: changing the appearance of the latter reduces fear and stress for the patient. The positive psychological effect of well-designed medical equipment also improves the relationship between patient and doctor. Designers are able to create useful interfaces whose functions can be guessed just by looking at them.

The Philips company began a research program called Ambient Experience Design, which combined medical engineering with design to increase patient satisfaction. As part of the program, computed tomography rooms were fitted with equipment featuring pleasant, soft shapes. In addition, smoothly changing lighting and pictures displayed on the walls during the examination absorb children's attention and relax adults. It turned out that patients are less afraid of the examinations and calmer, and the examinations themselves have to be repeated less often, because the number of errors caused by unwanted patient movement has decreased.

The DLR Research Institute has developed a robot design intended to assist in surgical operations. The robot has a part similar in shape to a human arm, allowing it to assist the surgeon efficiently without taking up too much space. In addition to the robotic arms, the MIRO project also includes compatible designs for the operating table, the surgeon's console, and the support system carrying the robot arms.
The casings of all devices are easy to clean and carefully hide all mechanical and electronic elements. The project was awarded the iF Product Design Award in the "advanced studies" category. Nowadays, we can see some standout design trends in medicine. The first is the ability of the patient and the device to communicate with each other. Another important trend concerns a change in the language of shapes: forms with large smooth surfaces, broken up by sharper lines, are beginning to dominate. The dominant color is white, complemented by small color accents which, thanks to their neutrality, look clean and modern. The important part is that medical devices actually be clean and not just look clean, hence the growing popularity of smooth, shiny, easy-to-clean materials on which, in combination with white, even small amounts of dirt are visible. Medical design has great potential due to the enormous variety of areas it covers, and right now it badly needs specialists who can develop and support progress in this field.
It is far easier to develop for Android, because Apple is much more strict: it is hard to pass the App Store review. In our case it is even more difficult, as the app needs access to the system software to freeze and block features, which is a big "no way" at first for both Apple and Android. Still, it is easier with Android.
Let’s say we start with Android. How do we develop an app?
Starting with Android, we would first define the general features and then build two prototypes: one for the app itself and one for a faked end result on the frozen smartphone, to see how seniors cope with it and to gather further suggestions, problems, and so on. Once the changes to both prototypes are finished, eye-tracking tests with seniors on the second prototype will be necessary. When every problem (at least the initial ones) is eliminated, we can proceed to finding developers.
The question is whether the free app will contain ads or whether it will be a €1.99 app. The second option is probably better, because this is the kind of app you open once and never again, since it works in the background. It even has to keep working after the device is turned off and on again.
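On Android, surviving a reboot typically means listening for the system's BOOT_COMPLETED broadcast so the background component can restart itself. A minimal sketch of the manifest entries this would require, assuming a hypothetical receiver class called BootReceiver that we would still have to implement:

```xml
<!-- AndroidManifest.xml (fragment): permission plus a receiver that -->
<!-- lets the app restart its background work after the phone boots. -->
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />

<application>
    <!-- BootReceiver is a placeholder BroadcastReceiver of our own;
         on boot it would re-schedule the app's background job. -->
    <receiver android:name=".BootReceiver" android:exported="true">
        <intent-filter>
            <action android:name="android.intent.action.BOOT_COMPLETED" />
        </intent-filter>
    </receiver>
</application>
```

Alternatively, scheduling the background work through Android's WorkManager would persist it across reboots without a hand-written receiver, which might be the simpler route for an app like this.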