A whole semester has passed since my last blog article (”About throwing everything aside and starting over”), so there’s a lot to catch up on. Let’s not waste time and dive right in: as mentioned in my last blog entry, I changed my master’s thesis topic from “Augmented Reality in Education” to “UX Case Study: Designing a mobile application to support self-management and therapy of patients with gestational diabetes mellitus (GDM)” (working title for now). For further details on the topic, please read my previous blog entry.
During the past couple of months I did a lot of research on my topic in order to write my exposé, which I will soon hand in. The course “Proseminar Master Thesis” helped a lot during this process, as we had the opportunity to write a first version of our exposé, have it peer-reviewed by fellow students, improve it and finally have a one-on-one feedback session with our professor.
My exposé still needs a few adjustments here and there, but it’s already at an advanced state and I’m confident that it will be approved by my supervisor Anika Kronberger, so I can start with the “actual work”. It’s worth mentioning that I will be writing my thesis from abroad – from Lisbon, to be specific – which will probably bring some challenges as well. But working and communicating remotely has worked out well during the last two years of the pandemic, so I think it will also work out for writing a master’s thesis 🙂
I also had two very insightful meetings with professionals in the field of interaction design – Orhan Kipcak and Martin Kaltenbrunner. I talked with each of them for half an hour about my topic and received valuable feedback. With Mr. Kipcak I talked a lot about the environment in which my thesis will be conducted. For example, he recommended researching ongoing projects and studies in my field in order to get access to valuable data or even collaborate with organizations and people, and he pointed me to several platforms and organizations where I could start. Furthermore, he underlined the importance of actively involving my supervisor Mrs. Kronberger, since she has good connections to other study programs like Midwifery and E-Health as well as to organizations outside of the FH. The talk with Mr. Kaltenbrunner was more about the topic itself and the hurdles that could occur in my plan. The most important point he made was that a proper competitor and market analysis is essential, covering a) existing diabetes apps and b) pregnancy-related apps in general. The first step should be to find out whether a new app even makes sense, or whether it would be better to enhance or adjust an existing app so it fits the needs of GDM patients without reinventing the wheel. I was (and am) aware that this could become a problem, and it helped a lot to get an expert’s opinion and tips on how to handle it. Maybe I will have to adjust my plan along the way, but I believe that this is only natural and common.
All in all I now have a clearer picture of my scope, possible hurdles and next steps, and I’m looking forward to starting to write things down.
What are my next steps?
Finish my exposé
Fill out the official form of the exposé for the FH and hand it in
As part of the course “Proseminar Master’s Thesis”, we were asked to analyze and evaluate a master’s thesis that a) matches our own topic and b) was written at another university. Since my own master’s thesis will be situated in the field of user interface & user experience design of apps in the health sector (specifically: gestational diabetes mellitus), I aligned my search accordingly. After my search in the OBV didn’t yield satisfying results, I eventually found what I was looking for via Google.
Reviewed master’s thesis:
Idsø, Nanfrid: Mobile Application to Improve Self-Management in Type 1 Diabetes. Unpublished master’s thesis. University of Bergen. Bergen 2021
Layout and design
In my opinion the level of design is okay, but a bit uninspired – if you don’t have high demands on layout and design, the simple and very standard design of the thesis is perfectly satisfactory.
A well-legible serif typeface was used for headings and body text, but the line spacing is relatively tight, which in turn impairs legibility. The font sizes are well chosen in my eyes, and thanks to a fair amount of white space the page layout doesn’t feel overloaded. No colors other than black and white were used.
All in all, absolutely solid. As a student from the creative and design field, however, I would say: there’s still room for improvement.
Degree of innovation
The thesis’s degree of innovation can be assessed both from a thematic and from an implementation perspective.
Thematically, I can only give a layperson’s estimate: since smartphones and digital applications are increasingly used in the health sector, I believe the thesis is fundamentally up to date. For a concrete assessment, I (still) lack background knowledge on the scientific state of mobile applications for supporting diabetes self-management. However, there are already dozens of applications for type 1 diabetes patients on the market, which is why the thematic degree of innovation seems moderate to me (6/10).
From an implementation perspective, my rating is also in the middle range. The author relied on common methods and tools such as the user-centered design process, personas and a cognitive walkthrough. This isn’t meant negatively – established methods like these have proven themselves for good reason, and I myself plan to fall back on similar tools.
Independence
In an extensive theory section, the author covers both medical and project-relevant fundamentals. From this I conclude that they engaged intensively with the subject and built a good basis for an independent and well-founded implementation of the practical project. This practical project – a prototype of an app – was developed and evaluated in collaboration with a fellow student who, as I found out, was not a (co-)author of the thesis. The acknowledgements mention that this very colleague was a great motivator and helping hand for the actual author. I therefore assume a high level of independence overall, but wonder whether the work could have been accomplished without outside help.
Outline and structure
I find the outline of the thesis very well and logically done. All the “standard chapters” (e.g. abstract, list of figures, list of tables, bibliography, etc.) are present, and a clear thread runs through the whole thesis. The introduction is followed by theory, literature analysis and methodology, then the practical implementation of the project and the concluding discussion. All chapters are numbered and there are three levels of hierarchy – not too many and not too few, in my opinion.
Communication
The thesis was written at a Norwegian university, in English. The text reads fluently and is very easy to understand at a B2 (or higher) language level. Informal expressions like “I” or “my” occur extremely rarely. The passive voice is used instead, which corresponds much better to the formal, scientific style of a master’s thesis.
Length of the thesis
In total, the thesis comprises 109 pages, 76 of which form the core. The individual chapters are reasonably balanced in terms of page count. In my opinion, the length is therefore within the normal range and well chosen for a standard layout with an estimated 12 pt font size.
Orthography, diligence and accuracy
In some places the text makes it apparent that the author is probably not a native speaker, but skimming the thesis revealed no gross errors in spelling or grammar.
With regard to accuracy and diligence, a well-thought-out and careful way of working is evident. There are small shortcomings here and there, for example when a competitor analysis consists of just six bullet points and a few keywords. This makes it hard for the reader to follow the author’s conclusions. In addition, some terms, methods and models are only briefly explained where a bit more depth would often have been needed. Still, as a reader you receive relevant content in an appropriate dose. This may partly be due to the English language, which often needs fewer words than German.
Literature
The author cites 41 sources in the thesis. That seems a bit low to me for a thesis with a 76-page core. If you consider, however, that a large part of it is devoted to the self-developed project, it is understandable. It is noticeable that many internet sources were used, which can be considered questionable. As for the citation style, the bibliography and the in-text references suggest IEEE.
Although the topic “Augmented Reality in Education” is super interesting and definitely has potential for a master’s thesis, I realized that I don’t want to pursue it further. I originally chose it because I had little prior knowledge about AR and wanted to “plunge into uncharted waters”. However, I soon realized that it didn’t really fit my strengths and interests.
Therefore, I used the past semester to find a new topic for my master’s thesis. I started off with writing a list of requirements. My Master’s thesis should…
… have societal relevance and added value for people/the environment
… focus on visual design and user experience, since that’s where my strengths lie
… be realizable from abroad, since I’m planning to go on Erasmus
With my list in mind, I started brainstorming. I read articles and abstracts of existing master’s theses in the field of UI/UX design, and I browsed through design platforms like Behance and collected examples, ideas and inspiration. So I made a looong list in the notes app on my phone with raw ideas that came to my mind during research. In the end that list ranged from female leaders in the interaction design field to accessibility issues to family banking to blood donation to pet adoption… and more. As a next step I started to narrow that list down and came to the conclusion that I wanted to work on a UX case study for some mobile app or web application following a design process (e.g. Human Centered Design Process, Design Thinking). I felt that I was finally getting somewhere, but the most important part was still missing: the concrete topic. A mobile app for WHAT? There is already an app for everything, I thought – what could I possibly create that would have an impact? That was when I realized that talking to other people might help. So I asked my sister, who is a doctor, if there was anything in her daily life at the hospital that could be improved by digitalization. And actually there was a lot ;). Ranging from analog patient files to rehab programs for stroke patients, she had some ideas where I could see potential. But it had to be something within the scope of a master’s thesis (digitizing the complete patient management system of a whole association of Austrian hospitals, for example, was not). In the end there was one idea left that would perfectly fit my plan as well as my skills: a mobile app to help pregnant women with gestational diabetes (GDM) keep track of their blood sugar, diet, exercise and therapy.
What is GDM?
Gestational diabetes mellitus (GDM) is one of the most common complications of pregnancy, affecting up to 20% of pregnant women, and can lead to many unfavorable outcomes for both mother and newborn. Hence, screening pregnant women for GDM and treating it adequately is essential for the short- and long-term outcomes of mother and child. Being diagnosed with GDM involves major effort, including exercise, nutritional therapy, blood glucose monitoring and documentation four times per day, medical appointments every one to three weeks and, in many cases, insulin injections. Thus, patients tend to struggle with compliance. Doctor appointments in particular can be time-consuming, as patients usually have to document their measurement data in an analog diary. These data are then manually reviewed by the doctor and compared with the data stored directly on the blood glucose meter to check the patient’s reliability (Alfadhli, 2015).
The road ahead
Based on this medical procedure, the aim of my thesis is to find out how a mobile app could support the process of monitoring and analyzing blood glucose data, and which advantages it could have for both the patient and the doctor. There are already several diabetes-monitoring apps on the market, but none of them appear to be tailored to GDM patients. Therefore, this project offers the potential to specifically address the requirements and needs of GDM patients and provide them with a digital monitoring solution as an alternative to an analog diary. The concrete idea is to design and evaluate a high-fidelity prototype of a mobile app using the design thinking process, an iterative process with five phases. Potential features of the app are:
automatic data transfer from the glucose meter to the app as well as the possibility to enter relevant data manually
automatic generation of comprehensive statistics with the ability to detect limit violations
reminders and notifications (e.g. blood glucose measurement, insulin injection, exercise)
suggestions on diet and exercise based on previously entered data
well-founded information about GDM (e.g. videos, articles, FAQs)
possibility to download a report for the doctor.
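To make the “statistics with limit-violation detection” feature a bit more concrete, here is a minimal Python sketch of how such a check could work. All names and thresholds here are hypothetical illustrations, not medical guidance – in a real app the target values would come from the patient’s doctor:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GlucoseReading:
    taken_at: datetime
    mg_dl: float    # blood glucose value in mg/dL
    fasting: bool   # fasting vs. post-meal measurement

# Hypothetical target limits (placeholders only):
# fasting readings should stay below 95 mg/dL, post-meal below 140 mg/dL.
LIMITS = {"fasting": 95.0, "post_meal": 140.0}

def limit_violations(readings):
    """Return all readings that exceed their respective limit."""
    violations = []
    for r in readings:
        limit = LIMITS["fasting"] if r.fasting else LIMITS["post_meal"]
        if r.mg_dl > limit:
            violations.append(r)
    return violations
```

A statistics screen could then simply highlight the entries returned by `limit_violations` and, for the downloadable doctor’s report, count them per week.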
Conclusion
After spending so much time researching, brainstorming and talking to people, I think I finally found a topic that I “burn for” (as we say in German). I think the app could really help people affected by GDM and isn’t just another useless app on the market. As the Erasmus application required an abstract of the thesis topic, I have already written a preliminary research proposal, and I am happy to have DI (FH) Anika Kronberger, MA as my supervisor.
________________
Sources:
Alfadhli E. M. (2015). Gestational diabetes mellitus. Saudi medical journal, 36(4), 399–406. https://doi.org/10.15537/smj.2015.4.10307
I had a look at the paper “Creativity in Children’s Music Composition” by Corey Ford, Nick Bryan-Kinns and Chris Nash, which was published at NIME (https://nime.pubpub.org/pub/ker5w948/release/1) in 2021. The authors conducted a study examining which interactions with Codetta – a LOGO-inspired, block-based music platform – best support children’s creativity in music composition. In this context, “LOGO” refers to Papert’s LOGO philosophy of supporting children’s learning through play. Such experiential learning approaches are based on intrinsic motivation and tinkering. The authors stated that there was a lack of empirical research investigating whether a LOGO-inspired approach is conducive to creativity, which is why their aim was to get a better understanding of how children make music with such technologies.
About the study
The concrete research question of the study was “Which interactions with Codetta best support children’s ability to be creative in composition?”, with the aim of uncovering patterns of creativity within the participants’ first use of the tool. To get a better understanding of how Codetta works, I found this explanation video on YouTube: https://www.youtube.com/watch?v=b7iMPuEaPts. The study was performed with 20 primary school children aged 6-11. Due to the COVID situation, the study was conducted in an online setting where the children had to perform two tasks: 1) composing a short piece of music and 2) answering a post-task questionnaire afterwards.
Procedure
Once the children opened Codetta, they were provided with built-in, age-appropriate instructions to get to know the basics of the tool (see Fig. 1). After completing the tutorial, the children were asked to compose a short piece of music. No other motivation was given, to keep the task open-ended. Once finished composing, the children completed a short post-task questionnaire online.
Data collection
The following data was collected: 1) log data of each child’s interactions with Codetta (and consequently their final compositions); 2) questionnaire responses; and 3) expert ratings of each composition.
To visualize the collected log data, a color scheme with several interaction categories was developed (see Fig. 2). The logs returned from Codetta were mined and visualised using Python and Anaconda. Once the data was prepared, statistical analysis was conducted using SPSS.
The post-task questionnaire consisted of 13 5-point Likert-scale statements, asking the children about their confidence in music lessons, writing music notation, using block-based programs and using computers, as well as their perceptions of Codetta as a whole.
In order to quantify the children’s creativity, six expert judges rated each composition using several scales. Each judge assessed each child’s composition independently and in a random order. Their ratings were then averaged.
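As a side note, the averaging step is straightforward to sketch in Python (which the authors also used for their data mining). The judge names and scores below are made up purely for illustration – the paper’s actual rating scales are more elaborate:

```python
# Hypothetical ratings on a 1-5 scale: ratings[judge][composition] = score.
ratings = {
    "judge_1": {"comp_a": 4, "comp_b": 2},
    "judge_2": {"comp_a": 5, "comp_b": 3},
    "judge_3": {"comp_a": 3, "comp_b": 1},
}

def average_ratings(ratings):
    """Average each composition's score across all judges."""
    per_composition = {}
    for judge_scores in ratings.values():
        for comp, score in judge_scores.items():
            per_composition.setdefault(comp, []).append(score)
    return {comp: sum(s) / len(s) for comp, s in per_composition.items()}
```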
Results
I don’t want to go too deep into the empirical data here, but to sum up, the results focus on three subsections: the children’s compositions, interactions and UI perceptions.
Most children composed short pieces (mean length was 11.950 seconds) with conventional melodies like arcing up and down in pitch. The logged interaction data was visualised as a stacked bar chart using the color scheme mentioned before (see Fig. 3). The results of the questionnaire showed that the children felt they had to plan before writing their music and that they found it difficult to understand what each block was for.
Discussion
Based on the results several conclusions were drawn (shortened):
Note-level interactions (rise/lower pitch, edit note length) best support children’s ability to be creative in music composition
First-time novice users should initially be encouraged to focus on learning how to use Codetta’s notation engine (i.e. Introduction workshops)
Codetta could automatically introduce blocks sequentially, based on children’s current knowledge and interaction patterns
More complex features should be introduced gradually
The UI Design could be more intuitive to avoid mistakes and confusion
Lastly, it should be mentioned that the results are limited by the small sample size and the users’ levels of experience. A longitudinal study with a larger number of participants would be needed to thoroughly investigate how interaction styles develop over time.
Own thoughts
I found it super interesting to read about the topic “creativity in children’s music composition” for several reasons. First of all, I spent a huge part of my childhood and teenage years in music schools, playing several instruments and taking music theory courses, but I never got in touch with actually composing music myself – neither in primary/high school nor in music school or any of the music-theory courses I took. So I totally agree with the authors’ statement that composing is a neglected area of music education. Moreover, I liked the connection between sound design, interaction design and sociology, since digital music composition combines all of that. It could also be useful as inspiration for finding a master’s thesis topic.
It was a bit hard for me to understand the “Results” section, though, because I only have basic experience and knowledge in statistics and would need a deeper dive into that topic to get all the measures and values right. Yet it was nice to engage with research again after not binge-reading studies and empirical material for a while (not since my bachelor’s thesis).
This blog entry will be a growing collection of questions that educators, designers and practitioners in general need to consider when designing/developing educational AR products.
Questions, questions and more questions
Who is the target group? What’s their educational level?
What is the learning environment? —> Classroom? Distance learning? Workplace? Indoors? Outdoors? …
What contents are to be conveyed?
Which part of the content to learn should be enhanced by AR?
What goal(s) should be achieved by using AR technology?
In what proportion will real and augmented content be combined?
How is the content prepared didactically?
Which AR device(s) will be used?
Which AR technology fits best? —> Trigger-based, View-based?
What are the advantages of AR in the learning context compared to traditional approaches? —> Which added value has AR in this case?
How can multiple senses be addressed?
How can cognitive overload be avoided?
How can teachers easily and quickly add/adapt content?
Hello again! In the following blog entry I will write about the advantages and limitations of using AR technology in the educational sector – a subject on which many studies have already been conducted.
Advantages & Benefits
Many studies indicate that the use of AR in the educational field brings many benefits. According to a meta-review by Garzón, Pavón and Baldiris (2019), which analyzed 61 scientific publications, 100% of the papers mentioned some kind of advantage of using AR systems in education. The following factors are the main advantages mentioned in their paper:
Learning gain: When using AR systems, students can improve their academic performance or even obtain better scores than students using traditional approaches. This improvement was reported not only by the data, but also by different teachers and the students themselves
Motivation: The use of AR can increase the motivation of students as well as their level of fun while learning, compared to other pedagogical tools
Sensory engagement: When AR activates multiple senses, knowledge retention can improve.
Abstract concepts: AR can be ideal for explaining unobservable phenomena (e.g. the movement of the sun)
Autonomy: The combination of real and virtual worlds can increase the autonomy of students, taking into account their natural abilities and motivation for using technological devices
Memory retention: AR technology can not only help students retain knowledge, but also retain it for longer periods of time compared to other pedagogical methodologies
Collaboration: AR can create possibilities for collaborative learning around virtual content which can facilitate learning, since it allows learners to interact with their partners, as well as with the educational content
Accessibility (not further described in the study)
Creativity (not further described in the study)
In a blog post (not scientific!) by Sinha (2021) I found some more advantages of AR in education that were not listed in the aforementioned study:
Easy access to learning materials anytime, anywhere: AR could replace textbooks, physical forms, posters, and printed brochures. This mode of mobile learning could also reduce the cost of learning materials and make it easy for everyone to access
Safer practice: Tasks like practicing heart surgery or operating a space shuttle can be done with AR without putting other people in danger or risking millions of dollars in damage if something goes wrong
Disadvantages & Limitations
According to the aforementioned meta-review by Garzón, Pavón and Baldiris (2019), 15% of the reviewed publications reported some disadvantages or problems when using AR in educational settings. The following factors are the main disadvantages mentioned in their paper:
Complexity: Complexity can be an issue, especially when designing for children. AR, being a novel technology involving multiple senses, can become a very complex tool, especially for those who do not have technological abilities
Technical difficulties: Technical problems like the latency of wireless networks or limited bandwidth can become an issue, as can teachers’ lack of experience with tech
Multitasking: AR applications can demand too much attention, which can be a distraction factor. This can cause students to ignore instructions or important stages of the experience
Resistance from teachers: Some teachers may prefer having total control over content, despite recognizing the benefits of using AR applications
In a blog post (not scientific!) by Omelchenko (2021) and another blog post by Aleksandrova (2021) I found some more disadvantages of AR in education that were not listed in the aforementioned study:
Need for proper hardware: The use of AR requires at least a mobile device like a smartphone or tablet (which has to be up to date in order to run AR apps), which not all students may have
Content portability issues: An AR app needs to work equally well on all platforms and devices
Conclusion
Many studies indicate that AR has the potential to make learning processes faster, more fun and more effective. But some also point out that several problems can occur when AR is used in educational settings. Some studies also state that the contexts in which this technology is more effective than other educational media are still not clear and need further research (Hantono, Nugroho & Santosa, 2018). Future work could focus on support for teachers in adding and updating content, as well as on comparing AR to traditional teaching methods based on empirical data. It would also be important to do further research on the special needs of specific user groups and on accessibility features (Garzón, Pavón & Baldiris, 2019).
Garzón, J., Pavón, J., & Baldiris, S. (2019). Systematic review and meta-analysis of augmented reality in educational settings. Virtual Reality, 23, 447-459.
Hantono, B., Nugroho, L.E., & Santosa, P.I. (2018). Meta-Review of Augmented Reality in Education. 2018 10th International Conference on Information Technology and Electrical Engineering (ICITEE), 312-315.
Hello again! For this blog entry I had a look at several educational AR apps (there are a loooot of them) in order to get a picture of when AR adds value for educational purposes and when it doesn’t. So I picked out a few examples, categorized them into good and bad ones, and summed up why I did (not) like them. I should also mention that I only looked at digital apps that use visual augmentation. But first I want to give a short overview of the wide range of educational fields and levels that existing AR products on the market cover (this list, provided by Garzón, Pavón and Baldiris [2019], might not be complete):
Education levels: Early childhood education, Primary education, Lower secondary education, Upper secondary education, Post-secondary non-tertiary education, Short-cycle tertiary education, Bachelor’s or equivalent level, Non-schoolers (work-related trainings) – Educational AR products for Master’s or equivalent level and Doctoral or equivalent level might exist, but weren’t covered in the study
The good
Augmented Creativity
Augmented Creativity includes a total of six prototypes that can be used with mobile devices: Coloring Book, Music Arrangement, Physical-Interaction Game, City-Wide Gaming, Authoring Interactive Narratives and Robot Programming – I had a look at the first two of them.
The Coloring Book is an application that brings colored drawings to life: it comes with several templates that can be printed out and colored. When the drawing is scanned with the app on a smartphone or tablet (iOS and Android), the app detects and tracks the drawing and displays an augmented, animated 3D version of the character, textured according to the child’s coloring (see Fig. 1).
Advantages the authors mention:
Creative Goal: Fosters imagination, allows character individualization, helps to express feelings about character
Educational Goal: Improves coloring skills, 3D perception, and challenges imagination
Potential Impact: User-painted characters and levels, scripting virtual worlds through coloring
Why I like it:
The augmentation doesn’t interfere with the act of drawing and coloring by hand (which I think is an important form of creative expression at an early age), but adds value by digitalizing it afterwards
Stimulates several senses
Works really well and looks super cute (smooth animations; exact coloring; live updates)
The Music Arrangement is a set of flashcards where each card represents a musical element, such as an instrument or a music style. The user can choose instruments and styles independently and rearrange the song as imagined. When a card is placed on a physical board, the app detects the marker on it, displays an augmented version of the instrument and plays the corresponding audio, as depicted in Fig. 2. AR even lets the user change the position and volume of the instruments while the song is playing, allowing them to direct the virtual band.
Advantages the authors mention:
Creative Goal: Experiment with different instruments and styles to rearrange a song
Educational Goal: Teaches concepts of arrangements, styles, and the disposition of the band components
Potential Impact: Collaborative music arrangement experience, learn about the disposition of an orchestra
Why I like it:
Combines physical and digital interaction
It stimulates several senses
Works really well and looks super nice
Quiver Education
Quiver Education is similar to the Coloring Book mentioned above, but with a greater focus on educational content: The user can choose from a range of coloring packs, print them and color them by hand. When the coloring is scanned with the app on a smartphone or tablet (iOS and Android), a colored, animated 3D model is displayed and additional information and interaction options are provided (see Fig. 3). The content is designed around topics as diverse as biology, geometry, the solar system and more.
Why I like it:
The augmentation doesn’t interfere with the process of coloring by hand
Stimulates several senses
A wide range of topics
~ I’m still a little sceptical about whether it’s necessary to color a scene first in order to learn about it (e.g. a volcano)
Merge EDU
Merge EDU engages students in STEM fields with 3D objects and simulations they can touch, hold and interact with. The special thing about Merge is that the user holds a special cube onto which the augmentation is placed, so it feels like actually holding the object in their hands, and they can then interact with it (see Fig. 4). Merge is available for iOS and Android and can be used with mobile devices – it also offers a headset that users can put their phone into, leaving their hands free to interact with the cube.
Advantages the authors mention:
3D tactile learning
Flexibility: Can be used at home and at school
Curriculum aligned
Multisensory Instruction
Spatial Development
Accelerate Understanding
Focused Engagement
Why I like it:
The potential of the cube: It could potentially replace physical teaching aids
Big library of topics to explore
Users can upload and share their own creations
Human Anatomy Atlas
With the Human Anatomy Atlas medical students can turn any room into an anatomy lab: They can view and dissect a virtual model of a human organ or complete human body by scanning a printed picture (see Fig. 5) or simply placing a model on a flat surface (see Fig. 6). It’s also possible to study human muscles in motion by scanning a person as shown in Fig. 7.
Why I like it:
Students can study from anywhere and don’t have to go to an actual lab
Doing a dissection virtually might be helpful in preparing for a dissection in real life (as far as I know from several people currently studying medicine, preparation for dissections is mostly done with books, pictures, videos and physical models, not with interactive digital models)
The bad
Sketch AR
With Sketch AR users can learn how to draw using their smartphone camera: They can choose a sketch from a library and display it on a sheet of paper in front of them. The user can then follow the virtual lines on the paper step-by-step (see Fig. 8). The app also offers more features like minigames and AI portraits, but I only had a look at the AR feature. In general the app is designed really well and is also personalizable, but all in all I did not see the added value that AR provides in this case.
Why I don’t like it:
Drawing might be difficult when looking at the paper through a small screen
While drawing I personally like to hold the paper in place with one hand, which is not possible here, because you have to hold your mobile device
I don’t see the advantages of AR compared to common image tracing (printing the sketch out and using it as a template)
An app that does pretty much the same is “Tracing Projector”, where I also don’t see the added value.
On a general note
There are a lot of apps on the market – especially in children’s education – that try to replace a physical game with a digital one (e.g. playing with dominoes), which in my opinion is not what AR should be used for. AR is supposed to enhance the user’s physical world, not replace it. I believe that it’s important to experience the world with as many senses as possible – especially at an early age – and haptic experiences should not be limited to holding and controlling a smartphone. Furthermore there are a lot of apps where the user can just randomly place 3D objects in the real world but can’t do anything with them – which might be fun and playful, but doesn’t have much educational value in my opinion.
Garzón, J., Pavón, J., & Baldiris, S. (2019). Systematic review and meta-analysis of augmented reality in educational settings. Virtual Reality, 23, 447-459.
Zünd, F., Ryffel, M., Magnenat, S., Marra, A., Nitti, M., Kapadia, M., Noris, G., Mitchell, K., Gross, M.H., & Sumner, R.W. (2015). Augmented creativity: bridging the real and virtual worlds to enhance creative play. SIGGRAPH Asia 2015 Mobile Graphics and Interactive Applications.
Hello again! In this 3rd blog entry I will give an overview of the technology behind AR that makes the magic happen. Let’s go.
Technology
To superimpose digital media on physical spaces in the right dimensions and at the right location, three major technologies are needed: 1) SLAM, 2) depth tracking and 3) image processing & projection
SLAM (simultaneous localization and mapping) maps the entire physical space or object while tracking the device’s position within it, so that virtual images can be rendered over real-world spaces/objects in the right dimensions. It works with the help of localizing sensors (e.g. gyroscope or accelerometer) combined with the camera feed. Today, common APIs and SDKs for AR come with built-in SLAM capabilities.
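To get a feeling for what the localizing sensors contribute, here is a toy dead-reckoning sketch in Python (purely illustrative – real SLAM fuses such sensor readings with camera features and is far more involved): it integrates gyroscope yaw rates and speed estimates into a 2D device pose.

```python
import math

def dead_reckon(pose, samples, dt):
    """Integrate (yaw_rate, speed) sensor samples into a 2D pose.

    pose is (x, y, heading_in_radians); samples is a list of
    (yaw_rate, speed) readings taken dt seconds apart. This only
    illustrates the sensor-integration part of pose tracking.
    """
    x, y, heading = pose
    for yaw_rate, speed in samples:
        heading += yaw_rate * dt             # gyroscope: rotation
        x += speed * math.cos(heading) * dt  # motion along current heading
        y += speed * math.sin(heading) * dt
    return (x, y, heading)

# Drive straight for 1 s at 1 m/s, turn 90°, then drive 1 s more:
pose = dead_reckon((0.0, 0.0, 0.0), [(0.0, 1.0)] * 10, dt=0.1)
pose = dead_reckon(pose, [(math.pi / 2, 0.0)], dt=1.0)
pose = dead_reckon(pose, [(0.0, 1.0)] * 10, dt=0.1)
# pose ends up near (1.0, 1.0, π/2)
```

Pure dead reckoning drifts over time, which is exactly why SLAM keeps correcting the pose against the map it builds from the camera.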
Depth tracking is used to calculate the distance of an object or surface from the AR device’s camera sensor. It works similarly to the way a camera focuses on a desired object and blurs out the rest of its surroundings.
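There are several ways to obtain depth; one classic approach (a simplified sketch, not necessarily what any specific AR SDK uses) is stereo triangulation, where depth is inferred from the disparity of a point between two camera views:

```python
def stereo_depth(focal_length_px, baseline_m, disparity_px):
    """Estimate distance via stereo triangulation: depth = f * B / d.

    The larger the disparity (pixel shift between the two views),
    the closer the object is to the cameras.
    """
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid disparity")
    return focal_length_px * baseline_m / disparity_px

# A point shifted 40 px between two cameras 6 cm apart (f = 800 px):
depth = stereo_depth(800, 0.06, 40)  # 1.2 m away
```

Time-of-flight sensors (as in some phones) measure depth directly instead, but the goal is the same: knowing how far away each surface is.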
The AR program then processes the image as required and projects it onto the user’s screen (for further information on the “user’s screen” see section “AR Devices” below). The image is captured through the user’s device lens and processed in the backend by the AR application.
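The projection step can be illustrated with the standard pinhole camera model, which maps a 3D point (in camera coordinates) to a 2D pixel position on the screen. This is a simplification – real AR pipelines also handle lens distortion and device rotation:

```python
def project_point(point_3d, focal_px, cx, cy):
    """Project a 3D point (camera coordinates, metres) to pixel
    coordinates using the pinhole model: u = f*X/Z + cx, v = f*Y/Z + cy.
    (cx, cy) is the principal point, usually the screen centre."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_px * x / z + cx, focal_px * y / z + cy)

# A virtual object 2 m in front of the camera and 0.5 m to the right,
# on a 640x480 screen with the principal point at the centre:
u, v = project_point((0.5, 0.0, 2.0), focal_px=800, cx=320, cy=240)
# u = 520.0, v = 240.0
```

Dividing by Z is what makes distant virtual objects appear smaller, so the overlay matches the perspective of the real scene.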
To sum up: SLAM and depth tracking make it possible to render the image in the right dimensions and at the right location. Cameras and sensors are needed to collect the user’s interaction data and send it for processing. The result of this processing (= digital content) is then projected onto a surface for viewing. Some AR devices even have mirrors that assist the human eye in viewing virtual images by performing proper image alignment.
Object detection
There are two primary approaches to detecting objects, both of which have several subsets: 1) Trigger-based Augmentation and 2) View-based Augmentation
Trigger-based Augmentation
Specific triggers like markers, symbols, icons, GPS locations, etc. can be detected by the AR device. When pointed at such a trigger, the AR app processes the 3D image and projects it on the user’s device. The following subsets make trigger-based augmentation possible: a) marker-based augmentation, b) location-based augmentation and c) dynamic augmentation.
a) Marker-based augmentation
Marker-based augmentation (a.k.a. image recognition) works by scanning and recognizing special AR markers. It therefore requires a distinct visual object (anything from a printed QR code to a special sign) and a camera to scan it. In some cases, the AR device also calculates the position and orientation of a marker to align the projected content properly.
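As a toy illustration of the recognition part (not how a production AR SDK works – real systems threshold the camera frame, correct for perspective and decode robust marker dictionaries), a binary marker can be found by sliding its bit pattern over a simplified black-and-white image grid:

```python
def find_marker(image, marker):
    """Return the (row, col) of the top-left corner where the binary
    marker pattern matches the image, or None if it is not present.

    image and marker are 2D lists of 0/1 values – think of the image
    as an already-thresholded camera frame.
    """
    mh, mw = len(marker), len(marker[0])
    for r in range(len(image) - mh + 1):
        for c in range(len(image[0]) - mw + 1):
            if all(image[r + i][c + j] == marker[i][j]
                   for i in range(mh) for j in range(mw)):
                return (r, c)
    return None

marker = [[1, 0],
          [0, 1]]
frame = [[0, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
pos = find_marker(frame, marker)  # (1, 1)
```

Once the marker’s position (and, in real systems, its orientation) is known, the app knows where to anchor the 3D content.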
b) Location-based augmentation
Location-based augmentation (a.k.a. markerless or position-based augmentation) provides data based on the user’s real-time location. The AR app picks up the location of the device and combines it with dynamic information fetched from cloud servers or from the app’s backend. For example, maps and navigation with AR features or vehicle parking assistants work based on location-based augmentation.
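The core of such a feature can be sketched as: take the device’s GPS fix, compute the distance to each point of interest (POI) with the haversine formula, and overlay only those within range. The POI names and coordinates below are purely hypothetical examples:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(device_pos, pois, radius_m):
    """Return the POIs close enough to be overlaid on the camera view."""
    lat, lon = device_pos
    return [name for name, (plat, plon) in pois.items()
            if haversine_m(lat, lon, plat, plon) <= radius_m]

# Hypothetical POIs roughly ~10 m and ~300 m from the device:
pois = {"Rathaus": (47.0707, 15.4382), "Uhrturm": (47.0735, 15.4378)}
visible = nearby_pois((47.0708, 15.4383), pois, radius_m=100)
# visible == ["Rathaus"]
```

A real app would additionally use the compass heading to place each overlay in the right direction on screen, not just decide visibility.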
c) Dynamic augmentation
Dynamic augmentation is the most responsive form of augmented reality. It leverages motion tracking sensors in the AR device to detect objects in the real world and superimposes them with digital media.
View-based Augmentation
In view-based methods, the AR app detects dynamic surfaces (like buildings, desktop surfaces, natural surroundings, etc.), connects the dynamic view to its backend to match reference points and projects related information on the screen. View-based augmentation works in two ways: a) superimposition-based augmentation and b) generic digital augmentation.
a) Superimposition-based augmentation
Superimposition-based augmentation replaces the original view (fully or partially) with an augmented one. It works by detecting static objects that are already fed into the AR application’s database. The app uses optical sensors to detect the object and overlays digital information on it.
b) Generic digital augmentation
Generic digital augmentation is what gives developers and artists the liberty to create anything they wish within the immersive experience of AR. It allows 3D objects to be rendered and imposed on actual spaces.
It’s important to note that there is no one-size-fits-all AR technology. The right augmented reality software technology has to be chosen based on the purpose of the project and the user’s requirements.
AR Devices
As already mentioned in my previous blog entry, AR can be displayed on various devices – from smartphones and tablets to gadgets like Google Glass or other handheld devices – and these technologies continue to evolve. For processing and projection, AR devices and hardware require components such as sensors, cameras, an accelerometer, a gyroscope, a digital compass, GPS, a CPU, a GPU, displays and so on. Devices suitable for augmented reality can be divided into the following categories: 1) mobile devices (smartphones and tablets); 2) special AR devices, designed primarily and solely for augmented reality experiences; 3) AR glasses (or smart glasses) like Google Glass or Meta 2 glasses; 4) AR contact lenses (or smart lenses) and 5) virtual retinal displays (VRD), which create images by projecting laser light into the human eye.
Hello again! My second blog entry will be about the differences between four concepts: Extended Reality (XR), Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR).
XR, AR, VR, MR,… What??
Extended Reality (XR): XR is a “catch-all”-term for technologies that enhance or replace our view of the real world. This can be done through overlaying or immersing computer text and graphics into real-world and virtual environments, or even a combination of both. XR encompasses AR, VR and MR.
Augmented Reality (AR): AR enhances our view of the real world by overlaying the real-world environment with digital content across multiple sensory modalities. It detects objects in the real-world environment and overlaps those with computer-generated data such as graphics, sounds, images, and texts. In other words: AR combines the real world with the digital world. Users can experience AR very easily through a smartphone application, but also through special AR wearables (e.g. headsets, glasses), displays, projectors or even contact lenses.
Virtual Reality (VR): While AR enhances the user’s real environment, VR completely replaces it with a virtual one. With full-coverage headsets, the user’s real-world surroundings are completely shut out during use. Advanced VR experiences even allow users to move in a digital environment and hear sounds. Moreover, special hand controllers can be used to enhance VR experiences.
Mixed Reality (MR): MR is the newest of these immersive technologies and combines aspects of AR and VR. When experiencing MR, virtual content is not only overlaid on the real environment (as in AR) but is anchored to and interacts with that environment. Instead of relying only on remote control devices, smart glasses, or smartphones, users can also use their gestures, glancing or blinking, and much more to interact with the real and the digital world at the same time.
Long Story short:
Extended Reality (XR) is an umbrella term for technologies that enhance or replace our view of the real world
Augmented Reality (AR) overlays virtual objects on the real-world environment
Virtual Reality (VR) immerses users in a fully artificial digital environment
Mixed Reality (MR) not only overlays virtual objects on the real world but also anchors them to it
For a better understanding, I found this nice infographic:
Okay, got it. But why AR?
As far as I know at this point, all three technologies – AR, MR & VR – can be useful for educational purposes. The choice of technology might depend on several factors like the field of education, the equipment or the target group. Still, I chose to focus on AR for several reasons: 1) I like the idea of learning new things by enhancing the user’s view of their environment instead of replacing it, as VR does (my subjective opinion); 2) AR is easily accessible via smartphones or tablets, while VR and MR require more advanced technology (e.g. headsets). More advantages (and maybe some limitations and disadvantages too) might come up the further I dive into the topic – let’s see. But that’s it for now! 🙂
Hello there! This is my very first blog entry about my journey of finding a suitable topic/project for my master’s thesis, so here we go: I chose “AR in Education” as an overall topic, which I would like to approach rather broadly at first and then gradually narrow it down in order to find a specific research question to work with. The aim of this first blog entry is to give a quick overview of 1) what AR is and 2) how it’s used in the educational sector. Let’s get started:
AR in a nutshell
Augmented Reality (AR) enhances the real physical world through digital visual elements, sound or other sensory stimuli delivered via technology. It incorporates three basic features: 1) a combination of real and virtual worlds, 2) real-time interaction and 3) accurate 3D registration of virtual and real objects. AR thus provides both the real and the virtual world to users simultaneously – either in a constructive (i.e. additive to the natural environment) or a destructive (i.e. masking the natural environment) way. Further information on the technology behind AR (i.e. hardware, software, algorithms and development) will be covered in another blog entry.
AR in the educational sector
AR techniques are already used in various fields like entertainment, tourism, health care or cultural heritage – just to mention a few. But it’s the educational sector that caught my attention – especially children’s education. I asked myself: “Can AR be used to make learning faster, better and more fun?” As far as I know at this point, the answer is yes. There is already a range of educational materials like textbooks or flashcards that contain embedded “markers” or triggers that, when scanned by an AR device, produce supplementary information rendered in a multimedia format. But that doesn’t mean that I am not sceptical about AR as an educational tool – in my opinion, “children & digital devices” is a double-edged sword. That’s why I would like to take a very close look at where AR has added value and where it doesn’t (in another blog entry).
My next steps
Dive in deeper into the technology behind AR
Find out what already exists on the market (and hopefully find a niche where there’s a need)
Discuss where AR has added value and where it doesn’t
That’s it for today! 🙂
_____
Sources:
Afnan, Muhammad, K., Khan, N., Lee, M.-Y., Imran, A., & Sajjad, M. (2021). School of the Future: A Comprehensive Study on the Effectiveness of Augmented Reality as a Tool for Primary School Children’s Education. Applied Sciences, 11(11), 5277. MDPI AG. doi: http://dx.doi.org/10.3390/app11115277
Elmqaddem, N. (2019). Augmented Reality and Virtual Reality in Education. Myth or Reality? iJET, 14, 234-242. doi: 10.3991/IJET.V14I03.9289