Zbornik 19. mednarodne multikonference INFORMACIJSKA DRUŽBA – IS 2016
Zvezek E

Proceedings of the 19th International Multiconference INFORMATION SOCIETY – IS 2016
Volume E

Interakcija človek-računalnik v informacijski družbi
Human-Computer Interaction in Information Society

Uredili / Edited by Bojan Blažica, Ciril Bohak, Franc Novak

http://is.ijs.si
11. oktober 2016 / 11 October 2016
Ljubljana, Slovenia

Uredniki: Bojan Blažica, Ciril Bohak, Franc Novak
Založnik: Institut »Jožef Stefan«, Ljubljana
Priprava zbornika: Mitja Lasič, Vesna Lasič, Lana Zemljak
Oblikovanje naslovnice: Vesna Lasič
Ljubljana, oktober 2016

CIP - Kataložni zapis o publikaciji
Narodna in univerzitetna knjižnica, Ljubljana
004.5(082)(0.034.2)
MEDNARODNA multikonferenca Informacijska družba (19 ; 2016 ; Ljubljana)
Interakcija človek-računalnik v informacijski družbi [Elektronski vir] : zbornik 19. mednarodne multikonference Informacijska družba - IS 2016, 11. oktober 2016, [Ljubljana, Slovenija] : zvezek E = Human-computer interaction in information society : proceedings of the 19th International Multiconference Information Society - IS 2016, 11 October 2016, Ljubljana, Slovenia : volume E / uredili, edited by Bojan Blažica, Ciril Bohak, Franc Novak. - El. zbornik. - Ljubljana : Institut Jožef Stefan, 2016
Način dostopa (URL): http://library.ijs.si/Stacks/Proceedings/InformationSociety/2016/IS2016_Volume_E%20-%20HCI.pdf
ISBN 978-961-264-101-6 (pdf)
1. Gl. stv. nasl. 2. Vzp. stv. nasl. 3.
Blažica, Bojan
1537200067

PREDGOVOR MULTIKONFERENCI INFORMACIJSKA DRUŽBA 2016

Multikonferenca Informacijska družba (http://is.ijs.si) je z devetnajsto zaporedno prireditvijo osrednji srednjeevropski dogodek na področju informacijske družbe, računalništva in informatike. Letošnja prireditev je ponovno na več lokacijah, osrednji dogodki pa so na Institutu »Jožef Stefan«.

Informacijska družba, znanje in umetna inteligenca so spet na razpotju tako same zase kot glede vpliva na človeški razvoj. Se bo eksponentna rast elektronike po Moorovem zakonu nadaljevala ali stagnirala? Bo umetna inteligenca nadaljevala svoj neverjetni razvoj in premagovala ljudi na čedalje več področjih in s tem omogočila razcvet civilizacije, ali pa bo eksponentna rast prebivalstva zlasti v Afriki povzročila zadušitev rasti? Čedalje več pokazateljev kaže v oba ekstrema – da prehajamo v naslednje civilizacijsko obdobje, hkrati pa so planetarni konflikti sodobne družbe čedalje težje obvladljivi.

Letos smo v multikonferenco povezali dvanajst odličnih neodvisnih konferenc. Predstavljenih bo okoli 200 predstavitev, povzetkov in referatov v okviru samostojnih konferenc in delavnic. Prireditev bodo spremljale okrogle mize in razprave ter posebni dogodki, kot je svečana podelitev nagrad. Izbrani prispevki bodo izšli tudi v posebni številki revije Informatica, ki se ponaša z 39-letno tradicijo odlične znanstvene revije. Naslednje leto bo torej konferenca praznovala 20 let in revija 40 let, kar je za področje informacijske družbe častitljiv dosežek.
Multikonferenco Informacijska družba 2016 sestavljajo naslednje samostojne konference:
• 25-letnica prve internetne povezave v Sloveniji
• Slovenska konferenca o umetni inteligenci
• Kognitivna znanost
• Izkopavanje znanja in podatkovna skladišča
• Sodelovanje, programska oprema in storitve v informacijski družbi
• Vzgoja in izobraževanje v informacijski družbi
• Delavnica »EM-zdravje«
• Delavnica »E-heritage«
• Tretja študentska računalniška konferenca
• Računalništvo in informatika: včeraj za jutri
• Interakcija človek-računalnik v informacijski družbi
• Uporabno teoretično računalništvo (MATCOS 2016)

Soorganizatorji in podporniki konference so različne raziskovalne institucije in združenja, med njimi tudi ACM Slovenija, SLAIS, DKZ in druga slovenska nacionalna akademija, Inženirska akademija Slovenije (IAS). V imenu organizatorjev konference se zahvaljujemo združenjem in inštitucijam, še posebej pa udeležencem za njihove dragocene prispevke in priložnost, da z nami delijo svoje izkušnje o informacijski družbi. Zahvaljujemo se tudi recenzentom za njihovo pomoč pri recenziranju.

V 2016 bomo četrtič podelili nagrado za življenjske dosežke v čast Donalda Michija in Alana Turinga. Nagrado Michie-Turing za izjemen življenjski prispevek k razvoju in promociji informacijske družbe bo prejel prof. dr. Tomaž Pisanski. Priznanje za dosežek leta bo pripadlo prof. dr. Blažu Zupanu. Že šestič podeljujemo nagradi »informacijska limona« in »informacijska jagoda« za najbolj (ne)uspešne poteze v zvezi z informacijsko družbo. Limono je dobilo ponovno padanje Slovenije na lestvicah informacijske družbe, jagodo pa informacijska podpora Pediatrične klinike. Čestitke nagrajencem!
Bojan Orel, predsednik programskega odbora
Matjaž Gams, predsednik organizacijskega odbora

FOREWORD - INFORMATION SOCIETY 2016

In its 19th year, the Information Society Multiconference (http://is.ijs.si) remains one of the leading conferences in Central Europe devoted to the information society, computer science and informatics. In 2016 it is organized at various locations, with the main events at the Jožef Stefan Institute.

The pace of progress of the information society, knowledge and artificial intelligence is speeding up, but it seems we are again at a turning point. Will the progress of electronics continue according to Moore's law, or will it start stagnating? Will AI continue to outperform humans at more and more activities, and in this way enable the predicted unprecedented human progress, or will the growth of the human population, in particular in Africa, cause global decline? Both extremes seem more and more likely – fantastic human progress, and planetary decline caused by humans destroying our environment and each other.

The Multiconference runs in parallel sessions with 200 presentations of scientific papers at twelve conferences, round tables, workshops and award ceremonies. Selected papers will be published in the Informatica journal, which has a 39-year tradition of excellent research publication. Next year the conference will celebrate 20 years and the journal 40 years – a remarkable achievement.
The Information Society 2016 Multiconference consists of the following conferences:
• 25th Anniversary of the First Internet Connection in Slovenia
• Slovenian Conference on Artificial Intelligence
• Cognitive Science
• Data Mining and Data Warehouses
• Collaboration, Software and Services in Information Society
• Education in Information Society
• Workshop Electronic and Mobile Health
• Workshop »E-heritage«
• 3rd Student Computer Science Research Conference
• Computer Science and Informatics: Yesterday for Tomorrow
• Human-Computer Interaction in Information Society
• Middle-European Conference on Applied Theoretical Computer Science (MATCOS 2016)

The Multiconference is co-organized and supported by several major research institutions and societies, among them ACM Slovenia (the Slovenian chapter of the ACM), SLAIS, DKZ, and the second national engineering academy, the Slovenian Engineering Academy. In the name of the conference organizers we thank all the societies and institutions, and particularly all the participants, for their valuable contributions and their interest in this event, and the reviewers for their thorough reviews.

For the fourth year, the award for lifelong outstanding contributions will be delivered in memory of Donald Michie and Alan Turing. The Michie-Turing award will be given to Prof. Tomaž Pisanski for his lifelong outstanding contribution to the development and promotion of the information society in our country. In addition, an award for current achievements will be given to Prof. Blaž Zupan. The information lemon goes to another fall of Slovenia in the international information society ratings, while the information strawberry is awarded for the information system at the Pediatric Clinic. Congratulations!
Bojan Orel, Programme Committee Chair
Matjaž Gams, Organizing Committee Chair

KONFERENČNI ODBORI / CONFERENCE COMMITTEES

International Programme Committee: Vladimir Bajic, South Africa; Heiner Benking, Germany; Se Woo Cheon, South Korea; Howie Firth, UK; Olga Fomichova, Russia; Vladimir Fomichov, Russia; Vesna Hljuz Dobric, Croatia; Alfred Inselberg, Israel; Jay Liebowitz, USA; Huan Liu, Singapore; Henz Martin, Germany; Marcin Paprzycki, USA; Karl Pribram, USA; Claude Sammut, Australia; Jiri Wiedermann, Czech Republic; Xindong Wu, USA; Yiming Ye, USA; Ning Zhong, USA; Wray Buntine, Australia; Bezalel Gavish, USA; Gal A. Kaminka, Israel; Mike Bain, Australia; Michela Milano, Italy; Derong Liu, Chicago, USA; Toby Walsh, Australia

Organizing Committee: Matjaž Gams, chair; Mitja Luštrek; Lana Zemljak; Vesna Koricki; Mitja Lasič; Robert Blatnik; Aleš Tavčar; Blaž Mahnič; Jure Šorn; Mario Konecki

Programme Committee: Bojan Orel, chair; Nikolaj Zimic, co-chair; Franc Solina, co-chair; Viljan Mahnič, co-chair; Cene Bavec, co-chair; Tomaž Kalin, co-chair; Jozsef Györkös, co-chair; Tadej Bajd; Jaroslav Berce; Mojca Bernik; Marko Bohanec; Ivan Bratko; Andrej Brodnik; Dušan Caf; Saša Divjak; Tomaž Erjavec; Bogdan Filipič; Andrej Gams; Matjaž Gams; Marko Grobelnik; Nikola Guid; Marjan Heričko; Borka Jerman Blažič Džonova; Gorazd Kandus; Urban Kordeš; Marjan Krisper; Andrej Kuščer; Jadran Lenarčič; Borut Likar; Janez Malačič; Olga Markič; Dunja Mladenič; Franc Novak; Vladislav Rajkovič; Grega Repovš; Ivan Rozman; Niko Schlamberger; Stanko Strmčnik; Jurij Šilc; Jurij Tasič; Denis Trček; Andrej Ule; Tanja Urbančič; Boštjan Vilfan; Baldomir Zajc; Blaž Zupan; Boris Žemva; Leon Žlajpah

KAZALO / TABLE OF CONTENTS

Interakcija človek-računalnik v informacijski družbi / Human-Computer Interaction in Information Society ..........
1
PREDGOVOR / FOREWORD .......... 3
PROGRAMSKI ODBORI / PROGRAMME COMMITTEES .......... 4
Remote Interaction in Web-Based Medical Visual Application / Bohak Ciril, Lavrič Primož, Marolt Matija .......... 5
3D Serious Games for Parkinson's Disease Management / Blažica Bojan, Novak Franc, Biasizzo Anton, Bohak Ciril .......... 9
Designing Visual Interface for Nutrition Tracking of Patients with Parkinson's Disease / Novak Peter, Koroušić Seljak Barbara, Novak Franc .......... 13
Redesign of Slovenian Avalanche Bulletin / Blažica Vanja, Cerar Janez Jaka, Poredoš Aleš .......... 17
Improving the Usability of Online Usability Surveys with an Interactive Stripe Scale / Pesek Matevž, Isaković Alja, Strle Gregor, Marolt Matija .......... 21
Evaluation of Common Input Devices for Web Browsing: Mouse vs Touchpad vs Touchscreen / Malečkar Andrej, Kljun Matjaž, Rogelj Peter, Čopič Pucihar Klen .......... 25
Wizard of Oz Experiment for Prototyping Multimodal Interfaces in Virtual Reality / Gombač Blaž, Zemljak Matej, Širol Patrik, Deželjin Damir, Čopič Pucihar Klen, Kljun Matjaž .......... 29
Towards the Improvement of Guard Graphical User Interface / Kopušar Žiga, Novak Franc .......... 33
Towards Affordable Mobile Crowd Sensing Device / Pavlin Gal, Pavlin Marko .......... 37
I Was Here: A System for Creating Augmented Reality Digital Graffiti in Public Places / Šimer Erik, Kljun Matjaž, Čopič Pucihar Klen .......... 40
Interactive Video Management by Means of an Exercise Bike / Štrekelj Jan, Kavšek Branko .......... 44
Indeks avtorjev / Author index .......... 49

PREDGOVOR

Interakcija človek–računalnik v informacijski družbi je konferenca, ki jo organizira Slovenska skupnost za proučevanje interakcije človek–računalnik. Namen konference je zbrati raziskovalce, strokovne delavce in študente in ponuditi možnost izmenjave izkušenj in raziskovalnih rezultatov, kakor tudi navezave stikov za bodoče sodelovanje. Zadani cilj, da bi organizirali konferenco vsako leto, smo doslej le delno uresničili. Vendar pa kljub temu v Sloveniji narašča zanimanje za področje interakcije človek-računalnik, o čemer priča število prispevkov na letošnji konferenci in različne smeri opravljenih raziskav. Poleg že uveljavljenih tem, kot so uporabnostno testiranje, vizualizacija in snovanje grafičnih uporabniških vmesnikov, so predstavljeni tudi primeri aktualnih smeri, ki vključujejo nadgrajeno resničnost in aplikacije, osnovane na množični mobilni komunikaciji.
FOREWORD

Human-Computer Interaction in Information Society is a conference organized by the Slovenian HCI community. The conference aims to bring together researchers, practitioners and students to exchange and share their experiences and research results, as well as to provide an opportunity for establishing contacts for collaboration. We set ourselves the objective of organizing an annual event and have so far only partially succeeded. On the other hand, interest in HCI in Slovenia is clearly increasing, as evidenced by the number of papers submitted to this year's conference and the range of research areas they report on. Besides established approaches such as usability testing, visualization and GUI design, examples from emerging topic areas including augmented reality and mobile crowd sensing are presented.

Bojan Blažica, Franc Novak, Ciril Bohak

PROGRAMSKI ODBOR / PROGRAMME COMMITTEE
Bojan Blažica; Franc Novak; Ciril Bohak; Matevž Pesek; Špela Poklukar; Jože Guna

Remote Interaction in Web-Based Medical Visual Application
Ciril Bohak, Primož Lavrič, Matija Marolt
Faculty of Computer and Information Science, University of Ljubljana, Večna pot 113, 1000 Ljubljana, Slovenia
ciril.bohak@fri.uni-lj.si

ABSTRACT
In this paper we present a novel integration of four remote collaboration modalities into an existing web-based medical data visualization framework: (1) visualization data sharing, (2) camera view sharing, (3) data annotation sharing and (4) chat. The integration of remote collaboration modalities was done for two reasons: to get a second opinion on a diagnosis, or to obtain a diagnosis from a remote medical specialist. We present the integration of these modalities and a preliminary evaluation by a medical expert. In conclusion, we show that we are on the right track in integrating collaboration modalities into the visualization framework.

Categories and Subject Descriptors
H.4 [Information Systems Applications]: Miscellaneous; J.3 [Computer Applications]: Life and Medical Sciences

General Terms
Visualization, Collaboration

Keywords
medical visualization, remote collaboration

1. INTRODUCTION
It is a generally accepted fact that collaboration yields better results in most fields. This is even more true of medical diagnosis, where doctors commonly look for second opinions from colleagues with more experience or with a different view on the problem. Since doctors with the same expertise are often not in the same institution, or even the same country, collaboration between them can be slow or can require a lot of resources.

Medical collaboration applications have already been presented in different forms. A cloud-based solution is presented in [3], where the authors claim that such a solution might reduce the storage costs of the increasing volume of radiological data produced on a daily basis. While radiologists still need to transfer the data back to their devices for examination, it is a first step towards remote collaboration.

An early example of remote collaboration in reviewing ultrasound images using low-cost voice and video connections is presented in [1].

A collaboration system with a broad spectrum of features is presented in [4]. The telemedicine system integrated many collaboration features, such as cooperative diagnostics and remote analysis of digital medical imaging data, audio-visual discussions, as well as remote computing support for data analysis.

Where no radiologists are available, hospitals can make use of remote diagnostic services, in which companies offer to make diagnoses based on radiological data at a distance. In this case the hospital staff still has to send the data to the company, which then carries out the diagnostic process offline. One such example is the Canadian company Real-time Medical, which assures privacy, data security and fast processing of requests via their PACS/RIS-neutral workflow management platform DiaShare¹.

Another example of remote collaboration allows radiology specialists to guide and direct technicians at a distance. Such a system, iMedHD2², was presented by Remote Medical Technologies and consists of two parts: (1) a wearable telemedicine device, a hands-free secure live HD streaming device, and (2) a Tele-Ultrasound system, which provides multi-participant real-time sharing of images, annotations and snapshots over a secure connection. Users can join the sessions in the web browser.

An image-based viewer for tablets was presented by Khandheria [5]. The image viewer is primarily intended for face-to-face consultation with colleagues, but offers remote access to radiological images as well. The application integrates a web-based PACS viewer and real-time audio/video teleconferencing with remote users.

Researchers have also investigated the acceptance of the web-based distribution of radiology services. Such a study for regional and remote centres of Western Australia is presented in [8].

This paper addresses different communication modalities for remote collaboration. In the next section we present the Med3D framework, a web-based framework for viewing medical volumetric data. In section 3 we present the integration of novel remote collaboration modalities in the Med3D framework. Section 4 presents a preliminary evaluation of the integrated collaboration modalities, followed by a discussion in section 5. In the last section we present conclusions and possible future work.

¹http://www.realtimemedical.com/
²http://www2.rmtcentral.com/tag/real-time-radiology/

2.
THE MED3D FRAMEWORK
The web-based visualisation framework Med3D [6], an adaptation of the Java-based visualization framework NeckVeins [2], was developed as a platform-independent tool for the visualization of medical volumetric data. The framework uses the WebGL 2.0 library to exploit hardware-accelerated graphics rendering in the browser. While the framework currently supports indirect visualization of volumetric data, using the Marching Cubes algorithm [7] to transform the data into a polygonal mesh model, it is also designed for the integration of direct volume rendering algorithms such as ray casting.

Med3D also allows users to annotate the data they are viewing with 3D position-based annotations and to save the annotations for later review. We have also implemented support for remote collaboration, enabling specialists, mostly radiologists, to get second opinions from colleagues over the Internet or to obtain a diagnosis from a remote specialist. The data can be viewed locally by an individual user, it can be uploaded and shared with designated users of the framework, or it can be directly shared with an individual or a group of users. The framework user interface is displayed in Fig. 1.

Figure 2: The communication schema during remote collaboration. In the top left is the session host, who shares the scene with other users of the Med3D application. The host in the schema has already synchronised the scene with the server (bottom right) and sends updates of the shared scene, which are then sent to all the subscribers and update the local copy of the scene as well. In the top middle is a guest (subscriber) to the session, who has already transferred the scene from the server and is receiving updates from the host. In the top right is a new client who transfers the most recent version of the scene from the server.

The sharing of visualization data between users over a network is not a novel idea.
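The host/server/subscriber scheme described in the caption of Fig. 2 can be sketched in code. The following is a minimal illustration of the idea only; the names (`SyncSession`, `hostUpdate`, `snapshot`) are hypothetical and do not reflect the actual Med3D implementation:

```typescript
// Sketch of the Fig. 2 synchronisation scheme (illustrative names only).

type SceneUpdate = { seq: number; path: string; value: unknown };

interface SceneState { version: number; objects: Map<string, unknown>; }

// Server-side session: holds the authoritative scene copy and relays
// host updates to all subscribers.
class SyncSession {
  private state: SceneState = { version: 0, objects: new Map() };
  private subscribers: Array<(u: SceneUpdate) => void> = [];

  // The host pushes an update; the server applies it and fans it out.
  hostUpdate(update: SceneUpdate): void {
    this.state.objects.set(update.path, update.value);
    this.state.version = update.seq;
    for (const notify of this.subscribers) notify(update);
  }

  // A new client first transfers the most recent scene version...
  snapshot(): SceneState {
    return { version: this.state.version, objects: new Map(this.state.objects) };
  }

  // ...and then receives incremental updates originating from the host.
  subscribe(notify: (u: SceneUpdate) => void): void {
    this.subscribers.push(notify);
  }
}
```

Because subscribers receive only incremental updates after the initial snapshot, the per-message payload stays small, which matches the low-latency behaviour reported for the camera-sharing modality below.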
3. INTEGRATION OF REMOTE COLLABORATION
The main contribution of this paper is the integration of remote collaboration into the web-based visualization tool Med3D. The remote collaboration includes four different modalities: (1) sharing visualization data, (2) sharing the camera view, (3) sharing annotations and (4) an integrated chat between connected users.

3.1 Visualization data sharing
To the best of our knowledge, however, there is no existing implementation of the idea in this form. Data can be shared between Med3D users in two ways. The first option allows users to upload their data and make it available to other users of the framework; this is a common implementation found in many web-based collaborative applications. In this way the data is stored on the server and shared with selected users. The second approach, also implemented in our framework, allows users to share data from the current session. A user can share their session and define what data is shared; other users can connect to the shared session and obtain the shared data in the same form as the original user has it. This scheme is presented in Fig. 2.

Figure 1: The Med3D framework with a loaded 3D model of medical data. The figure also shows annotations pinned to exact locations on the model.

3.2 Camera view sharing
While data sharing is quite common in many applications, it is much less common to be able to share one's view of the data as well. There are some examples of such collaboration in the form of collaborative document editing (e.g. Google Drive), and there are applications that allow users to share their computer screen or a single application window. But this still differs from our aim: we wanted to enable a user to keep her own view of the data while also having the option of seeing a remote user's view.

We implemented this by sharing the user's camera transformations with other users. Each user has the option to share her view, and other users can attach to the shared view, sharing the viewing experience in real time while still being able to switch back to their own view at any point. This gives users the option to better explain their decisions and to show which portion of the data they are currently studying.

Due to the small amount of data distributed between users, there is no major latency in view synchronisation; the synchronisation speed depends on the network latency between the users and the Med3D server. The distribution of camera transformations between users is also shown in Fig. 2.

Figure 3: Integrated chat service for real-time discussions on displayed data.

During the interview with a medical expert we gained good insight into the desired workflow and into features that would allow doctors to improve their current work. The medical expert responded very positively to our implementation of remote camera synchronisation, which enables collaborators to study the data in depth from the same point of view. The medical expert also pointed out that Med3D with a well-annotated data collection could be used for educational purposes, with experts supporting students through the integrated remote collaboration modalities.

3.3 Annotation sharing
The previously presented 3D position-dependent annotations can also be shared with other users. We share only the content of an annotation and its anchoring position on the 3D model, not the position of the annotation window in the user interface. Each shared annotation is displayed in the middle of the screen when it is first shown, but afterwards keeps its position for each individual user. This is done because of the different sizes and aspect ratios of individual screens (we do not want annotations to fall outside the visible area for users with smaller screens). Each user can decide whether she wants to share her annotations or not.
In the future we will also implement the option of sharing individual annotations. The list of local and shared annotations is displayed on the left side of the Med3D user interface in Fig. 1.

3.4 Chat
The fourth collaboration modality is a group chat integrated into the Med3D framework. Such collaboration is not new, but it gives the participating users a means of communication. We implemented a text chat because of its low bandwidth consumption. The chat in the framework is available to all participants in the same session; an example is displayed in Fig. 3. We are also planning to integrate voice and video conferencing support in later versions; these were originally omitted due to their high bandwidth consumption.

4. EVALUATION
We performed a preliminary evaluation with a medical expert who uses radiological data on a daily basis for diagnosis and for the preparation of medical procedures. The medical expert tried out the Med3D application and the implemented workflow. He also tried out the presented remote collaboration features and pointed out that the implementation of collaboration is done well, but could use further improvements. First, he missed integrated voice and video chat, a feature already planned for future implementation; second, he missed the option of adding hand-drawn annotations on a chosen view, which would allow doctors to better plan procedures with visual annotations. We have added the proposed collaboration modalities to our list of future improvements.

5. DISCUSSION
The integration of remote collaboration into a medical visualization framework has proved to be a good idea, according to the results of previous studies as well as the positive feedback we received from the medical expert. We decided to integrate the remote collaboration options at an early stage of the development of the Med3D framework, which gives us the possibility of blending remote collaboration features with the single-user workflow, making the features easier to learn and use.

Our decisions were confirmed and supported by a preliminary evaluation interview with a medical expert, who gave us positive feedback with pointers on what to improve and how. The medical expert also pointed out that the data visualization itself is very important and should be done well.

6. CONCLUSIONS AND FUTURE WORK
In this paper we have presented the integration of remote collaboration modalities into the existing web-based 3D visualization framework Med3D. We have presented each individual collaboration modality, presented the results of a preliminary user evaluation, and highlighted the pros and cons of the presented collaboration modalities. Future work includes the extension and specialization of each collaboration modality, such as per-user and per-group permissions on collaboration options. We are also planning to introduce additional collaboration options in the form of voice and video group calls between users of the framework.

7. ACKNOWLEDGMENTS
We would like to thank the medical expert for the great feedback on the implemented features and for the guidelines for future improvements of the framework.

8. REFERENCES
[1] D. V. Beard, B. M. Hemniger, B. Keefe, C. Mittelstaedt, E. D. Pisano, and J. K. Lee. Real-Time Radiologist Review of Remote Ultrasound Using Low-Cost Video and Voice. Investigative Radiology, 28(8), August 1993.
[2] C. Bohak, S. Žagar, A. Sodja, P. Škrlj, U. Mitrović, F. Pernuš, and M. Marolt. Neck veins: an interactive 3D visualization of head veins. In Proceedings of the 4th International Conference World Usability Day Slovenia 2013, 25 Nov, Ljubljana, Slovenia, E. Stojmenova (Ed.), pages 64–66, 2013.
[3] C. Bolan. Cloud PACS and mobile apps reinvent radiology workflow. Applied Radiology, 42(6):24–29, 2013.
[4] H. Handels, C. Busch, J. Encarnação, C. Hahn, V. Kühn, J. Miehe, S. Pöppl, E. Rinast, C. Roßmanith, F. Seibert, and A. Will.
Kamedin: a telemedicine system for computer supported cooperative work and remote image analysis in radiology. Computer Methods and Programs in Biomedicine, 52(3):175–183, 1997.
[5] P. Khandheria. Integrating Tablet-Based Videoconferencing with an Image Viewer and a Shared PACS Session to Provide a Platform for Remote Consultation for Radiology Studies. In Society for Imaging Informatics in Medicine, Long Beach, CA, 2014.
[6] P. Lavrič, C. Bohak, and M. Marolt. Web based visualisation framework with remote collaboration support. In Proceedings of the 25th International Electrotechnical and Computer Science Conference ERK 2016, to appear in September 2016.
[7] W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction algorithm. Proceedings of the 14th annual conference on Computer graphics and interactive techniques - SIGGRAPH '87, 21(4):163–169, 1987.
[8] P. Tually, C. Stavrianou, and J. Walker. User acceptance of the web-based distribution of radiology services in regional and remote centres of Western Australia. Journal of Telemedicine and Telecare, 11(2):93–95, December 2005.

3D serious games for Parkinson's disease management

Bojan Blažica – Jožef Stefan Institute, Jamova cesta 39, 1000 Ljubljana; PBM3 d.o.o., Tovarniška 3b, 5270 Ajdovščina; bojan.blazica@ijs.si, bojan@olok.eu
Franc Novak, Anton Biasizzo – Jožef Stefan Institute, Jamova cesta 39, 1000 Ljubljana; franc.novak@ijs.si, anton.biasizzo@ijs.si
Ciril Bohak – University of Ljubljana, Faculty of Computer and Information Science, Večna pot 113, 1000 Ljubljana; ciril.bohak@fri.uni-lj.si

ABSTRACT
The aim of this article is to show how off-the-shelf equipment can be used to develop serious games for an affordable telemedicine solution for Parkinson's disease management. Two games have been developed, aimed at assessing and training a patient's reach of the upper limbs (using Kinect v2) and the fine motor skills of the fingers (using Leap Motion). The games collect player data in terms of the score achieved and the full kinematics of movement during gameplay. The data is stored online and made available to therapists and doctors through a secure connection. The games have been tested with patients at the Soča rehabilitation institute as well as at their homes.

Categories and Subject Descriptors
H.1.2 [User/Machine Systems]: Human factors; J.3 [Life and Medical Sciences]: Health

General Terms
Measurement, Documentation, Performance, Design, Human Factors.

Keywords
3D interaction, serious games, Parkinson's disease, rehabilitation, telemedicine

1. INTRODUCTION
Parkinson's disease (PD) is a long-term disorder of the central nervous system that mainly affects the motor system. It belongs to a group of conditions called motor system disorders, which are the result of the loss of dopamine-producing brain cells. The four primary symptoms of PD are tremor, or trembling in hands, arms, legs, jaw, and face; rigidity, or stiffness of the limbs and trunk; bradykinesia, or slowness of movement; and postural instability, or impaired balance and coordination. As these symptoms become more pronounced, patients may have difficulty walking, talking, or completing other simple tasks [1]. There are 10 million patients worldwide (1.2 million in the EU [2]). Their lives are dependent on others and there is no cure; we can only postpone the onset of symptoms or treat their severity. "The combined direct and indirect cost of Parkinson's, including treatment, social security payments and lost income from inability to work, is estimated to be nearly $25 billion per year in the United States alone. Medication costs for an individual person with PD average $2,500 a year, and therapeutic surgery can cost up to $100,000 per patient." [2]

Given the above, it is no surprise that several research projects have been funded to advance our knowledge of PD (Rempark¹, Sense-Park², Cupid³, Neurotremor⁴). The work presented in this article is part of the PD_manager project, which aims to build and evaluate an innovative, mHealth, patient-centric ecosystem for Parkinson's disease management. More specifically, the aim of PD_manager is to:
1. model the behaviors of the intended users of PD_manager (patients, caregivers, neurologists and other health-care providers),
2. educate patients, caregivers and healthcare providers with a focus on occupational and speech therapies, and
3. propose a set of unobtrusive, simple-in-use, co-operative, mobile devices that will be used for symptom monitoring and the collection of adherence data (smartphone, sensor insole, smart pillbox, wristband with sensors for acceleration, heart rate, etc.) [5].

The games presented form a small subset of the devices used within the project for monitoring patients and their adherence to treatment. As their main purpose is not entertainment, the developed games fall into the category of serious games [7].

¹http://www.rempark.eu/
²http://www.sense-park.eu/
³http://www.cupid-project.eu/
⁴http://www.car.upm-csic.es/bioingenieria/neurotremor/

2. REQUIREMENTS
The basic idea behind the presented systems is to (1) encourage patients with Parkinson's disease to put more time into rehabilitation through the use of gamification concepts, and (2) allow tracking the performance of individual patients that use the system. Performance tracking is achieved by recording the patient's activity both at the rehabilitation center and at the patient's home.

The recorded performance track is also available to the doctors, who have the possibility of tracking the progress of all patients that use the system via a web-based application. The web-based application is intended for doctors' use, to assess and track an individual patient's performance and to plan his/her rehabilitation remotely.

The system therefore consists of three parts: a client application for patients, a server for gathering data and settings, and a web-based client for doctors and caregivers. The client part is the most demanding in terms of system requirements, as it has to enable smooth and comfortable use as well as allow undisturbed capture of the data about the patient's performance. The systems used in the presented work have the following specifications: Intel i7-4770R processor, 8 GB RAM, 120 GB SSD hard drive, Microsoft Windows 8.1, Microsoft Kinect v2, Leap Motion, mini-PC form factor (GigaByte Brix and Zotac ZBOX used), and a mouse and keyboard for standard input at system boot. The system connects to any modern television with an HDMI input.

3. IMPLEMENTATION
The games have been developed with the Unity 3D⁵ game engine; the choice of sensors was made according to the user specifications from Table 1.

relative to the user's coordinate space (originating in the center of the user's torso) and translating the PHIZ above shoulder height (Figure 2). Difficulty levels were then defined based on how far the user needs to stretch to reach an apple; the higher the level, the further apart the apples are. The game progresses to the next level when a patient successfully collects 15 apples 3 times in a row. This protocol was defined after initial user tests. These tests also revealed the possibility to cheat: users could wait for an apple to fall near the basket, grab it then and put it in the basket, which defeats the purpose of the game (to reach out with the hands). This was corrected by making the apples not draggable once they start falling off the tree. Another issue raised by user testing was selecting the proper player, as the sensor used can track 6 bodies simultaneously.
Table 1: Sensor selection based on game requirements Task Reach Fine motor skills Stimulate user to Stimulate user to move hands above Requirements use fine motoric shoulder blades up skills of fingers and outwards Sensor selected Kinect v2 Leap motion 3D tracking of 26 skeletal joints @30 Relevant sensor detailed 3D tracking Hz, seated mode, specifications of fingers@115 Hz hand pose tracking (open, closed palm) Figure 2: Original PHIZ (blue) and PD adjusted PHIZ (green) originating at the player’s shoulder to stimulate proper exercising of upper limbs. For the second game, focused on preserving the user’s fine Figure 1: ‘Fruit picking’ game for exercising the reach above motoric skills, we decided to switch to the Leap Motion hardware shoulder level. as it would not be possible to achieve the desired accuracy with The first game, aimed at preserving the range of movement of the the Kinect sensor (some recent literature exist on how to process patient’s arms was developed with Microsoft’s Kinect V2 sensor. Kinect data to achieve accurate finger tracking [10], but the The game consists of one scene in which the patient collects current available solutions proved to be too inaccurate for the apples growing on a tree and puts them in a basket (Figure 1). task). The task of the user in the second game is to pick small Despite the game’s simplicity its’ development was not so cubes with his fingers and put them in a box (Figure 3). The result straightforward. One of the most important aspects of such a game is the amount of blocks collected in two minutes and the time left is the ‘feeling’ the user has when interacting, how smooth the in case he collects all boxes. Both games communicate with the interaction is, and the fidelity with which his movements are server using secure SSL communication with self-signed translated in the game. From a technical standpoint, this means certificates. Games settings, i.e. 
difficulty level, are retrieved from filtering the raw input signal from the Kinect sensor and fine- the server and controlled by the medical personnel remotely tuning the filtering parameters. Additionally, with the health (Figure 4), while game results (score achieved and number of practitioners involved in the project we defined the physical apples collected) and kinematic data of the user (rotation in Euler interaction zone (PHIZ) of the game so it reflects the constraints angles and quaternions and position of tracked joints) are that the domains of use imposes, i.e. mapping user movements anonymously stored online. The game also stores these data locally in case of problems with the internet connection at the patient’s home. 5 http://www.unity3d.com/ 10 3.1 PHIZ While we could use the Leap motion SDK out of the box, we needed to make some adjustments when using the Kinect v2 sensor. The physical interaction zone (PHIZ) of the Kinect intended for normal use is defined as a cube originating in the player’s torso as shown in Figure 2 left, while the constraints that the domains of use imposes, imply a different PHIZ. The change demands the mapping of user movements in the original PHIZ to one translated above shoulder height as shown in Figure 2 right. Figure 5 shows the PD adjusted PHIZ in action during testing. Figure 3: ‘10 cubes’ game for exercising fine motoric skills. Figure 5 Testing with ‘Fruit picking’ game. 3.2 Kinematic data collected In the first game, the data collected by the Kinect sensor is collected at 30 FPS and consists of the position vector (x,y,z) and quaternion orientation (w,x,y,z) of all joints of all detected players (layer). The recorded joints are: left ankle, right ankle. 
left elbow, right elbow, left foot, right foot, left hand, right hand, tip of the left hand, tip of the right hand, head, left hip, right hip, left knee, right knee, neck, left shoulder, right shoulder, base of the spine, middle of the spine, spine at the shoulder, left thumb, right thumb, left wrist, right wrist. See [6] for details. The second game records kinematic data from the Leap motion controller data at 115 FPS. The data is described as follows: in each frame, there can be one or more hand objects. The hand object reports the physical characteristics of a detected hand. It includes a palm position and velocity; vectors for the palm normal and direction to the fingers; properties of a sphere fit to the hand; and lists of up to five attached fingers (identified by number, from 0 for thumb to 4 for pinky finger). The anatomy of each finger is further described with four bones ordered from base to tip, indexed from 0 to 3: 0 for metacarpal, 1 for proximal, 2 for intermediate, 3 for distal). Finally, each bone is described with its length, width, center position, orientation, next and previous joint [4]. For two minutes of gameplay, the data gathered amounts to approximately 5 MB and 100 MB for game 1 and 2 respectively. 4. DISCCUSION AND CONCLUSION Figure 4: The interface for doctors and caregivers: patient- According to the review and the proposed classification of serious specific game settings (top), patient data (middle), exercises games for health presented in [8], our games can be classified as schedule (bottom). follows: purpose – for health, application area - motor, interactive tool – 3D cameras, interface – 2D/3D, players – single, genre - exergame, adaptability – yes, progress monitoring 11 – yes, feedback – yes, portability – yes, engine – Unity3D, 4.2 Future work – trials, evaluation platform – PC, connectivity – on. 
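As a side note on the data footprint, the per-session volumes reported in Section 3.2 can be roughly reproduced from the stated sampling rates and data layout. A minimal sketch, assuming each recorded value serializes to about eight bytes of text (our assumption; the paper does not state the storage format):

```python
# Back-of-the-envelope check of the per-session data volumes from Section 3.2.
# Assumption (ours): each numeric value serializes to ~8 bytes of text.
BYTES_PER_VALUE = 8
SESSION_S = 2 * 60  # two minutes of gameplay

# Game 1 (Kinect v2): 25 listed joints, position (x, y, z) + quaternion
# (w, x, y, z) per joint, recorded at 30 FPS.
kinect_values_per_frame = 25 * (3 + 4)
kinect_bytes = kinect_values_per_frame * 30 * SESSION_S * BYTES_PER_VALUE
print(f"Kinect, 2 min: {kinect_bytes / 1e6:.1f} MB")  # prints: Kinect, 2 min: 5.0 MB

# Game 2 (Leap Motion): per hand, 5 fingers x 4 bones x ~15 values
# (length 1, width 1, center 3, orientation 4, next/previous joints 3+3)
# plus ~20 values of palm data; two hands at 115 FPS. Order of magnitude only.
leap_values_per_frame = 2 * (5 * 4 * 15 + 20)
leap_bytes = leap_values_per_frame * 115 * SESSION_S * BYTES_PER_VALUE
print(f"Leap, 2 min: {leap_bytes / 1e6:.0f} MB")  # prints: Leap, 2 min: 71 MB
```

The Kinect estimate lands on the reported ~5 MB; the Leap figure depends strongly on what exactly is serialized per frame, so the reported 100 MB is plausible within these assumptions.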
There were two other games mentioned in the review dealing with PD: one aimed at cognitive capabilities and the second at motor skills. The latter is comparable to our games, with the exception that it provides neither feedback nor connectivity. Additionally, we can compare our games against the guidelines for serious games for PD rehabilitation described in [9]. We can see that most were met:
- accuracy – yes; the sensors used provide data that is accurate enough to be analyzed to evaluate the performance and progress of the patient,
- home-based solution – yes; the system is commercially available and affordable,
- real-time biofeedback – yes; the system gives feedback about how the patient is doing to therapists as soon as a session is finished (if a connection is available),
- customized games – the games enable visual cues and an adjustable level of difficulty that can be monitored remotely by the therapist,
- PD rehabilitation protocol – yes; the addition of new mini-games is possible,
- automated system calibration – yes; the Kinect sensor's skeleton tracking with the modified PHIZ acts as an automatic calibration system that matches the range of movement of the patient with the range of movement required by the virtual game player,
- feedback/reward system – yes; the games stimulate the user by constantly giving feedback on the progress of the game, and after the game is finished, to increase the engagement and involvement of the player with the game and reduce the risk of abandonment of the game and physiotherapeutic treatment.

4.1 Lessons learned
Connectivity is often overlooked. Two examples: first, the GigaByte Brix has no external WiFi antenna, which proved to be a problem when operating in the hospital, as the room in which the therapy takes place has poor signal; and second, PD patients are elderly people, often with outdated TV sets without HDMI input.

Other players of the system, such as grandchildren, must be taken into consideration. On the one hand, they make the whole telemedicine experience nicer for the patients and can help with system adoption and troubleshooting; on the other hand, they can bring noise into the collected data if the system has no option to discriminate between the patient and another player. This is why we introduced the warm-up mode of gameplay, in which data is not recorded online.

Ease of use is equally important for both patients and therapists, as both spend a lot of time with the system, but from a different perspective. For the patient, ease of use is determined by how the game feels while playing, while for the therapist it is about the simplicity of setting up the system, switching between patients using the game, and how much help the patients need when using the system at home.

Giving feedback is not always positive, as some patients suggested that knowing that they are near the goal makes them anxious, which in turn makes it harder for them to actually reach the goal.

4.2 Future work – trials, evaluation
The virtual-reality-supported physiotherapy starts with inpatients and lasts for 4 weeks; each individual then continues at home for an additional 2 weeks. 18 inpatients, aged between 54 and 80, were recruited for testing and validation; 5 patients tested the system also in their homes after admission. Physiotherapists assess the patients' condition at the time of recruitment, at the time of admission and at the end of home therapy. Although testing with additional patients and validation in a bigger pilot with 200 patients is the subject of ongoing work, we can say that, in general, the system is well accepted by patients.

5. ACKNOWLEDGMENTS
Our thanks to the URI Soča rehabilitation center for their contribution during the development and testing of the system with real patients.

6. REFERENCES
[1] "Parkinson's Disease Information Page". NINDS. June 30, 2016. Accessed August 2016. http://www.ninds.nih.gov/disorders/parkinsons_disease/parkinsons_disease.htm
[2] "European Parkinson's disease association" website. Accessed August 2016. http://www.pdf.org/en/parkinson_statistics
[3] "Parkinson's disease foundation" website. Accessed August 2016. http://www.epda.eu.com/en/contact-us/press-room/epda-videos/epda-help-us-to-change-attitudes/
[4] Leap Motion controller SDK reference. https://developer.leapmotion.com/documentation/csharp/api/Leap.Bone.html?highlight=bone#csharpclass_leap_1_1_bone_1ae6e994fa39239cd3c745266694cdc0f3 Retrieved 1 August 2016.
[5] "PD_manager project" website. Accessed August 2016. http://www.parkinson-manager.eu/
[6] Kinect v2 SDK reference. https://msdn.microsoft.com/en-us/library/microsoft.kinect.jointtype.aspx Retrieved 1 August 2016.
[7] Susi, T., Johannesson, M., & Backlund, P. (2007). Serious games: An overview. http://www.diva-portal.org/smash/get/diva2:2416/FULLTEXT01.pdf
[8] Wattanasoontorn, V., Boada, I., García, R., & Sbert, M. (2013). Serious games for health. Entertainment Computing, 4(4), 231-247.
[9] Paraskevopoulos, I. T., Tsekleves, E., Craig, C., Whyatt, C., & Cosmas, J. (2014). Design guidelines for developing customised serious games for Parkinson's Disease rehabilitation using bespoke game sensors. Entertainment Computing, 5(4), 413-424.
[10] Toby Sharp, Cem Keskin, Duncan Robertson, Jonathan Taylor, Jamie Shotton, David Kim, Christoph Rhemann, Ido Leichter, Alon Vinnikov, Yichen Wei, Daniel Freedman, Pushmeet Kohli, Eyal Krupka, Andrew Fitzgibbon, and Shahram Izadi. 2015. Accurate, Robust, and Flexible Real-time Hand Tracking. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 3633-3642.
DOI: http://dx.doi.org/10.1145/2702123.2702179

Designing visual interface for nutrition tracking of patients with Parkinson's disease

Peter Novak (+386 1 4773 386), Barbara Koroušić Seljak (+386 1 4773 363), Franc Novak (+386 1 4773 386)
Jožef Stefan Institute, Jamova cesta 39, Ljubljana
peter.novak@ijs.si, barbara.korousic@ijs.si, franc.novak@ijs.si

ABSTRACT
In this paper, we describe the design of a visual interface of a mobile app for tracking nutrients and foods consumed by patients with Parkinson's disease. The interface should enable the patients to recognize objects on the screen, easily perceive their function and interact with them, thus providing an efficient way of entering dietary intake data. The app has been validated by five patients and the preliminary results are encouraging.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Graphical user interfaces (GUI), Prototyping, User-centered design

General Terms
Design, Human Factors

Keywords
User interface, Design, Food and nutrition tracking, dietary assessment, mobile app, Parkinson's disease.

1. INTRODUCTION
There exist different methods for dietary intake assessment, which are used to explore the eating habits of individuals by measuring nutrients and foods. Information about dietary intake is needed for both risk prediction and dietary treatment of chronic diseases. Dietary assessment is possible using either open-ended surveys, such as dietary recalls or records, or closed-ended surveys, including food frequency questionnaires.

Continued efforts have been made to improve the accuracy of these methods, as inaccurate dietary assessment may be a serious obstacle to understanding the impact of dietary factors on disease. Recently, the technology for image detection and recognition using deep neural networks has developed significantly, enabling its application to automatic dietary assessment as well. The technology could not only provide automatic recognition of food and drinks but also enable estimation of volume and nutritional values.

While the need for tracking dietary intake is well recognized, the problem of acquiring the dietary intake data remains a challenging issue. In practice, it appears that patients with Parkinson's disease, like older adults in general, often have problems handling electronic devices. Consequently, designing user interfaces for this population has been a quite well researched topic.

There are numerous studies addressing interfaces both on normal displays [1], [2] and on touch-screens [3], [4], to name just a few. Extensive and in-depth research of user design guidelines for smartphone applications for people with Parkinson's disease has been done by Nunes et al. [5]. Their study featured a literature review of disease symptoms, interviews with caregivers and usability testing experiments with 39 patients. They concluded that the patients' interaction with smartphones may be directly influenced by their motor symptoms (bradykinesia, rest tremor, muscle rigidity, postural instability and gait impairment), non-motor symptoms (sensory symptoms, cognitive disorders, dementia) and the on/off phenomenon (the variation between the symptoms when the medication is working well and when not). They evaluated the performance of four touch gestures: tap – accurate with a large target size; swipe – should be used without activation speed; multiple taps – comfortably performed; drag – not preferred (better replaced with multiple-tap controls). They furthermore constructed information display guidelines, which included: the use of high-contrast colored elements, carefully selected information to display, the presence of an indication of location, the avoidance of time-dependent controls, the use of multi-modality, and the application of guidelines for older adults.

Another study [6] carried out pilot questionnaires with 22 patients and their caregivers, trying to understand the requirements for designing a user interface for them. The PD-diary application for big touch-screens was designed based on assumptions about the patients concluded from the interviews. They suggested that most potential users are older than 60 years and are not computer-literate. Patients have been using only a few electronic devices (e.g. a mobile phone), are not good with computer peripherals (such as mouse and keyboard) and may have negative associations with such equipment (because they don't use it often). The answers indicated that it may be helpful to use GUI logic that is well known to the patients (such as the nine-button numeric keypad of a cash machine). The results also conformed with the previously mentioned research guidelines in suggesting the use of large objects (buttons, labels, etc.), high contrast (bright objects on a dark background and vice versa) and multi-modality (visual information combined with voice announcements and sound effects).

While the previously mentioned research [5] included only testing of general touch gestures, recently some usability testing of applications with a specialized purpose for PD patients has also been done. For example, Barros et al. [7] designed a medical application for patients to follow their medication schedule, based on interviews with doctors, patients and caregivers. They performed usability tests with 12 patients who were not familiar with a medication-managing application.
They used the application on a smartphone, while tactile information was being recorded and task performance was being observed. The results indicated some problems: tapping buttons with icons placed very close to the borders; swiping gestures were observed on buttons with arrows; tapping on the check-boxes wasn't very accurate; and patients did not always understand the additional step of confirming the input. Otherwise, the researchers observed that the recorded errors weren't severe; the patients grasped the main concept and quickly learned how to use the application.

Several studies have researched the design of rehabilitative exergames (digital exercise-based games) (e.g. [8], [9]), and the design of self-management applications for the patients to manage their diaries has also been documented (e.g. [6], [10]). The mentioned research was taken into account when designing a mobile application for tracking the nutrition of patients as part of the PD_manager project, briefly presented in Chapter 2. While the previously described guidelines can here be seen applied in practice, the paper also presents new specific ways of making an application more user-friendly for the patients that interact with it. The focus of the study was how to make the visual language of the user interface as easy to understand as possible for the focus group (Parkinson's disease patients). The results, in the form of design solutions presented in Chapter 3, can therefore be useful for others designing user interfaces for patients with Parkinson's disease (and also older adults in general).

2. NUTRITION TRACKING OF PATIENTS WITH PARKINSON'S DISEASE
In the European funded project PD_manager, we have developed a mobile app for tracking nutrients and foods consumed by patients with Parkinson's disease. The app provides two modules, which are used by experts and patients. The module for patients is simple and enables food recording based on images. Patients take photos of food, which are tagged with food names either automatically or by the patients or their caregivers. Tagged images are uploaded to the server of the Open Platform for Clinical Nutrition (OPEN), where a detailed analysis of the food diary is performed. The results of the analysis are sent to the PD_manager Decision Support System and to the patient's experts (dietitian, physician, logopedist), who perform education and, if needed, nutritional and logopedic therapy.

3. SPECIFIC VISUAL LANGUAGE
While establishing an information structure (that helps users understand the system) and designing an interaction (that makes it easy for them to finish a given task) were also part of designing the user interface, this paper focuses on designing an adjusted visual language. The goal was to design it specifically in a way that enables users to quickly recognize the objects on the screen, consequently making the whole experience more user-friendly.

The designed visual language provides an easy way for the patients to: locate interactive elements on the screen, pay attention to the most important information, differentiate between input text and instructions, understand which functions are available to them, and stay aware of the current activity that they are participating in.

Design choices not only incorporate the previously described guidelines from the research on designing interfaces for patients with Parkinson's disease but are furthermore grounded in other design principles of graphical user interfaces and visual communications.

We used color and shape in a way that utilizes specific characteristics of visual variables: selective and associative perception. We determined the same color for objects with the same functionality, making it easy for users to recognize, locate and isolate them, grouping them into categories (e.g. static and interactive objects). Within the main categories, we use differences of shape to enable users to differentiate between sub-categories, while still preserving the perception of the main categories (e.g. icons of functions and input suggestions – both interactive objects). We established visual hierarchy by designing a few instances of different brightness of information and increasing the difference between them, making it easier for users to process them. Furthermore, we used semiotic principles to communicate the different functions of buttons and provide feedback on successfully completed tasks.

The designed visual language is unified and used throughout the whole app, which makes the interface predictable and consequently allows users to quickly learn how to use the app. It should also be noted that the visual style differs from the ones usually found in mobile applications in its boldness, strong use of contrast and the presence of clear, emphasized elements. At some points the aesthetic value was compromised to make sure that the interface is as evident as possible for the users from the focus group, who may have problems with their sight.

3.1 Differentiation of interactive objects
We enabled users to quickly see what they can tap on and what not by determining a distinctive color hue for interactive and static objects. We colored all the interactive objects blue and all the static ones gray. That means that all the buttons and input information are designed to have a blue color, while all the category titles and input field labels are gray. For example, the user can easily recognize every button by its blue color and every input field label by its gray color (Figure 1a).

3.2 Emphasis of prioritized information
We guided users' attention to the parts of the screen that are most important in a given step by applying a bigger contrast to such parts. We determined two instances of brightness of the objects (for both the interactive and static ones): we applied a lower brightness to objects with prioritized information and a higher brightness to others. For example, the user automatically focuses attention first on the active row, which is dark, while all the other passive text input rows are bright (Figure 1b).

Figure 1: (a) Example of button recognition, (b) example of guiding focus by higher/lower brightness.

3.3 Special text style for input
We made it easy for users to differentiate which text is a label and which is an input by choosing a different font (from the same typeface family) for each of them. We chose a serif font for text input (Roboto Slab) and a sans serif font for other information (Roboto). For example, the user can recognize every input text without reading it, from observing the serif font alone, and similarly he/she can recognize every field label by its sans serif font (Figure 2a).

3.4 Special text style for instructions
We made it simple for users to recognize which text is addressing them directly (instructions and questions) and which not by choosing a different font style (from the same typeface) for each category. We chose an italic font for the instructions and a regular font for other information. For example, the user can swiftly recognize the log-out question without reading it by its italic font, and similarly he/she can recognize the buttons by their regular font (Figure 2b).

Figure 2: (a) Special text style for input, (b) special text style for instructions.

3.5 Icons for functions
We helped users to perceive what functionality certain buttons provide by representing it in the form of pictogram icons. We designed the icons of the functions to have a minimal amount of detail and a unified look. For adding a new meal with a photo we chose a plus sign in front of a camera, and for adding a new meal without a photo a plus sign in front of a blank page. For options we chose the icon of a gear, and for switching between opened meals we used the icon of an arrow. For editing past input we chose the icon of a pencil, and for completing the tagging process the icon of a check. For example, the user can understand in a moment (without reading any indicating text) that pressing the icon of a plus and a camera will add a new photo to the gallery of meals (Figure 3a).

3.6 Feedback for task completion
We reassured users that they know when they have completed a task by giving them visual feedback in the form of a green color. We indicated successfully finished tasks with a dark green background and a bright green check icon. For example, the user can be sure that he/she has successfully performed all the steps of tagging a meal when seeing the background of the text turn green and the check icon appear over the picture of the meal (Figure 3b).

Figure 3: (a) Example of pictogram icons, (b) example of feedback for task completion.

3.7 Indication of current activity
We assisted users in recognizing which task they are currently performing by assigning a distinctive background color to different types of tasks. We chose a dark background for the task of adding a new meal (and also for viewing the options) and a bright background for the task of editing meals. For example, a user can rapidly recognize that he/she is in the process of adding a new meal just by observing the dark background of the screen (Figure 4a). Similarly, he/she can recognize the process of tagging a meal by the white screen background alone (Figure 4b).

Figure 4: Examples of indication of current activity.

4. CONCLUSION
In this paper, we reviewed the literature on designing user interfaces for patients with Parkinson's disease and presented the designed visual language for the interface of a mobile application for tracking the nutrition of patients. The designed solution is based on: differentiating between interactive and static objects, emphasizing prioritized information, differentiating between input information and instructions, communicating available functions, giving feedback for task completion and indicating current activity. As it was designed in a way that makes it easy for patients to recognize objects on the screen, perceive their function and know how to interact with them, the results can come in handy for others designing user interfaces for people with Parkinson's disease.

While not included in this paper, a specific information structure of the application was also constructed (enabling users to easily understand the system, for example by breaking tasks into several steps), and appropriate touch interaction was designed (making it easy for users to effortlessly complete the tasks, for example by reducing the number of required taps). These studies were done as part of the user-interface design for a mobile application for nutrition tracking of patients with Parkinson's disease (a part of the PD_manager project).

5. ACKNOWLEDGMENTS
The work was supported by the PD_manager project, within the EU Framework Programme for Research and Innovation Horizon 2020, under grant number 643706, and TETRACOM TTP 48, Personalized Nutrition Control Aid for Insulin Patch Pump, Contract no: 609491.

6. REFERENCES
[1] Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., Sharit, J. 2009. Designing for Older Adults: Principles and Creative Human Factors Approaches, 2nd edn. (Human Factors and Ageing Series). CRC Press, New York.
[2] Moreno L. and Martínez P. 2012. A review of accessibility requirements in elderly users' interactions with web applications. In Proceedings of the 13th International Conference on Interacción Persona-Ordenador (INTERACCION '12). ACM, New York, NY, USA, Article 47, 2 pages. DOI=10.1145/2379636.2379682.
[3] Silva, P., Holden, K., Jordan, P. 2015. Towards a list of heuristics to evaluate smartphone apps targeted at older adults: a study with apps that aim at promoting health and well-being. In 2015 48th Hawaii International Conference on System Sciences (HICSS), pp. 3237-3246. DOI=10.1109/HICSS.2015.390.
[4] Motti L. G., Vigouroux N., Gorce P. 2013. Interaction techniques for older adults using touchscreen devices: a literature review. In Proceedings of the 25th Conference on l'Interaction Homme-Machine (IHM '13). ACM, New York, NY, USA, 10 pages. DOI=10.1145/2534903.2534920.
[5] Nunes F., Silva P. A., Cevada J., Correia B. A., Teixeira L. 2015. User interface design guidelines for smartphone applications for people with Parkinson's disease. In Universal Access in the Information Society, 2015, pages 1-2. DOI=10.1007/s10209-015-0440-1.
[6] Maziewski P., Suchomski P., Kostek B., Czyzewski A. 2009. An intuitive graphical user interface for the Parkinson's Disease patients. In 2009 4th International IEEE/EMBS Conference on Neural Engineering, Antalya, pp. 14-17. DOI=10.1109/NER.2009.5109223.
[7] Barros A. C., Cevada J., Bayés À., Alcaine S., Mestre B. 2013. Design and Evaluation of a Medication Application for People with Parkinson's Disease. In Mobile Computing, Applications, and Services: 5th International Conference, MobiCASE 2013, Paris, France, November 7-8, 2013, Revised Selected Papers, pages 273-276. DOI=10.1007/978-3-319-05452-0_22.
[8] McNaney R., Balaam M., Holden A., Schofield G., Jackson D., Webster M., Galna B., Barry G., Rochester L., Olivier P. 2015. Designing for and with People with Parkinson's: A Focus on Exergaming. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 501-510. DOI=10.1145/2702123.2702310.
[9] Assad O., Hermann R., Lilla D., Mellies B., Meyer R., Shevach L., Siegel S., Springer M., Tiemkeo S., Voges J., Wieferich J., Herrlich M., Krause M., Malaka R. 2011. Motion-Based games for Parkinson's disease patients. In Proceedings of the 10th International Conference on Entertainment Computing (ICEC'11), Springer-Verlag, Berlin, Heidelberg, 47-58. DOI=10.1007/978-3-642-24500-8_6.
[10] Barros A. C., Cevada J., Bayés À., Alcaine S., Mestre B. 2013. User-centred design of a mobile self-management solution for Parkinson's disease. In Proceedings of the 12th International Conference on Mobile and Ubiquitous Multimedia (MUM '13). ACM, New York, NY, USA, Article 23, 10 pages. DOI=10.1145/2541831.2541839.
TTP 48, Personalized Nutrition Control Aid for Insulin Patch [10] Barros A. C., Cevada J., Bayés À., Alcaine S., Mestre B. Pump, Contract no: 609491. 2013. User-centred design of a mobile self-management solution for Parkinson's disease. In Proceedings of the 12th 6. REFERENCES International Conference on Mobile and Ubiquitous Multimedia (MUM '13). ACM, New York, NY, USA, Article [1] Fisk, A. D., Rogers, W. A., Charness, N., Czaja, S. J., Sharit, 23, 10 pages. DOI=10.1145/2541831.2541839. J. 2009. Designing for Older Adults: Principles and Creative 16 Redesign of Slovenian Avalanche Bulletin Vanja Blažica Janez Jaka Cerar Aleš Poredoš Slovenian Environment Agency Slovenian Environment Agency Slovenian Environment Agency Vojkova 1b Vojkova 1b Vojkova 1b Ljubljana, Slovenia Ljubljana, Slovenia Ljubljana, Slovenia +386 1 478 4116 +386 1 478 4412 +386 1 478 4144 vanja.blazica@gov.si jaka.cerar@gov.si ales.poredos@gov.si ABSTRACT bulletin issued in test phase in the beginning of April 2016. The We present the redesign of the Slovenian avalanche bulletin, results from the test phase and user feedback will be used for published regularly during the winter season to warn against additional improvements for the winter season of 2016/2017. avalanche danger and to provide specific information for advanced users. The former version included an estimation of 2. BULLETIN BEFORE REDESIGN danger on a scale from one to five with supporting text for the Before the redesign, the bulletin consisted of the danger level for whole country, while the new one offers an additional graphical the next few days and accompanying text describing in detail the description, specified for several geographical regions. The snow conditions and danger situation along with the forecast for redesign profoundly influenced the work of avalanche forecasters the next few days (Fig. 1). by introducing a new interface, additional input and database storage. 
At the same time, users welcomed the additional information, international comparability and user friendliness of the new bulletin. Categories and Subject Descriptors D.3.3 [Information interfaces and presentation (e.g., HCI)]: Miscellaneous General Terms Design, Standardization Keywords Avalanche bulletin; official warnings; risk communication; danger awareness; usability testing 1. INTRODUCTION Depending on the snow and avalanche situation, avalanche bulletins are issued for the majority of the planet’s mountainous terrain. Their purpose is to warn inhabitants and visitors of avalanche-prone areas of the current estimated danger and to provide them with additional information (e.g. type of avalanche, reason for triggering). As winter mountaineering and ski touring become more mainstream, they are increasingly accessible to less experienced people, whose lack of knowledge and skills can result in injury or death. Therefore, there is an increasing need for user- friendly, easily understandable warnings with a clear message of the dangers one is exposing himself to when visiting avalanche- prone terrain [1]. The avalanche warning services in Europe have followed this need (in accordance with their varying resources) by upgrading their bulletins [2, 3] and by agreeing on an international set of Figure 1: Bulletin before redesign icons for danger level and avalanche situations [4]. Although the text itself is very informative, usability testing The Slovenian Environment Agency publishes avalanche bulletins indicated that it is more favored among experienced users, while regularly throughout the winter season [5]. These are the official novices have trouble comprehending the content due to lack of warnings for the entire Slovenian area. To improve service and avalanche knowledge and experience. 
This predominantly textual adhere to international standards, a complete redesign of the form is also difficult for analyses and comparisons with previous bulletin was undertaken in winter of 2015/2016 with the new seasons and other avalanche services from neighboring countries. 17 3. REDESIGNING THE BULLETIN The first step of the redesign was an extensive study of other European avalanche bulletins as well as the bulletins on other continents, to find examples of good practice and examples of visualization options. In the second step, the extent of the information to be presented in the new bulletin had to be decided. The bulletin needed to be as informative as possible while avoiding information overload and balancing the resources needed to provide the data, e.g. data availability and human resources needed to process the data. Based on the agreed extent of information, several drafts of the new bulletin were prepared and user tested on several target groups. To reach a final decision, we considered guidelines from previous work analyzed in step one with the addition of an online survey and usability testing based on initial paper prototypes (Fig. 2 - 4). Figure 4: Graphical part of prototype no.3 User testing showed that users prefer prototype no. 3 because it presents the information for each region separately, although the table was not understood by everyone. Prototype no. 1 was confusing because it shows two problems at the same time while prototype no. 2 was well accepted. Therefore the new design is a modified prototype no. 2 with the possibility to select a particular region. 
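This per-region structure, with a danger level on the standard 1–5 scale, can be illustrated with a minimal, hypothetical data model; the class and field names below are ours for illustration, not the Agency's actual schema, and the XML output merely sketches the kind of machine-readable dissemination format the redesign aims at.

```python
from dataclasses import dataclass
from xml.etree.ElementTree import Element, SubElement, tostring

# European avalanche danger scale: 1 (low) .. 5 (very high)
DANGER_LEVELS = range(1, 6)

@dataclass
class RegionForecast:
    region: str          # a proper region name, not "R1"
    danger_level: int    # 1..5, validated below
    trend: str           # e.g. "steady", "rising within the day"

    def __post_init__(self):
        if self.danger_level not in DANGER_LEVELS:
            raise ValueError("danger level must be between 1 and 5")

def bulletin_to_xml(forecasts):
    """Serialize one day's per-region forecasts to a simple XML string."""
    root = Element("bulletin")
    for f in forecasts:
        node = SubElement(root, "region", name=f.region)
        SubElement(node, "danger").text = str(f.danger_level)
        SubElement(node, "trend").text = f.trend
    return tostring(root, encoding="unicode")
```

Keeping the region list a plain input to the serializer reflects the design goal stated later in the paper: the number of regions (and parameters) can change without changing the format.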
Other findings included:
- More experienced users rely more on the textual part and decide on the danger level themselves, while for novices the danger level is the most important information;
- The name "avalanche bulletin" does not stress enough that this is a warning against danger, particularly to novices;
- The danger level for the following days is not clearly presented;
- The entire scale of danger levels should be presented and the icons should also be numbered from 1 to 5;
- Regions should be named with proper names, not R1 – R3;
- The weather forecast is a useful piece of information, although it is not always a part of similar bulletins.

Figure 2: Graphical part of prototype no. 1
Figure 3: Graphical part of prototype no. 2

Based on usability testing, a near-final version was designed with the final set of information to be included in the bulletin. This was the necessary input for the third step.

The third step was to design a new database and interface to support the forecasters' new workflow (Fig. 5). The interface was tested internally with the forecasters to achieve a user-friendly and effective design.

Figure 5: Two screenshots of the new interface for data input

The fourth step was to achieve further improvements by asking stakeholders (mountain rescue service, mountain guides, alpine association etc.) for comments on the near-final version. The bulletin was issued in the new version for a test period in the last part of winter. The next steps will include fine-tuning based on the evaluation of the test period in terms of user acceptance and impact on the forecasters' workflow.

4. BULLETIN AFTER REDESIGN
The new bulletin (Figs. 6 and 7) puts more emphasis on graphical information for easier comprehension. The graphical content is presented for four geographical regions for the current and next two days. The avalanche situation is graphically explained with international icons (e.g. type of avalanche) and additional custom-made icons (e.g. change of danger within the day). The new bulletin is more comparable to bulletins from other countries, which makes comprehension easier for foreigners as well as for Slovenians going abroad.

Figure 6: Redesigned bulletin – more details in graphical part
Figure 7: Redesigned bulletin – less details (main view)

5. IMPACT OF THE REDESIGN
For the Slovenian avalanche service, the most important achievement is the improvement in the quality of their service when informing and warning the public. Additionally, the new database with more numeric information enables easier analysis of the season and the performance of the service as well as improved comparability with other services. The new interface was designed so that the number of geographical regions can be easily changed should the service decide for more (or less) detail. Similarly, the number of parameters can also be modified, making the bulletin adjustable. Although not yet in use, the data is prepared with improved dissemination in mind (XML format, widget). The presented graphical information also enables automatic translation of a large part of the information to other languages, which also remains to be implemented.

For users, easier dissemination and easier understanding mean increased awareness and consequently improved safety. This is particularly true for novices who had difficulty understanding the content of the previous bulletin. In the survey conducted after publishing the new bulletin in the test period, none of the 69 participants described the new bulletin as worse than before, and the majority of users (65%) agreed that the bulletin has been significantly improved.

6. REFERENCES
[1] Burkeljca, J. Shifting audience and the visual language of avalanche risk communication. In: Proceedings ISSW 2013, International Snow Science Workshop, Grenoble, France, pp. 415-422.
[2] Martí, G., Pujol, J., Fleta, J., García, C., Oller, P., Costa, O., Martínez, P. A new iconographic avalanche bulletin for the Catalan Pyrenees: a beginning for a future avalanche forecasting database. In: Proceedings ISSW 2009, International Snow Science Workshop, Davos, Switzerland, pp. 361-365.
[3] Winkler, K., Bächtold, M., Gallorini, S., Niederer, U., Stucki, T., Pielmeier, C., Darms, G., Dürr, L., Techel, F., Zweifel, B. Swiss avalanche bulletin: automated translation with a catalogue of phrases. In: Proceedings ISSW 2013, International Snow Science Workshop, Grenoble, France, pp. 437–441.
[4] EAWS web page. http://www.avalanches.org/eaws/en/main_layer.php?layer=basics&id=5
[5] Slovenian avalanche bulletin. http://www.meteo.si/pozor/plaz

Improving the usability of online usability surveys with an interactive Stripe scale
Matevž Pesek, Alja Isaković, Gregor Strle, Matija Marolt
University of Ljubljana, Faculty of Computer and Information Science, Laboratory for Computer Graphics and Multimedia
{matevz.pesek,matija.marolt}@fri.uni-lj.si

ABSTRACT
The paper introduces Stripe, an interactive continuous scale for online surveys that makes it easy to compare multiple answers on a single screen. The Stripe is evaluated as an alternative to the n-point Likert scale, which is commonly used in online usability questionnaires like the System Usability Scale (SUS).
The paper presents the results of a user study, which confirmed the validity of results gained with the proposed Stripe interface by applying both the Stripe and the Likert interface to an online SUS questionnaire. Additionally, the results of our study show that the participants favor the Stripe interface in terms of intuitiveness and ease of use, and even perceive the Stripe interface as less time consuming than the standard Likert-scale interface based on radio buttons.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Miscellaneous

Keywords
user interfaces, feedback gathering, human computer interaction, system usability score, user study, measurement scales

1. INTRODUCTION
Questionnaires are a common tool for usability evaluation in HCI research. For the purposes of our own usability testing, we developed Stripe, a more interactive and compact scale that fits on smaller screens and supports the comparison of answers across different questions. Knowing that the design of a user interface can affect the gathering procedure, and in some cases influence (or bias) the results [6, 8], we performed a user study that compared the validity of the newly proposed Stripe interface with the standard Likert scale. The user study tested both user interfaces on the System Usability Scale (SUS) questionnaire for two well-known products. This gave us the ability to compare the SUS scores attained through both user interfaces to SUS scores reported by other studies. To further evaluate the potential of Stripe, we also performed a usability survey on both interfaces.

2. RELATED WORK
Usability is defined as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [12]. There are many standard methodology tools available for measuring various usability aspects, ensuring the validity and comparability of results gained by a methodologically sound and well-structured approach. The tools vary in size and scope, but they all commonly use the Likert scale as the de facto standard for user-feedback gathering.

The NASA Task Load Index (NASA-TLX) is a multi-dimensional scale designed to obtain workload estimates from a user performing a specific task [9, 10]. The ATTRAKDIFF questionnaire [11] is often used for qualitative evaluation of the pragmatic and hedonic aspects of a product or service. For measuring the usability aspects, the Software Usability Measurement Inventory (SUMI), Questionnaire for User Interaction Satisfaction (QUIS), System Usability Scale (SUS) and Usability Metric for User Experience (UMUX) are commonly used [13]. SUMI is a 50-item Likert scale questionnaire that measures five aspects of user satisfaction and scores them against expected industry norms. QUIS consists of a 27-item Likert scale and is similar to SUMI, but measures attitude towards 11 interface factors. SUS [2] is a 10-item Likert scale questionnaire measuring the usability and an overall satisfaction with a product or service. Finally, the UMUX [6] is a 4-item Likert scale questionnaire used for a subjective assessment of perceived usability.

For the purpose of testing new user interfaces for surveys, the SUS provides the right balance between length and precision with its 10 questions. Like other standard usability measurement methodology approaches, the SUS was carefully constructed from the beginning in order to achieve high reliability, validity and repeatability of results [2]. The result of the SUS is a single score, between 0 and 100.

2.1 Scales used in online questionnaires
Paper-based questionnaires have a long history of experimentation with different styles of rating scales, especially in the field of psychology. Visual analogue scales (VAS) appeared back in 1921 and were improved upon by graphic rating scales (GRS) in 1923 [4]. Both scales include an anchored horizontal line, with extreme values of the measured property listed at each end [4]. The user can place a mark anywhere along the continuous line.

In 1932 psychologist Rensis Likert introduced his own scale, which limits the number of available options to 5 in the original scale and no longer provides a continuum of choices along the line [5]. Since then, the Likert scale has been adapted to different types of questionnaires, including online versions that use standard HTML input radio buttons.

In contrast, continuous line-based scales have not been supported by the HTML standard until recently. HTML5 introduced a new "range" input type, which creates a slider scale with a handle that can be moved along the line to select a value1. The slider can be configured to support discrete steps or to act as a continuous scale. A potential problem with this approach is that the initial slider position can influence the response and can even lead to a different response distribution when compared to traditional scales based on radio buttons [7]. Luckily, the wide adoption of the JavaScript programming language in modern web browsers offers new opportunities for more interactive user interfaces that can bypass the limitations of standard HTML input types.

1 http://www.w3schools.com/html/htmlforminputtypes.asp

Research on online survey interfaces tends to focus on the validity of results and user performance (completion time), but fails to evaluate other usability aspects of alternative interfaces. For example, Couper et al. [4] compared online questionnaires that used VAS to ones with different styles of radio buttons and surveys with numeric input fields. Their experiment found that while VAS surveys took longer to complete and contained more missing data, they produced the same response distributions as other types. Cook et al. [3] compared a slightly different style of online graphic rating slider scales with surveys based on radio buttons and found that both provided reliable scores, but also noticed that sliders took a bit longer to manipulate. User satisfaction and subjective perceptions were not evaluated in these studies, which calls for more HCI research that takes a wider range of usability aspects into account when evaluating new interfaces for online surveys.

In the following Section we propose the Stripe, an alternative to the Likert scale that aims to take advantage of the benefits of continuous scales while offering a more compact interactive user interface that makes it easy for users to compare answers, even on a smaller screen.

3. THE STRIPE: A DYNAMIC INTERFACE
The Stripe is a user interface developed to provide an interactive and intuitive continuous-scale alternative to the standard multi-point scale interfaces. It is implemented as a canvas with one horizontal dimension (Figure 1). The dimension represents the presence of a variable, ranging between two extremes (e.g. negative/positive, absent/significantly expressed, completely disagree/agree). This is similar to the standard VAS scale. But unlike the VAS or the Likert scale, the Stripe interface accommodates drag-and-drop functionality for multiple labels, as well as annotation of multiple categories on the same canvas. In its simplest form, the user is provided with a set of labels describing different nominal values of the variable. By dragging the labels onto different positions of the canvas, the user marks their perception of each individual label on a continuous scale. The positions of placed items can subsequently be quantized to discrete values, if so desired. The amount of information retrieved by the Stripe interface is therefore at least equal to the amount of information gathered by a radio button matrix (for example, a set of 5-point scales) commonly used to capture similar information.

Figure 1: The Stripe interface. The statements are shortened into phrases for improved readability, but the full statement for each label is shown on 'mouse-over'.

The Stripe and its extended version were already used in an online survey on multi-modal perception of music [14], and later evaluated in terms of usability, using a modified version of the NASA-TLX questionnaire [15]. However, in order to fully evaluate the potential of Stripe, it is necessary to compare it with the standard multi-point Likert scale approach, typically used in online surveys.

4. EVALUATION
The goals of our experiment were: 1) to evaluate the validity of SUS scores gathered with the Stripe interface using the Likert scale as control and 2) to compare the usability of the Stripe interface with the usability of the standard Likert scale. The Stripe interface was designed for online questionnaires, so the experiment was conducted online. The user study was conducted on 2 different groups of participants, two months apart, to provide additional verification of the results.

4.1 Participants
A total of 68 participants, recruited from students and faculty members at the University of Ljubljana, fully completed the user study. Only participants with previous experience with the subject of the SUS were included in the survey, to obtain feedback from users with pre-existing experience and regular interaction with the chosen SUS subject.

For the first, Gmail survey, we collected feedback from 41 participants; 12 were male and 29 were female. Their average age was 29.4 years, with 7.1 standard deviation. For the second, Microsoft Exchange survey, we collected 27 responses from 21 female and 6 male participants. The average participant's age was 31.5 years, 7.9 standard deviation.
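As background for the scoring used below, the standard SUS computation (ten 5-point items reduced to a single 0–100 score, per Brooke's published scheme) and a possible quantization of continuous Stripe positions onto a 5-point scale can be sketched as follows. This is a sketch under our assumptions: `quantize` implements one plausible rule of our own choosing, since the paper does not spell out its exact quantization step.

```python
def sus_score(responses):
    """Standard SUS scoring: ten items answered on a 1-5 scale.

    Odd-numbered items contribute (r - 1), even-numbered items (5 - r);
    the sum of contributions is multiplied by 2.5, giving 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

def quantize(position, points=5):
    """Map a continuous Stripe position in [0, 1] onto a 1..points scale.

    Illustrative rule only: split [0, 1] into equal bins, with the
    right edge folded into the top bin.
    """
    return min(points, int(position * points) + 1)
```

With this rule, quantized Stripe responses can be fed to `sus_score` exactly like Likert responses, which is the comparison made in row 3 of Table 1.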
The amount of information retrieved by 4.2 Experiment procedure the Stripe interface is therefore at least equal to the amount The user study was conducted online, with participants fill- ing in all questionnaires on their own, using their own com- 1http://www.w3schools.com/html/htmlforminputtypes.asp puters and their web browser of choice. At the beginning, 22 each participant was asked to confirm their familiarity with from the other for α = 0.01. The ANOVA shows no statis- the product being evaluated in the SUS: Gmail for the first tical differences between values gathered by both interfaces group of participants and Microsoft Exchange for the second for both services, Gmail and Microsoft Exchange (p = 0.32). group. Participants that passed this initial step continued to Furthermore, the ANOVA shows no statistical difference in filling is the SUS questionnaire twice. The formal Slovenian variances between both services (p = 0.44). translation of the SUS questionnaire was used [1]. The study also included questions on how both interfaces The website randomly assigned either the Stripe or Likert compare in terms of intuitiveness and comprehension, time version of the SUS first, followed by the other version, dis- perception and difficulty. The results showed that the par- played on a separate page. The user interface used in the ticipants found the Stripe more intuitive and comprehensible SUS questionnaire was the independent variable, the two with the average values of 4.54 on a 7-point scale. In terms configurations were the Stripe interface and the 5-point Lik- of time perception, the Stripe was rated as slightly less de- ert scale. The resulting SUS score was the dependent vari- manding than the 5-point Likert scale with an average value able. This part of the experiment lasted on average ap- of 3.79 (Figure 2). 
Finally, the participants rated the Stripe proximately 7 minutes per participant, no time limits were interface as slightly easier for expressing opinions, with the imposed. average score of 3.42 on a 7-point scale (1 - the Stripe was easier than the Likert interface, 7 - the Stripe was more After the SUS evaluation, the participants were presented difficult than Likert). with 3 additional usability questions on a 7-point scale: • By comparing both, the Stripe and the 5-point scale interfaces, which of the interfaces was more intuitive and comprehensible? (1 - 5-point scale, 7 - Stripe) • By comparing both, the Stripe and the 5-point scale interfaces, which of the interfaces takes more time to fill-in? (1 - 5-point scale, 7 - Stripe) • Is it easier or more difficult to express your opinion with the Stripe interface (due to the visual comparison of your answers)? (1 - easier, 7 - more difficult) Basic demographic data (age and gender) of participants with optional written feedback was also collected during the final step. All questions were asked in Slovenian language. Figure 2: Comparison of the Stripe and Likert scale shows that the 5-point scale is perceived as more 5. RESULTS AND DISCUSSION time consuming. The scores of the SUS questionnaire for both groups of par- ticipants and both interfaces are shown in Table 1. For both Overall, the results favor the Stripe interface over the 5-point experiments, results indicate consistent responses gathered Likert scale: mostly in terms of intuitiveness, slightly less in with each interface. However, the standard deviation of re- terms of simplicity of expressing an opinion. An unexpected sponses gathered by the Stripe interface is smaller. 
This result was the finding that participants found the 5-point is due to the use of a continuous scale, which allows for a Likert scale, which was implemented with standard radio more fine-grained positioning of the labels, unlike restricted buttons, as slightly more time consuming than the graphical options on traditional n-point scales. When we performed and interactive Stripe interface. This result is at odds with a quantization of continuous responses into a 5-point scale research that shows that graphical scales take more time (row 3 in Table 1), the scores were very similar for both in- to fill-in than radio button scales, which leads us to the terfaces. The average SUS score for Gmail was close to the conclusion that the participants found the Stripe interface average SUS score of 83.5 from [3], further confirming the more enjoyable and engaging than the standard Likert scale roboustness of the SUS questionnaire and the validity of our interface. results for both interfaces. 6. CONCLUSION To further explore the consistency of results for both inter- Usability questionnaires like the SUS are still widely based faces, we performed a two sample t-test for each question on the traditional n-point Likert scale, which has also been given in the Stripe and the 5-point Likert interface. The adopted in online surveys due to its simple implementation variances for all 10 SUS questions appear statistically consis- with HTML radio buttons. And while there is some existing tent within each pair of variables for a given question. Thus, research that compares Likert scales with continuous scales, we rejected the null hypothesis of unequal variances for each most research focuses on time performance and reliability pair of question variables for α = 0.01. Consequently, we of results. For this reason, we decided to conduct a user performed the analysis of variance for the cumulative scores. 
study that would also evaluate the usability of an alternative The variances for both services appear not to differ signifi- user interface for online usability surveys. In addition to cantly. No group has marginal means statistically different providing the benefits of a continuous scale, the proposed 23 Table 1: Comparison of average SUS scores and their deviations for the 5-point Likert scale and Stripe interfaces. Gmail Exchange User interface Avg. SUS score σ of SUS scores Avg. SUS score σ of SUS scores (1) 5-point Likert SUS 79.88 18.03 72.03 20.32 (2) Continuous Stripe SUS 79.02 16.61 70.03 21.44 (3) 5-point Stripe (quantized) 80.55 17.27 70.37 22.36 ∆ 1 vs. 2 0.86 1.42 2.00 1.12 ∆ 1 vs. 3 1.67 0.76 1.66 2.05 Stripe scale also aims to provide a more compact alternative [9] S. G. Hart. Nasa-task load index (nasa-tlx); 20 years that could work well across different devices and smaller later. Proceedings of the Human Factors and screens. Ergonomics Society Annual Meeting, 50:904–908, 2006. [10] S. G. Hart and L. E. Staveland. Human Mental The results of the user study, which was conducted online Workload, chapter Development of NASA-TLX (Task on two separate groups of participants, show that both the Load Index): Results of empirical and theoretical Stripe and Likert scale interfaces provide consistent SUS research, pages 139–183. North Holland Press, scores, confirming the Stripe interface as a viable alterna- Amsterdam, 1988. tive. The Stripe interface was favored in terms of intuitive- [11] M. Hassenzahl, M. Burmester, and F. Koller. ness and chosen as easier for expressing opinions. The most Attrakdiff: A questionnaire to measure perceived surprising result was seeing the Stripe interface score slightly hedonic and pragmatic quality. Mensch & Computer, better in terms of perceived time. While surveys based on 2003. graphical interfaces like the Stripe usually take more time to [12] Iso/Iec. 
ISO/IEC, 9241-11 Ergonomic requirements complete, the participants in our study rated the standard 5- for office work with visual display terminals (VDT)s - point Likert scale as taking slightly more time. Overall, our Part 11 Guidance on usability, 1998. results show that the Stripe interface was the participant’s [13] A. Madan and S. K. Dubey. Usability evaluation favorite interface across all tested usability aspects. methods: a literature review. International Journal of Engineering Science and Technology, 4:590–599, 2012. [14] M. Pesek, P. Godec, M. Poredos, G. Strle, J. Guna, 7. REFERENCES E. Stojmenova, and M. Marolt. Introducing a dataset [1] B. Blažica and J. R. Lewis. A slovene translation of of emotional and color responses to music. In the system usability scale: The sus-si. International Proceedings of the International Society for Music Journal of Human-Computer Interaction, Information Retrieval, Taipei, pages 355–360, 2014. 31(2):112–117, January 2015. [15] M. Pesek, P. Godec, Poredoš, G. M. Strle, J. Guna, [2] J. Brooke. Sus: A ’quick and dirty’ usability scale. E. Stojmenova, and M. M. Capturing the mood: Usability Evaluation in Industry, 189(164):7, 1996. Evaluation of the moodstripe and moodgraph [3] C. Cook, F. Heath, R. L. Thompson, and interfaces. In Management Information Systems in B. Thompson. Score reliability in webor internet-based Multimedia Art, Education, Entertainment, and surveys: Unnumbered graphic rating scales versus Culture (MIS-MEDIA), IEEE Internation Conference likert-type scales. Educational and Psychological on Multimedia & Expo (ICME), pages 1–4, 2014. Measurement, 61(4):697–706, August 2001. [4] M. P. Couper, R. Tourangeau, F. G. Conrad, and E. Singer. Evaluating the effectiveness of visual analog scales: A web experiment. Social Science Computer Review, 24(2):227–245, 2006. [5] R. Cummins and E. Gullone. Why we should not use 5-point likert scales: The case for subjective quality of life measurement. 
In Proceedings, Second International Conference on Quality of Life in Cities, pages 74–93, Singapore, 2000. National University of Singapore. [6] K. Finstad. Response interpolation and scale sensitivity: Evidence against 5-point scales. Journal of Usability Studies, 5:104–110, 2010. [7] F. Funke. A web experiment showing negative effects of slider scales compared to visual analogue scales and radio button scales. Social Science Computer Review, 34(2):244–254, April 2016. [8] F. Funke and U. D. Reips. Why semantic differentials in web-based research should be made from visual analogue scales and not from 5-point scales. Field Methods, 24:310–327, 2012. 24 Evaluation of common input devices for web browsing: mouse vs touchpad vs touchscreen Andrej Malečkar Matjaž Kljun Peter Rogelj andrej.maleckar@student.upr.si matjaz.kljun@upr.si peter.rogelj@upr.si Klen Čopič Pucihar klen.copic@famnit.upr.si University of Primorska, FAMNIT Glagoljaška 8 Koper, Slovenia ABSTRACT 1. INTRODUCTION With the ever increasing connectivity to the Internet the use Today, nearly half of the world’s population is connected of the web has spread from static environments of desktop to the Internet1. According to Global Web Index, users computers to mobile context where we interact with the web spend up to 6 hours on the internet a day, of which 2-3 though laptop computers, tablet computers, mobile phones hours are spent on social networking sites 2. These figures and wearable devices. Recent studies have shown that young show that users spend a lot of time interacting with internet people access the web using various devices and input tech- services, among which, the world wide web (WWW or web niques and spend on average more than 20 hours a week on from hereon) is most prominent. the web. In this paper we plan to investigate which input technology is most usable or preferred for performing differ- Browsing the web can be carried out on a wide range of ent tasks on the web. 
We decided to compare and evaluate computer-based products (e.g. smart phones, smart TVs, the usability of the three most used input devices for web desktops, laptops, tablets, game consoles, e-book readers) browsing, namely: a computer mouse and a touchpad on a and various input devices (e.g. mouse, touchpad, touch- laptop, and a touchscreen on a smartphone. For this pur- screen, pointing stick, trackball, game and remote con- pose we have built a custom web page where users had to trollers). Users are facing different interaction modes with perform seven common tasks on web: open a URL address, various input devices when carrying out the same tasks on copy/paste a URL address, copy/paste text, scroll up-down, different systems. As an example, let us assume that the scroll left-right, zoom in the context of a web page, and nav- we want to increase the size of the content displayed on the igate a map. The results show that the mouse is still a pre- screen (zoom). On a computer we can achieve this with a ferred input device with shortest completion times, followed mouse wheel or with a combination of keys on the keyboard. by the touchscreen interface even if it performed slower at On the touchpad or touchscreen we can use a combination of some tasks compared to touchpad, which was marked as fingers touching and moving on the surface (pinch gesture) least preferred. of these input devices. Moreover, interaction is implemented with subtle differences on different operating systems, on dif- Categories and Subject Descriptors ferent hardware solutions, and nonetheless, in different web H.5.2 [Information interfaces and presentation]: User browsers. Even if at first glance these slight differences look interfaces—Input devices and strategies (e.g., mouse, touch- insignificant, they can lead to confusion and negative user screen) experience. 
The objective of the research presented in this paper was to evaluate and compare the three most commonly used input devices when carrying out the same tasks on the web on different computer systems. These three devices are a mouse, a touchpad, and a touchscreen. The aim of the research was to gain qualitative and quantitative information about user interaction while browsing the web, to determine which tasks are difficult to perform with a specific input device, which input device causes problems and why, and to reveal areas where these devices could be improved to lead to a better user experience.

Keywords
input devices, performance, web browsing, evaluation

¹ http://www.internetworldstats.com/stats.htm
² http://www.globalwebindex.net/blog/daily-time-spent-on-social-networks-rises-to-1-72-hours

2. LITERATURE REVIEW
The literature features an abundance of comparisons and evaluations of input devices for various computer tasks. An early comparison looked at how the mouse, trackball and stylus perform during pointing and dragging tasks [7]. The results show that pointing tasks produce fewer errors and are completed in less time than dragging tasks; the stylus performed better when pointing, and the mouse better when dragging, compared to the other two. Moreover, it has been shown that both tasks can be modeled by Fitts' law, which states that the time required to move to a target is a function of the ratio between the distance to the target and the width of the target [4].

It has been argued that target acquisition covered by Fitts' law is not the only task performed with input devices. We often perform trajectory based tasks (such as drawing, writing, and navigating hierarchical menus), which can be described and modeled by the steering law [1]. The law is a predictive model of the speed as well as the time a user needs to navigate a pointing device through a confined path on the screen. Comparing input devices when performing linear and circular steering tasks has shown that in overall performance the tablet and the mouse surpassed the trackpoint, touchpad and trackball. However, depending on the nature of the task, some devices performed better than others [1].

Other tasks have also been investigated, such as remote pointing input devices for smart TVs [6], operating input devices in 3D environments [3], or comparing mouse vs bimanual touch interaction on tabletops [5]. The latter has shown better mouse performance for single-user single-hand tasks, while touch proved better for both-hand and multi-user interaction. Returning to everyday tasks, a recent study compared the performance of three input devices (the finger, a stylus, and a mouse) in three pointing activities (bidirectional tapping, one-dimensional dragging, and radial dragging, i.e. pointing to items arranged in a circle around the cursor) [2]. The study confirmed that finger tapping is faster but more inaccurate with small targets than the stylus and mouse, while the latter performed better in dragging tasks.

In contrast to the presented studies, our research focused on real world tasks users often perform while browsing the web. For this purpose we built a regular web site and logged users' performance in finishing predefined tasks. Additionally, our study focused on how users perceive the input devices and explored their opinions and preferences in using them.

3. METHOD
For the purpose of the study we built a web page consisting of seven consecutive tasks. Before starting each task, users had to read short instructions and had the possibility to train with the currently selected input device. When they were comfortable enough, they had to press the Start button to start the task. The web page for each task was made in a simple linear fashion (with instructions, Start button, task content and the button for the next task following one another from top to bottom) so that the page would look as similar as possible on the wide screen of the laptop and on the phone's screen. We thus did not use any navigation (except for the button leading users to the next task) or complex layout that would need responsive design and affect the layout of elements on the page. We also used Bootstrap³ so that the text remained readable on both screen sizes. Because the page looked the same on both screens we did not have to build a separate page for each screen size in order to be able to compare the results, and we avoided different designs affecting users' performance.

The web page recorded task completion times. Each user completed all seven tasks with each input device. After finishing the tasks with each device, users filled in a questionnaire. The order of input devices was randomised.

The seven tasks users had to complete were: (i) open (click on, tap) a URL link, which opened within the page (iFrame), (ii) copy the URL of an image on the page and paste it into a text field on the page, (iii) copy the text on the page and paste it in a text box on the page, (iv) scroll a long text down and up again, (v) scroll a wide text left and right again, (vi) zoom in on an image as much as possible and zoom out to normal size, and (vii) move from one location on a map (the university's building) to another location (a well known park in the town) – both locations initially visible on the map – and zoom in on the park as much as possible.

We recruited 32 users (11 female and 21 male) with snowball and convenience sampling. Participants were on average 28 years old, and had used: (i) a mouse on average 15.25 years, (ii) a touchpad on average 7.9 years, and (iii) a touchscreen on average 4.9 years. The average number of years of using a touchscreen coincides with the mass emergence of these devices on the market.
The number of years of using the mouse is higher than the number of years of using the touchpad. This can be explained by the fact that users in primary and secondary school do not need the mobility provided by laptops; they buy their first laptop when they become students. Considering the average age of users (28), our average user became a student 9 years ago, which coincides with the years of touchpad use (7.9 years).

We conducted the study comparing the three input devices while users performed common web browsing tasks. We selected the most frequently used input devices, as users are familiar with them: a mouse (connected to an HP ProBook 4530s laptop), a touchpad (on the same HP ProBook 4530s laptop) and a touchscreen (on a Samsung Galaxy S6 Edge). For completing the tasks we used the latest Google Chrome browser at the time of the study (v 49.0.2623) for the Windows 8.1 and Android 6.01 operating systems.

³ http://getbootstrap.com/
⁴ See http://www.measuringu.com/sus.php

Figure 1: Left: tasks rated as easiest. Centre: tasks rated as hardest. Right: input devices rated as easiest, hardest, and fastest by number of participants.

4. RESULTS AND DISCUSSION
The mouse interface was ranked by users as the easiest and fastest interface among the three, whilst the touchpad was rated as hardest (Figure 1 right). The majority of users highlighted that they started using computers with the mouse and that the mouse continues to be their main input device when working with computers, which may be one of the reasons for this result. System Usability Scale (SUS)⁴
results partially confirm this trend (mouse scored 82.89%, touchscreen 80.31%, and touchpad 64.92%) and highlight that only the touchpad scored under the usability threshold of 68%. The touchpad was described as impractical, quite imprecise and slow, and by 25 out of 32 users as the most difficult interface (Figure 1 right). The main reason for such a turnout is probably the fact that users do not use the touchpads on their laptops as their main input device. Another reason can be the capacitive sensing technology that requires stronger pressure (compared to touch screens), creating potential discomfort for casual touchpad users who mainly use mouse and touchscreen interfaces. Moreover, users also stated that the size of the touchpad is limited and does not allow for fine and precise interaction. Different manufacturers also implement the touchpad's interaction differently (two users claimed that their touchpad works differently), which may lead to further confusion; the relatively bad results for the touchpad modality could also be due to the specific implementation in the instrumentation used (HP ProBook 4530s).

The easiest tasks for all three devices were Task 1 (opening the link) and Task 4 (scrolling the text up and down), as seen on the left in Figure 1. This confirms the results of previous studies described in the literature review, which found that the pointing task is performed fastest on pointing input devices (finger, stylus), but is not difficult with the mouse either (described by users as the most precise device of the three). The second easiest task was Task 4. This result can be attributed to the fact that scrolling is commonly performed, especially with sites such as social networking sites (SNS) that present content on an "infinite" scrollable timeline. The fact that users spend between two and three hours a day on SNS also confirms the commonality of scrolling. Nevertheless, some users selected the scrolling tasks as hardest, which we attribute to inexperience based on years of use of only one particular device.

Users experienced most problems when completing Task 6 (zooming in on an image) with the mouse and touchpad interfaces, and Task 3 with the touchscreen (Figure 1 centre). Task 6 was rated hardest by 16 out of 32 users for both mouse and touchpad interfaces. It is interesting to note that none of these 16 users used the mouse wheel to accomplish the task, and that more than half of the users did not know about the zooming method with the Ctrl key and mouse wheel / two-finger touchpad drag. This was observed despite the fact that users had the possibility to practice the task. Therefore, it appears that zooming functionality is not commonly used when browsing the web on personal computers. On the other hand, zooming on mobile devices is more common due to the small screen real estate on which desktop-only websites are browsed. Therefore, it is not surprising that users did not experience any problems while executing Task 6 with the touchscreen interface. The hardest task for the touchscreen was Task 3 (copying the text), which was also second hardest for the touchpad interface (Figure 1 centre). Both touchscreen and touchpad were described as very imprecise and impractical, and users claimed that certain tasks (e.g. copy/paste) are badly implemented (small buttons that lead to errors).

Figure 2: Average time completion with standard deviation for mouse, touchpad, touchscreen interface.

The graph in Figure 2 shows that the mouse is the fastest of the three interfaces, followed by the touchpad and touchscreen interfaces. Comparing the means presented in Figure 2 with repeated measures ANOVA (with homogeneity of variances) showed that at least one mean is significantly different (p < 0.001). Post-hoc testing using the Bonferroni correction identified that all three mean values are significantly different (touchscreen vs mouse p < 0.0001, touchpad vs mouse p < 0.001, touchscreen vs touchpad p = 0.002). Compared to the ranking results of task and device difficulty and speed (Figure 1 right), the time results confirm the dominance of the mouse interface as it is identified as the fastest interface. However, contrary to the previous result where users ranked the touchscreen as less difficult to use and faster, the time analysis showed the touchpad was significantly faster than the touchscreen interface.

Figure 3: Average times in seconds for each task by input device.

The average time completion in seconds for each individual task is shown in Figure 3. The graph shows that the touchscreen is the slowest interface in all but the zooming tasks (Tasks 6 and 7), whilst the mouse stays the fastest interface in all tasks. When analysing the time completion of individual tasks, ANOVA showed the differences between interfaces are significant for all tasks except Task 6 (zooming in on an image, p = 0.158). For tasks with a significant ANOVA score we ran post-hoc testing with Bonferroni correction. This test showed that a significant difference between all possible pairs is not reached only in the case of mouse vs. touchpad for Tasks 1, 2, and 7, and for touchpad vs. touchscreen in all but Task 3.

The graph in Figure 3 also shows that the major time differences happened in Task 2 (copying and pasting a URL), Task 3 (copying and pasting text), and Task 7 (navigating the map). Tasks 2 and 3 took longest on the touchscreen and were also marked as the hardest to complete with the touchscreen (see the middle graph in Figure 1). One explanation for this observation is that these two tasks required precise interaction as well as knowledge of the exact procedure of how to complete them.

The performance of the mouse interface drops drastically in the case of Tasks 6 and 7. This is in line with the ranking results of task difficulty, where users marked Tasks 6 and 7 as difficult to perform with the mouse. In these two tasks the touchscreen overtook touchpad interaction for the first time.

Despite the fact that the touchpad was faster than the touchscreen for five out of seven tasks (only Tasks 6 and 7 took less time to finish on the touchscreen), users still preferred the touchscreen. Additionally, Task 6 as the hardest task for the touchpad did not take significantly more time than on the other two input devices. This shows that perceiving something as hardest, fastest, or easiest (comparing Figure 3 with Figure 2) is not only related to the time spent on a certain task, but depends on several factors such as perceived sense of quality, control over a device, responsiveness and others, as mentioned by users in the questionnaires.

5. CONCLUSION
In this paper we have explored the difficulties users encounter using the three most common input devices (mouse, touchpad and touchscreen) when browsing the web. Similar to previous studies, the results indicate a significant preference for using a mouse over other input devices [7, 1, 2]. However, as these input devices require different interactions for different tasks, it is inevitable that some tasks are performed faster on the least preferred device (e.g. the touchpad outperformed the touchscreen in copy/paste tasks), or that times are at least comparable with the most preferred device (mouse). This has also been the case in the literature [2]. It also seems that the preference depends on how familiar users are with a particular input device, which is where the mouse leads. Other factors that may have affected user preference are the implementation of interaction for a particular task (e.g. touchpad and touchscreen are not precise enough when it comes to text selection or positioning the cursor), the perceived quality, responsiveness, etc.

This work has singled out which of the commonly performed tasks are hard to complete on each input device. Since all these input devices are here to stay, the community should look into ways of making certain tasks easier, and of standardizing interaction to improve the usability of these devices.

6. REFERENCES
[1] J. Accot and S. Zhai. Performance evaluation of input devices in trajectory-based tasks. In CHI '99, pages 466–472, New York, USA, 1999. ACM Press.
[2] A. Cockburn, D. Ahlström, and C. Gutwin. Understanding performance in touch selections: Tap, drag and radial pointing drag with finger, stylus and mouse. International Journal of Human-Computer Studies, 70(3):218–233, 2012.
[3] N. T. Dang, M. Tavanti, I. Rankin, and M. Cooper. A comparison of different input devices for a 3D environment. International Journal of Industrial Ergonomics, 39(3):554–563, May 2009.
[4] P. M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381–391, 1954.
[5] C. Forlines, D. Wigdor, C. Shen, and R. Balakrishnan. Direct-touch vs. mouse input for tabletop displays. In Proceedings of CHI '07, page 647, New York, NY, USA, 2007. ACM Press.
[6] I. S. MacKenzie and S. Jusoh. An evaluation of two input devices for remote pointing. In M. R. Little and L. Nigay, editors, 8th IFIP Conference, EHCI, pages 235–250, Boston, MA, 2001. Springer.
[7] I. S. MacKenzie, A. Sellen, and W. A. S. Buxton. A comparison of input devices in element pointing and dragging tasks. In CHI '91, pages 161–166, New York, USA, 1991. ACM Press.
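As a brief editorial aside on the analysis in Section 4: the Bonferroni correction used for the post-hoc tests simply compares each pairwise p-value against the significance level divided by the number of comparisons. A minimal sketch (the p-values are the three reported above; the helper name is ours):

```python
# Bonferroni correction: with m pairwise comparisons, each p-value is
# compared against alpha / m (equivalently, p * m is compared to alpha).

def bonferroni_significant(p_values, alpha=0.05):
    """Return, for each comparison, whether it stays significant after
    correcting for len(p_values) comparisons."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# The three device pairs reported in Section 4:
pairs = {
    "touchscreen vs mouse": 0.0001,
    "touchpad vs mouse": 0.001,
    "touchscreen vs touchpad": 0.002,
}
flags = bonferroni_significant(list(pairs.values()))
# All three remain significant at alpha = 0.05 (threshold 0.05/3 ≈ 0.0167).
```

With three comparisons the corrected threshold is 0.05/3 ≈ 0.0167, so all reported pairs remain significant, consistent with the paper's conclusion.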
Wizard of Oz Experiment for Prototyping Multimodal Interfaces in Virtual Reality

Blaž Gombač (blaz.gombac@student.upr.si), Matej Zemljak (matej.zemljak@student.upr.si), Patrik Širol (parik.sirol@student.upr.si), Damir Deželjin (ddezeljin@student.upr.si), Klen Čopič Pucihar (klen.copic@famnit.upr.si), Matjaž Kljun (matjaz.kljun@upr.si)
University of Primorska, FAMNIT, Glagoljaška 8, Koper, Slovenia

ABSTRACT
In recent years the field of virtual reality has witnessed rapid growth, with significant investments in both hardware and software development. It has several potential applications in entertainment, education and enterprise, where users benefit from being immersed in virtual worlds. VR headsets are available in several forms and price ranges, from the simple and inexpensive Google Cardboard to more complex products such as the Oculus Rift. Nevertheless, fully operational virtual reality applications for researching new complex multimodal interaction possibilities (e.g. mid-air gesture, voice, haptics, etc.) may be difficult, costly and time consuming to implement. For this reason we have looked into ways of rapidly prototyping virtual reality interactions. Our approach consists of the Wizard of Oz experiment, in which subjects interact with a computer system believing it to be autonomous, while it is in reality operated by researchers. The presented system allows non-technical designers to explore various multimodal interactions through rapid prototyping of VR environments.

Categories and Subject Descriptors
H.5.2 [Information interfaces and presentation]: Multimedia Information Systems—Prototyping

Keywords
Wizard of Oz, virtual reality, rapid prototyping, multimodal interaction

1. INTRODUCTION
The majority of big computer companies have recently identified a big potential in Virtual and Augmented Reality (VR, AR) technology. This has led to massive investments in hardware and software development, such as Facebook's takeover of Oculus Rift¹, Google's investments in MagicLeap² and Expeditions³, Samsung's development of the Galaxy Gear⁴, and Microsoft's development of the HoloLens⁵.

Virtual reality offers immersion into virtual environments capable of producing a stereoscopic view into a virtual world that is usually coupled with audio. The stereo image is delivered by a head-mounted display (HMD) with sensors that track users' movements, allowing the system to change the view accordingly. There are two types of HMDs: (i) fully featured HMDs designed for use with gaming consoles or PCs, and (ii) composite HMDs designed to hold a smart phone or a tablet computer. Fully featured devices are expensive and can cost between a couple of hundred and a couple of thousand euros, excluding the cost of a console or PC. In composite HMDs, on the other hand, the mobile device (commonly accessible among the population) acts as the display and processing unit, which reduces the cost of HMDs below a hundred euros.

Both types of HMDs offer various VR experiences with a varying degree of immersion. The latter partly depends on the quality of the VR environment being projected on the screen and partly on other data produced for other senses. However, the illusion most often remains incomplete, in that not all senses are catered for, and natural ways of interacting in the real world, such as with spoken and body language, are not supported. The work presented here explores ways of supporting non-developers in exploring various multimodal interactions (including, for example, mid-air hand gestures, voice and haptics) in rapidly prototyped VR environments. For this purpose we designed and built a VR test-bed based on the Wizard of Oz (WoZ) metaphor. The test-bed enables screen sharing between a desktop computer and an HMD, where the researcher acts as a wizard detecting and executing users' commands (e.g.
hand gestures) on a desktop computer, creating the illusion of a working prototype. In order to evaluate the test-bed, the paper presents a short user study which was carried out using our fast prototyping technique.

Figure 1: The conduct of the experiment. The experimenter controls the stream to the HMD based on the participant's mid-air hand gestures or voice controls.

¹ https://www.oculus.com/
² https://www.magicleap.com/
³ https://www.google.si/edu/expeditions/
⁴ http://www.samsung.com/global/galaxy/gear-vr/
⁵ https://www.microsoft.com/microsoft-hololens/en-us

2. LITERATURE REVIEW
Wizard of Oz (WoZ) experiments in human-computer interaction have a long tradition; they were pioneered in 1983 by Gould et al., who simulated an imperfect listening typewriter to explore users' reactions when giving dictations [4]. The method has been used in numerous studies since. It was, for example, used for prototyping speech user interfaces when AI agents were not so capable [5], and for studying a mixed reality application enriching the exploration of a historical site with computer generated content [2]. WoZ is nowadays commonly used for rapid prototyping of systems that are costly to build, or as a means of exploring what people require or expect from systems that need novel or non-existent technology [8]. However, Maulsby et al. have warned that researchers need to understand what limitations need to be imposed on the wizard's intelligence, and need to base behavior and dialog capabilities on formal models in order to ensure consistent interaction, keep the simulation honest, and prevent inappropriately optimistic results [7]. Nevertheless, as demonstrated by numerous studies employing WoZ, observing users of such systems can yield qualitative data that could not otherwise be acquired.

Furthermore, the HCI community has pointed out that there is a great need for easy to use, rapid prototyping tools (such as WoZ) [6, 3], and that any medium (including VR and AR) can reach its potential only when put into the hands of designers who can further develop it and define its popular forms. Such tools have already been developed to research AR usability and interactions [1]. Our contribution to existing work is providing an affordable, easy to use and intuitive set of tools and procedures for rapid prototyping of user interfaces in VR. We evaluate the prototyping tool by running a small user study comparing voice and mid-air gesture interfaces while wearing an HMD.

3. PROTOTYPE DESCRIPTION
There are three main hardware components in our prototype: an Android based smartphone, a Google Cardboard, and a Windows based computer. The user interface is streamed in real time to the phone from the desktop computer using the TrinusVR⁶ application, as seen in Figure 2. Depending on the configuration, either the full screen or only the active window is streamed to the HMD device. The application was designed to transform any Android or iOS device into an affordable HMD to be used by gamers when playing 3D games on their computers. The application also features a lens correction system aimed to improve user experience by minimising the distortion induced by Google Cardboard's lenses. The communication between the desktop computer and the mobile device works both via USB cable and via WiFi. The latter is particularly interesting as it enables placing the wizard in another room, observing users via web cam and executing users' commands.

Figure 2: TrinusVR streaming the computer desktop to a mobile phone to be used in Google Cardboard.

4. METHOD
To test our test-bed we designed an interaction scenario comparing two different interaction techniques: mid-air finger gesture and voice based interaction. For this purpose we created a minimum viable product – two simple linear presentations in a presentation program. Each slide of the presentation featured a screenshot of the user interface (UI) for a particular step towards completing a task. Users performed generic phone tasks such as initiating a call, taking a picture, and browsing files. In order for the linear presentation to work, in each step participants had only one possible option to choose from.
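The wizard's role in this setup is essentially a human-in-the-loop dispatcher: observe the participant, map what was observed to a UI action, and advance the presentation accordingly. An illustrative sketch (all names are ours and hypothetical, not part of the TrinusVR setup):

```python
# Illustrative Wizard-of-Oz dispatcher: the experimenter watches the
# participant and the "system" merely maps the observed command to an
# action on the slide deck. All names are hypothetical.

OBSERVED_COMMANDS = {
    "bend index finger": "next_slide",   # gesture condition
    "say: Camera": "next_slide",         # voice condition
    "say: back": "previous_slide",
}

def wizard_dispatch(observation: str) -> str:
    """Map what the wizard observed to the action to perform on the deck."""
    return OBSERVED_COMMANDS.get(observation, "do_nothing")
```

Keeping this mapping explicit is one way to honour Maulsby et al.'s warning above: the wizard only executes commands from a fixed, pre-agreed vocabulary, which keeps the simulated "intelligence" consistent between participants.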
In Figure 3 both the gesture based (left) and the voice based (right) user interfaces are displayed. In the gesture based UI users had to bend the appropriate finger to trigger one of the available actions (e.g. bending the pointer finger opened a folder named "Camera", as seen on the left in Figure 3), while in the voice based UI users had to name the available options visible on the screen (e.g. reading aloud the name of the folder "Camera", framed in red on the right in Figure 3, opened it). After users initiated an action, the experimenter switched to the next slide in the presentation in order to show the next screen on the HMD.

One of the issues we had to deal with was how to provide instructions for mid-air gesture interaction. The provision of gesture controls is almost indispensable at the beginning, until users get familiar with the interaction. This is also the case with current HMD controllers that come in sets of two (one for each hand), each with several buttons and ways to control VR worlds and tools. Until one gets accustomed to the controls in a particular VR environment, the instructions can be overlaid over the controllers. In our study all available options were always visible on the screen. The limitation of our mid-air finger gesture set is bending five fingers only, which limited us to five options in each step. However, as we had a linear presentation with two options at most (back and forward), this was enough for our study. Users also did not have any trouble using the system and did not find the instructions intrusive.

While mid-air (hand, finger) gesture interfaces are not (yet) so common on mobile devices, voice recognition and intelligent agents or personal assistants (such as Siri, Cortana and Alexa⁷) are a part of all major mobile operating systems, and many users are accustomed to using them to complete certain tasks or services. Exploring natural language interactions thus did not present the same issues as mid-air gesture interactions. In our scenario users just had to read aloud the text on the screen or use controlling words such as "up", "down", "left", "right", "select", etc.

Figure 3: A sample screen from the scenario. Left: a mid-air finger gesture based interface where available options are visible over the fingers and bending a finger with an available option triggered the appropriate action. Right: a voice based interface where available options are framed in red and reading aloud their names triggered the appropriate action.

We used convenience and snowball sampling to recruit 10 participants. The average age was 22.3 (SD = 3). All participants were either students (8 participants) or faculty members (2 participants) from our department. Each participant tried both voice and gesture based interaction; the order was randomised. Before commencing the designed scenario, participants tested each interaction mode in order to make sure they understood the principles of how to control each navigation. After completing the scenario with each interaction technique (see Figure 1), users had to answer the SUS questionnaire.

⁶ http://trinusvr.com/

5. RESULTS AND DISCUSSION
As mentioned, our scenario was a minimum viable study to test our test-bed. It involved handling the Android operating system UI by presenting users with a sequence of images, including browsing photos and controlling a music player. We have thus not used a 3D virtual world, which is a limitation of our evaluation and makes it difficult to generalize the results in the context of virtual world interactions. Due to the virtual representation of a mobile device in our study, it is also not possible to generalize the results in the context of mobile phone interaction. Nonetheless, the pilot provides practical insights into how the designed test-bed could be effectively used as a rapid prototyping tool for exploring different interaction possibilities within VR environments.

Since anything can be streamed from a desktop computer to the HMD, designers and non-technologists can use any available software to create such interactive environments. For example, navigating a 3D information environment can be simulated in non-linear, zooming presentation software such as Prezi⁸, or 3D worlds could be simulated by creating them in computer-aided design (e.g. AutoCAD) or 3D computer graphics software (e.g. Blender⁹).

Figure 4: SUS scores by question for the gesture controlled interaction.

The results of the SUS questions¹⁰ from our questionnaire for each interaction technique are shown in Figures 4 and 5. Even-numbered questions (colored blue on the graphs) are about positive aspects of the system, while odd-numbered questions (colored red) regard negative aspects. We can see that in both cases negative aspects scored low while positive aspects scored high. The average SUS score for mid-air gesture interaction was 83.18 (SD 13.04), whilst voice interaction scored 81.46 (SD 8.08). Both scored above the threshold at which users are more likely to recommend the product to friends. However, since this was just a minimum viable study, we can only say that users were intrigued by how a phone's UI can be interacted with, and the SUS scores are provided for informative purposes only.
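The average SUS scores above follow the standard SUS scoring rule: each positively worded item contributes (response − 1), each negatively worded item contributes (5 − response), and the 0–40 sum is scaled by 2.5 onto a 0–100 scale. A minimal sketch (the example responses are invented, not the study's data; note the paper treats the even-numbered questions as the positively worded ones):

```python
def sus_score(responses, positive_items):
    """System Usability Scale: each positively worded item contributes
    (r - 1), each negatively worded item (5 - r); the 0-40 sum of
    contributions is scaled by 2.5 onto a 0-100 scale."""
    assert len(responses) == 10
    total = 0.0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item in positive_items else (5 - r)
    return total * 2.5

# The paper's convention: even-numbered questions are positively worded.
EVEN_POSITIVE = {2, 4, 6, 8, 10}

# Invented example: agreement (5) with positive items and disagreement (1)
# with negative items yields the maximum score.
print(sus_score([1, 5, 1, 5, 1, 5, 1, 5, 1, 5], EVEN_POSITIVE))  # 100.0
```

Passing the set of positive items explicitly keeps the function usable with either numbering convention.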
⁷ https://en.wikipedia.org/wiki/Intelligent_personal_assistant
⁸ https://prezi.com/
⁹ https://www.blender.org/
¹⁰ See http://www.measuringu.com/sus.php

Figure 5: SUS scores by question for the voice controlled interaction.

Despite the fact that we have not used a virtual world in our study, we have tested the prototype as a test-bed for VR interaction with a minimum viable product. We believe that our approach can open novel possibilities to explore, further develop and define popular forms of such a medium, since designers need not know any programming language, only how to use design software, with which they should already be familiar.

6. CONCLUSION
Mid-air gesture and voice interaction provide a richer experience than touch screen user interfaces (UI), especially in virtual and augmented environments, where interaction with common paradigms (e.g. mouse + keyboard or touch screen) is challenging or inadequate. This has introduced a need for new interaction metaphors designed particularly for these new worlds. One such example is mid-air gesture and voice interaction, which can facilitate greater immersion into virtual environments. While there are fairly inexpensive depth cameras and gesture sensors available for end users, programming for these can be challenging and time consuming, particularly for non-technical people such as designers, limiting their ability to contribute to, further develop and define popular forms of such a medium.

In this paper we presented an affordable, easy to use rapid prototyping tool for creating VR environments and exploring different interactions with the Wizard of Oz (WoZ) experiment. With the introduction of the wizard we remove the need for additional hardware setup such as wired gloves, depth-aware or stereo cameras, gesture based controllers, etc. Experimenters can use any software designers are familiar with to create VR worlds, such as standard non-linear presentation, CAD or 3D graphics software, or can simply create a sequence of UI screens that users can navigate through with interactions beyond mouse and keyboard. In the future we plan to further evaluate the test-bed (i) by running a user study in a 3D VR environment involving more participants, (ii) by including other metrics such as timing tasks, interviews, coding transcriptions of filmed evaluations, etc., and (iii) by placing the wizard in a separate room, creating a more convincing illusion of a working system.

7. REFERENCES
[1] G. Alce, K. Hermodsson, and M. Wallergård. WozARd: A Wizard of Oz tool for mobile AR. In Proceedings of MobileHCI '13, pages 600–605, New York, NY, USA, 2013. ACM.
[2] S. Dow, J. Lee, C. Oezbek, B. MacIntyre, J. D. Bolter, and M. Gandy. Wizard of Oz interfaces for mixed reality applications. In CHI '05 Extended Abstracts, pages 1339–1342, New York, NY, USA, 2005. ACM.
[3] M. Gandy and B. MacIntyre. Designer's augmented reality toolkit, ten years later: Implications for new media authoring tools. In Proceedings of UIST '14, pages 627–636, New York, NY, USA, 2014. ACM.
[4] J. D. Gould, J. Conti, and T. Hovanyecz. Composing letters with a simulated listening typewriter. Communications of the ACM, 26(4):295–308, April 1983.
[5] S. R. Klemmer, A. K. Sinha, J. Chen, J. A. Landay, N. Aboobaker, and A. Wang. Suede: A Wizard of Oz prototyping tool for speech user interfaces. In Proceedings of UIST '00, pages 1–10, New York, NY, USA, 2000. ACM.
[6] B. MacIntyre, M. Gandy, S. Dow, and J. D. Bolter. DART: A toolkit for rapid design exploration of augmented reality experiences. In Proceedings of UIST '04, pages 197–206, New York, NY, USA, 2004. ACM.
[7] D. Maulsby, S. Greenberg, and R. Mander. Prototyping an intelligent agent through Wizard of Oz. In Proceedings of CHI '93 and INTERACT '93, pages 277–284. ACM, 1993.
[8] J. Wilson and D. Rosenberg. Rapid prototyping for user interface design. In M. Helander, editor, Handbook of Human-Computer Interaction, pages 859–875. Elsevier Science Publishers, 1988.

Towards the improvement of GUARD graphical user interface

Žiga Kopušar (Guardiaris d.o.o., Podjunska ulica 13, Ljubljana, +386 1 230 30 30, ziga.kopusar@gmail.com)
Franc Novak (Jožef Stefan Institute, Jamova cesta 39, Ljubljana, +386 1 4773 386, franc.novak@ijs.si)

ABSTRACT
In this paper, we describe a case study of usability testing of the GUARD Control Desk graphical user interface, which is a part of the GUARD simulator and is used for exercise planning, execution and evaluation in soldier training. The usability testing was performed in the development phase of a new version of the user interface.

Categories and Subject Descriptors
H.5.2 [User Interfaces]: Graphical user interfaces (GUI), Prototyping, User-centered design

General Terms
Design, Human Factors, Verification.

Keywords
Usability testing, user interface, military training system

… effects accompanied by real-time weather and time-of-day changes. GUARD brings indoor training to the edge of real combat awareness and moves digital training borders towards real battlefield perception. The 3D real-time simulations reflect situations from the real world. Which objects take part in the 3D scene, what the nature of the 3D scene is, and how the objects behave within the scene is a matter of the information recorded in the script.

1. INTRODUCTION
According to [1], usability is the extent to which a system, product, or service can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
The latter terms are defined in [2] as follows; effectiveness represents the accuracy and completeness with which users achieve specified goals, efficiency refers to the resources expended in relation to the accuracy and completeness with which users achieve goals, while satisfaction reflects freedom from discomfort, and positive attitudes towards the use of the product. The importance of usability has been early recognized in different aspects. While Don Norman in his famous Figure 1. GUARD Control Desk book The Design of Everyday Things [3] places usability side by side with aesthetic beauty, reliability and safety, cost, and The GUARD Control Desk (shown in Figure 1) is an all-in- functionality, Jakob Nielsen in his earlier work [4] focusses on the one solution for instructors and trainees and can be added to any design of software systems and provides general usability Guardiaris product. It is fully interoperable, interactive and guidelines. features user-friendly interface for exercise planning, execution and evaluation. A large multi-touch screen and touch user The design of complex systems such as military training interface guarantee efficient exercise planning and a best-in-class simulators requires careful analysis of customers’ needs and After Action Review procedure. requirements in order to provide tailor-made product fulfilling their expectations. For the GUARD simulation system referred in The simulator package includes four possible display modes this paper, user-centered design approach is therefore imperative. of the interface. Each of them offers a different set of user In this paper we summarize our experience and results of the controls and operations running on a specific type of device. The usability testing of a new version of graphical interface of the user interface in the editor mode, which is the subject of usability GUARD Control Desk. 
testing presented in this paper, offers the user several operations with maximum number of controllers and windows. Its task is to 2. GUARD SIMULATION SYSTEM read, write, set and edit 3D scene. It is usually performed on developer computers with fairly strong hardware support. The GUARD simulation system is a military training system that allows photorealistic 3600 VR environments, audio and visual 33 3. USABILITY TESTING OF GUARD 3.2 Usability testing plan CONTROL DESK GUI The main goal of usability testing was to verify the adequacy of the conceptual design of the fully renewed appearance of the user 3.1 GUI description interface. Consequently, the performed tests should check the The library of controllers written in C++ provides means for ease of use, the perception of the individual sets of operations, the controlling objects that are included in a given scene. Each appropriateness of the composed sequence of operations, logic controller carries information about object dimensions, the operations alone, feasibility of transformations on objects, as well relative or absolute location on the screen, and about (if any) as the ease of performing the actual flow of individual steps of the graphical icons, symbols or text with a particular meaning for the required test scenario. For this purpose, three step testing scenario user. The user interface is used to place objects in the 3D scene. (shown in Figures 3, 4 and 5) has been prepared. Once placed in the scene the object becomes a part of the script. All kinds of physical properties, including the basic gravity, the speed of movement, etc., are associated with an object. The interface offers integration of operations between objects, classical processes for storing, loading, cleaning the scene and operational controls ("play", "pause", "stop"). 
Operations for introducing objects to a scene, hiding of certain types of objects, or excluding the possibility of selecting certain types of objects can be performed via user interface. Previous interface, has been based solely on interaction via computer mouse and keyboard. Modern technologies require completely different approaches, dealing with multi-touch displays. and other devices that are able to run graphically Figure 3. First step of testing scenario demanding 3D environment, but do not use conventional computer input devices. Consequently, the new user interface In the first step (Figure 3) the participant introduces objects should support multi-touch display and additional features such as in the scene. Their positions must be reasonably set into a whole. the possibility of independent setting and editing scripts via an The participant thus gets acquainted with the concepts of lists of additional software package or a dedicated application. The new objects and groups, and with manipulator controller, which allows concept, generated through a series of brainstorming sessions and transformations (move, rotate, resize), and other operations design iterations resulted in a new user interface, which has been (delete object, cloning facility, reset the position of the object in evaluated with the performed usability testing. The working the initial position). prototype of the new user interface is shown in Figure 2. Figure 4. Second step of testing scenario Figure 2. Working prototype of the new GUI In the second step (Figure 4) the participant introduces new When setting up the concept of usability testing, we types of objects and becomes familiar with the operation of followed the methodology presented by J. Rubin, and D. Chisnell association between two objects. 
This enables integration of the [5] which comprises three basic test techniques: exploratory (or "vehicle" object with the "waypoint" object, which in practice formative), assessment (or summative), and validation (or means that when the script starts, the vehicle heads towards the verification) tests at a high level. The above tests are associated location of the "waypoint" object. with specific points in the product life cycle. In our particular case, the exploratory phase has been performed by the above The main issue of the third step is the facility "trigger". The mentioned brainstorming sessions and design iterations. The participant needs to properly connect all the objects among each actual usability tests have been performed on a fully functional other. In addition, in this step, certain attributes are assigned to prototype of the new user interface and can be regarded as a the objects. combination of assessment and validation tests. The implementation of usability testing followed the guidelines presented in [6]. 34 and programmers. None of them have had any previous experience with the new version of the user interface, which was the subject of the usability testing. 3.3 Conducting the test sessions Testing took place in one working day and passed without major concerns or complications. Implementation of each test, on average lasted about forty-five minutes. Occasionally it was necessary to restart the editor, because it stopped working due to unexpected gestures and touches of the participants. Fortunately, there weren't many cases like this. Yet, we carefully registered any jam and placed it on the list of future urgent or less urgent Figure 5. Third step of testing scenario. corrections. Usability test plan and supporting documents were prepared following the usability test guidelines [6]. 
For the testing 3.4 Usability testing results environment we used the room with a working station which is Test results were classified in four categories: normally used for running and exercising the latest versions of • Opinions about appearance, suggestions on improvements. software. The screen of the working station is shown in Figure 6, • Utilization, logical inconsistencies of the editor. and the whole environment prepared for usability testing in Figure • Quality of the editor instructions 7, respectively. • Programming errors and bugs in the operation of the editor or in general of the interface kernel. About ten mistakes, opinions or suggestions for possible improvement of the appearance or functionality of the editor referred to the first and second category. Almost all participants were disturbed by imperfect control of the camera with the particular gestures. We have found that it was not the problem with gestures or users, but in the program code. Participants’ comments also justified our concern about the manipulator controller. A quarter of the participants intuitively wanted to use it in a another (and always the same) way, different than the established one. This was not a malfunction of the software code, but the problem is in a completely different Figure 6. The screen of the working station used for presentation of an operation in a 3D scene displayed on a 2D usability testing screen. Most of the participants did not like the automatic display of menus. They would prefer more clever automatic solution, which somehow recognizes user needs and reacts accordingly. In the last category, about fifteen problems have been identified. Some are minor in nature, such as the improper refreshing of certain components, while others will require a more thorough investigation. In most cases in this category we deal with functional errors, or rather the requirement to change the software code at the expense of the operations of the editor. 4. 
CONCLUSION Usability Testing results have proven to be very useful. In Figure 7. Usability testing environment addition to the detected bugs, comments of the participants on the existing design and suggestions for improvements were very The selection and acquisition of participants whose valuable. In the future, more effort will be given toward background and skills are representative of those that will use the systematic planning of individual phases of usability testing product is a crucial element of the testing process [5].. Selecting within the complete product life cycle. We are aware that the participants involves identifying and describing the relevant iterative nature of usability testing requires extending the product behavior, skills, and knowledge of the person(s) who will use development life cycle however with proper scheduling of testing your product. Within the company we managed to collect twelve within the design phases the benefits will be prevail. Another participants, with different backgrounds that could be roughly issue is selection of participants. A well-known fact is that one categorized in three groups: a group from the hardware should focus its efforts on recruiting participants who are department, a group of participants of administrative nature, and a representative of the product’s target users. In our case, in-house group originating from software industry with artists, designers 35 personnel has been employed, which might have biased the [3] Norman, A. D. 2002. The Design of Everyday Things, Basic results to some extent. Involving a wide range of representative Books; Reprint edition users at the early stages of the development cycle is fundamental [4] Nielsen, J. 1993. Usability engineering, Morgan Kaufmann for early identification of usability problems. Academic Press [5] Rubin, J., Chisnell, D. 2008. Handbook of Usability Testing: 5. 
REFERENCES How to Plan, Design, and Conduct Effective Tests (Second [1] ISO 9241-210:2010, Ergonomics of human-system edition) interaction — Part 210: Human-centred design for interactive systems [6] www.usability.gov [2] ISO 9241-11:1998, Ergonomic requirements for office work with visual display terminals (VDTs) — Part 11: Guidance on usability 36 Towards affordable Mobile Crowd Sensing device Gal Pavlin Marko Pavlin University of Ljubljana, Faculty of Electrical Engineering Institute “Jožef Stefan” Trzaska 25 Jamova 39 1000 Ljubljana 1000 Ljubljana +386 31 255 312 ++386 31 754 910 gal@pavlin.si marko@pavlin.si ABSTRACT when using people-centric mobile phones as sensory devices: In this paper, we describe first prototype of mobile crowd sensing reliability of the sensed signals, lack of actuators and feedback, device. The device serves as a source for signals in the potential awkward use and lack of wider use due to relatively high cost of crowd sensing studies. Presented device has no intention to the currently available devices. In this paper we describe our first compete with the existing mobile devices, such as mobile phones, step towards many new opportunities in crowd sensing area, the but to complement them where they lack of the features like affordable hardware platform which cover all three [2] groups of affordability, simple use and new opportunities in different the mobile crowd sensing process: environmental, infrastructure, segments of our lives. Our main goal was to develop a device, and social sensing. Our platform will try to overcome the two which can cover all aspects of mobile crowd sensing and at the primary technical obstacles in mobile phone centric MCS: the same time to keep the device cost at very affordable level. The noisy data and lack of useful and effective feedback to the MCS described device is capable of integration into most widely users. Same barriers were also identified with authors in [6], available sunglasses. 
The complete device consisting of two where they as such prevent the new applications to “advance separate “lenses” forms distributed ecosystem serving as source quickly, acting as a disruptive technology across many domains for sound, light, acceleration and temperature signals while at the including social net-working, health, and energy.” same time providing actuator function with integrated LED matrix display. Table 1. Overview of some MCS application providers Keywords e n le Mobile crowd sensing, wearable interface, affordable electronics. o l h ass ity ab p era xim t rd 1. INTRODUCTION ertia fo Device PS In Com G Microp Cam Pro Ligh Af Mobile crowd sensing (MCS) is a new paradigm [1]. The signals iPhone 6 ✓ ✓ ✓ ✓ ✓ ✓ ✓ in the crowd sensing studies are transferred from pervasive mobile devices. The collected data serves as the source for numerous Asus Zenfone 3 ✓ ✓ ✓ ✓ ✓ ✓ ✓ largescale applications, which can be classified into three groups: Nexus 6P ✓ ✓ ✓ ✓ ✓ ✓ ✓ environmental, infrastructure and social. Each group has own requirements and operating conditions. Typical environmental HTC 10 ✓ ✓ ✓ ✓ ✓ ✓ ✓ mobile crowd sensing application is pollution monitoring [3]. The MPS device [7] ✓ ✓ ✓ ✓ MCS application is a two-step process: to assign sensing tasks to users and to wait for results [3]. The interaction relies on active Our MCS device ✓ ✓(1) ✓(1) ✓ ✓ ✓ participation of each individual in the process loop, which can be (1) External module sometimes cumbersome. Example of the infrastructure MCS application is use of the data in the emergencies where individuals try to support the actions of emergency services and volunteers, Lack of affordable, low cost, almost disposable device enabling especially in time-critical situations [4]. MCS applications encouraged us to develop new type of wearable interface. 
Our main goal was to develop a device, which can cover Main obstacle in such situations is the availability of the existing all three groups of mobile crowd sensing. At the same time our infrastructure, which is usually compromised or even completely goal was to keep the device cost at very affordable tag, which can destroyed (in environmental disasters). Such situations void also be of paramount importance. mobile phones useless. The social MCS applications enable individuals to share sensed information among themselves. The majority of the social MCS applications seem to be limited to 2. CROWD SENSING DEVICE social media and networks. There are applications where such Development of new MCS device started with design requirement applications improve the quality of life in elderly people by specifications. The complete list of requirements is shown in collecting biological sensor and activity data to adjust and gain Table 2. the comfort condition [5]. The main issue in such application is limited use of smart phones with elderly people which prevents Our main goal was to develop small and affordable system, which sensing possibilities. can be integrated into existing environment. To keep the device affordable we were not able to add sensors for all signals listed in Sensing individuals in a large group can be achieved by using Table 1, however the device has all interfaces which can provide existing smart phones and several sensors available in such communication with external modules when needed, e.g. GPS, mobile devices. The overview of some devices providing MCS Bluetooth or WiFi module. applications is listed in Table 1. However there are several issues 37 Table 2. 
Table 2. List of design requirements

Scope: Details
Displaying messages: Display short text messages on the device surface
Detect activity: Motion detected with accelerometer sensor and/or GPS
Environment sensing: Integrated microphone, illuminance sensor and thermometer
Communication: Standard interfaces provided on-board
Low power: Power consumption limited with careful component selection
Affordability: Keep total cost of the device as low as possible

2.1 Integration into sunglasses
The device was designed to integrate into existing sunglasses, which are widely available at very low cost. After some investigation we concluded that almost all low-cost sunglasses have the same lens shape (Fig. 1). This, and the fact that such sunglasses can be bought for a bargain, motivated us to use the shape of such lenses for our base modules.

Figure 1. Most common "wayfarer style" sunglasses.

Another reason to use sunglasses was the ability to integrate a wearable display, providing personal interaction from one MCS device to another user.

Since the printed circuit board is not transparent, we faced one obstacle: how to prevent MCS device users from being blinded when wearing such eyewear. One option was to use a transparent substrate, onto which the electronics could be integrated; this would, however, definitely void the affordability goal. Another option was to drill an array of holes into the substrate. After some experimentation we discovered that a perforated substrate in front of the eye does not interfere with normal vision. Based on experiments on subjects of different ages and genders we defined the perforation parameters which are most acceptable for all users (Fig. 2). After a short introduction time the subjects became comfortable with the eyewear.

Figure 2. Perforated substrate of the MCS device used to replace the sunglasses lenses.

2.2 MCS Device
The device is divided into two modules: the left and right "lens". The block diagram is shown in Fig. 3. Both modules communicate internally via an I2C bus, exchanging data from sensors and other peripheral devices.

Figure 3. MCS device block diagram.

Lenses from wayfarer-style sunglasses were taken out of the frame and replaced by circuit boards with 69 LEDs on each. Every LED can be individually controlled, much like pixels on a 5×7 display. Both left and right circuit boards have a low-power Cortex-M0 microcontroller for driving the LEDs, which are wired in a matrix. Along with the LEDs there are also other peripherals. The I2C bus enables both sides of the MCS device to communicate with each other. A programmable MEMS motion sensor (accelerometer) tracks head movements or other user motion activity. A tiny microphone detects environmental sounds and enables the MCS device to react to sound stimuli. A light sensor measures the environmental illumination. The user can select the mode of operation with one button. The device is powered by external lithium batteries residing on the frame. An on-board LiPo/Li-Ion battery charger provides proper charging and is powered from a micro USB connector, which also provides connectivity with a personal computer or other mobile device applications. Finally, an extra ADC input is provided for experimenting with future sensors. External modules such as Bluetooth or GPS can be connected via the USART interface.

Figure 4. Working prototype of MCS device.

As expected, a few mistakes were made in designing the first prototype, such as wrong component placement/connection and faulty fabrication of the circuit board. After minor workarounds the MCS device became functional as expected, despite the initial design and fabrication mistakes. The first prototype looks very promising. A second version has already been designed.

2.3 MCS Device technology challenges
The perforated substrate leaves little room for components between the holes. The LED matrix, which is placed on tiny bridges, required the smallest vias, tracks and track spacings. This can be derived from mass-market mobile phone production at very low expense. There are technologies providing micro vias and ultra-small design sizes, which could enable even larger holes in the substrate perforation. Unfortunately, this would result in higher production costs. During the first tests we found that this is not a big issue and that we could produce the MCS device with existing technology and geometry.

3. CONCLUSION
The field of mobile crowd sensing has recently evolved from the availability of vast sensing opportunities in modern mobile devices. Despite this availability, existing commercial mobile devices lack features in some respects.

The paper presents a device that provides a base for the growing ubiquity of personal connected devices, creating the opportunity for a range of applications which may fit the sensed signals and generated visual effects. The sensing requirements set by future applications will probably evolve very dynamically over time. Future expansion will depend on the evolving interest in the different types of data gathered by the presented MCS device under different contextual factors. Hopefully the device will enable new approaches to modeling and programming multi-modal sensing applications with enhanced modularity and high affordability.

The presented MCS device is in no way limited to mobile crowd sensing, but can also provide some useful modes of use in everyday life at many levels. The display capability of the eyewear can, for example, provide new opportunities for disabled persons. One possibility is to provide personal contact with a deaf(-mute) person when another person cannot understand sign language. Eye contact is more personal than, for example, writing or sketching on paper or typing on a mobile phone screen. The experience could be completely different when the "conversation" is held while maintaining eye contact. Our hope is to spread the device to targeted crowds and use it as a tool for real-world crowd sensing studies in this new and promising area.

4. ACKNOWLEDGMENTS
Our thanks go to Texas Instruments and STM for providing free samples of some of the components used in our project.

5. REFERENCES
[1] Ma, H., Zhao, D., & Yuan, P. (2014). Opportunities in mobile crowd sensing. IEEE Communications Magazine, 52(8), 29-35.
[2] Ganti, R. K., Ye, F., & Lei, H. (2011). Mobile crowdsensing: current state and future challenges. IEEE Communications Magazine, 49(11), 32-39.
[3] Cardone, G., Foschini, L., Bellavista, P., Corradi, A., Borcea, C., Talasila, M., & Curtmola, R. (2013). Fostering participaction in smart cities: a geo-social crowdsensing platform. IEEE Communications Magazine, 51(6), 112-119.
[4] Ludwig, T., Reuter, C., Siebigteroth, T., & Pipek, V. (2015, April). CrowdMonitor: mobile crowd sensing for assessing physical and digital activities of citizens during emergencies. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (pp. 4083-4092). ACM.
[5] Nugroho, L. E., Lazuardi, L., & Non-alinsavath, K. (2015, October). Ontology-based context aware for ubiquitous home care for elderly people. In 2015 2nd International Conference on Information Technology, Computer, and Electrical Engineering (ICITACEE) (pp. 454-459). IEEE.
[6] Lane, N. D., Miluzzo, E., Lu, H., Peebles, D., Choudhury, T., & Campbell, A. T. (2010). A survey of mobile phone sensing. IEEE Communications Magazine, 48(9), 140-150.
[7] Choudhury, T., Borriello, G., Consolvo, S., Haehnel, D., Harrison, B., Hemingway, B., ... & LeGrand, L. (2008). The mobile sensing platform: An embedded activity recognition system. IEEE Pervasive Computing, 7(2), 32-41.
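The text display described in Section 2.2 drives its LEDs like pixels of a 5×7 matrix, with the microcontroller scanning out one row of the matrix at a time. As a minimal sketch of the idea, the snippet below packs a 5×7 glyph into per-row bitmasks such as a matrix-driver firmware might scan out; the glyph and the packing scheme are illustrative assumptions, not the actual MCS firmware format.

```python
# Illustrative 5x7 glyph for the letter "H"; 'X' marks a lit LED.
# This glyph and the row-mask packing are assumptions for illustration,
# not the real MCS device font or driver format.
GLYPH_H = [
    "X...X",
    "X...X",
    "X...X",
    "XXXXX",
    "X...X",
    "X...X",
    "X...X",
]

def pack_rows(glyph):
    """Convert 7 rows of 5 'X'/'.' cells into 7 five-bit row masks
    (leftmost column = most significant bit)."""
    masks = []
    for row in glyph:
        mask = 0
        for cell in row:
            # Shift the accumulated bits left and append this cell.
            mask = (mask << 1) | (1 if cell == "X" else 0)
        masks.append(mask)
    return masks
```

A firmware scanning loop would then enable one matrix row per tick and apply the corresponding mask to the column drivers, an approach that keeps pin count low on a small Cortex-M0-class part.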
I was here: a system for creating augmented reality digital graffiti in public place

Erik Šimer
erik.simer@student.upr.si

Matjaž Kljun
matjaz.kljun@upr.si

Klen Čopič Pucihar
klen.copic@famnit.upr.si

University of Primorska, FAMNIT
Glagoljaška 8
Koper, Slovenia

ABSTRACT
Since ancient times travelers and tourists have tried to leave their marks in the places they visit. However, carving or writing on historic landmarks can cause irreversible damage to such sites. One possible solution are digital graffiti. These can, for example, be created through projection mapping, where beams of light wrap the object with the digital graffiti created by users so that everyone at the site can see them. However, this may disturb other visitors present at the same time. In this paper we explore an alternative solution for creating digital graffiti by utilizing Mobile Augmented Reality (MAR) technology. We developed a mobile application which allows users to: (i) select an object or a building, (ii) map a 3D mesh onto it in order to prepare its 2D plane, and (iii) draw a graffiti on this plane. After completing the drawing, the application wraps the object or the building with the modified 2D texture, creating an illusion of digital graffiti. In order to (i) evaluate the social acceptance of placing digital graffiti onto historic landmarks and (ii) evaluate whether the use of our prototype is socially acceptable in public spaces, we carried out a small reflective user study. We created a couple of simple graffiti on different historic buildings and posted them on the social networking site Facebook. Despite their amateur appearance, the posted photos received attention and generated some positive responses and questions.

Categories and Subject Descriptors
H.5.1 [Information interfaces and presentation]: Multimedia Information Systems—Artificial, augmented, and virtual realities

Keywords
augmented reality, digital graffiti, public spaces, mobile interactions, handheld augmented reality

1. INTRODUCTION
Graffiti are a form of visual expression that can be carved or painted on walls or other surfaces. They can take many forms, from simple written messages to elaborate drawings, and are considered either acts of vandalism [5] or admired as an art form [12]. They have existed since ancient times [1, 2] and can carry political, social, artistic or any other message. Graffiti are primarily associated with different subcultures such as hip-hop youth or street art movements. However, there is a group of graffiti makers that is often forgotten – tourists.

Since ancient times travelers and tourists have left marks and writings on the sites they visit. This is manifested across cultures and ranges from simple inuksuit built by Inuit peoples, marking routes or sites for navigation and serving as points of reference, to scribbled messages on the walls of ancient buildings denoting one's presence and appreciation of the site. The latter form can be seen, for example, (i) on the walls of the Church of the Holy Sepulchre in Jerusalem, scribbled by the crusaders and pilgrims, (ii) on the Mirror Wall in the ancient village of Sigiriya in Sri Lanka, featuring over 1800 pieces of prose, poetry and commentary written by ancient tourists between 600 AD and 1400 AD [2], or (iii) in the scribbled names of Greek and later Roman soldiers, merchants, and travelers in Egypt [8].

In a similar way, today's tourists also exhibit the tendency to leave their mark in the places they visit. For example, the breast of the statue of Juliet in Verona shows prominent signs of wear from years of groping, and the Blarney Stone in Ireland gets kissed by visitors. Even more personal examples of expression are leaving a chewing gum (e.g. the Market Theatre Gum Wall in Seattle) or a D-lock with declarations and messages on bridges in cities all over the globe (e.g. the Butchers' Bridge in Ljubljana). While these are "socially accepted" marks and often become (together with local graffiti) tourist attractions themselves, some tourists also carry out acts unacceptable by today's standards, for example scribbling one's initials on a brick of the Roman Colosseum [9] or signing one's name on an ancient Egyptian statue [13]. Both acts resulted in an outrage of the masses on social media.

While ancient tourists' graffiti are a source for historical research and debate, such as the search for Herodotus' signature [8], the majority of today's graffiti are not seen as art or as valuable (except for studying them as a social phenomenon [11]). One possible solution to prevent permanent marks on historic landmarks is to allow tourists to leave their footprints in a digital form and project them onto a desired location of the historic site [7]. However, this approach can disturb other visitors present at the same time. Our idea involves placing a graffiti on a particular object or building using the Augmented Reality (AR) paradigm, where only the user owning a device can see their graffiti when looking through the camera lens of their mobile device, as can be seen in Figure 1. This opens up interesting questions such as: is the process of making digital graffiti in public places, and the end result placed on historic landmarks, socially acceptable? To answer these questions we carried out a preliminary user study. Within the study we created a couple of simple graffiti on different historic buildings and posted them on the social networking site Facebook to harvest feedback.

Figure 1: A screenshot of the mobile application projecting digital graffiti onto the wall of the fortress in Split.

In the next section a description of the developed prototype is presented, followed by a method section describing the process of the evaluation. Section 3 presents the results and includes a discussion of these. The paper finishes with conclusions and future work.

2. PROTOTYPE DESCRIPTION
Our prototype uses a mobile platform as a medium for Augmented Reality (AR) visualisation – a concept better known as Mobile Augmented Reality (MAR). Mobile devices have become ubiquitous in the last two decades, and with the ascent of powerful smart phones coupled with quality cameras, AR has for the first time emerged as a consumer product. This development has also enabled researchers to explore AR in various domains [4]. One of the advantages of MAR is that it can be visible to the device owner only. We have used this fact in developing our prototype, as digital graffiti visible to everyone (e.g. projection mapping of user-generated digital graffiti on walls) may not be appreciated by everyone at the site.

We have built a mobile application prototype as a means to explore the feasibility of our idea. We have used the Metaio SDK¹ for tracking and rendering 3D objects. The interaction with the prototype can be seen in Figures 2 and 3 as a series of screenshots. Users are presented with four options, as seen in the leftmost screenshot: three geometric bodies and a building. When selecting any of the three geometric bodies, the application expects markers to be placed under the objects for (i) marker-based tracking (see Figure 2); when selecting the building, (ii) markerless tracking (see Figure 3), also known as instant tracking, is used. Marker-based tracking was designed for drawing on smaller objects, whereas the markerless solution was used for outdoor scenarios. While the markerless solution is obviously more flexible, due to the fact that it works in unprepared environments, it is prone to tracking failures, especially in cases when the tracked surfaces are not optimal (varying illumination, no hard edges, low contrast, etc.).

¹ http://www.metaio.eu/

After selecting a body (and consequently the marker-based tracking method), users are presented with the view of the camera and the virtual mesh of the body selected on the previous screen. This virtual mesh needs to be adjusted to the surface of the selected physical object placed on the paper with markers. This is visible in the second screenshot from the right in Figure 2. Users have the possibility to expand or shrink the selected mesh in all directions by selecting the direction of size adjustment and using pinch gestures (marked by two fingers on the screen in Figure 2). In our example we are adjusting the size of the cuboid mesh to the white cardboard box placed in the centre of the paper with markers. All sides of the cuboid are marked with numbers, which becomes useful in the next step.

When the virtual mesh is wrapped around the physical object, users can press the colour palette icon in the bottom left corner of the screen to start drawing. The drawing surface, visible in the second screenshot from the right in Figure 2, is a 2D texture that represents the 3D virtual object. The numbers on the sides of the virtual object are also visible in the 2D texture. This enables users to know where the graffiti will be drawn on the physical object. In our example, sides 1 and 2 are facing us, hence we decided to draw on these. However, we can draw on any side, although some might not be possible to visualise (e.g. the bottom side of the cuboid). The 2D texture features a simple drawing application where colours and the size of the brush can be selected. In addition, there are predefined drawings that can be placed on the surface.

When tapping on the green check mark on the screen, users get back to the AR view, where the drawing made on the previous screen is placed on the physical object. It is possible to take a picture of the graffiti or to return to the drawing screen. The markerless tracking can be seen in Figure 3. The sequence of screens is similar to the one with the marker; however, we do not need to place a paper with markers under the selected physical object in order to track it.

Figure 2: Prototype interface with marker-based tracking.

Figure 3: Prototype interface and interaction process with markerless tracking.

3. METHOD
To answer our research questions (the social acceptance of creating digital graffiti at historic landmarks, and of the created graffiti on them) we ran a reflective user study with the lead author. Such an approach is often used to reflect upon the prototype developed, shed light on interaction details, provide a glimpse into the subtle affordances the prototype can offer [6], or when developing for a peak experience for a small number of users (even a single user) [3]. Using the developed prototype, the author created 10 digital graffiti on various historic sites in the historic city of Split during high season, when many tourists visit these attractions. He then posted some of his creations, captured from various angles, on his Facebook timeline. We chose a social networking service site to

nent examples of technology failures such as GoogleGlass², which may partly be blamed also on social acceptance. The key problem with GoogleGlass was caused by the visibility of the camera placed on the frame of the glasses, which posed an obvious danger of intrusion into the privacy of passers-by or the people being talked to.
However, there are examples of success- reach a wider audience compared to sharing graffiti with ful camera based products, which continuously record users’ (individual) contacts. surroundings. One such example are life-logging cameras usually worn around one’s neck, such as Microsoft SenseCam [10], Vicon Autorgapher3, and Narrative Clip4 . Similar to 4. RESULTS AND DISCUSSION mentioned products, in order to create digital graffiti one The picture editor we used in our application allowed us to needs to point the phone’s camera at historic landmarks, create simple drawings and captions only. Drawing on a 2D which may unintentionally record passersby or people in the plane on a small screen proved difficult, especially as the vicinity. Our observations confirmed that despite the pro- developed prototype did not provide zooming functionality. longed use of the phone’s camera, during which the author In addition to this, currently implemented drawing tools are created digital graffiti, none of the passersby or people in the primitive and did not allow for drawing in multiple layers as vicinity seemed to be bothered by author’s actions. This is all draw actions are fully opaque. As a result, only a couple probably due to the fact that so many people nowadays use of attempts ended with desirable results. However, to our their phones and tablet computers to take photos and film surprise, the alignment of virtual mash to the 3D object was their undertakings on holidays, that holding up the phone not as difficult as initially expected. The author quickly got for a prolonged amounts of time does not seem unusual. a feel for it and managed to precisely map the virtual mesh to the real object as can be observed on third screenshot The last part of our evaluation focused on exploring if plac- from the left on Figure2 and forth screenshot from the left ing digital graffiti on historic landmarks is socially accept- on Figure3. 
2https://en.wikipedia.org/wiki/GoogleGlass Another issue we wanted to look at was social acceptance of 3http://www.autographer.com/ using the prototype in public. There are a couple of promi- 4http://getnarrative.com/ 42 able. The author posted three curated digital graffiti images domain allows for easy sharing which may provide indirect (such as seen on Figure 1) on his Facebook’s timeline. Even advertisement for local communities and promote touristic if graffiti were of primitive nature (e.g. the graffiti were places to a wider public. How effective are such practices in mainly composed of text and simple shapes) the published this context should also be further studied in the future. pictures attracted attention from author’s social network. Comments were ranging from questions about the technol- 6. REFERENCES ogy used, questions about the source of the pictures, to com- [1] M. Balzani, M. Callieri, M. Fabbri, A. Fasano, ments on the appeal of particular digital graffiti. Based on C. Montani, P. Pingi, N. Santopuoli, R. Scopigno, the fact that none of the comments in our pilot study high- F. Uccelli, and A. Varone. Digital representation and lighted that placing digital graffiti on historic landmarks is multimodal presentation of archeological graffiti at controversial or disrespectful, our preliminary study suggests Pompei. In VAST 2004, pages 93–103, 2004. that digital graffiti using MAR technology are socially ac- [2] J. N. Cooray. The Sigiriya Royal Gardens; Analysis of ceptable. However, to make this conclusion final, a more the Landscape Architectonic Composition. A+BE | comprehensive study including quantitative data capture Architecture and the Built Environment, would need to take place. Due to the fact that posted pic- 2012(06):1–286, 2012. tures did not cause a massive hype, we were not able to [3] A. Dix. 
Human–computer interaction: A stable collect other statistically significant metrics such as number discipline, a nascent science, and the growth of the of likes, shares etc. long tail. Interacting with Computers, 22(1):13–27, jan 2010. 5. CONCLUSION [4] Feng Zhou, H. B.-L. Duh, and M. Billinghurst. Trends Whilst ancient graffiti are seen as a valuable window into in augmented reality tracking, interaction and display: the lives of past generations, many current graffiti made by A review of ten years of ISMAR. In 2008 7th tourists or travellers are considered as acts of vandalism. IEEE/ACM International Symposium on Mixed and However, digital graffiti concept we presented in this pa- Augmented Reality, pages 193–202. IEEE, sep 2008. per may be able to provide sustainable means of fulfilling [5] M. Halsey. ’Our desires are ungovernable’: Writing tourists’ wish for marking a place they have visited. The graffiti in urban space. Theoretical Criminology, concept is based on MAR technology as a method of gener- 10(3):275–306, 2006. ating and viewing digital graffiti. We implemented this con- [6] J. Hardy. Experiences: a year in the life of an cept into a prototype by building a mobile application that interactive desk. In Proceedings of the Designing enables users to create digital graffiti on arbitrary objects Interactive Systems Conference on - DIS ’12, DIS ’12, of a predefined shape. By mapping a virtual mesh onto ob- page 679, New York, New York, USA, 2012. ACM jects, the application can generate an appropriate 2D plane Press. of the mesh on which users can draw digital graffiti. [7] M. Kljun and K. Č. Pucihar. “I Was Here”: Enabling Tourists to Leave Digital Graffiti or Marks on Historic In order to evaluate the feasibility and social acceptance of Landmarks. In Proceedings of the 15th IFIP TC 13 creating and placing digital graffiti on historic landmarks, International Conference INTERACT ’15, pages the paper presents a preliminary self reflecting study. 
The 490–494, 2015. study was based on creating digital graffiti of various historic [8] J. G. Milne. Greek and roman tourists in egypt. The landmarks, which we published on the authors Facebook Journal of Egyptian Archaeology, 3(2/3):76–80, 1916. timeline. The results show that: (i) due to primitive draw- [9] B. Neild. Russian tourist fined $24,000 for Colosseum ing functionality of the prototype only basic graffiti could be graffiti. http://edition.cnn.com/2014/11/24/ created, (ii) contrary to expectations author quickly became travel/italy-colosseum-graffiti/, 2014. very skilled in mapping the virtual mesh to real objects, (iii) even after prolonged use in public space the application did [10] A. J. Sellen, A. Fogg, M. Aitken, S. Hodges, not provoke unwanted attention or reactions from passersby, C. Rother, and K. Wood. Do life-logging technologies and (iv) despite amateur appearance, posted photos received support memory for the past?: An experimental study attention and generated some positive responses and ques- using sensecam. In Proceedings of the SIGCHI tions from author’s social network. The results of prelim- Conference on Human Factors in Computing Systems, inary study suggests that digital graffiti and the proposed CHI ’07, pages 81–90, New York, NY, USA, 2007. concept are socially acceptable. Based on the results of the ACM. presented short self-reflecting study we are planning a more [11] K. Thirumaran. Managing Graffiti at Tourist comprehensive study in order to confirm our findings. This Attractions. In P. Mandal, editor, Proceedings of the will include more in-depth measuring of acceptance of digi- International Conference on Managing the Asian tal graffiti through social reach (number of likes, shares on Century, pages 575–581. Springer Singapore, social networking sites, etc) and through downloads of the Singapore, 2013. app in the app repository (Google Play). Before embarking [12] M. Von Joel. 
Urbane Guerrillas: street art, graffiti & this route, the current prototype also needs to be improved. other vandalism. State of Art, 1(4), 2006. For example the drawing interface needs to be expanded [13] H. Wong. Netizen outrage after Chinese tourist with zooming functionality, transparent layers, wider range defaces Egyptian temple. http://edition.cnn.com/ of brushes, etc. Finally, transferring the graffiti in the digital 2013/05/27/travel/china-egypt/, may 2013. 43 Interactive Video Management by means of an Exercise Bike Jan Štrekelj Branko Kavšek University of Primorska University of Primorska Faculty of Mathematics, Natural Sciences Faculty of Mathematics, Natural Sciences and Information Technologies and Information Technologies Glagoljaška 8, Glagoljaška 8, 6000 Koper, Slovenia 6000 Koper, Slovenia +386 5 6117 570 +386 5 6117 654 jan.strekelj@gmail.com branko.kavsek@upr.si ABSTRACT On the other hand, cycling on an exercise bike while staring on a This paper describes the concept of virtual reality and the use of wal or even watching TV and/or listening to music, can become this technology in practice. The main part of the work is about boring after some time. This happens because the indoor reviewing the various stages of the development of a prototype environment is static and, unlike cycling on an outdoor track, system for interactive video management by means of an exercise offers lit le or no “real” motion. bike. Systems, that are currently available on the market, are, due It is this lack of motion and motional feedback of the exercise to their ease of use, closed units, for which upgrades are not bike that motivated us to develop a prototype of a system that possible or come at a great expense. The main advantage of the would offer the user a more realistic experience. 
presented prototype, besides affordability, is a simple option to upgrade the system by adding sensors and/or modules; this allows us to extend the system at every stage of development. A low-cost computer (Raspberry Pi) is used as the processing unit, calculating the speed of the wheel and sending this information to the control unit. The control unit processes the received data and sets the playback speed of the video clip accordingly. There is great potential for improvement of the developed prototype; ideas for further development are therefore presented in the concluding section.

Categories and Subject Descriptors
H.5.1 [Multimedia Information Systems]: Augmented and virtual realities – interactive video control.
H.5.2 [User Interfaces]: Prototyping – speed reading / video displaying prototype.

General Terms
Algorithms, Management, Measurement, Performance, Design, Reliability, Experimentation, Human Factors.

Keywords
Raspberry Pi, exercise bike, virtual reality, open-source, Java.

1. INTRODUCTION
In the fast-paced life of a modern working person, the time for recreation is very scarce. Not everyone can take the time to go outdoors; even going to a fitness centre can be too expensive or too crowded, and hence introduces unwanted stress. Purchasing a piece of indoor fitness equipment (such as an exercise bike) therefore seems the perfect solution for those who cannot afford to go to a fitness centre regularly or have no time or will to exercise outdoors.

On the other hand, cycling on an exercise bike while staring at a wall, or even while watching TV and/or listening to music, can become boring after some time. This happens because the indoor environment is static and, unlike cycling on an outdoor track, offers little or no "real" motion.

It is this lack of motion and motion feedback of the exercise bike that motivated us to develop a prototype of a system that would offer the user a more realistic experience. We got the idea from the field of virtual reality, where the aim is to offer the user the illusion of reality; in our case, to make cycling on an exercise bike feel more like outdoor cycling. For this purpose, we filmed a bike ride on an outdoor track and used a bike speed sensor to interactively control the playback of the video that the user is watching.

The next section introduces the reader to the concepts of virtual reality and its use in the field of indoor exercise. In Section 3 the prototype system for interactive video management by means of an exercise bike is presented in detail: the first three subsections present its architectural, hardware and software parts, while the last, fourth subsection outlines its operation. Section 4 gives conclusions and lists possible directions for further work.

2. VIRTUAL REALITY
Virtual reality is a computer technology that uses software-generated realistic images, sounds and other sensations to replicate a real environment or an imaginary setting, and simulates a user's physical presence in this environment to enable the user to interact with this space.

Let us skip the history of virtual reality and just say that virtual reality is now in its mature years. The increasingly affordable prices of computer hardware and other devices (sensors, embedded systems, etc.) make virtual reality appear nowadays in a wide variety of applications, including architecture, sport, medicine, the arts, entertainment, and many more [15].

For the purpose of this paper, we focus on the application of virtual reality in sports. On websites that sell fitness equipment, we can find many indoor systems that enable an interactive experience. The first such systems to appear on the market were adapted for exercise bikes. These virtual reality systems differ mainly in the way the user controls the video clip.

The first category consists of dedicated bike systems, adapted for use with virtual reality technology. Such systems are custom made and embedded in (gym) exercise bikes, upgrading the exercise bike with the necessary sensors that allow the user to control the video clip displayed on a pre-mounted display attached to the bike. An example of such a bike is shown in Figure 1.

Figure 1. A dedicated "virtual reality" indoor exercise bike (source [10]).

The second category of systems aims at bringing the cycling experience even closer to reality. This category includes devices designed for use with a real road bike. Such an upgrade normally consists of two units, each placed under the rim of one wheel, as shown in Figure 2. The unit under the rear wheel is responsible for measuring the cycling speed and therefore controlling the speed of video playback; at the same time, it acts as a resistor when we want to simulate cycling uphill. The unit mounted under the front wheel can be reserved exclusively for ensuring the stability of the bike; the more advanced systems can also record the steering angle and, via a computer system, shift the image on the screen accordingly. More such bikes can be connected into the same network, offering the possibility of multiplayer bike races [9], or a true VR headset can be connected to the system, offering a more realistic experience [16].

Figure 2. A device under a real road bike (source [2]).

Both types of virtual reality systems for indoor cycling available on the market share the same two drawbacks: they are relatively expensive (prices ranging from several hundred or even thousands of euros) and they are proprietary, which means they cannot be easily upgraded if need be. Our prototype system, described in detail in the next section, tries to overcome these drawbacks by being both relatively cheap (under a hundred euros) and open-source.

3. THE PROTOTYPE
This section presents the prototype system and all its parts in detail. Subsection 3.1 is dedicated to the architecture, subsections 3.2 and 3.3 describe the hardware and software parts of the system respectively, and subsection 3.4 presents how all the prototype parts work together.

3.1 Architecture
The architecture of the prototype system can be divided into four parts, namely the external or input unit, the processing unit, the control unit, and the display or output unit. Figure 3 depicts the block schema of the system, showing the connections between the units. The external unit (the exercise bike with the speed sensor attached) is connected to the processing unit via a 2-wire cable (twisted pair) and sends 0/1 signals to it. The task of the processing unit (a Raspberry Pi computer) is to transform these signals into the actual speed of the bike's wheel. It then sends this speed data to the control unit (a common laptop) via an Ethernet cable using the TCP/IP protocol. The control unit is responsible for controlling the speed of the video clip that is sent to the display unit (a computer monitor) via an HDMI cable.

The four units that make up the prototype system are described in more detail in subsection 3.2 Hardware.

Figure 3. Block-schema of the prototype system architecture.

3.2 Hardware
In this subsection each of the four parts that make up the prototype system is presented in more detail. The following four subsubsections describe the external unit, processing unit, control unit, and display unit, in that order.

3.2.1 External unit
The external unit (also called the input unit, because it provides the input signals to the system) is the exercise bike mounting a speed sensor.
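The 0/1 signal mentioned in subsection 3.1 is simply the speed sensor's switch closing once per wheel revolution, so at a constant wheel speed the processing unit sees a regular pulse train. As a rough illustration (our own sketch, not the authors' code; the class name, method name and the 1.26 m circumference are hypothetical):

```java
// Sketch of the pulse train the external unit produces: one pulse per wheel
// revolution, so the pulse period is circumference / speed.
public class PulseTrain {
    /** Returns pulse timestamps (ms) for n revolutions at a constant speed. */
    static long[] pulsesAt(double speedMetersPerSec, double circumferenceMeters, int n) {
        double periodMs = circumferenceMeters / speedMetersPerSec * 1000.0;
        long[] t = new long[n];
        for (int i = 0; i < n; i++) t[i] = Math.round((i + 1) * periodMs);
        return t;
    }

    public static void main(String[] args) {
        // A small wheel of ~1.26 m circumference at 5 m/s: one pulse every ~252 ms.
        System.out.println(java.util.Arrays.toString(PulseTrain.pulsesAt(5.0, 1.26, 4)));
        // prints [252, 504, 756, 1008]
    }
}
```

The processing unit's job, described below, is to invert this relationship and recover the speed from the measured pulse period.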
The stationary part of the sensor is the switch itself Ful HD 1080p video. mounted on one of the forks. The moving part of the sensor is A laptop with a 2.2 GHz Intel Core 2 Duo CPU, 4 GB of RAM, at ached on one of the spokes of the wheel and consists of a smal and an ATI Radeon HD 4500 graphic card with 2.2 GB memory magnet with a constant magnetic field. The rotation of the wheel was used in our project. causes the magnet to pass near the switch and triggers it. The detailed operation of this type of magnetic switches can be found 3.2.4 Display unit in [3]. This unit is used to display the video clip to the user of the The magnetic switch is connected to the processing unit (via a exercise bike. In our original idea this unit should be a computer twisted-pair cable). For safety reasons and voltage adjustment of monitor connected to the control unit by an HDMI cable, but for the switch a resistance of 4.7 kΩ is connected between it and the simplification reasons, the display of the control unit was used in processing unit. the end. The task of the external unit is to generate repetitive electric pulses and sending them to the processing unit. 3.3 Software The external and display units, both being passive units with no 3.2.2 Processing unit processing capabilities, needed no software to run/process data. For processing the signal from the external unit and calculating The following two subsubsections are thus dedicated to describe the speed of the wheel we used the Raspberry Pi – a low-cost, the software used to process data by the processing and control credit card sized computer [14]. The reason for using this type of units, respectively. computer for our processing unit is its size and versatility. Everything has been developed in the Java programming language We describe here just those parts of its operation that are relevant due to its wide portability and sufficient efficiency [6]. 
The for the understanding of our prototype system functioning (a Eclipse programming environment has been chosen, because its detailed description of the Raspberry Pi can be found in ease of use and familiarity [4]. The additional libraries, specific to [11],[14]). each unit, are presented in the fol owing subsubsections. The Raspberry Pi comes in many flavors. In this project the 3.3.1 Software used in the processing unit Raspberry Pi 2 Model B [11] was used. This version of the Pi Raspberry Pi, being the main processing unit of the system, needs comes with a 900 MHz, 32-bit Cortex-A7 CPU with 256 KB of an efficient operating system. For the purposes of the project we L2 cache, a 250 MHz, VideoCore IV GPU capable of decoding have chosen the Linux distribution Raspbian, which is a special 1080p video, 1 GB of RAM that is shared between CPU an GPU. flavor of the Debian operating system, optimized for operation on Its input/output include 4 USB 2.0 slots, a CSI-2 (Camera Serial the Raspberry Pi platform [1]. Interface, Type 2) slot, an HDMI slot, a 3.5 mm composite 4-channel analog output slot, a MicroSD card slot, a network To process and preparing the data from this unit for the control Ethernet slot, and 40 GPIO (General Purpose Input Output) pins unit the Pi4J library of Java functions was used. Pi4J is an open that can be used to connect with al types of external devices. The source project, which aims to prepare a user-friendly environment Raspberry Pi operates at 5V of electric charge connected to it to control the input and output of the Raspberry Pi [7]. This through a MicroUSB power slot. library provides functions for Java developers that abstracts the operation of the lower level Raspberry Pi platform specifics, and The external unit is connected to the processing unit by a twisted- thus al owing them to focus on application development rather pair cable on the processing unit’s GPIO pins. 
Figure 5 depicts than to worry about underlying hardware performance [13]. the schema of the processing unit’s GPIO pins, clearly showing the pin numbering. Pins 10 and 16 were used to connect the The TCP/IP protocol was used to transfer the data from this unit twisted-pair cable. The “physical” connection is shown in Figure to the control unit. Since the TCP/IP protocol implementation is 6 in subsection 3.4. part of the operating system and the knowledge about its operation is part of every computer networks course, we omit al its operational detail here – these can be found in [8]. 46 3.3.2 Software used in the control unit The modularity and the fact that our system is open source gives This unit is responsible for decoding the video clip to be sent to plenty of options for modification and/or extensions, thus offering the display unit. For this purpose, VideoLAN’s VLC player was vast possibilities for further work. The first step forward wil be to used. Since video playback speed has to be dynamically adjusted implement our system on an actual exercise bike and test/evaluate to match with the data received from the processing unit, a library its operation on real end users. of functions, which offer these capabilities is needed. The expected increasing demand for such systems makes obvious Here we used the VLCj library. VLCj is an open source project by the need for standardization of protocols, connections, modules VideoLAN aimed at creating a bond between their video player that these systems use. VLC, and the programming language Java [12]. VLCj creates Java bindings to almost al the functionality provided by the VLC 5. ACKNOWLEDGMENTS player. It greatly simplifies the development of Java applications Our thanks to Assist. Prof. Jernej Vičič. His help in the software for playing different types of multimedia, streaming video content implementation phase of the project was invaluable. 
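The pulse-to-speed conversion performed by the processing unit (one sensor pulse per wheel revolution, so speed equals wheel circumference divided by the pulse period) can be sketched as follows. This is our own illustrative code, not the project's source: in the actual prototype the pulse timestamps would be delivered by a Pi4J GPIO input listener rather than passed in directly, and all names and the 1.26 m circumference are hypothetical.

```java
// Converts reed-switch pulse timestamps into wheel speed.
public class WheelSpeed {
    private final double circumferenceMeters; // wheel circumference at the sensor radius
    private long lastPulseMs = -1;

    WheelSpeed(double circumferenceMeters) {
        this.circumferenceMeters = circumferenceMeters;
    }

    /** Feed one pulse timestamp (ms); returns speed in km/h (0 for the first pulse). */
    double onPulse(long nowMs) {
        double kmh = 0.0;
        if (lastPulseMs >= 0) {
            double periodSec = (nowMs - lastPulseMs) / 1000.0;
            kmh = circumferenceMeters / periodSec * 3.6; // m/s -> km/h
        }
        lastPulseMs = nowMs;
        return kmh;
    }

    public static void main(String[] args) {
        WheelSpeed w = new WheelSpeed(1.26);
        w.onPulse(0);
        System.out.println(w.onPulse(252)); // 1.26 m in 0.252 s = 5 m/s, i.e. ~18 km/h
    }
}
```

Each computed speed value would then be written to the TCP connection towards the control unit.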
over the Internet connection, playback mode within server applications and the implementation of video on demand. Al software used for our prototype is open-source and available on 6. REFERENCES ht ps:/ github.com/branax/XBike. [1] About Raspbian, Raspbian. DOI= ht ps:/ www.raspbian.org/ RaspbianAbout. 3.4 Putting it all together [2] BKOOL Classic Turbo Trainer. Turbo Bike Trainer. DOI= The final product of our project is a prototype system, which ht p:/ turbobiketrainer.com/turbotrainers/bkool-turbo- al ows us to control the playback speed of a video clip with the trainer/. aid of an exercise bike simulator. [3] Campos, H. 2010. When your ICD's reed switch fails. ICD The bike simulator represents the real exercise bike. Its speed User Group. DOI= sensor (in the form of a magnetic switch) is connected to a ht p:/ icdusergroup.blogspot.si/2010/07/when-your-icds- Raspberry Pi computer by a twisted-pair cable (a resistor is added reed-switch-fails.html. to this connection for voltage adjustment purposes). Electric [4] Eclipse, Eclipse. DOI= ht p:/ www.eclipse.org/. pulses are sent to the Raspberry Pi when the simulator’s wheel [5] GPIO: Models A+, B+, Raspberry Pi 2 B and raspberry Pi 3 rotates. The Raspberry Pi is charged by a mobile phone charger, B, Raspberrypi. DOI= powerful enough to provide enough electricity for its smooth ht ps:/ www.raspberrypi.org/documentation/usage/gpio-plus- functioning. The speed of the wheel is calculated by the and-raspi2/. Raspberry Pi using the received electric pulses frequency and knowing the circumference of the wheel (at sensor radius). This [6] Mesojedec, U., Fabjan, B, 2004. Java 2: temelji speed is then sent to the control unit (the laptop described in programiranja, Ljubljana: Pasadena. subsubsection 3.2.3) through the Ethernet using the TCP/IP [7] Savage, R. Raspberry Pi with Java 8 + Pi4J. Devoxx. DOI= protocol. 
On the laptop an instance of the VLC player is running ht p:/ static1.1.sqspcdn.com/static/f/1127462/25655475/141 that displays the video clip. When the player receives the speed 5663597223/rpi-java-8-savage. data from the Raspberry Pi, it adjusts its playing speed [8] Štrancar, M., Klemen, S. 2005. PHP in My SQL na spletnem accordingly. The video clip is displayed directly on the laptop’s strežniku Apache, druga dopolnjena izdaja, Ljubljana: screen. Pasadena. The TCP/IP connection between the processing unit (Raspberry [9] Tacx Trainers. DOI= Pi) and the control unit (laptop) is implemented in a client-server ht ps:/ www.tacx.com/en/products/trainers/i-genius- fashion, the processing unit being the server and the control unit multiplayer. being the client. Figure 6 shows how the finished prototype looks like. [10] The Coolest Bikes on No Wheels, Expresso. DOI= ht p:/ expresso.com/Home. [11] Turner, A. 2015. Review: Raspberry Pi 2 Model B. PC & Tech Authority. DOI= ht p:/ www.pcauthority.com.au/Review/403617,review- raspberry-pi-2-model-b.aspx. [12] VLCj, Caprica Software Limited. DOI= ht p:/ capricasoftware.co.uk/#/projects/vlcj. Figure 6. The prototype system. [13] Welcome to Pi4J!, The Pi4J Project. DOI= ht p:/ pi4j.com/. 4. CONCLUSIONS AND FURTHER WORK [14] What is Raspberry Pi, Raspberrypi. DOI=: Despite the fact that there are already similar products on the ht ps:/ www.raspberrypi.org/help/faqs/#introWhatIs. market, the main advantage of the presented prototype is its simple architecture and the ability to add additional modules and [15] What is Virtual Reality? Virtual Reality Society. DOI= sensors to the processing unit. Moreover, due to its simplicity and ht p:/ www.vrs.org.uk/virtual-reality/what-is-virtual- reasonable price, our system could be accessible to different reality.html. groups of users from young to old, from amateur to professional. [16] Widerun VR Trainer. DOI= ht p:/ www.widerun.com/. 
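The client-server link from subsection 3.4 can be sketched in plain Java sockets. This is not the authors' code: the server thread stands in for the Raspberry Pi writing speed readings, the client stands in for the laptop, and the mapping of speed to a playback-rate factor (measured speed divided by the speed at which the video was filmed) is our assumption; the paper only states that the playback speed is adjusted to the received data.

```java
import java.io.*;
import java.net.*;

// Loopback sketch of the speed link: processing unit = server, control unit = client.
public class SpeedLink {
    /** Hypothetical mapping: pedalling at the filming speed gives rate 1.0. */
    static double toPlaybackRate(double speedKmh, double filmedAtKmh) {
        return speedKmh / filmedAtKmh;
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0: any free port
            Thread pi = new Thread(() -> { // stands in for the Raspberry Pi
                try (Socket s = server.accept();
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    out.println(9.0); // one speed reading in km/h
                } catch (IOException ignored) { }
            });
            pi.start();
            try (Socket laptop = new Socket("127.0.0.1", server.getLocalPort());
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(laptop.getInputStream()))) {
                double speed = Double.parseDouble(in.readLine());
                System.out.println(toPlaybackRate(speed, 18.0)); // prints 0.5
            }
            pi.join();
        }
    }
}
```

In the real prototype the resulting factor would be handed to the running VLC instance through VLCj's rate-setting call instead of being printed.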
Indeks avtorjev / Author index

Biasizzo Anton 9
Blažica Bojan 9
Blažica Vanja 17
Bohak Ciril 5, 9
Cerar Janez Jaka 17
Čopič Pucihar Klen 25, 29, 40
Deželjin Damir 29
Gombač Blaž 29
Isaković Alja 21
Kavšek Branko 44
Kljun Matjaž 25, 29, 40
Kopušar Žiga 33
Koroušić Seljak Barbara 13
Lavrič Primož 5
Malečkar Andrej 25
Marolt Matija 5, 21
Novak Franc 9, 13, 33
Novak Peter 13
Pavlin Gal 37
Pavlin Marko 37
Pesek Matevž 21
Poredoš Aleš 17
Rogelj Peter 25
Šimer Erik 40
Širol Patrik 29
Štrekelj Jan 44
Strle Gregor 21
Zemljak Matej
29 49 50 Konferenca / Conference Uredili / Edited by Interakcija človek-računalnik v informacijski družbi / Human-Computer Interaction in Information Society Bojan Blažica, Ciril Bohak, Franc Novak Document Outline Blank Page Blank Page Blank Page 01-11.pdf HCI-IS_2016_paper_1 HCI-IS_2016_paper_2 HCI-IS_2016_paper_3 HCI-IS_2016_paper_4 HCI-IS_2016_paper_5 HCI-IS_2016_paper_6 HCI-IS_2016_paper_7 HCI-IS_2016_paper_8 HCI-IS_2016_paper_9 HCI-IS_2016_paper_10 HCI-IS_2016_paper_11 Blank Page Blank Page Blank Page