UBISS 2016: Workshops & Instructors



Workshop A: UBICOMP IN THE WILD: DEVELOPING AND DEPLOYING PERVASIVE DISPLAYS

Maximum number of students to be enrolled in the workshop: 24

Fueled by falling display hardware costs and rising demand, digital signage and pervasive displays are becoming ever more ubiquitous. Such displays are now a common feature of many public spaces and serve a range of purposes including signage, entertainment, advertising and information provision. Beyond traditional broadcast media, recent developments in sensing and interaction technologies are enabling entirely new classes of display applications that tailor content to the situation and audience of the display. The time is right for researchers to consider how to create the world’s future pervasive display networks.

The workshop will explore the challenges of designing, developing and deploying pervasive display systems in the wild through a combination of short lectures, discussions and hands-on activities. Students will be introduced to both technical issues (systems software, scheduling behaviours, evaluation techniques) and the human/social/ethical issues that arise from the embedding of pervasive displays in real world environments (audience behaviours, stakeholder concerns).
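As a flavour of the scheduling behaviours covered in the lectures, the sketch below shows a toy content scheduler that picks the next item for a display playlist, preferring content tagged as relevant to the sensed audience. All names and fields are illustrative assumptions, not taken from any real deployment.

```python
# Toy pervasive-display content scheduler (illustrative only).

def next_item(playlist, audience_tags, last_shown=None):
    """Choose the next content item for a display, preferring items
    whose tags match the current audience and avoiding immediate repeats."""
    relevant = [i for i in playlist
                if i["tags"] & audience_tags and i["id"] != last_shown]
    # Fall back to any non-repeated item, then to the whole playlist.
    pool = relevant or [i for i in playlist if i["id"] != last_shown] or playlist
    return max(pool, key=lambda i: i["priority"])

playlist = [
    {"id": "bus-times", "tags": {"commuter"}, "priority": 2},
    {"id": "campus-news", "tags": {"student"}, "priority": 1},
    {"id": "ad-coffee", "tags": set(), "priority": 3},
]
print(next_item(playlist, {"commuter"})["id"])  # bus-times
```

Real schedulers must additionally balance stakeholder quotas, timing constraints, and fairness across content owners, which is part of what makes in-the-wild deployments challenging.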

We welcome participants from different areas of expertise, including computer science and design; no specific hardware or software skills are required. Although the workshop will have a strong focus on the study of pervasive display systems, many of the topics covered apply to ubicomp research in the wild more broadly, and participants will become skilled in creating and evaluating systems in challenging public contexts.
Lecture material will draw on both theoretical models and understanding, and a wealth of real-world examples and experiences from long-term display deployments.

At the end of the course, students will have produced a comprehensive plan for a future pervasive display deployment in their domain of choice and will have engaged in a number of development tasks as part of their studies.

Instructors: Prof. Nigel Davies & Dr. Sarah Clinch, Lancaster University, UK

Nigel Davies

Nigel Davies is a Professor in the School of Computing and Communications at Lancaster University and co-director of Lancaster’s new multidisciplinary Data Science Institute. His research focuses on experimental mobile and ubiquitous systems and his projects include the MOST, GUIDE, e-Campus and PD-NET projects that have been widely reported on in the academic literature and the popular press. Professor Davies has held visiting positions at SICS, Sony's Distributed Systems Lab in San Jose, the Bonn Institute of Technology, ETH Zurich, CMU and most recently Google Research in Mountain View, CA. Nigel is active in the research community and has co-chaired both Ubicomp and MobiSys conferences. He is a former editor-in-chief of IEEE Pervasive Magazine, chair of the steering committee for HotMobile and one of the founders of the ACM PerDis Symposium on Pervasive Displays.

Homepage

Sarah Clinch

Sarah Clinch is a post-doctoral researcher at Lancaster University, UK. She completed her PhD (Lancaster) on the appropriation of public displays and has published extensively on the topic of next generation pervasive display networks. She has been a visiting researcher at Carnegie Mellon University working on novel cloudlet systems. Sarah’s research focuses on the development of architectures for pervasive computing and personalisation in ubiquitous computing systems. She currently works on the European FET-Open RECALL project that aims to re-think and re-define the notion of memory augmentation to develop new paradigms for memory augmentation technologies that are technically feasible, desired by users, and beneficial to society. Sarah is an active member of the research community and is currently serving as publicity co-chair for both IEEE PerCom and ACM HotMobile.

Homepage


Workshop B: EYEWORK: DESIGNING INTERACTIONS WITH EYE MOVEMENTS

Maximum number of students to be enrolled in the workshop: 21

In recent years, we have witnessed a revolution in eye tracking technologies. Eye trackers that used to cost tens of thousands of dollars, requiring awkward head-mounts and convoluted calibration procedures, now cost less than a hundred dollars and are simple to set up and easy to use. As the technology decreases in size and cost, we envision a world in which eye trackers ship by default with interactive appliances, just as phones and laptops come with integrated webcams today.

In this workshop, you will gain all the necessary skills to design and build systems for the exciting future of pervasive eye tracking. The sessions will consist of short lectures followed by practical hands-on activities. In the lectures, you will learn how the eyes see and move, gain a thorough understanding of how eye tracking works, and explore a wide range of interaction techniques that use the eyes alone or combine them with other input modalities, such as gestures, touch, and game controllers. The hands-on sessions will employ a design-thinking methodology: you will gain empathy with users to understand how the eyes can help solve their problems, generate creative interaction design ideas, build system prototypes using modern eye trackers such as the Tobii EyeX and the Pupil Pro, and test the prototypes to receive feedback on your designs. At the end of the week, you will have a novel, fully functional eye-controlled application that solves a real problem, or a game that explores an interesting new mechanic.
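As a taste of eyes-only interaction, the sketch below implements dwell-time selection, a classic technique in which a target activates once the gaze rests on it long enough. The sample format here is a hypothetical (timestamp, x, y) stream; real trackers such as the Tobii EyeX or Pupil Pro deliver comparable data through their own SDKs.

```python
# Dwell-time selection from a stream of gaze samples (illustrative sketch).

def dwell_select(samples, target, radius=40.0, dwell_ms=500):
    """Return True once the gaze has stayed within `radius` pixels
    of `target` for at least `dwell_ms` milliseconds."""
    start = None
    for t, x, y in samples:  # t in milliseconds
        dist = ((x - target[0]) ** 2 + (y - target[1]) ** 2) ** 0.5
        if dist <= radius:
            if start is None:
                start = t        # fixation on target begins
            if t - start >= dwell_ms:
                return True      # dwelled long enough: select
        else:
            start = None         # gaze left the target: reset
    return False

# A steady half-second fixation near a button at (200, 200),
# sampled every 50 ms with 1-pixel jitter:
samples = [(i * 50, 200 + (i % 2), 200) for i in range(12)]
print(dwell_select(samples, (200, 200)))  # True
```

Dwell time is a common answer to the "Midas touch" problem: without it, every glance would trigger a selection.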

We welcome participants from different areas of expertise, including computer science, design, and engineering. No previous experience with eye tracking is required, but an interest in designing and building interactive systems is essential. The following skills are useful but not required: C# programming, Unity programming, and interface design tools (e.g. Photoshop, Illustrator).

Instructors: Prof. Hans Gellersen, Lancaster University, UK & Dr. Eduardo Velloso, University of Melbourne, Australia

Hans Gellersen

Hans Gellersen is Professor of Interactive Systems at Lancaster University. His research interest is in sensors and devices for ubiquitous computing and human-computer interaction. He has worked on systems that blend physical and digital interaction, methods that infer context and human activity, and techniques for spontaneous interaction across devices. His recent work focuses on eye movement, leading research that breaks new ground in how we can use our eyes for interaction pervasively. Hans’ work is published in over 200 articles and has been recognised with Best Paper Awards at CHI, Pervasive, and TEI, among others. He is one of the founders of the UbiComp conference series, and an Associate Editor of ACM Transactions on Computer-Human Interaction (TOCHI) and the journal Personal and Ubiquitous Computing (PUC). He holds a PhD in Computer Science from the University of Karlsruhe, Germany.

Homepage

Eduardo Velloso

Eduardo Velloso is a Research Fellow at the Microsoft Research Centre for Social Natural User Interfaces at the University of Melbourne in Australia. Eduardo holds a PhD in Computer Science from Lancaster University and a BSc in Computer Engineering from the Pontifical Catholic University of Rio de Janeiro. His research aims at creating future social user experiences by combining novel input modalities such as gaze, body movement, and touch gestures. His latest work has investigated eye-based interaction with smartwatches, multimodal combinations of gaze, and eye control of video games. He has designed and conducted multiple courses and workshops, including the EyePlay workshop at CHI PLAY 2014 and the .NET Gadgeteer workshop at the iCareNet Summer School 2012, PUC-Rio, and the Rio de Janeiro State University.

Homepage


Workshop C: COLLABORATION AND PERSONAL DEVICES AROUND INTERACTIVE DISPLAYS

Maximum number of students to be enrolled in the workshop: 20

Large displays combined with personal devices offer a variety of opportunities for collaboration, for example in stand presentations at exhibitions, as public game platforms, or in meetings and the collective exploration of information. Designing applications for such situations requires considering collaboration practices and desired outcomes, walk-up-and-use readiness, and interaction design grounded in the available interaction techniques.

The topics covered in this workshop include analysing collaborative activities and opportunities around large displays, walk-up-and-use connection of multiple devices to the web, and interaction techniques across screens and devices. The workshop will be conducted in the form of prototyping collaborative multi-device applications on a large screen. Participants will learn to identify and analyse collaborative scenarios around large displays, to use a client-edge-server architecture (spaceify.org) to integrate the screen and devices via the web, and to design cross-device and large-screen interactions.
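The client-edge-server pattern can be illustrated with a minimal in-process sketch: personal devices publish input events to an "edge" hub, and the large-screen application subscribes to them. This only shows the message flow; Spaceify's actual APIs and deployment model differ.

```python
# Conceptual sketch of an edge hub routing walk-up device input
# to a shared large-screen application (not Spaceify's real API).

from collections import defaultdict

class EdgeHub:
    """Routes messages from personal devices to display-side subscribers."""

    def __init__(self):
        self.subscribers = defaultdict(list)   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers[topic]:
            cb(message)

# The big-screen app listens for controller events...
hub = EdgeHub()
events = []
hub.subscribe("controller", events.append)

# ...and each walk-up device publishes its taps and gestures to the hub.
hub.publish("controller", {"device": "phone-1", "action": "tap"})
print(events)  # [{'device': 'phone-1', 'action': 'tap'}]
```

In a real deployment the hub would sit on an edge server in the smart space, with devices connecting over the web so that no app installation is needed, which is what makes walk-up-and-use scenarios feasible.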

The workshop welcomes participants with diverse backgrounds, who will be grouped into multidisciplinary project teams. Engineering students should have prior experience from several programming projects. For designers and students from the social or human sciences, an understanding of programming basics will be an advantage but is not required.

Instructors: Prof. Giulio Jacucci, University of Helsinki, Finland & Petri Savolainen, HIIT, Finland

Giulio Jacucci

Giulio Jacucci is Professor of Computer Science at the University of Helsinki and director of the Network Society Programme at the Helsinki Institute for Information Technology (HIIT). He was a Professor in the Department of Design at Aalto University in 2009-2010 and is co-author of “Design Things” (MIT Press). His research field and competencies are in human-computer interaction, including mobile social computing, multimodal and implicit interaction, haptics and tangible computing, mixed reality, and persuasive technologies. He chaired ACM ITS in 2013 and has served as program chair for NordiCHI, full-papers chair for AVI, and on the CHI Design subcommittee. Prof. Jacucci coordinated the European FP7 ICT project BeAware, which created the award-winning EnergyLife, featured in Euronews, a playful and pervasive application to empower families in saving energy. He currently coordinates MindSee on “Symbiotic Mind Computer Interaction for Information Seeking”. He founded an international workshop series on Symbiotic Interaction, which he chaired in 2014 in Helsinki. Recently he contributed to inventing interactive intent modelling, a new interaction paradigm for information discovery published in Communications of the ACM, ACM CIKM, and other venues, and commercialised in the startup etsimo.com, where he serves as chairman of the board. He is also a co-founder and member of the board of directors of MultiTouch Ltd. (MultiTaction.com), the leading developer of interactive display systems based on proprietary software and hardware designs.

Homepage

Petri Savolainen

Petri Savolainen is a researcher at Helsinki Institute for Information Technology. He is one of the inventors and lead developers of Spaceify, an edge computing ecosystem for smart spaces that fuses smart spaces together with the Web. He is also a co-founder, and CEO of Spaceify Ltd., a newly-founded startup company that aims at commercializing the Spaceify ecosystem. He is currently working in the Street Smart Retail high impact initiative project of EIT Digital, developing Spaceify Games, a zero-configuration big-screen gaming platform, where the mobile web browser acts as the game controller.

Homepage


Workshop D: NEXT GENERATION VIRTUAL REALITY: PERCEPTION MEETS ENGINEERING

Maximum number of students to be enrolled in the workshop: 36

Virtual reality (VR) is a powerful technology that promises to change our lives unlike any other. By artificially stimulating our senses, our bodies are tricked into accepting another version of reality. VR is like a waking dream that could take place in a magical cartoon-like world, or could transport us to another part of the Earth or universe. It is the next step along a path that includes many familiar media, from paintings to movies to video games. We can even socialize with people inside these new worlds, and both the people and the worlds may be real or artificial. One of the greatest challenges is that we as developers become part of the system we are developing, making it extremely challenging to evaluate VR systems objectively. Human perception and engineering become intertwined in a complicated and fascinating way.

This class will be a week-long, condensed version of a new course offered at UIUC in recent semesters. It covers the fundamentals of virtual reality systems, including geometric modeling, transformations, graphical rendering, optics, the human visual, auditory, and vestibular systems, tracking systems, interface design, human factors, developer recommendations, and technological issues.

Students are expected to complete an implementation project that demonstrates an understanding of the fundamentals and follows best-practice recommendations. The learning outcomes are that students will know how to build a good VR experience, understand how VR works, know how to critically evaluate VR systems, and understand the fundamentals useful in shaping the future of VR. Students are expected to have a basic engineering or computer science background, but need not be at advanced levels of software engineering or mathematics. Prior experience with programming and matrix multiplication is minimally sufficient.
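As a glimpse of the tracking fundamentals the course covers, the sketch below integrates gyroscope angular-velocity readings into an orientation quaternion, the core step of IMU-based head tracking. It is a simplified illustration: real head-tracking systems also correct gyroscope drift using additional sensors such as accelerometers, magnetometers, or cameras.

```python
# Simplified IMU head tracking: dead-reckoning a gyroscope into
# an orientation quaternion (w, x, y, z). Drift correction omitted.

import math

def quat_multiply(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def integrate_gyro(q, omega, dt):
    """Advance orientation q by angular velocity omega (rad/s) over dt seconds."""
    wx, wy, wz = omega
    norm = math.sqrt(wx*wx + wy*wy + wz*wz)
    if norm == 0.0:
        return q                      # no rotation this step
    half = norm * dt / 2.0            # half the rotation angle
    s = math.sin(half) / norm         # scale axis into the quaternion
    dq = (math.cos(half), wx * s, wy * s, wz * s)
    return quat_multiply(q, dq)

# Rotate at 90 deg/s about the vertical (y) axis for one second,
# sampled at 100 Hz, starting from the identity orientation:
q = (1.0, 0.0, 0.0, 0.0)
for _ in range(100):
    q = integrate_gyro(q, (0.0, math.pi / 2, 0.0), 0.01)
# q now represents a 90-degree yaw: approximately (0.707, 0, 0.707, 0)
```

The small per-step errors of this dead reckoning accumulate over time, which is why commercial headsets fuse the gyroscope with complementary sensors, a topic treated in the course's tracking material.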

Instructors: Prof. Steve LaValle & Dr. Anna Yershova, UIUC, USA

Steve LaValle & Anna Yershova

Steve LaValle started working with Oculus VR in September 2012, a few days after their successful Kickstarter campaign, and was the head scientist up until the Facebook acquisition in March 2014. He developed patented, perceptually tuned head tracking methods based on IMUs and computer vision. He also led a team of perceptual psychologists to provide principled approaches to virtual reality system calibration and the design of comfortable user experiences. In addition to his work at Oculus, he is Professor of Computer Science at the University of Illinois, which he joined in 2001. He has worked in robotics for over 20 years and is known for his introduction of the Rapidly-exploring Random Tree (RRT) algorithm for motion planning and his 2006 book, Planning Algorithms.

Homepage

Anna Yershova also started at Oculus in September 2012 and was a Research Scientist there until 2014. She made fundamental contributions to the head tracking methods and core mathematical software used in the Oculus Rift and Samsung Gear VR. Since 2011, she has been a Lecturer in the Department of Computer Science at the University of Illinois, where she teaches virtual reality, C++, and data structures. From 2009 to 2011, she was a post-doctoral researcher at Duke University, working on computational geometry. In 2009, she completed a PhD in Computer Science at the University of Illinois. She has published over 20 research articles in the areas of robotics, applied mathematics, computational biology, and virtual reality. She has also co-authored math textbooks that have sold millions of copies and are used in schools throughout Russia and Ukraine.

Homepage