Publicly Available. Published by Oldenbourg Wissenschaftsverlag, March 27, 2018

Multiscreen Patterns – Interactions Across the Borders of Devices

Dominick Madden, Horst Schneider and Kirstin Kohler
From the journal i-com

Abstract

Design patterns are solutions for common design problems used in a number of fields, including architecture, software development and user experience design. We compiled a pattern library for the use of gesture-enabled interactions between different devices with screens, the so-called multiscreen context. This library provides simple and intuitive gestures for connecting and disconnecting devices wirelessly as well as gestures for exchanging data between these devices, such as swiping a document from one’s tablet to the tablets of colleagues in the same room. The library is the result of a 2.5-year project in cooperation with three small and medium-sized companies. We compiled the library by conducting an intense literature survey in parallel with several iterations of practical software projects in the multiscreen context. The literature survey inspired the practice-oriented project work, and being involved in software projects enabled us to identify challenges in the design and implementation of the gesture set. In this way we gained valuable insight for a comprehensive description of gestural interactions.

With our set of patterns, we aim to support interaction designers in choosing the appropriate gesture for their given context. The patterns serve as inspiration by showing the different possibilities, but also provide guidance on how to design and implement a selected gesture for a given context. They help in designing the details of an interaction by breaking it down into its smallest parts. To support the developers of these interactions, our pattern descriptions are enriched with an Android library containing lifecycle events and the necessary gesture recognition logic. This paper provides an overview of the pattern library. In addition, the structure and the usage of the library are described in more detail by means of one sample pattern. The pattern library is openly accessible.

1 Introduction

As a study from Google [6] pointed out, consumers divide their daily time in front of screens between different devices such as smartphones, tablets, and PCs. In the scope of this paper this is called the multiscreen context. Users operate the different devices concurrently or in sequence to achieve their respective goals. Whereas interactions on single devices have improved significantly during the last decade since the introduction of touch devices, interactions across the borders of devices are still complicated. For example, they often require access to file systems and networks and demand action at the level of the operating system. With our work we aim to provide gestures that support the same smooth and intuitive interactions for the multiscreen context as exist on today’s tablets and smartphones, where users can easily manipulate files and data by touching or moving them. With this we build on Weiser’s vision of beautiful seams [2], [17] and provide aesthetic interactions across the borders of devices. We capture the knowledge about these interactions as design patterns. Design patterns are an established format for documenting known solutions to design problems. They are well known in various fields such as user experience design [1] and software engineering [4]. We compiled a pattern library for the usage of gesture-enabled interactions between different devices with screens. Each pattern in this library describes one gesture, giving guidance for interaction designers as well as developers. In this paper we introduce the pattern library and exemplify its structure with one sample pattern named ‘Swipe to Give’.

2 The Multiscreen Pattern Library

2.1 Derivation of the Library

Throughout our project we identified 25 patterns in total. We derived them from an intense literature survey as well as from our practical work with our partnering companies, 3M5. Media GmbH,[1] AMERIA GmbH,[2] and CAS SOFTWARE AG.[3] Together with them we realized two demonstrator scenarios for the multiscreen context. One scenario supported a futuristic configuration and ordering of a new car, the other a planning process in a production-floor setting. Both application scenarios incorporated several multiscreen gestures, as exemplified in Figure 1 for the car configuration and ordering scenario. The pattern library evolved during the subsequent elaboration of these scenarios: we started with a set of pattern candidates mainly derived from the literature and further refined their descriptions during the practical work on the interaction design and implementation of the demonstrator scenarios, according to the feedback we got from our partners.

Figure 1: Customer journey of a car configuration in a future concept store, implemented as a demonstrator scenario for a multiscreen context.

2.2 Structure of the Library

Each pattern is named after a gesture (e. g., swipe, bump, shake) and a reaction of the devices as a consequence of the gesture (e. g., give, take, connect). ‘Bump to Exchange’, for example, describes a gesture of bumping two devices against each other and thereby exchanging data between them. A possible use case for this pattern could be the exchange of business cards.
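As an illustration only, this naming convention can be expressed in code as a combination of a gesture and a reaction. The following Java sketch uses hypothetical names (PatternName, Gesture, Reaction) that are not part of the published Android library:

```java
/** Sketch: pattern names as gesture/reaction pairs (illustrative, not the library's API). */
public final class PatternName {

    // Only a subset of the gestures and reactions from Table 1 is listed here.
    public enum Gesture { SWIPE, BUMP, STITCH, APPROACH, SHAKE, PINCH, THROW }
    public enum Reaction { GIVE, TAKE, EXCHANGE, EXTEND, CONNECT, DISCONNECT }

    private final Gesture gesture;
    private final Reaction reaction;

    public PatternName(Gesture gesture, Reaction reaction) {
        this.gesture = gesture;
        this.reaction = reaction;
    }

    @Override
    public String toString() {
        // e.g. new PatternName(Gesture.BUMP, Reaction.EXCHANGE) -> "Bump to Exchange"
        return capitalize(gesture.name()) + " to " + capitalize(reaction.name());
    }

    private static String capitalize(String s) {
        return s.charAt(0) + s.substring(1).toLowerCase();
    }
}
```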

Each pattern describes the detailed interaction between user and devices and provides guidance for implementation. Therefore, the contribution of our pattern library is twofold: We support interaction designers as well as developers. An overview of all identified patterns is provided in Table 1. All comprehensive pattern descriptions including references to literature, examples and links to the GitHub repository are accessible via the web page: http://multiscreen-patterns.uxid.de/.

Table 1

Multiscreen Pattern Library.

Give: Bump to Give, Swipe to Give, Stitch to Give, Approach to Give, Throw to Give, Dump, Nudge, Give Through Body
Take: Stitch to Take, Bump to Take, Approach to Take, Grab a Part, Grab an Object, Picking up an Object
Exchange: Bump to Exchange, Exchange Through Body
Extend: Pinch to Extend, Stitch to Extend, Appose to Extend, Approach to Extend
Connect: Shake to Connect, Stitch to Connect, Bump to Connect, Approach to Connect, Leave to Disconnect

A study conducted by Google [6] revealed that people use devices in the multiscreen context in two main modes: either simultaneously or sequentially. This finding is also reflected in the structure of the library. We categorized our patterns into five groups, as shown in the row titles of Table 1. Three of them specify the aforementioned actions ‘Give’, ‘Take’ and ‘Exchange’ for the transfer of data between devices. Gestures of these three categories support sequential task completion in solitary use as well as in cooperative settings, e. g. sending a document via swipe to the smartphones of meeting participants sitting around a table. Patterns in the category ‘Extend’ allow combining different devices in a simultaneous mode, for example extending a picture across the screens of adjacent smartphones. Gestures that connect or disconnect devices to and from each other are summarized in the fifth category, named ‘Connect’. They can be combined with gestures of all other categories.

2.3 Theoretical Foundation

To reach the aforementioned goal of intuitive gestures between devices, we built on previous work in the area of tangible and natural interactions. In particular, we used the conceptual framework of blended interaction and reality-based interaction provided by Jetter and colleagues [12]. This framework takes into consideration that humans combine the experience gained by interacting with physical, real-world objects, or through communication with people in social interactions, with the experience of digital interactions. It refers to the concept of ‘embodied cognition’ and the integration of ‘concepts through blends’ [11]. During our work we focused on these gestures because rationalizing them in terms of the blended interaction framework makes their intuitiveness more evident. For each pattern we refer to the corresponding image schema [10], which defines the generic space, and additionally specify the input space according to the reality-based interaction scheme. Furthermore, our pattern descriptions use the terminology provided by Terrenghi et al. [16] to define display types. They also specify the distance of interactions using Hall’s classification of proxemics [13]. As a more comprehensive explanation of the theoretical foundation of our work would exceed the scope of this article, we refer to our web page for more details.

3 Pattern Walkthrough – Exemplified by ‘Swipe to Give’

Every multiscreen pattern is structured in a similar manner in order to ease designers and programmers into using the patterns for their applications. Each pattern consists of five sections; four of them answer a specific question that might arise while considering the use of multiscreen interactions for a use case (What, How, When, Why). With this structure we align with other popular pattern libraries like Tidwell’s “Designing Interfaces” (http://designinginterfaces.com/). The structure of this section mirrors the components used in our pattern descriptions. In addition to a general explanation of the content of each section, one sample pattern (Swipe to Give) is used to further clarify its meaning. This pattern, among others, was implemented at the University of Applied Sciences Mannheim for the Android ecosystem and illustrates all aspects of the patterns.

Some of the sections are aimed at interaction designers (sections How and When), others justify the pattern in a scientific context (section Why). The final part, labeled Technical Details, provides aid to the programmers implementing a multiscreen gesture; it is specific to our interaction pattern library and provides a bridge to the implementation.

3.1 What

Figure 2: Swipe to Give.

The first and shortest section of each of our patterns aims to give an overview. It states a general problem that could be solved using the interaction described in the pattern document. The solution to the problem is described briefly to allow people to quickly decide whether the pattern is applicable to their specific use case. A sequence of images showing the interaction further eases the decision-making process (Figure 2).

In the case of Swipe to Give the problem posed is that a user wants to share a piece of data (e. g. an image) with a second device or user.

3.2 How

This section describes the interaction in more detail by presenting all user inputs with the corresponding reactions of all involved devices. It aims to help designers come up with adequate inputs and feedback to provide a functional, comprehensive and satisfying interaction for the user. In order to achieve this we broke the interaction down into its smallest components. On the first level we describe the user’s actions and the system’s reactions for the sender and receiver device.

In the case of ‘Swipe to Give’ this means:

  1. User action: the ‘swipe’ on the sender device towards its border, with a specified speed.

  2. Sender device reaction: the movement of the data towards the border of the device, following the user’s touch-and-move gesture.

  3. Receiver device reaction: the appearance of the data on the receiver device, entering the screen from the direction of the sender device.

At the second level of detail, these interactions are specified further, each having its own triggers, rules and feedback hooks. This decomposition of an interaction is derived from Microinteractions [15]. In order to instruct designers when and where to provide feedback to the user, we introduced Atomic Interactions: a way to dissect an interaction more granularly and regard every smallest part separately. By doing so we give very detailed guidance for the interaction design.

In the example of ‘Swipe to Give’ we identify four Atomic Interactions:

  1. The moment the user touches the display (Touch)

  2. The movement of the finger (Move)

  3. The moment the user lifts his finger from the display (Release)

  4. The moment the data is received by the secondary device (Receive)

Each of these Atomic Interactions consists of its own triggers, rules and feedback. This allows a systematic approach to designing the feedback needed to accomplish the given task (see Table 2).

Our patterns further simplify this process by providing a decomposition into the Atomic Interactions identified for the given multiscreen interaction (see Table 2). The message that the feedback needs to convey to the user is described explicitly in the rules. In addition, a comprehensive example of adequate feedback is given. Generally, feedback serves to subtly inform users of what they have done by interacting with the application in that manner and what they need to do next to accomplish their task.

Table 2

Atomic Interactions (Swipe to Give).

Name | Trigger | Rules | Feedback
Touch | Touch down event on the screen | Touch occurred on data object | E. g. lift data object
Move | Touch move event on the screen | Touch has occurred, Release has not occurred | E. g. data object follows finger
Release | Touch up event on the screen | Swipe length OK, swipe duration OK, swipe orientation OK | E. g. data object flies off screen
Receive | Data object received | Data is depictable | E. g. data object enters recipient’s screen
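For developers, the trigger/rules/feedback decomposition of Table 2 can be mirrored directly in code. The following minimal Java sketch is an illustration under assumed names (AtomicInteraction and the use of java.util.function.Predicate); it is not the API of the published Android library:

```java
import java.util.List;
import java.util.function.Predicate;

/** Minimal sketch of one Atomic Interaction (names are illustrative, not the library's API). */
public class AtomicInteraction<E> {
    private final String name;                 // e.g. "Release"
    private final Predicate<E> trigger;        // e.g. "touch up event on the screen"
    private final List<Predicate<E>> rules;    // e.g. swipe length / duration / orientation OK
    private final Runnable feedback;           // e.g. "data object flies off screen"

    public AtomicInteraction(String name, Predicate<E> trigger,
                             List<Predicate<E>> rules, Runnable feedback) {
        this.name = name;
        this.trigger = trigger;
        this.rules = rules;
        this.feedback = feedback;
    }

    /** Returns true if the event matches the trigger and all rules hold; only then the feedback runs. */
    public boolean handle(E event) {
        if (!trigger.test(event)) {
            return false;                      // not this interaction's trigger
        }
        for (Predicate<E> rule : rules) {
            if (!rule.test(event)) {
                return false;                  // a rule failed, so no feedback is played
            }
        }
        feedback.run();                        // rules hold: play the feedback of Table 2
        return true;
    }
}
```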

We suggest giving the feedback a tangible feel to further improve the user experience of the interaction. This can be accomplished by applying certain parts of Google’s Material Design guidelines [7] and Disney’s 12 animation principles [5], which draw on real-world metaphors. Swipe to Give’s ‘Move’ animation serves as an example: after touching an image (the data object), the user moves his finger and the image follows it until he lifts his finger off the screen. By applying Squash and Stretch, according to long-time Disney animator Frank Thomas one of the most important techniques in animation, the user can feel the acceleration of the image he is about to release. Not only does the animation look more appealing, it also encourages the user to swipe with sufficient velocity. The user knows that throwing an object in the real world requires a certain velocity. Feeling and seeing the acceleration and velocity on the device guides users in a very unintrusive way to swipe correctly.
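As a rough illustration of how such tangible feedback could be realized on Android, the following sketch stretches a dragged view along the swipe axis in proportion to its current velocity and lets it snap back on release. The class name, the threshold values and the idea of feeding in a velocity (e.g. from a VelocityTracker) are assumptions made for this example, not part of our library:

```java
import android.view.View;

/** Sketch: stretch a dragged view along the swipe axis proportionally to its velocity. */
public final class SquashAndStretch {

    private static final float MAX_STRETCH = 0.25f;        // stretch at most 25 % (assumed value)
    private static final float REFERENCE_VELOCITY = 4000f; // px/s that maps to MAX_STRETCH (assumed value)

    private SquashAndStretch() {}

    /** Call on every move event with the current horizontal velocity of the drag. */
    public static void apply(View draggedView, float velocityX) {
        float factor = Math.min(Math.abs(velocityX) / REFERENCE_VELOCITY, 1f) * MAX_STRETCH;
        // Stretch along the movement axis, squash on the orthogonal axis to preserve the "volume".
        draggedView.setScaleX(1f + factor);
        draggedView.setScaleY(1f - factor);
    }

    /** Call on release to animate the view back to its resting shape. */
    public static void reset(View draggedView) {
        draggedView.animate().scaleX(1f).scaleY(1f).setDuration(150).start();
    }
}
```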

This and other principles can be applied to many feedback animations to improve the user experience of the entire application.

Through this very detailed decomposition, the multiscreen patterns provide a framework for laying out a best-case scenario that leads to a highly intuitive interaction design when implemented in all its aspects. Designers are welcome to pick and choose which Atomic Interactions they want to concentrate on.

3.3 When

This section provides all the information needed to decide whether or not a pattern is appropriate for the given context and use case. To this end, we give positive and negative examples along with checkboxed properties of the recommended context, one of which is whether the interaction is intended for a single user or for collaboration with others. In the case of Swipe to Give, both ‘Single User’ and ‘Collaboration’ would be checked, since the gesture can be used to send something to a bigger, stationary device as well as to another user.

3.4 Why

The question answered in this section is ‘What makes the gesture appropriate?’ It covers the rationale for the gesture’s intuitiveness, but also refers to research on the pattern and to its usage in products.

Swipe to Give, for example, refers to the metaphor of “throwing an object through the air”. The gesture also appears in several videos featuring products or visions, which are listed in this section as well.

3.5 Technical Details

Figure 3: Swipe Recognition (used for the Swipe to Give pattern).

Each pattern description provides developers implementing multiscreen interactions with general guidance concerning the technical details of the pattern as well as an overall frame of reference for applications that combine multiple patterns.

From a review of research papers describing systems with multiscreen interactions, we extracted a general model that can be used to describe all identified multiscreen patterns. It consists of an abstract lifecycle with four lifecycle events: (1) connecting the involved devices, (2) selecting a piece of data to send, (3) transferring that piece of data and ultimately (4) disconnecting the devices.
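A minimal sketch of this lifecycle, assuming a plain listener interface with one callback per event (the names are illustrative and may differ from the published Android library), could look as follows:

```java
/** Sketch of the abstract multiscreen lifecycle (illustrative names, not the published library's API). */
public interface MultiscreenLifecycleListener {

    /** The four lifecycle events a recognized gesture can trigger. */
    enum Event { CONNECT, SELECT, TRANSFER, DISCONNECT }

    void onConnect(String remoteDeviceId);            // (1) the involved devices are paired
    void onSelect(Object dataObject);                 // (2) a piece of data to send is chosen
    void onTransfer(Object dataObject, String toId);  // (3) the data is transferred to a device
    void onDisconnect(String remoteDeviceId);         // (4) the devices are unpaired
}
```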

Each multiscreen pattern describes a gesture that can be used to trigger one or more of these events. This is similar to the approach Dachselt and Buchholz [3] took for their multiscreen application. Rekimoto [14] and Holmquist et al. [9] also implemented aspects of the lifecycle and presented early examples of gesture-based interactions.

The recognition algorithm for each gesture is described as a state diagram acting on sensor events. Figure 3 shows, as an example, how to implement a Swipe gesture by monitoring ‘down’, ‘up’ and ‘move’ events on a touchscreen, which correspond to the Atomic Interactions introduced above, and firing a Swipe event on successful recognition. Thresholds for velocity, swipe length and direction can further restrict the recognition of the Swipe gesture, as used in the Swipe to Give pattern of Figure 2 and explained in the preceding sections.
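The following Java sketch illustrates such a state machine for Android touch input: it tracks the ‘down’, ‘move’ and ‘up’ events of a MotionEvent stream and fires a Swipe event when the length and duration rules hold. The class name, the listener interface and the threshold values are assumptions made for this example and are not taken from the published library:

```java
import android.view.MotionEvent;

/** Minimal sketch of the Swipe recognition state machine of Figure 3 (thresholds are assumed values). */
public class SwipeDetector {

    public interface OnSwipeListener { void onSwipe(float directionDegrees); }

    private static final float MIN_LENGTH_PX  = 200f;  // minimum swipe length (assumption)
    private static final long  MAX_DURATION_MS = 300;  // maximum swipe duration (assumption)

    private final OnSwipeListener listener;
    private float downX, downY;
    private long downTime;
    private boolean tracking;                           // state: between "down" and "up"

    public SwipeDetector(OnSwipeListener listener) { this.listener = listener; }

    /** Feed all touch events of the view displaying the data object into this method. */
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:               // Atomic Interaction "Touch"
                downX = event.getX();
                downY = event.getY();
                downTime = event.getEventTime();
                tracking = true;
                return true;
            case MotionEvent.ACTION_MOVE:               // Atomic Interaction "Move"
                return tracking;                        // feedback (e.g. dragging the object) happens here
            case MotionEvent.ACTION_UP:                 // Atomic Interaction "Release"
                if (!tracking) return false;
                tracking = false;
                float dx = event.getX() - downX;
                float dy = event.getY() - downY;
                long duration = event.getEventTime() - downTime;
                boolean lengthOk   = Math.hypot(dx, dy) >= MIN_LENGTH_PX;
                boolean durationOk = duration <= MAX_DURATION_MS;  // length/duration together imply velocity
                if (lengthOk && durationOk) {
                    // The direction could additionally be restricted, e.g. to swipes towards a screen border.
                    listener.onSwipe((float) Math.toDegrees(Math.atan2(dy, dx)));
                    return true;
                }
                return false;
            default:
                return false;
        }
    }
}
```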

Generally, two types of patterns can be distinguished: Simple Gestures, like Swipe to Give, and Synchronous Gestures, like Stitch to Extend. The pattern Stitch to Extend combines the screens of two adjacent smartphones or tablets into one extended screen by moving a finger, while touching, across the border of the first screen onto the second screen. Figure 4 illustrates this interaction. Simple Gestures are implemented by monitoring one or more sensors on a single device, while Synchronous Gestures can only be recognized by communicating with another device to check whether the sensor data aligns. Stitch to Extend is only a valid gesture if it is detected on both devices within a defined, very short time delay and the gesture on the second device enters from the border adjoining the first screen. The nature of the different gestures constrains the possibilities of their application in the lifecycle: Simple Gestures can be utilized for all lifecycle events, whereas Synchronous Gestures only work when a connection between the two devices already exists. This is necessary to enable the aforementioned check of sensor data between the two devices.

Figure 4: Stitch to Extend (a variation also described by Hinckley et al. [8]).

Synchronous Gestures can typically be described as a combination of two or more Simple Gestures, as illustrated in the following example.

In the ‘Stitch to Extend’ gesture, two tablets are connected to a web server. Each tablet performs a simple Stitch recognition and sends its Stitch events to the web server. An additional constraint check comparing the direction and duration attributes of the Stitch events allows the recognition of a synchronous Stitch gesture, which, for example, can be used to transfer files from one device to another (see the sketch after this paragraph). These technical descriptions are not tied to a specific technology and can be applied to all devices that publish the required touchscreen events. This is especially important because devices with different operating systems can be present in the multiscreen context, and we did not want to limit our contribution to a single operating system. In addition to these generic instructions, each pattern links to the project’s GitHub repository,[4] where developers can consult the Android-specific Java implementation as an example and find further, more specific information.
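A minimal sketch of such a server-side constraint check, assuming that each device reports its Stitch event with a timestamp, a direction and the border it crossed (all names and threshold values are illustrative, not the published implementation), could look like this:

```java
/** Sketch of the server-side constraint check for a synchronous Stitch (names and thresholds assumed). */
public class StitchMatcher {

    /** A Stitch event as reported by one device. */
    public static class StitchEvent {
        final String deviceId;
        final long timestampMs;        // when the stitch crossed the screen border
        final float directionDegrees;  // exit/entry direction relative to the screen
        final String enteredBorder;    // "LEFT", "RIGHT", "TOP" or "BOTTOM"

        StitchEvent(String deviceId, long timestampMs, float directionDegrees, String enteredBorder) {
            this.deviceId = deviceId;
            this.timestampMs = timestampMs;
            this.directionDegrees = directionDegrees;
            this.enteredBorder = enteredBorder;
        }
    }

    private static final long MAX_DELAY_MS = 500;      // assumed "very short time delay"
    private static final float MAX_ANGLE_DIFF = 30f;   // assumed tolerance for the direction

    /** Returns true if the two events plausibly form one continuous stitch across both screens. */
    public boolean matches(StitchEvent exit, StitchEvent entry) {
        if (exit.deviceId.equals(entry.deviceId)) return false;  // events must come from two devices
        boolean inTime = Math.abs(exit.timestampMs - entry.timestampMs) <= MAX_DELAY_MS;
        boolean sameDirection =
                Math.abs(exit.directionDegrees - entry.directionDegrees) <= MAX_ANGLE_DIFF;
        // The entry border must be the one adjoining the first screen; here we only check that it is set.
        boolean borderKnown = entry.enteredBorder != null;
        return inTime && sameDirection && borderKnown;
    }
}
```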

4 Conclusion

With our pattern library we aim to spread knowledge about more intuitive and easy-to-use interactions across the borders of devices. We strongly believe that the identified and described gestures can contribute significantly to smoother and more seamless interactions. At the same time, the practical work in the project with our industry partners showed that the number of combinations of different operating systems makes implementing the gestures a non-trivial effort. We hope that future versions of operating systems will further support the transfer, exchange and connection between devices, which would simplify the implementation significantly. As a result of our work, we also hope to find the described gestures in future applications and thereby gain more insights about the appropriate contexts and types of application they are suited for.

Funding statement: The research was funded by the BMBF and conducted in the project SysPlace (grant number: 01IS14018D).

About the authors

Dominick Madden

Dominick Madden studied computer science at the University of Applied Sciences Mannheim where he wrote his thesis on the subjects of animation and Microinteractions in the Multi-screen context. He is now part of the User Experience and Interaction Design lab led by professor Kirstin Kohler. There he works on research projects concerning user experience in the mobile, web development and IoT fields.

Horst Schneider

Horst Schneider worked as a research assistant at the University of Applied Sciences Mannheim from 2015 to 2017. In Sysplace he focused on technical requirements, opportunities, and constraints of the description and implementation of multiscreen interaction patterns. He created a model for describing gesture-based interactions and graduated with a Master’s Degree. He now works as a software developer in Heidelberg.

Kirstin Kohler

Prof. Kirstin Kohler is Professor of user experience design and interaction design at the University of Applied Sciences Mannheim. With her research group she participates in several research projects. At the university she is also responsible for an interdisciplinary course on design and innovation, and she heads the new maker space. https://de.linkedin.com/in/kirstin-kohler-b6ab3b94.

References

[1] Jan O. Borchers. 2000. A pattern approach to interaction design. In Proceedings of the 3rd Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, pp. 369–378. ACM. https://doi.org/10.1145/347642.347795

[2] Matthew Chalmers and I. MacColl. 2003. Seamful and seamless design in ubiquitous computing. In Workshop At the Crossroads: The Interaction of HCI and Systems Issues in UbiComp, Vol. 8.

[3] Raimund Dachselt and R. Buchholz. 2009. Natural throw and tilt interaction between mobile phones and distant displays. In CHI ’09 Extended Abstracts on Human Factors in Computing Systems, pp. 3253–3258. ACM. https://doi.org/10.1145/1520340.1520467

[4] Erich Gamma, R. Helm, R. Johnson and J. Vlissides. 1995. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA.

[5] Frank Thomas and O. Johnston. 1981. The Illusion of Life: Disney Animation.

[6] Google Inc. 2012. The New Multi-screen World: Understanding Cross-platform Consumer Behavior. http://www.google.com/think/research-studies/the-new-multi-screen-world-study.html (accessed 20 December 2017).

[7] Google Inc. Material Design Guidelines. https://material.io/guidelines/ (accessed 2 January 2018).

[8] Ken Hinckley, G. Ramos, F. Guimbretiere, P. Baudisch and M. Smith. 2004. Stitching: pen gestures that span multiple displays. In Proceedings of the Working Conference on Advanced Visual Interfaces, pp. 23–31. ACM. https://doi.org/10.1145/989863.989866

[9] Lars Holmquist, F. Mattern, B. Schiele, P. Alahuhta, M. Beigl and H. W. Gellersen. 2001. Smart-Its Friends: A technique for users to easily establish connections between smart artefacts. In Ubicomp 2001: Ubiquitous Computing, pp. 116–122. Springer, Berlin/Heidelberg. https://doi.org/10.1007/3-540-45427-6_10

[10] Jörn Hurtienne and J. H. Israel. 2007. Image schemas and their metaphorical extensions: intuitive patterns for tangible interaction. In Proceedings of the 1st International Conference on Tangible and Embedded Interaction, pp. 127–134. ACM. https://doi.org/10.1145/1226969.1226996

[11] Manuel Imaz and D. Benyon. 2007. Designing with Blends. MIT Press. https://doi.org/10.7551/mitpress/2377.001.0001

[12] Hans-Christian Jetter, H. Reiterer and F. Geyer. 2014. Blended Interaction: understanding natural human–computer interaction in post-WIMP interactive spaces. Personal and Ubiquitous Computing, 18(5), pp. 1139–1158. https://doi.org/10.1007/s00779-013-0725-4

[13] Nicolai Marquardt and S. Greenberg. 2015. Proxemic Interactions: From Theory to Practice. Synthesis Lectures on Human-Centered Informatics, 8(1), pp. 1–199. https://doi.org/10.2200/S00619ED1V01Y201502HCI025

[14] Jun Rekimoto. 1996. Tilting operations for small screen interfaces. In Proceedings of the 9th Annual ACM Symposium on User Interface Software and Technology, pp. 167–168. ACM. https://doi.org/10.1145/237091.237115

[15] Dan Saffer. 2013. Microinteractions: Designing with Details. O’Reilly Media, Inc.

[16] Lucia Terrenghi, A. Quigley and A. Dix. 2009. A taxonomy for and analysis of multi-person-display ecosystems. Personal and Ubiquitous Computing, 13(8), pp. 583–598. https://doi.org/10.1007/s00779-009-0244-5

[17] Mark Weiser. 1994. Ubiquitous computing. In ACM Conference on Computer Science, p. 418. https://doi.org/10.1145/197530.197680

Published Online: 2018-03-27
Published in Print: 2018-04-25

© 2018 Walter de Gruyter GmbH, Berlin/Boston
