Abstract
See-through type augmented reality (AR) head mounted displays (HMDs) allow for a highly immersive experience and are increasingly used in the e-commerce domain. Current AR systems mainly focus on improving the experience of the user wearing the HMD (HMD user), which excludes bystanders without an HMD (non-HMD users) from the AR experience. We propose sharing the AR experience between the HMD user and the non-HMD user by visualizing and synchronizing the virtual objects on a smartphone, thereby enabling the non-HMD user to see and interact with the virtual objects directly on their smartphone. We also propose a Pre-experience system, which provides users with an AR-based virtual product experience to enhance online shopping. We share this experience between the HMD user and the non-HMD user to illustrate our approach: the HMD user can interact with a virtual product when shopping online to get a better sense of the real product, and the non-HMD user can have a similar AR experience through their smartphone.
1 Introduction
Recently, see-through type head mounted augmented reality (AR) devices have been developing rapidly, allowing users to see virtual objects embedded in the physical environment [1]. They are already used in the e-commerce domain, for example in AR try-on systems and in product recommendation based on environment information, providing an immersive and interactive experience for online shopping [2]. In such cases, a user wearing a see-through type head mounted display (HMD user) can see augmented information through the wearable device and have an immersive AR experience.
Although a see-through type HMD allows for a highly immersive experience, bystanders without an HMD (non-HMD users) are excluded from that experience [3]. Most current AR systems still focus mainly on enhancing the experience of the HMD user, so the non-HMD user cannot enjoy the AR experience to the same extent. Excluding the non-HMD user can lead to a complete isolation of the HMD user and could lower the social acceptance of the technology [4]. Therefore, sharing the experience between the HMD user and the non-HMD user is essential for improving the experience on both sides, especially for the non-HMD user.
This paper focuses on sharing the AR experience between the HMD user and the non-HMD user, so that they can share the view of the same virtual object [5]. Owing to device limitations, people without an HMD cannot experience the same immersive augmented information as the HMD user, which makes it difficult for the HMD user to share the AR experience and communicate with them [6]. An approach is therefore needed to resolve the asymmetry of information and experience between the HMD user and the non-HMD user. The focus of our work is on eliminating this gap and enabling the non-HMD user to participate in the AR experience that the HMD user already enjoys.
2 Goal and Approach
The goal of our research is to enable the HMD user to share augmented reality information with the non-HMD user and to enable the non-HMD user to interact with the same virtual objects.
We propose a system that shares the augmented reality experience, allowing the non-HMD user to have the same AR experience as the HMD user (Fig. 1).
We use a smartphone to visualize the virtual objects for the non-HMD user. By synchronizing the HMD user's view with the smartphone, the non-HMD user can experience the holograms directly on their smartphone. In addition, the non-HMD user can share the AR experience on social media in the form of a video of the rendered AR objects.
To increase the engagement and enjoyment of the non-HMD user and the communication between the HMD user and the non-HMD user, we enable both of them to experience the same form of interaction with virtual objects in a co-located environment. Since users are not isolated from the real world, natural communication cues (e.g. speech, gestures and body position) can be used effectively during sharing activities [7]. Previous work showed that this kind of interaction, where multiple users interact with the same virtual elements embedded in the physical environment, has the potential to increase enjoyment and social behavior [8].
We also propose our Pre-experience system, which provides an AR experience of a product before it is purchased online, to further explore the potential of our sharing system and show the possibilities for a shared AR shopping experience.
3 Sharing AR Information with Non-HMD User
3.1 General System Design
The main goal of this work is to investigate approaches that give the non-HMD user an augmented reality experience similar to that of the HMD user. In this design concept, there are two kinds of users: the user who wears a head mounted display to view and interact with AR applications, and the user who does not have an HMD. This is motivated by the fact that HMDs, at least those with depth sensing, are still quite expensive, so the non-HMD user needs an alternative way to receive the same augmented information, communicate with the HMD user, and complete tasks together.
The augmented reality application gives the HMD user augmented information, and our research target is to enable the non-HMD user to receive the same information through another device. Furthermore, we want to allow the non-HMD user to interact with the augmented reality objects in order to have a similarly interactive experience. In this research, we focus on communication between the HMD user and the non-HMD user using smartphones.
We designed two methods for sharing the augmented reality view: "view sharing" and "duplicate generation". View sharing shares exactly the same view as the HMD user, while duplicate generation uses the smartphone to generate an interactive copy of the augmented reality object. The features of each method are shown in Table 1.
3.2 View Sharing
In this method, we use simple and efficient live streaming to achieve our goal. The HMD user can see and interact with the augmented reality object. Meanwhile, the view of the HMD user is captured by the mixed reality capture function, rendered into a live stream video, and shared with the smartphone over the network. In this way, the non-HMD user sees exactly what is happening in the HMD user's view, receives the same augmented information, and the two can communicate with each other.
One of the challenges of this method was deciding on a suitable size for the shared view of the HMD user. Fortunately, the field of view of the HoloLens, the HMD we used, is relatively limited, so we can render the whole view of the HMD user into the live stream video (Fig. 2).
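Mixed reality capture is a HoloLens platform feature, so the capture itself is not written by hand. As a rough illustration of the idea only, the following Unity sketch grabs each rendered frame of the AR camera and compresses it so that a streaming layer (such as the one sketched in Sect. 6.2) could send it to the smartphone; the component and its fields are assumptions for illustration, not part of our actual implementation.

```csharp
using UnityEngine;

// Illustrative sketch only: copy each rendered frame of the AR camera to the
// CPU and compress it to JPEG, so a network layer could stream it to a phone.
[RequireComponent(typeof(Camera))]
public class ViewFrameGrabber : MonoBehaviour
{
    public System.Action<byte[]> OnFrameReady;   // hook for a streaming layer

    Texture2D cpuFrame;

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(src, dest);                // keep showing the frame on the HMD

        if (cpuFrame == null || cpuFrame.width != src.width)
            cpuFrame = new Texture2D(src.width, src.height, TextureFormat.RGB24, false);

        // Read back the rendered holograms; on the real device, MRC itself
        // composites them with the photographic video camera.
        RenderTexture.active = src;
        cpuFrame.ReadPixels(new Rect(0, 0, src.width, src.height), 0, 0);
        cpuFrame.Apply();
        RenderTexture.active = null;

        byte[] jpg = cpuFrame.EncodeToJPG(75);   // compressed frame for streaming
        if (OnFrameReady != null) OnFrameReady(jpg);
    }
}
```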
3.3 Duplicate Generation
In this second method, the non-HMD user receives a copy of the augmented reality objects the HMD user is interacting with. Here, the smartphone acts as a window that connects the non-HMD user to the augmented reality world. The non-HMD user can use their smartphone to request a copy of the augmented world information from the HMD; this needs to be done in the same network environment. After that, the smartphone itself becomes an augmented reality device, and the target object is generated independently in the smartphone's view. The duplicate of the object is independent of the one in the HMD view.
The key point of this method is that the non-HMD user can use the touch screen of the smartphone to interact with the object through a click function, so that they can have an interaction experience similar to that of the HMD user. To enable interaction on the smartphone, we designed a cursor function: the cursor on the smartphone side is fixed at the center of the screen. With the help of the cursor, the non-HMD user can click the object and get more information (Fig. 3).
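A minimal sketch of this screen-center cursor, assuming plain Unity physics raycasting and a collider on each duplicated object; the message name used for the click is a placeholder, not the exact call in our implementation.

```csharp
using UnityEngine;

// Sketch of the smartphone-side click: on a tap, cast a ray through the
// centre of the screen (where the cursor is drawn) and notify whatever
// duplicated object it hits. "OnClicked" is a placeholder message name.
public class CenterCursorClicker : MonoBehaviour
{
    public Camera arCamera;   // the ARKit-driven camera on the phone

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Ray ray = arCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit, 10f))
            hit.collider.SendMessage("OnClicked", SendMessageOptions.DontRequireReceiver);
    }
}
```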
4 Pre-experience
E-commerce is currently developing rapidly. Consumers can easily browse webpages full of merchandise and purchase items without going to a physical shop. However, online shopping has one main drawback: users cannot physically experience the merchandise. In contrast, when consumers visit a physical shop, samples are available so they can learn about the merchandise in detail. Online consumers therefore risk purchasing items that are not ideal for them, which may cause disappointment and even complete withdrawal from e-commerce. To address the problem that consumers cannot try out products when shopping online, we propose Pre-experience, which lets consumers get a sense of the real product through augmented reality before they make a purchase online.
Some previous work has used 3D models to represent merchandise and give consumers a broad understanding of the real item before purchase, such as IKEA Place [9] and Houzz [10]. In these works, however, users can only place the 3D models or change their size, which is not enough for users to learn about the merchandise before purchasing, because in most cases a consumer's consideration of merchandise goes beyond appearance and size. There should be interactive behavior between consumers and virtual products, such as operating the product and seeing the operating status of electrical appliances.
Pre-experience allows the HMD user to interact with a virtual 3D product to get a sense of the real product before purchasing it online. It enables the user to manipulate the virtual product much like the real product and to receive feedback from it. We built a prototype of Pre-experience on a see-through type AR HMD, since the HMD provides an immersive AR experience that gives users a relatively realistic feeling.
The user can simulate how the product is operated in this online shopping Pre-experience. For example, if the user powers on a virtual kettle, the water starts boiling and water vapor can be seen escaping from the kettle. In our system, we simulate the process of boiling water by animating the 3D virtual product; each interaction with the virtual product activates a different animation.
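The following sketch shows how such an interaction could drive the product's animation in Unity; the Animator parameter name, the particle effect, and the method name are illustrative assumptions rather than the exact setup of our prototype.

```csharp
using UnityEngine;

// Sketch of switching the kettle's operating state: clicking the power switch
// toggles an (assumed) Animator parameter that plays the boiling animation
// and starts/stops a steam particle effect.
public class KettleSimulation : MonoBehaviour
{
    public Animator animator;      // Animator Controller with Idle/Boiling states
    public ParticleSystem steam;   // water-vapour effect shown while boiling

    bool poweredOn;

    // Called by the input handler when the user clicks the power switch.
    public void OnPowerSwitchClicked()
    {
        poweredOn = !poweredOn;
        animator.SetBool("Boiling", poweredOn);   // assumed parameter name
        if (poweredOn) steam.Play(); else steam.Stop();
    }
}
```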
5 System Usage Scenario
We propose a usage scenario illustrating how the Pre-experience can be shared between the HMD user and the non-HMD user. The HMD user sees the AR Pre-experience on the HMD, and by sharing this AR experience, the non-HMD user can enjoy the same Pre-experience and interact with the virtual products just like the HMD user, but on their smartphone.
There are two kinds of users in this Pre-experience: the HMD user and the non-HMD user. To ensure that the non-HMD user can interact, we chose "duplicate generation" to share the Pre-experience. In our case, the HMD acts as the server on which the application runs, so the HMD user gets the augmented information independently, while the non-HMD user obtains duplicated augmented information from the HMD user by connecting their smartphone to the HMD as a client.
5.1 Pre-experience of HMD Side
The Pre-experience on the HMD side is an independent process, meaning that it can be completed without sharing it with the non-HMD user. The process is introduced in detail in the following sections.
Scanning Shopping Webpage with HMD.
To start the Pre-experience, the HMD user looks at the shopping webpage through the HMD, as shown in Fig. 4. Since we use a see-through type HMD in our system, the HMD user can see AR information while shopping on a webpage.
Recognizing Product Information.
At the initial interface of our system, reminder text instructs the user how to start the Pre-experience, as shown in Fig. 5. When the HMD user is interested in a product and opens the product's webpage to see its details, our system recognizes the product by its photo. The HMD user needs to make sure the camera of the HMD is pointed at the photo of the product. Recognition may take a few seconds; after that, our system knows which product the user is looking at and responds accordingly.
Getting 3D Virtual Products.
After successful recognition, the reminder text disappears. Our system then scans the environment to align the virtual world coordinates with the real-world coordinates. At this stage, the surfaces of the physical environment are recognized and visualized as white wireframes for later use. A 3D virtual product corresponding to the product on the webpage then pops up in front of the HMD user's view. The virtual product has the same appearance and size as the real physical product, as shown in Fig. 6. The small white point in the center is the cursor of the HMD.
Interacting with Virtual Products.
We provide several ways of interacting in the Pre-experience. Since the connection between the virtual world and the real world has already been built, the HMD user can place the virtual product on the real-world surfaces detected in the previous stage, as sketched below. In addition, every time a virtual product is placed, our system detects the surfaces again and updates the virtual space, so that the placement of the virtual product matches the real-world environment.
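A minimal placement sketch, assuming the spatial-mapping mesh lives on a dedicated Unity layer; the layer name and the omitted re-scan call are assumptions for illustration.

```csharp
using UnityEngine;

// Sketch of placing the virtual product on a detected surface: a ray along
// the user's gaze is tested against the spatial-mapping mesh and the product
// is snapped to the hit point, oriented to the surface normal.
public class ProductPlacer : MonoBehaviour
{
    public Transform product;      // the 3D virtual product
    public Camera gazeCamera;      // HMD camera; its forward direction is the gaze

    public void PlaceAtGaze()
    {
        int surfaceMask = LayerMask.GetMask("SpatialMapping");   // assumed layer name
        Ray gaze = new Ray(gazeCamera.transform.position, gazeCamera.transform.forward);

        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, 5f, surfaceMask))
        {
            product.position = hit.point;
            product.rotation = Quaternion.LookRotation(
                Vector3.ProjectOnPlane(gaze.direction, hit.normal), hit.normal);
        }
        // After each placement the system triggers another surface scan here
        // to keep the spatial map up to date (call omitted in this sketch).
    }
}
```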
The user can simulate how the product is operated in this online shopping Pre-experience. Even without trying out the real physical product, the user can understand how the product works (Fig. 7).
5.2 Sharing Pre-experience to Non-HMD User
In conjunction with our sharing system, the non-HMD user can also enjoy this Pre-experience on their smartphone.
Synchronizing HMD and Smartphone.
The first step in sharing the Pre-experience is to synchronize the HMD and the smartphone. The procedure for establishing this synchronization is as follows; a sketch of applying the synchronized state on the smartphone is given after the list.
- Ensure that both the HMD and the smartphone are on the same network.
- Start the application on both the HMD and the smartphone. Starting the application on the smartphone triggers the HMD camera to turn on and begin taking photos.
- As soon as the application starts on the smartphone, it looks for surfaces such as floors or tables. When surfaces are found, a marker is shown on the smartphone's screen. The non-HMD user shows this marker to the HMD user, and the HMD establishes a connection with the smartphone based on this marker.
- Once the connection is established, the marker disappears and both devices are connected and spatially synchronized, which means that the information of the AR Pre-experience running on the HMD, including the virtual products, their locations and their status, is synchronized to the smartphone (Fig. 8).
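The sketch below shows how the synchronized product state could be applied on the smartphone once the marker-based connection exists; the message fields and the shared-anchor transform are assumptions, since the real system relies on the Spectator View components for this step.

```csharp
using UnityEngine;

// Sketch of applying a synchronized product state on the phone. The pose is
// expressed relative to a shared anchor that both devices aligned during the
// marker step; field and state names are illustrative.
[System.Serializable]
public class ProductState
{
    public Vector3 localPosition;     // pose relative to the shared anchor
    public Quaternion localRotation;
    public string animationState;     // e.g. "Idle" or "Boiling"
}

public class ProductStateReceiver : MonoBehaviour
{
    public Transform sharedAnchor;    // anchor aligned via the marker
    public Transform productCopy;     // duplicated product on the phone
    public Animator productAnimator;

    // Called by the (assumed) network layer whenever a state message arrives.
    public void Apply(ProductState state)
    {
        productCopy.position = sharedAnchor.TransformPoint(state.localPosition);
        productCopy.rotation = sharedAnchor.rotation * state.localRotation;
        productAnimator.Play(state.animationState);
    }
}
```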
Pre-experience of the Non-HMD User.
After synchronizing with the HMD, since the HMD user has already obtained the 3D virtual products, the non-HMD user does not need to scan the shopping webpage anymore. Instead, the non-HMD user sees the same 3D virtual product directly on their smartphone, with the same position and status as in the HMD. In addition, the non-HMD user has the same authority as the HMD user when interacting with the virtual product, and their interaction is independent of the HMD user's actions; that is, the non-HMD user's interaction with the virtual product is not influenced by the HMD user. The non-HMD user interacts with the 3D virtual product by manipulating the smartphone. With this kind of sharing, the non-HMD user gets an AR Pre-experience equivalent to that of the HMD user.
Sharing Pre-experience to Social Media.
We also provide a way for the non-HMD user to share their Pre-experience on social media. The non-HMD user can record the interaction by recording the smartphone screen and then share the recording on social media.
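On iOS this could, for example, be done with Unity's ReplayKit wrapper, as in the sketch below; this is an assumption about one possible recording path rather than necessarily the mechanism of our prototype, and availability depends on the iOS version and build settings.

```csharp
using UnityEngine;
#if UNITY_IOS
using UnityEngine.Apple.ReplayKit;
#endif

// Sketch of toggling screen recording of the smartphone AR view on iOS.
// The recorded clip can then be shared to social media from the preview sheet.
public class ExperienceRecorder : MonoBehaviour
{
    public void ToggleRecording()
    {
#if UNITY_IOS
        if (!ReplayKit.APIAvailable) return;

        if (!ReplayKit.isRecording)
        {
            ReplayKit.StartRecording();          // begin capturing the AR view
        }
        else
        {
            ReplayKit.StopRecording();
            if (ReplayKit.recordingAvailable)
                ReplayKit.Preview();             // system sheet for saving/sharing
        }
#endif
    }
}
```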
6 Implementation
Our implementation consists of the development environment, the implementation of view sharing, the implementation of duplicate generation, and the implementation of Pre-experience.
6.1 Development Environment
The hardware used to develop the sharing application includes an HMD, a computer running Unity, and a smartphone.
We chose the Microsoft HoloLens as the see-through type head mounted display for the mixed reality experience. The HoloLens features an inertial measurement unit (IMU) (which includes an accelerometer, gyroscope, and magnetometer), four "environment understanding" sensors, an energy-efficient depth camera with a 120° × 120° angle of view, a 2.4-megapixel photographic video camera, a four-microphone array, and an ambient light sensor [11]. The operating system of the HoloLens is Windows 10, which can run Universal Windows Platform applications (Fig. 9).
The smartphone we used in this work is an iPhone X. ARKit, provided by Apple, combines device motion tracking, camera scene capture, advanced scene processing, and display conveniences to simplify the task of building an AR experience [12].
We chose Unity 3D, a cross-platform game engine, as the development environment for our project because it offers a way to handle the multi-platform network communication between the HMD and the smartphone. We used the ARKit plugin for Unity to support the development of the AR application on the iPhone X, and we also referenced the mixed reality examples offered by Microsoft.
6.2 Implementation of View Sharing
In our system, we aim to extract the augmented information in the HMD view and send it to the smartphone as a live streaming video. To obtain the view with augmented reality information, we need to record the camera pose and render the view of the virtual object composited with the real-world video.
We use Microsoft's mixed reality capture to capture the HMD view. Mixed reality capture (MRC) lets us capture the experience as either a photograph or a video. This in turn allows us to share the experience with others, who can see the same holograms of virtual information as seen through the HMD. These videos and photos are from a first-person point of view [13].
We chose the Mixed Remote View Compositor from the MixedRealityCompanionKit to implement the live streaming of the HMD view. The Mixed Remote View Compositor provides the ability for developers to incorporate near real-time viewing of HoloLens experiences within a viewing application. This is achieved through low-level Media Foundation components that use a lightweight network layer to transmit the data from the device to a remote viewing application [14].
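For intuition only, the following sketch shows the general shape of such a lightweight network layer: a TCP server on the HMD side that sends length-prefixed JPEG frames (as produced by a frame grabber like the one in Sect. 3.2) to a connected viewer. This is a simplified assumption, not the Media Foundation pipeline that the Mixed Remote View Compositor actually uses, and socket availability depends on the scripting backend of the target platform.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Simplified frame-streaming server: accepts one viewer and sends each
// JPEG-encoded frame with a 4-byte length prefix.
public class FrameStreamServer
{
    TcpListener listener;
    TcpClient client;
    NetworkStream stream;

    public void Start(int port)
    {
        listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
    }

    public void SendFrame(byte[] jpg)
    {
        if (client == null && listener.Pending())
        {
            client = listener.AcceptTcpClient();   // smartphone viewer connected
            stream = client.GetStream();
        }
        if (stream == null) return;

        byte[] length = BitConverter.GetBytes(jpg.Length);
        stream.Write(length, 0, length.Length);    // 4-byte length prefix
        stream.Write(jpg, 0, jpg.Length);          // JPEG payload
    }

    public void Stop()
    {
        if (stream != null) stream.Close();
        if (client != null) client.Close();
        if (listener != null) listener.Stop();
    }
}
```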
6.3 Implementation of Duplicate Generation
In this method, the goal of our system is to generate an interactive virtual object for the smartphone user. To achieve this, we need to acquire the location information of the object by synchronizing the smartphone with the HMD. We used Spectator View Preview, provided by Microsoft, to synchronize the HoloLens and the smartphone [14].
We used Unity as the engine because Unity enables one application to run on different platforms. On the HoloLens side, we used the Mixed Reality Toolkit to help us develop the contents. The Mixed Reality Toolkit is a collection of scripts and components intended to accelerate the development of applications targeting Microsoft HoloLens and Windows Mixed Reality headsets.
On the smartphone side, we used the Unity-ARKit-Plugin to provide the augmented reality experience. This is a native Unity plugin that exposes the functionality of Apple's ARKit SDK to Unity projects on compatible iOS devices, including world tracking, pass-through camera rendering, and horizontal and vertical plane detection and updates. The synchronization process gives us the transform data of the virtual object, which is used to locate the object on the smartphone. For interaction on the smartphone, a cursor is created at the center of the touch screen. The non-HMD user moves the smartphone to align the cursor with the place they want to interact with and touches the screen to activate the click function. As the objects are independent of the ones in the HMD user's view, the non-HMD user can operate them independently and have an experience similar to that of the HMD user.
6.4 Implementation of Pre-experience
The Pre-experience application consists of recognizing product webpages and interacting with virtual products.
We use the Vuforia Engine [15] to recognize the product's information. Our goal is to recognize a specific product directly from its image. The Vuforia Engine provides the capability to connect AR experiences to specific images and objects in the environment. We set the photo of the product as an image target, which Vuforia uses for target recognition and tracking. Each photo corresponds to one 3D virtual product; in this way, we build the association between the product image and the virtual product.
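A small sketch of that association, assuming the Vuforia trackable event handler reports the recognized target's name to a catalog component; the names and the spawn method are placeholders.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of mapping a recognized image target (the product photo) to its
// 3D virtual product and spawning the product in front of the user.
public class ProductCatalog : MonoBehaviour
{
    [System.Serializable]
    public class Entry
    {
        public string imageTargetName;     // name of the Vuforia image target
        public GameObject productPrefab;   // matching 3D virtual product
    }

    public Entry[] entries;
    Dictionary<string, GameObject> lookup;

    void Awake()
    {
        lookup = new Dictionary<string, GameObject>();
        foreach (var e in entries) lookup[e.imageTargetName] = e.productPrefab;
    }

    // Called when the trackable event handler reports that a target was found.
    public GameObject SpawnProduct(string targetName, Vector3 position)
    {
        GameObject prefab;
        if (!lookup.TryGetValue(targetName, out prefab)) return null;
        return Instantiate(prefab, position, Quaternion.identity);
    }
}
```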
For interaction with virtual products, we need to combine the real world and the virtual world so that the Pre-experience feels convincing. We use Spatial Mapping, provided by Microsoft, which gives a detailed representation of real-world surfaces in the environment around the HMD. By merging the real world with the virtual world, the virtual products appear real, so the user can apply real-world behaviors and interactions [16]. We use gaze and gesture as the input methods for the HMD user and touch as the input method for the non-HMD user. To simulate the operating status of a product, we use an Animator Controller to control the transitions of the virtual product [17]. An Animator Controller allows us to arrange and maintain a set of animation clips and associated animation transitions for a character or object. The simulation of a product's operating status has multiple animations and switches between them; using an Animator Controller makes these transitions smoother.
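As an illustration of the gaze-and-gesture input path on the HMD side, the sketch below wires the HoloLens air-tap (via Unity's built-in gesture recognizer of that era) to whatever virtual product the user is gazing at; the "OnClicked" message name is a placeholder for the product's actual handler.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input;   // HoloLens gesture input in Unity 2017/2018

// Sketch of gaze + air-tap input: when a tap is recognized, raycast along the
// gaze direction and notify the product that was hit.
public class GazeTapInput : MonoBehaviour
{
    public Camera gazeCamera;     // HMD camera; forward direction = gaze
    GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.SetRecognizableGestures(GestureSettings.Tap);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    void OnTapped(TappedEventArgs args)
    {
        Ray gaze = new Ray(gazeCamera.transform.position, gazeCamera.transform.forward);
        RaycastHit hit;
        if (Physics.Raycast(gaze, out hit, 5f))
            hit.collider.SendMessage("OnClicked", SendMessageOptions.DontRequireReceiver);
    }

    void OnDestroy()
    {
        recognizer.Tapped -= OnTapped;
        recognizer.Dispose();
    }
}
```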
7 Related Work
Our work is strongly influenced by the fields of Shared/Collaborative Augmented Reality and Sharing Virtual Experience Between Asymmetric Devices.
7.1 Shared/Collaborative Augmented Reality
Shared AR was first defined by Rekimoto [5] in 1996 as an environment that allows several users to share the same view of virtual objects, and any modifications to them, where the virtual objects are embedded in the real physical world. He presented a shared AR system called TransVision, which augments a real tabletop with virtual objects. Multiple users hold palmtop-sized see-through displays and look at the same virtual and real world through them.
In 2002, Billinghurst and Kato [18] extended this concept by defining collaborative AR, which enables interaction and collaboration among several users. They presented a system containing a variety of interfaces blending reality and virtuality, allowing communication behaviors in a co-located environment.
These two terms are quite similar, but collaborative AR emphasizes the asymmetric visualization and input capabilities of different users. In this paper, we use the latter term, since we have an HMD user and a non-HMD user whose input capabilities and visualization of virtual objects differ because of the differences between their devices.
7.2 Sharing Virtual Experience Between Asymmetric Devices
ShareVR [3], proposed by Gugenheimer et al., follows an asymmetric interaction approach by offering individual interaction and visualization concepts for users without an HMD in a co-located collaboration. ShareVR uses floor projection and mobile displays in combination with positional tracking to visualize the virtual world for the non-HMD user, enabling them to interact with the HMD user and become part of the VR experience. In this work, the non-HMD user cannot acquire the same information as the HMD user and engages with the experience at a lower level than the HMD user.
FaceDisplay [6], also proposed by Gugenheimer et al., is a mobile VR HMD designed to keep the HMD user within the environment together with the surrounding people. FaceDisplay consists of three displays arranged around the back of the HMD that serve as a visualization for the non-HMD user; a Leap Motion facing outwards allows for gestural interaction. In this work, the devices that enhance the non-HMD user's interaction are all mounted on the HMD, which may be unhygienic for the HMD user. In addition, non-HMD users need to touch the screens mounted on the HMD user's face, so game designs involving heavy movements can result in strongly perceived touch impacts on the HMD user, and an unpredictably moving user could also create safety hazards for the non-HMD user.
Stafford et al. [4] presented "god-like interactions", an approach that enables asymmetric interaction between a user with an AR HMD and a user with a tabletop display. "God-like interaction" is a metaphor for improved communication of situational and navigational information between AR HMD users equipped with mobile augmented reality systems and non-HMD users equipped with tabletop projector display systems. Physical objects are captured by a series of cameras viewing a table surface; the data is sent over a wireless network and then reconstructed at a real-world location for the AR HMD users.
Compared with the aforementioned systems, our work has several advantages. First, our system reduces the gap between the HMD user and the non-HMD user under the asymmetry caused by device differences: even though this asymmetry exists, the non-HMD user can obtain the same AR information as the HMD user. Second, once the shared AR information is received from the HMD user, the non-HMD user can interact with the virtual objects independently, instead of being under the control of the HMD user.
8 Conclusion
In this paper, we presented two ways of sharing the augmented reality experience between the HMD user and the non-HMD user in order to improve communication between the two kinds of users when their information is unequal. We also designed a scenario, a Pre-experience based online shopping system, that demonstrates the utility of these techniques and describes how the system improves the communication between the HMD user and the non-HMD user.
The first method of our sharing system is to share a live stream of the HMD user's view with the non-HMD user. This means the non-HMD user can see all of the information and every step of the interaction, including the operations made by the HMD user. However, in this method the virtual object is fully under the HMD user's control, so the viewer of the live stream may be confused by the changing view; this passive way of receiving information may cause confusion and anxiety during sharing.
The second method of our sharing system focuses on smartphone AR. After synchronizing with the HMD user, the smartphone generates the same augmented reality object at the same physical location. The virtual object in the smartphone AR is independent and interactable; that is, the object on the smartphone is a separate copy of the one in the HMD. In this way of sharing, the non-HMD user can explore the augmented information proactively and receives the complete information. This method also has a disadvantage: neither the HMD user nor the non-HMD user knows the state of the object in the other's view, which may cause communication problems when they want to discuss the objects in real time.
9 Discussion and Future Work
In this paper, we presented two ways of sharing the augmented reality experience between the HMD user and the non-HMD user. Both methods have some disadvantages in the process of sharing the augmented reality information.
For the first method, view sharing, we share the augmented reality experience by live streaming the video of the HMD user's view. The most significant shortcoming of this method is that the non-HMD user cannot interact with the scene at all; they are forced to follow the augmented video passively, which may make the content difficult to understand.
To address this problem, we are considering adding functions to enhance the communication between the users, such as sending stamps or enabling voice chat, so that the users can communicate and react to the content being shared.
For the second method, duplicate generation, the augmented reality experience is shared by recreating the same situation for the non-HMD user on their smartphone. The non-HMD user can gain similar experience and information by interacting with the duplicate object independently.
To provide an approximately equal experience and avoid conflicting operations, the virtual object has its own state once it is created on the smartphone. However, because the state of the object is not shared between the two devices, it is difficult for the non-HMD user to know what is happening in the HMD user's view; currently they need to share the state by talking or sending messages. As future work, we could add a switch that lets the non-HMD user choose whether to synchronize the object's state, with one client given control over the object. Alternatively, non-conflicting interaction methods for the AR content could be designed so that both users can interact with the same object at the same time.
Another limitation of the second method is that the two kinds of users must be in the same place to synchronize, because the application is designed as an augmented reality application and the environment data is necessary to reproduce the HMD's view. It would also be interesting to consider a virtual reality setting and a remote experience sharing solution.
As the field of view of augmented reality HMDs increases in the coming years while the size of smartphone touch screens remains limited, the shared field of view for the non-HMD user could become an important challenge to solve in the near future [19, 20]. The core problem lies in identifying the most important content shown in the HMD view and sharing only that content, which may improve the experience for the non-HMD user.
References
Billinghurst, M., Clark, A., Lee, G.: A survey of augmented reality. Found. Trends Hum.-Comput. Interact. 8(2–3), 73–272 (2015)
Benko, H., Ishak, E., Feiner, S.: Collaborative mixed reality visualization of an archaeological excavation. In: 3rd IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 132–140. IEEE, Arlington (2004)
Gugenheimer, J., Stemasov, E., Frommel, J., Rukzio, E.: ShareVR: enabling co-located experiences for virtual reality between HMD and non-HMD users. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 4021–4033. ACM, Denver (2017)
Stafford, A., Piekarski, W., Thomas, B.: Implementation of god-like interaction techniques for supporting collaboration between outdoor AR and indoor tabletop users. In: Proceedings of the 5th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 165–172. IEEE, Washington (2006)
Rekimoto, J.: Transvision: a hand-held augmented reality system for collaborative design. In: Proceedings of Virtual Systems and Multi-Media, pp. 18–20. IEEE, Gifu (1996)
Gugenheimer, J., Stemasov, E., Sareen, H., Rukzio, E.: FaceDisplay: towards asymmetric multi-user interaction for nomadic virtual reality. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, no. 54. ACM, Montreal (2018)
Kiyokawa, K., Billinghurst, M., Hayes, S., Gupta, A., Sannohe, Y., Kato, H.: Communication behaviors of co-located users in collaborative AR interfaces. In: International Symposium on Mixed and Augmented Reality, pp. 139–148. ACM, Darmstadt (2002)
Lindley, S., Couteur, J., Berthouze, N.: Stirring up experience through movement in game play: effects on engagement and social behaviour. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 511–514. ACM, Florence (2008)
IKEA Applications. https://www.ikea.com/ms/ja_JP/customer-service/about-shopping/download-ikea-apps/#app_place. Accessed 25 Jan 2019
Houzz. https://www.houzz.jp/?m_refid=olm_google_171700067_9563462027_aud-312923979530:kwd-361607895357&pos=1t1&device=c&nw=g&matchtype=e&gclid=CjwKCAiAyrXiBRAjEiwATI95mRJF4bjmQJ82mE18PgHhbZwrJM8U29FNzWkZ9KZf7spG7aOou-SkwxoCQEMQAvD_BwE. Accessed 27 Jan 2019
Microsoft HoloLens. https://en.wikipedia.org/wiki/Microsoft_HoloLens. Accessed 25 Jan 2019
ARKit. https://developer.apple.com/documentation/arkit. Accessed 25 Jan 2019
Mixed reality capture. https://docs.microsoft.com/en-us/windows/mixed-reality/mixed-reality-capture. Accessed 25 Jan 2019
MixedRealityCompanionKit. https://github.com/Microsoft/MixedRealityCompanionKit/tree/master/MixedRemoteViewCompositor. Accessed 25 Jan 2019
Developing Vuforia Engine Apps for HoloLens. https://library.vuforia.com/articles/Training/Developing-Vuforia-Apps-for-HoloLens. Accessed 25 Jan 2019
Spatial Mapping. https://docs.microsoft.com/en-us/windows/mixed-reality/spatial-mapping. Accessed 25 Jan 2019
Animator Controller. https://docs.unity3d.com/2018.1/Documentation/Manual/class-AnimatorController.html. Accessed 25 Jan 2019
Billinghurst, M., Kato, H.: Collaborative augmented reality. Commun. ACM 45(7), 64–70 (2002)
The human eye diopter. https://baike.baidu.com/item/%E4%BA%BA%E7%9C%BC%E8%A7%86%E5%BA%A6/5997035?fr=aladdin. Accessed 25 Jan 2019
Field of view. https://en.wikipedia.org/wiki/Field_of_view. Accessed 25 Jan 2019