US20190272674A1 - Information handling system augmented reality through a virtual object anchor - Google Patents
- Publication number
- US20190272674A1 (Application No. US15/909,108)
- Authority
- US
- United States
- Prior art keywords
- virtual object
- offset
- anchor
- information handling
- handling system
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G06K9/00671—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1813—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast for computer conferences, e.g. chat rooms
- H04L12/1827—Network arrangements for conference optimisation or adaptation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/10—Multimedia information
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/18—Commands or executable codes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/52—Network services specially adapted for the location of the user terminal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/535—Tracking the activity of the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/131—Protocols for games, networked simulations or virtual reality
-
- H04L67/38—
Definitions
- the present invention relates in general to the field of information handling system visual information presentation, and more particularly to an information handling system augmented reality through a virtual object anchor.
- An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information.
- information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated.
- the variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications.
- information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
- Information handling systems often process information to present the information at a display as a visual image.
- information handling systems include graphics systems that process visual information into pixel values that define an image at a display.
- Conventional information handling systems typically interface with flat panel displays that present the visual information as two dimensional visual objects.
- Enterprises that do design work often use information handling systems with powerful processing capabilities and graphics chips to rapidly process and present complex visual information.
- computer aided design (CAD) applications define a product, building or other designed object in software and then render the object as a visual image that an end user can manipulate, such as by changing the object's orientation or peeling back layers of the object to view structures under the object.
- CAD applications greatly simplify design work by aiding end user visualization of the designed object before prototypes are built.
- Head mounted displays have entered enterprise work places as a tool to help designers visualize designed objects.
- Head mounted displays generally operate by projecting an image within head gear worn by an end user and in front of the end user's eyes. Head mounted displays can project the designed object in three dimensions to appear as if the object is in space at a location in front of the end user.
- An engineer wearing a head mounted display can view a three dimensional image of a designed object within arm's length and then reach out and manipulate the designed object with gestures, such as grasping at the projected location of the three dimensional image.
- Head mounted displays generally work in a virtual reality or an augmented reality. Virtual reality images are presented against a darkened background so that the end user views just the displayed virtual reality.
- Augmented reality images are presented against an opening that lets the end user see the “real” world in front of him while projecting the three dimensional image against the real world background.
- an end user manipulating a virtual object in virtual reality cannot directly view his hands during the manipulation; in contrast, an end user manipulating a virtual object in augmented reality can view his hands as they touch and gesture relative to the virtual object.
- Augmented reality tends to provide a more intuitive interaction with end user gestures.
- a difficulty with augmented reality in a collaborative environment is that each end user has his own three dimensional virtual object presented through his own head mounted display.
- a “real-life” collaboration in which end users each wear their own head mounted display does not generally involve coordinated presentation of the virtual object that would let end users interact at the same virtual object location.
- Efforts in industry to implement collaborative solutions for augmented reality that let each end user work on the same virtual object generally attempt to merge and synchronize the “virtual spaces” of separate head mounted displays using inside-out tracking data.
- Head mounted display tracking for collaboration purposes has limitations in distance accuracy, field of view, processing power and user tracking.
- a virtual object anchor stores an offset that defines a virtual object position, orientation and scale relative to a position of the virtual object anchor.
- Information handling systems retrieve the offset and apply the offset to generate a virtual object in a head mounted display at a location relative to the virtual object anchor defined by the offset.
- a virtual object anchor integrates a processor, memory, network interface device and sensors in a portable housing configured to rest on a desktop surface. Instructions stored in non-transitory memory of the virtual object anchor execute on the processor to store an offset that defines a position, orientation and scale of a virtual object relative to the housing and to communicate the offset to information handling systems proximate the object. Information handling systems apply the offset to generate a three dimensional visual image of a virtual object in a head mounted display at a location defined by the offset. As an information handling system detects gestures of an end user that change the virtual object's presentation, the information handling system updates the offset and communicates the updated offset to the virtual object anchor.
- the virtual object anchor stores the updated offset and communicates the updated offset to other information handling systems so that the other information handling systems render the virtual object as updated by the gestures.
- sensors integrated in the virtual object anchor detect gestures at the position defined by the offset and apply the detected gestures to update the offset so that information handling systems can retrieve the updated offset and render the virtual object against a common coordinate system.
- in one embodiment, position information for a head mounted display determined by sensors of the virtual object anchor is compared with position information for the virtual object anchor determined by sensors of the head mounted display to calibrate the presentation position of the virtual object.
- a virtual object presented at a head mounted display correlates in its relative physical position to virtual objects of other head mounted displays so that end users wearing the head mounted displays can collaborate with interactions at the virtual objects.
- the location of the virtual object is defined by offset values stored in a physical device, referred to herein as the virtual object anchor, that each head mounted display retrieves.
- Each head mounted display then applies the offset and the head mounted display's position relative to the virtual object anchor to determine a location of the virtual object, and presents the virtual object at the determined location.
- as an end user interacts with the virtual object, changes made to the presentation of the virtual object are stored locally on the virtual object anchor as updated offset information that allows other head mounted displays to match the virtual object's relative position.
- multiple virtual object anchors cooperate through networked communications so that the virtual object anchors at different physical locations coordinate the relative positioning of the virtual object.
- FIG. 1 depicts a block diagram of an information handling system and virtual object anchor that coordinate presentation of a virtual object through plural head mounted displays;
- FIG. 2 depicts a block diagram of a virtual object anchor that stores a virtual object offset defining position, orientation and scale of a virtual object;
- FIG. 3 depicts an example embodiment of collaboration between multiple head mounted displays that present a virtual object coordinated by multiple virtual object anchors;
- FIG. 4 depicts a flow diagram of a process of an example embodiment for initial discovery of a virtual object anchor and virtual object rendering relative to the virtual object anchor;
- FIG. 5 depicts a flow diagram of a process of an example embodiment for display object calibration of a virtual object location;
- FIG. 6 depicts a flow diagram of a process of an example embodiment for tracking gesture inputs made at a virtual object; and
- FIG. 7 depicts a flow diagram of a process of an example embodiment for tracking virtual object position changes related to virtual object anchor movement.
- a virtual object anchor coordinates presentation of a virtual object by plural information handling systems through plural head mounted displays by storing an offset of the virtual object's position relative to the virtual object anchor and providing the offset to the plural information handling systems.
- an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes.
- an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price.
- the information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
- Referring now to FIG. 1, a block diagram depicts an information handling system 10 and virtual object anchor 54 that coordinate presentation of a virtual object 70 through plural head mounted displays 32.
- information handling system 10 processes information by executing instructions on a central processing unit (CPU) 12 .
- CPU 12 executes instructions of an operating system and applications that are stored in random access memory (RAM) 14 , such as to generate visual information for presentation to an end user based upon inputs made by the end user.
- a chipset 16 interfaces with CPU 12 to coordinate interactions with input/output (I/O) devices, such as a keyboard and mouse.
- a graphics processor unit (GPU) 18 interfaces with CPU 12 to accept visual information and process the visual information into pixel values that render a visual image at a display 22 , such as a flat panel display integrated in a portable housing with CPU 12 .
- a wireless network interface card (WNIC) 20 interfaces with CPU 12 to communicate with external devices, such as through WiGiG, Bluetooth or other wireless protocols.
- a solid state drive (SSD) 24 or other similar persistent storage device stores information and applications at information handling system 10 during power-off time periods.
- a CAD application 26 and augmented reality (AR) application 28 retrieved to RAM 14 support end user interactions with designed objects through head mounted displays 32 as described in greater detail below.
- a camera 30 senses end user actions, such as in cooperation with infrared emitters to operate as a depth camera.
- Other sensors may be used to sense end user actions, such as an ultrasonic sensor that uses Doppler effects to detect end user gestures.
- Information handling system 10 depicts one example configuration of processing components that cooperate to process information; other configurations may be used.
- information handling system 10 interfaces with a head mounted display 32 that rests on an end user's head to present visual images as three dimensional objects as the end user views real objects through a clear visor 52 .
- head mounted display 32 has a CPU 12 that executes instructions stored in RAM 14 to coordinate presentation of visual images, such as operating code retrieved by CPU 12 to RAM 14 from non-transient flash memory.
- Head mounted display 32 includes a wireless network interface card 20 that communicates through a wireless or wired (e.g. HDMI, USB, DP) interface 38 to information handling system 10 , such as to retrieve visual information to present as visual images at display 32 .
- Display 32 is, for instance, an LCD projector integrated in head mounted display 32 that presents a three dimensional object to appear as a virtual object 70 at a location beyond the clear visor 52, such as at an arm's reach of the end user.
- Head mounted display 32 includes one or more accelerometers to detect motion, such as accelerometers configured as an inertial motion detector that senses orientation.
- Head mounted display 32 includes a magnetic compass 36 that provides a reference axis, such as true north.
- Various sensors, such as depth camera 30 or ultrasonic sensor, integrated in head mounted display 32 detect objects, such as end user hands that perform gestures. For example, an end user viewing a virtual object 70 can reach with hands to make a rotational movement at object 70 that is detected by depth camera 30 .
- The detected gestures are then communicated to information handling system 10, where augmented reality application 28 and GPU 18 cooperate to adapt the presentation of virtual object 70 to reflect the gesture.
- information handling system 10 may be integrated into head mounted display 32 to process and present a virtual object as a contiguous unit.
- a virtual object anchor 54 placed on desktop 46 locally stores a virtual object offset 50 that defines a presentation location, orientation and scale for virtual object 70 .
- Virtual object anchor 54 is a physical object that provides a physical reference point for presentation of a virtual object.
- for example, the offset is nine data points per frame: three for position, three for rotation and three for scale. The nine data points describe the virtual object's location in space relative to virtual object anchor 54 so that an information handling system 10 that knows virtual object anchor 54's position and rotation along with the offset can render virtual object 70 relative to virtual object anchor 54 in common with other information handling systems 10.
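- The following sketch illustrates how a client might compose the nine-value offset with the anchor's pose to place the virtual object in a shared coordinate frame. It is an illustrative assumption rather than the patent's specified implementation; names such as AnchorOffset and object_pose_in_world are invented for this example.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class AnchorOffset:
    """Nine data points per frame: position, rotation (Euler angles) and scale."""
    position: np.ndarray  # (3,) metres, relative to the anchor's centre
    rotation: np.ndarray  # (3,) radians about the anchor's x, y and z axes
    scale: np.ndarray     # (3,) unitless scale factors


def rotation_matrix(euler_xyz: np.ndarray) -> np.ndarray:
    """Build a rotation matrix from x, y, z Euler angles in radians."""
    rx, ry, rz = euler_xyz
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx


def object_pose_in_world(anchor_position: np.ndarray,
                         anchor_heading: np.ndarray,
                         offset: AnchorOffset):
    """Return the virtual object's world position, rotation matrix and scale.

    anchor_position: anchor location in the shared frame (e.g. referenced to true north).
    anchor_heading:  anchor orientation in that frame as x, y, z Euler angles.
    """
    anchor_rot = rotation_matrix(anchor_heading)
    world_position = anchor_position + anchor_rot @ offset.position
    world_rotation = anchor_rot @ rotation_matrix(offset.rotation)
    return world_position, world_rotation, offset.scale
```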
- two separate information handling systems 10 render virtual object 70 for two separate end user head mounted displays 32 .
- Each information handling system 10 retrieves the same offset from virtual object anchor 54 , such as with a Bluetooth interface between each information handling system 10 and virtual object anchor 54 .
- Each head mounted display 32 includes sensors that define a user position vector 48 from the head mounted display 32 to virtual object anchor 54 .
- once virtual object 70's offset 50 and user position vector 48 are known to augmented reality application 28, augmented reality application 28 cooperates with GPU 18 to render virtual object 70 in head mounted display 32 with a common presentation location across multiple head mounted displays 32.
- each end user sees virtual object 70 in a position, orientation and scale fixed relative to virtual object anchor 54 .
- end users located at opposite sides of virtual object 70 will see opposite sides of virtual object 70.
- although FIG. 1 depicts a single virtual object 70, in alternative embodiments multiple virtual objects may be defined by multiple offsets 50 for presentation of multiple virtual objects at desktop 46.
- virtual object 70 has presentation bifurcated across two or more databases.
- Information handling systems 10 interface through a network 40 with a server information handling system 42 that has a CAD database 44 that stores a CAD model of virtual object 70 .
- a CAD application 26 on each information handling system 10 retrieves the model from CAD database 44 to render the model with GPU 18 as a three dimensional visual image for presentation by a head mounted display 32 .
- Augmented reality application 28 adjusts the presentation position, orientation and scale based upon virtual object offset 50 to render the image in a fixed location relative to virtual object anchor 54 .
- if the end user alters virtual object 70's position, orientation and/or scale with a gesture, an update to offset 50 is determined based upon the end user interaction and stored to virtual object anchor 54 so that all end users viewing virtual object 70 can retrieve the updated offset and render virtual object 70 with updated position, orientation and scale based upon the detected gestures.
- Bifurcation of model and offset data provides a powerful collaboration tool. For example, an end user may elect to manipulate virtual object 70 without storing an updated offset so that the end user can consider alternative views alone, and then update offset 50 when ready to share with others. At any time, all end users viewing virtual object 70 may share a common view rendered at each head mounted display 32 by retrieving offset 50 from memory of virtual object anchor 54 and applying offset 50 with user position vector 48 to render virtual object 70 .
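- As a hedged sketch of the model/offset bifurcation described above, the following class keeps the CAD geometry and the shared placement offset in separate stores and lets a user preview changes privately before publishing the offset back to the anchor. The cad_client and anchor_client interfaces are assumptions for illustration, not APIs named in the patent.

```python
import copy


class CollaborativeView:
    """Sketch of the model/offset bifurcation; client objects are assumed interfaces."""

    def __init__(self, cad_client, anchor_client, object_id):
        self.model = cad_client.fetch_model(object_id)          # geometry from the CAD database
        self.shared_offset = anchor_client.fetch_offset(object_id)
        self.local_offset = copy.deepcopy(self.shared_offset)   # private working copy
        self.anchor_client = anchor_client
        self.object_id = object_id

    def preview(self, new_offset):
        """Explore an alternative view locally without affecting other users."""
        self.local_offset = new_offset

    def publish(self):
        """Share the current view by writing the offset back to the anchor."""
        self.shared_offset = copy.deepcopy(self.local_offset)
        self.anchor_client.store_offset(self.object_id, self.shared_offset)

    def resync(self):
        """Drop the private view and rejoin the shared presentation."""
        self.shared_offset = self.anchor_client.fetch_offset(self.object_id)
        self.local_offset = copy.deepcopy(self.shared_offset)
```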
- CAD database 44 may be collocated with offset information, such as at the same server information handling system or stored in virtual object anchor 54 .
- virtual object anchor 54 may itself include resources to act as a server information handling system and/or a data storage server that has memory and network bandwidth sufficient to support communication of a CAD model to client information handling systems for presentation at head mounted displays.
- Referring now to FIG. 2, a block diagram depicts a virtual object anchor 54 that stores a virtual object offset 50 defining position, orientation and scale of a virtual object.
- virtual object anchor 54 includes a CPU 12 , such as an ARM processor, that executes an augmented reality application 60 stored in an SSD 24 or other non-transient memory.
- Augmented reality application 60 stores offset information for one or more virtual objects in an augmented reality database 62 .
- Virtual object anchor 54 has a wireless interface device, such as a Bluetooth WNIC 20 , to communicate with information handling systems 10 and/or head mounted displays 32 .
- Sensors 56 integrated in virtual object anchor 54 interface with CPU 12 to determine virtual object anchor 54's position, orientation and movements.
- accelerometers 34 detect motion and rotation of virtual object anchor 54 , such as with inertial logic that senses accelerations and gyroscopic forces across plural axes.
- a compass 36 provides a reference to true north so that offset 50 is defined relative to a geographic feature that head mounted displays and information handling systems can separately detect.
- alternatively or in combination with compass 36, virtual object anchor 54 has LEDs 64 or other visual reference points that provide a visual indication of a common axis relative to offset 50. For instance, an illuminated LED marks the X-axis relative to the center of virtual object anchor 54, which a head mounted display 32 can sense with a camera and apply to offset 50 to determine orientation for rendering virtual object 70.
- An infrared emitter 58 and camera 30 integrated in virtual object anchor 54 enable virtual object anchor 54 to sense external conditions, such as end user gestures made at the location of virtual object 70, as set forth below.
- alternative sensors may be included, such as ultrasonic sensors that use Doppler effects to detect motion like gestures made at a virtual object 70 offset relative to virtual object anchor 54 .
- further alternative network interfaces may be included, such as WiGiG and wired interfaces, or a USB interface established through an information handling system.
- a display 22 integrated in virtual object anchor 54, such as an OLED, LED or LCD, presents visual information detectable by head mounted displays, such as a non-symmetrical identifier that identifies a compass or other orientation of virtual object anchor 54.
- storage and sensors integrated in virtual object anchor 54 provide flexible interactions with multiple information handling systems for presenting a common virtual object and manipulating that object.
- Storing a virtual object offset in memory of virtual object anchor 54 lets multiple information handling systems retrieve the offset information and present the virtual object in a common coordinate system referenced to virtual object anchor 54, such as true north or a visually distinct marking on virtual object anchor 54.
- information handling systems retrieve the offset as needed, such as at regular intervals or upon receiving an alert that the offset has changed. For example, if one end user makes a gesture detected by the end user's sensors, the end user's information handling system communicates an updated offset to virtual object anchor 54 for storage in memory.
- the information handling system may alert collaborating information handling systems of the change or, alternatively, virtual object anchor 54 may broadcast an alert that an updated offset is available for download.
- virtual object anchor 54 may broadcast the offset at regular intervals, such as with a Bluetooth beacon, so that any information handling system in range gets the offset by listening to the broadcast.
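- A minimal sketch of the two distribution patterns mentioned above follows: the anchor periodically broadcasts its current offset, and a listening client keeps only the newest copy it has heard. The transport object and its send/receive methods are assumptions for illustration, not a real Bluetooth API.

```python
import json
import time


def anchor_broadcast_loop(transport, get_current_offset, interval_s=0.5):
    """Anchor side: periodically broadcast the latest offset as a beacon frame."""
    sequence = 0
    while True:                                   # runs until the anchor powers down
        payload = {"seq": sequence, "offset": get_current_offset()}
        transport.send(json.dumps(payload).encode())
        sequence += 1
        time.sleep(interval_s)


def client_listen_once(transport, last_seq_seen):
    """Client side: accept a broadcast only if it is newer than what we already have."""
    frame = transport.receive()                   # blocking read of one beacon frame
    payload = json.loads(frame.decode())
    if payload["seq"] > last_seq_seen:
        return payload["offset"], payload["seq"]
    return None, last_seq_seen
```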
- sensors 56 of virtual object anchor 54 detect end user gestures, compute an updated offset based upon the detected gestures, and provide the updated offset to the information handling systems to apply in rendering the virtual object. As is set forth below in greater detail, sensors 56 also provide a way to calibrate end user presentation and gesture input by measuring the relative position of head mounted displays in proximity to virtual object anchor 54.
- Referring now to FIG. 3, an example embodiment depicts collaboration between multiple head mounted displays that present a virtual object coordinated by multiple virtual object anchors 54.
- virtual object anchors 54 at locations A and B interface through a network 40 and server information handling system 42 to coordinate presentation of a virtual object 70 in a collaborative environment at both locations.
- Each virtual object anchor 54 locally stores an offset and communicates updates to a virtual object anchor database 68 so that both virtual object anchors 54 synchronize the offset, such as with a server push to each virtual object anchor 54 as updates are made at other virtual object anchors 54.
- virtual object anchor 54 stores a location, such as in offset form, of each end user sensed proximate each virtual object anchor 54 .
- an end user 72 at location A sees a front view of virtual object 70 based upon an offset retrieved from virtual object anchor 54 at location A and a vector sensed from end user 72 to virtual object anchor 54 .
- end user 74 located at location B views a left side of virtual object 70 based upon the offset retrieved by end user 74 from virtual object anchor 54 at location B.
- End user 74's relative location to virtual object anchor 54 is captured with sensors of end user 74's head mounted display 32 and/or with sensors of virtual object anchor 54, and then stored in virtual object anchor 54 and virtual object anchor database 68.
- head mounted display 32 at location A is able to create a virtual person within a clear visor 76 that shows the virtual position of end user 74 to the left of virtual object 70 .
- virtual object 70 is presented so that both end users 72 and 74 see the same object oriented in the same manner.
- End user 72 sees, for instance, a front view of virtual object 70, while end user 74, standing to the left, sees the left side view of virtual object 70.
- End user 72 is provided with intuitive knowledge of end user 74's position relative to virtual object anchor 54 and knows from the relative viewing perspective how end user 74 is viewing virtual object 70. Based upon this shared viewing of virtual object 70 using the shared offset stored on each virtual object anchor 54, end users 72 and 74 may more readily collaborate through manipulation of virtual object 70.
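- The sketch below shows, under assumed interfaces, how a server might keep the offset and end user positions synchronized across the virtual object anchors at locations A and B, pushing each update to every other anchor as described above. The class and method names are illustrative assumptions.

```python
class AnchorSyncServer:
    """Sketch of server-side coordination between anchors; all names are assumptions."""

    def __init__(self):
        self.offsets = {}         # object_id -> latest shared offset
        self.user_positions = {}  # (anchor_id, user_id) -> position near that anchor
        self.anchors = {}         # anchor_id -> callback used to push updates

    def register_anchor(self, anchor_id, push_callback):
        self.anchors[anchor_id] = push_callback

    def update_offset(self, source_anchor_id, object_id, offset):
        """Store an updated offset and push it to every other registered anchor."""
        self.offsets[object_id] = offset
        for anchor_id, push in self.anchors.items():
            if anchor_id != source_anchor_id:
                push(object_id, offset)

    def update_user_position(self, anchor_id, user_id, position):
        """Track each user's position relative to the local anchor so that remote
        sites can render a virtual stand-in at the matching spot."""
        self.user_positions[(anchor_id, user_id)] = position
```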
- Referring now to FIG. 4, a flow diagram depicts a process of an example embodiment for initial discovery of a virtual object anchor and virtual object rendering relative to the virtual object anchor.
- the process starts at step 78 with initiation of wireless module pairing, such as Bluetooth, between virtual object anchor 54 and a head mounted display 32, such as by a button press or other user action. Pairing between the head mounted display and virtual object anchor includes a transfer of information that aids each in identifying the other. For instance, at step 80, upon pairing, an infrared sensor begins to emit light from virtual object anchor 54 that helps virtual object anchor 54 detect the position of head mounted display 32.
- at step 82, virtual object anchor 54 calculates a distance and angle from virtual object anchor 54 to head mounted display 32 and stores the calculation locally for access by head mounted display 32 and/or other devices that desire to know the spatial relationship of devices in the area of virtual object anchor 54.
- virtual object anchor 54 in one embodiment continues to track head mounted display 32 as it moves and updates positions stored in virtual object anchor 54 .
- pairing through head mounted display 32 is completed with cooperation of a host information handling system 10 .
- host information handling system 10 translates the distance and angle data from the virtual object anchor 54 position to real world coordinates, such as true north or an axis derived from a visually distinctive indication of a reference axis on the exterior of virtual object anchor 54 , such as an LED.
- the virtual object is presented in the head mounted display at a location indicated by the offset retrieved from virtual object anchor 54 during pairing. For instance, if the position indicated by the offset is over the top of virtual object anchor 54, at step 90 the virtual object is presented in the head mounted display over virtual object anchor 54.
- the virtual object is placed initially over virtual object anchor 54 using the model database and oriented to the head mounted display until offset information is retrieved from virtual object anchor 54 and applied by the host information handling system.
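- The following is a simplified sketch of the FIG. 4 discovery flow under assumed device interfaces: pair, let the anchor measure the headset's range and bearing, translate that measurement into shared coordinates, and present the virtual object from the retrieved offset. Every object and method name here is an assumption for illustration.

```python
def discover_and_place(anchor, headset, host_system):
    """Sketch of initial discovery; every object here is an assumed interface."""
    anchor.pair(headset)                            # step 78: wireless pairing, e.g. Bluetooth
    anchor.enable_infrared_tracking()               # step 80: IR emitter helps locate the headset
    distance, angle = anchor.measure(headset)       # range and bearing from anchor to headset
    anchor.store_relative_position(headset.id, distance, angle)
    # The host translates the anchor-relative measurement into shared, real-world
    # coordinates, e.g. referenced to true north or the anchor's visible reference axis.
    headset_pose = host_system.to_world_coordinates(
        distance, angle, reference=anchor.reference_axis())
    offset = anchor.fetch_offset()                  # placement retrieved during pairing
    object_pose = host_system.apply_offset(anchor.pose(), offset)
    headset.render_virtual_object(object_pose, viewer_pose=headset_pose)  # step 90
```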
- Referring now to FIG. 5, a flow diagram depicts a process of an example embodiment for display object calibration of a virtual object location.
- the process starts at step 92 with transmission of position information from virtual object anchor 54 to host information handling system 10 with conventional wireless or wired communication.
- the host information handling system monitors position data coming from virtual object anchor 54 .
- virtual object anchor 54's top surface starts to show a distinct, non-symmetrical identifier, such as one distinguishable by a camera of head mounted display 32.
- head mounted display inside-out tracking locates virtual object anchor 54 if virtual object anchor 54 is within the field of view of a sensor of head mounted display 32 .
- head mounted display 32 monitors data coming through networked communications and from visual indications presented at virtual object anchor 54 .
- the host information handling system applies the position information derived from head mounted display 32 and virtual object anchor 54 to calibrate the position of the virtual object presentation within head mounted display 32 .
- an auto-correction calibration for drift in the virtual object anchor's inertial monitoring unit or infrared sensor position detection provides a correction that helps head mounted display 32 more precisely present the virtual object.
- virtual object anchor 54 calculates the distance and angle of virtual object anchor 54 to head mounted display 32 to provide a basis for comparing the resolution of sensors of virtual object anchor 54 and head mounted display 32. Continuous monitoring reduces drift and variation in the presentation of the virtual object, as multiple head mounted displays should present the virtual object with the same offset at the same position, orientation and scale.
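- A minimal sketch of the FIG. 5 calibration idea follows: the anchor's estimate of the headset position and the headset's inside-out estimate of the anchor position should be mirror images of each other, so any residual between them can be applied as a correction to where the virtual object is drawn. The half-residual split is an assumption for illustration, not the patent's specified calibration rule.

```python
import numpy as np


def calibration_correction(anchor_to_headset: np.ndarray,
                           headset_to_anchor: np.ndarray) -> np.ndarray:
    """Both vectors are expressed in the shared world frame.

    If the two sensor systems agreed perfectly, the anchor-to-headset vector would be
    the exact negative of the headset-to-anchor vector, so their sum would be zero.
    Half of the residual is returned as a rendering correction for the headset.
    """
    residual = anchor_to_headset + headset_to_anchor
    return 0.5 * residual


def corrected_object_position(object_position: np.ndarray,
                              correction: np.ndarray) -> np.ndarray:
    """Shift the rendered object position by the calibration correction."""
    return object_position - correction
```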
- Referring now to FIG. 6, a flow diagram depicts a process of an example embodiment for tracking gesture inputs made at a virtual object.
- the process starts at step 104 by tracking virtual gestures performed by an end user of head mounted display 32 , such as zoom, movement laterally, longitudinally and vertically, and changes to orientation relative to virtual object anchor 54 .
- an information handling system supporting the head mounted display may alter the virtual object responsive to the gestures or may wait until the offset stored in virtual object anchor 54 is updated.
- virtual object anchor 54 collects gesture data from all of the users that are interacting with the virtual object.
- sensors of virtual object anchor 54 may directly detect gestures rather than receiving gesture information from head mounted display gesture detection.
- virtual object anchor 54 synchronizes gestures performed by the end user to the location, rotation and size of the virtual object relative to virtual object anchor 54 , such as by resolving multiple gestures by multiple users to generate an appropriate input that is reflected in the offset stored on virtual object anchor 54 .
- the offset that reflects the resolved position, orientation and scale of the virtual object is communicated to all information handling systems involved in the collaboration.
- information handling system 10 renders the virtual object in the head mounted display to present the virtual object at the location indicated by the offset received from virtual object anchor 54 , such as according to a common coordinate system shared with other collaborating information handling systems relative to virtual object anchor 54 .
- all head mounted displays 32 participating in the collaboration present the virtual object in its new location/position relative to virtual object anchor 54 .
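- The following sketch illustrates the FIG. 6 gesture cycle under assumed interfaces: gesture deltas gathered from every collaborating headset are merged into one update of the shared offset, which is then redistributed so all participants render the same view. The delta format and merge rule are assumptions rather than the patent's specified algorithm.

```python
def resolve_gestures(current_offset, gesture_deltas):
    """Apply accumulated gesture deltas (translate, rotate, scale) in order.

    current_offset is a dict with 'position', 'rotation' and 'scale' lists of three
    values each; every delta is a dict with optional 'translate', 'rotate' and
    'scale' entries. Both formats are assumptions for this sketch.
    """
    offset = {key: list(value) for key, value in current_offset.items()}
    for delta in gesture_deltas:
        offset["position"] = [p + d for p, d in
                              zip(offset["position"], delta.get("translate", [0, 0, 0]))]
        offset["rotation"] = [r + d for r, d in
                              zip(offset["rotation"], delta.get("rotate", [0, 0, 0]))]
        offset["scale"] = [s * d for s, d in
                           zip(offset["scale"], delta.get("scale", [1, 1, 1]))]
    return offset


def gesture_update_cycle(anchor, collaborators):
    """One cycle: gather gestures, update the stored offset, redistribute it."""
    deltas = [delta for hmd in collaborators for delta in hmd.pending_gestures()]
    new_offset = resolve_gestures(anchor.fetch_offset(), deltas)
    anchor.store_offset(new_offset)
    for hmd in collaborators:
        hmd.render_with_offset(new_offset)
```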
- Referring now to FIG. 7, a flow diagram depicts a process of an example embodiment for tracking virtual object position changes related to virtual object anchor movement.
- the process starts at step 114 with virtual object anchor 54 physically moved to a new location, such as to the left or right on a desktop.
- virtual object anchor 54 calculates the new distance and angle of virtual object anchor 54 from head mounted displays 32 , such as based upon the infrared sensor.
- each host information handling system 10 translates the new distance and angle information received from virtual object anchor 54 to determine a position of the virtual object in shared coordinates.
- the host information handling systems render the virtual object in the head mounted display when the virtual object is within the field of view of the head mounted display.
- all head mounted displays 32 collaborating through virtual object anchor 54 present the virtual object relative to virtual object anchor 54's new location and orientation.
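- A short sketch of the FIG. 7 flow under assumed interfaces: when the anchor is physically moved, it re-measures its relation to each headset and each host re-derives the virtual object position in the shared coordinate frame before re-rendering. The function and method names are illustrative assumptions.

```python
def on_anchor_moved(anchor, hosts):
    """Re-derive the virtual object pose for every host after the anchor is moved."""
    offset = anchor.fetch_offset()                          # placement relative to the anchor
    for host in hosts:
        distance, angle = anchor.measure(host.headset)      # new range and bearing to headset
        object_pose = host.to_world_coordinates(            # object position in shared coordinates
            distance, angle, offset, reference=anchor.reference_axis())
        if host.headset.in_field_of_view(object_pose):
            host.headset.render_virtual_object(object_pose)
```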
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Computer Hardware Design (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
- The present invention relates in general to the field of information handling system visual information presentation, and more particularly to an information handling system augmented reality through a virtual object anchor.
- As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
- Information handling systems often process information to present the information at a display as a visual image. Typically information handling systems include graphics systems that process visual information into pixel values that define an image at a display. Conventional information handling systems typically interface with flat panel displays that present the visual information as two dimensional visual objects. Enterprises that do design work often use information handling systems with powerful processing capabilities and graphics chips to rapidly process and present complex visual information. For example, computer aided design (CAD) applications define a product, building or other designed object in software and then render the object as a visual image that an end user can manipulate, such as by changing the object's orientation or peeling back layers of the object to view structures under the object. CAD applications greatly simplify design work by aiding end user visualization of the designed object before prototypes are built.
- Recently, head mounted displays have entered enterprise work places as a tool to help designers visualize designed objects. Head mounted displays generally operate by projecting an image within head gear worn by an end user and in front of the end user's eyes. Head mounted displays can project the designed object in three dimensions to appear as if the object is in space at a location in front of the end user. An engineer wearing a head mounted display can view a three dimensional image of a designed object within arm's length and then reach out and manipulate the designed object with gestures, such as grasping at the projected location of the three dimensional image. Head mounted displays generally work in a virtual reality or an augmented reality. Virtual reality images are presented against a darkened background so that the end user views just the displayed virtual reality. Augmented reality images are presented against an opening that lets the end user see the “real” world in front of him while projecting the three dimensional image against the real world background. Thus, for example, an end user manipulating a virtual object in virtual reality cannot directly view his hands during the manipulation; in contrast, an end user manipulating a virtual object in augmented reality can view his hands as they touch and gesture relative to the virtual object. Augmented reality tends to provide a more intuitive interaction with end user gestures.
- Often enterprise designers collaborate on design projects with different individuals making different contributions to the design. A difficulty with augmented reality in a collaborative environment is that each end user has his own three dimensional virtual object presented through his own head mounted display. Thus, a “real-life” collaboration when end users each wear their own head mounted display does not generally involve coordinated presentation of the virtual object so that end users interact at the same virtual object location. Efforts in industry to implement collaborative solutions for augmented reality that let each end user work on the same virtual object generally attempt to merge and synchronize the “virtual spaces” of separate head mounted displays using inside-out tracking data. Head mounted display tracking for collaboration purposes has limitations in distance accuracy, field of view, processing power and user tracking.
- Therefore, a need has arisen for a system and method which provides information handling system augmented reality through a virtual object anchor.
- In accordance with the present invention, a system and method are provided which substantially reduce the disadvantages and problems associated with previous methods and systems for collaborating between multiple users interacting with an augmented reality object. A virtual object anchor stores an offset that defines a virtual object position, orientation and scale relative to a position of the virtual object anchor. Information handling systems retrieve the offset an apply the offset to generate a virtual object in a head mounted display at a location relative to the virtual object anchor defined by the offset.
- More specifically, a virtual object anchor integrates a processor, memory, network interface device and sensors in a portable housing configured to rest on a desktop surface. Instructions stored in non-transitory memory of the virtual object anchor execute on the processor to store an offset that defines a position, orientation and scale of a virtual object relative to the housing and that communicates the offset to information handling systems proximate the object. Information handling systems apply the offset to generate a three dimensional visual image of a virtual object in a head mounted display at a location defined by the offset. As an information handling system detects gestures of an end user that change the virtual object's presentation, the information handling system updates the offset and communicates the updated offset to the virtual object anchor. The virtual object anchor stores the updated offset and communicates the updated offset to other information handling systems so that the other information handling systems render the virtual object as updated by the gestures. Alternatively, sensors integrated in the virtual object anchor detect gestures at the position defined by the offset and apply the detected gestures to update the offset so that information handling systems can retrieve the updated offset and render the virtual object against a common coordinate system. In one embodiment, position information determined to a head mounted display by sensors of the virtual object anchor is compared with position information determined to the virtual object anchor by sensors of the head mounted display to calibrate the presentation position of the virtual object.
- The present invention provides a number of important technical advantages. One example of an important technical advantage is that a virtual object presented at a head mounted display correlates in its relative physical position to virtual objects of other head mounted displays so that end users wearing the head mounted displays can collaborate with interactions at the virtual objects. The location of the virtual object is defined by offset values stored in a physical device, referred to herein as the virtual object anchor, that each head mounted display retrieves. Each head mounted display then applies the offset and the head mounted display's position relative to the virtual object anchor to determine a location of the virtual object, and presents the virtual object at the determined location. As an end user interacts with the virtual object, changes made to the presentation of the virtual object are stored locally on the virtual object anchor with updated offset information that allows other head mounted displays to match the virtual object's relative position. In one embodiment, multiple virtual object anchors cooperate through networked communications so that the virtual object anchors at different physical locations coordinate the relative positioning of the virtual object.
- The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
-
FIG. 1 depicts a block diagram of an information handling system and virtual object anchor that coordinate presentation of a virtual object through plural head mounted displays; -
FIG. 2 depicts a block diagram of a virtual object anchor that stores a virtual object offset defining position, orientation and scale of a virtual object: -
FIG. 3 depicts an example embodiment of collaboration between multiple head mounted displays that present a virtual object coordinated by multiple virtual object anchors; -
FIG. 4 depicts a flow diagram of a process of an example embodiment for initial discovery of a virtual object anchor and virtual object rendering relative to the virtual object anchor; -
FIG. 5 depicts a flow diagram of a process of an example embodiment for display object calibration of a virtual object location: -
FIG. 6 depicts a flow diagram of a process of an example embodiment for tracking gesture inputs made at a virtual object; and -
FIG. 7 depicts a flow diagram of a process of an example embodiment for tracking virtual object position changes related to virtual object anchor movement. - A virtual object anchor coordinates presentation of a virtual object by plural information handling systems through plural head mounted displays by storing an offset of the virtual object's position relative to the virtual object anchor and providing the offset to the plural information handling systems. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
- Referring now to
FIG. 1 , a block diagram depicts aninformation handling system 10 andvirtual object anchor 54 that coordinate presentation of avirtual object 70 through plural head mounteddisplays 32. In the example embodiment,information handling system 10 processes information by executing instructions on a central processing unit (CPU) 12. For instance,CPU 12 executes instructions of an operating system and applications that are stored in random access memory (RAM) 14, such as to generate visual information for presentation to an end user based upon inputs made by the end user. Achipset 16 interfaces withCPU 12 to coordinate interactions with input/output (I/O) devices, such as a keyboard and mouse. A graphics processor unit (GPU) 18 interfaces withCPU 12 to accept visual information and process the visual information into pixel values that render a visual image at adisplay 22, such as a flat panel display integrated in a portable housing withCPU 12. A wireless network interface card (WNIC) 20 interfaces withCPU 12 to communicate with external devices, such as through WiGiG, Bluetooth or other wireless protocols. A solid state drive (SSD) 24 or other similar persistent storage device stores information and applications atinformation handling system 10 during power-off time periods. In the example embodiment, aCAD application 26 and augmented reality (AR)application 28 retrieved to RAM 14 support end user interactions with designed objects through head mounted displays 32 as described in greater detail below. Acamera 30 senses end user actions, such as in cooperation with infrared emitters to operate as a depth camera. Other sensors may be used to send end user actions, such as an ultrasonic sensor that uses Doppler effects to detect end user gestures.Information handling system 10 depicts one example configuration of processing components that cooperate to process information and other configurations may be used. - In the example embodiment,
information handling system 10 interfaces with a head mounteddisplay 32 that rests on an end user head to present visual images as three dimensional objects as the end user views real objects through aclear visor 52. In the example embodiment, head mounteddisplay 32 has aCPU 12 that executes instructions stored inRAM 14 to coordinate presentation of visual images, such as operating code retrieved byCPU 12 to RAM 14 from non-transient flash memory. Head mounteddisplay 32 includes a wirelessnetwork interface card 20 that communicates through a wireless or wired (e.g. HDMI, USB, DP)interface 38 toinformation handling system 10, such as to retrieve visual information to present as visual images atdisplay 32.Display 32 is, for instance an LCD projector integrated in head mounteddisplay 32 presents a three dimensional object to appear as avirtual object 70 at a location beyond theclear visor 52, such as at an arm's reach of the end user. Head mounteddisplay 32 includes one or more accelerometers to detect motion, such as configured as an inertial motion detector that senses orientation. Head mounteddisplay 32 includes amagnetic compass 36 that provides a reference axis, such as true north. Various sensors, such asdepth camera 30 or ultrasonic sensor, integrated in head mounteddisplay 32 detect objects, such as end user hands that perform gestures. For example, an end user viewing avirtual object 70 can reach with hands to make a rotational movement atobject 70 that is detected bydepth camera 30. The detected gestures are then communicated toinformation handling system 10 whereaugmented reality application 28 andGPU 18 cooperate to adapt the presentation ofvirtual object 70 to reflect the gesture. In an alternative embodiment,information handling system 10 may be integrated into head mounteddisplay 32 to process and present a virtual object as a contiguous unit. - In order to coordinate presentation of
virtual object 70 for multiple end users, avirtual object anchor 54 placed ondesktop 46 locally stores a virtual object offset 50 that defines a presentation location, orientation and scale forvirtual object 70.Virtual object anchor 54 is a physical object that provides a physical reference point for presentation of a virtual object. For example, the offset is nine data points per frame, including three for position, three for rotation and three for scale. The nine data points describe the virtual object's location in space relative tovirtual object anchor 54 so that aninformation handling system 10 that knowsvirtual object anchor 54's position and rotation along with the offset can rendervirtual object 70 relative tovirtual object anchor 54 in common with other information handling system's 10. In the example embodiment, two separateinformation handling systems 10 rendervirtual object 70 for two separate end user head mounted displays 32. Eachinformation handling system 10 retrieves the same offset fromvirtual object anchor 54, such as with a Bluetooth interface between eachinformation handling system 10 andvirtual object anchor 54. Each head mounteddisplay 32 includes sensors that define auser position vector 48 from the head mounteddisplay 32 tovirtual object anchor 54. Once avirtual object 70 offset 50 anduser position vector 48 are known toaugmented reality application 28,augmented reality application 28 cooperates withGPU 18 to rendervirtual object 70 in head mounteddisplay 32 to have a common location presentation across multiple head mounted displays 32. For example, each end user seesvirtual object 70 in a position, orientation and scale fixed relative tovirtual object anchor 54. For instance, end user's located at opposite sides ofvirtual object 70 will see opposite sides ofvirtual object 70. AlthoughFIG. 1 depicts a singlevirtual object 70, in alternative embodiments, multiple virtual objects may be defined bymultiple offsets 50 for presentation of multiple virtual objects atdesktop 46. - In the example embodiment,
- In the example embodiment, virtual object 70 has its presentation bifurcated across two or more databases. Information handling systems 10 interface through a network 40 with a server information handling system 42 that has a CAD database 44 that stores a CAD model of virtual object 70. A CAD application 26 on each information handling system 10 retrieves the model from CAD database 44 to render the model with GPU 18 as a three dimensional visual image for presentation by a head mounted display 32. Augmented reality application 28 adjusts the presentation position, orientation and scale based upon virtual object offset 50 to render the image in a fixed location relative to virtual object anchor 54. If the end user alters virtual object 70's position, orientation and/or scale with a gesture, an update to offset 50 is determined based upon the end user interaction and stored to virtual object anchor 54 so that all end users viewing virtual object 70 can retrieve the updated offset and render virtual object 70 with an updated position, orientation and scale based upon the detected gestures. Bifurcation of model and offset data provides a powerful collaboration tool. For example, an end user may elect to manipulate virtual object 70 without storing an updated offset so that the end user can consider alternative views alone, and then update offset 50 when ready to share with others. At any time, all end users viewing virtual object 70 may share a common view rendered at each head mounted display 32 by retrieving offset 50 from memory of virtual object anchor 54 and applying offset 50 with user position vector 48 to render virtual object 70. In alternative embodiments, CAD database 44 may be collocated with the offset information, such as at the same server information handling system or stored in virtual object anchor 54. For example, virtual object anchor 54 may itself include resources to act as a server information handling system and/or a data storage server that has memory and network bandwidth sufficient to support communication of a CAD model to client information handling systems for presentation at head mounted displays.
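One way to read this bifurcation is as a client that fetches the heavy CAD model once from the server while treating the lightweight offset on the anchor as the shared, mutable state. The sketch below illustrates that split, including a private preview mode that defers writing the offset back to the anchor; the class and method names (CollaborationClient, preview, commit) are hypothetical.

```python
class CollaborationClient:
    """Illustrative split between a server-hosted CAD model and an anchor-hosted offset."""

    def __init__(self, cad_server, anchor):
        self.model = cad_server.fetch_model("virtual_object_70")  # heavy, fetched once
        self.anchor = anchor                                      # holds offset 50
        self._local_offset = None                                 # private preview, if any

    def shared_view(self, user_position_vector):
        """Render with the offset everyone shares, read from the anchor's memory."""
        offset = self.anchor.get_offset("virtual_object_70")
        return render(self.model, offset, user_position_vector)

    def preview(self, candidate_offset, user_position_vector):
        """Manipulate the object privately without publishing the new offset."""
        self._local_offset = candidate_offset
        return render(self.model, candidate_offset, user_position_vector)

    def commit(self):
        """Publish the previewed offset so every collaborator re-renders from it."""
        if self._local_offset is not None:
            self.anchor.set_offset("virtual_object_70", self._local_offset)
            self._local_offset = None

def render(model, offset, user_position_vector):
    """Placeholder for the GPU render path described in the specification."""
    return (model, offset, user_position_vector)
```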
- Referring now to FIG. 2, a block diagram depicts a virtual object anchor 54 that stores a virtual object offset 50 defining the position, orientation and scale of a virtual object. In the example embodiment, virtual object anchor 54 includes a CPU 12, such as an ARM processor, that executes an augmented reality application 60 stored in an SSD 24 or other non-transient memory. Augmented reality application 60 stores offset information for one or more virtual objects in an augmented reality database 62. Virtual object anchor 54 has a wireless interface device, such as a Bluetooth WNIC 20, to communicate with information handling systems 10 and/or head mounted displays 32. Sensors 56 integrated in virtual object anchor 54 interface with CPU 12 to determine virtual object anchor 54's position, orientation and movements. In the example embodiment, accelerometers 34 detect motion and rotation of virtual object anchor 54, such as with inertial logic that senses accelerations and gyroscopic forces across plural axes. A compass 36 provides a reference to true north so that offset 50 is defined relative to a geographic feature that head mounted displays and information handling systems can separately detect. Alternatively, or in combination with compass 36, virtual object anchor 54 has LEDs 64 or other visual reference points that provide a visual indication of a common axis relative to offset 50. For instance, illumination of an LED acts as the X-axis relative to the center of virtual object anchor 54, which a head mounted display 32 can sense with a camera and apply to offset 50 to determine an orientation for rendering virtual object 70. An infrared emitter 58 and camera 30 integrated in virtual object anchor 54 enable virtual object anchor 54 to sense external conditions, such as end user gestures made at a location of virtual object 70, as set forth below. In alternative embodiments, alternative sensors may be included, such as ultrasonic sensors that use Doppler effects to detect motion like gestures made at a virtual object 70 offset relative to virtual object anchor 54. Further alternative network interfaces may be included, such as WiGig and wired interfaces, or a USB interface established through an information handling system. In one alternative embodiment, a display 22 integrated in virtual object anchor 54, such as an OLED, LED or LCD, presents visual information detectable by head mounted displays, such as a non-symmetrical identifier that identifies a compass or other orientation of virtual object anchor 54.
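On the anchor side, the augmented reality database 62 can be as simple as a keyed store of offsets that the wireless interface exposes to clients. The sketch below is one loose interpretation of that firmware role; the JSON persistence format and the method names are assumptions, not details from the patent.

```python
import json
import time

class AnchorOffsetStore:
    """Hypothetical sketch of augmented reality database 62 on virtual object anchor 54."""

    def __init__(self, path="ar_database.json"):
        self.path = path
        try:
            with open(path) as f:
                # {object_id: {"position": [...], "rotation": [...], "scale": [...]}}
                self.offsets = json.load(f)
        except FileNotFoundError:
            self.offsets = {}

    def get_offset(self, object_id):
        """Served over the wireless interface when a client asks for an object's offset."""
        return self.offsets.get(object_id)

    def set_offset(self, object_id, offset):
        """Written when a client reports a manipulation; persisted to non-transient memory."""
        offset = dict(offset, updated_at=time.time())
        self.offsets[object_id] = offset
        with open(self.path, "w") as f:
            json.dump(self.offsets, f)
```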
- Advantageously, storage and sensors integrated in virtual object anchor 54 provide flexible interactions with multiple information handling systems for presenting a common virtual object and manipulating that object. Storing a virtual object offset in memory of virtual object anchor 54 lets multiple information handling systems retrieve the offset information and present the virtual object in a common coordinate system referenced to virtual object anchor 54, such as true north or a visually distinct marking on virtual object anchor 54. In one embodiment, information handling systems retrieve the offset as needed, such as at regular intervals or upon receiving an alert that the offset has changed. For example, if one end user makes a gesture detected by that end user's sensors, the end user's information handling system communicates an updated offset to virtual object anchor 54 for storage in memory. The information handling system may alert collaborating information handling systems of the change or, alternatively, virtual object anchor 54 may broadcast an alert that an updated offset is available for download. In one embodiment, virtual object anchor 54 may broadcast the offset at regular intervals, such as with a Bluetooth beacon, so that any information handling system in range can get the offset by listening to the broadcast. In an alternative embodiment, sensors 56 of virtual object anchor 54 detect end user gestures, compute an updated offset based upon the detected gestures, and provide the updated offset to the information handling systems to apply in rendering the virtual object. As is set forth below in greater detail, sensors 56 also provide a way to calibrate end user presentation and gesture input by measuring the relative position of head mounted displays in proximity to virtual object anchor 54.
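The retrieval strategies described here (periodic polling, change alerts, or a broadcast beacon) differ mainly in which side initiates the transfer. A minimal polling sketch with a version counter is shown below to make the "retrieve as needed" option concrete; the version field and the polling interval are illustrative assumptions.

```python
import time

def poll_offset(anchor, object_id, on_change, interval_s=0.5, max_polls=None):
    """Client-side polling loop: re-render only when the anchor reports a newer offset.

    `anchor.get_offset` is assumed to return a dict containing a monotonically
    increasing "version" alongside the position/rotation/scale values; an
    alert- or beacon-driven design would instead push the same payload.
    """
    last_version = None
    polls = 0
    while max_polls is None or polls < max_polls:
        offset = anchor.get_offset(object_id)
        if offset is not None and offset.get("version") != last_version:
            last_version = offset.get("version")
            on_change(offset)          # hand the new offset to the render path
        time.sleep(interval_s)
        polls += 1
```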
- Referring now to FIG. 3, an example embodiment depicts collaboration between multiple head mounted displays that present a virtual object coordinated by multiple virtual object anchors 54. In the example embodiment, virtual object anchors 54 at locations A and B interface through a network 40 and server information handling system 42 to coordinate presentation of a virtual object 70 in a collaborative environment at both locations. Each virtual object anchor 54 locally stores an offset and communicates updates to a virtual object anchor database 68 so that both virtual object anchors 54 synchronize the offset, such as with a server push to each virtual object anchor 54 as updates are made at other virtual object anchors 54. In addition, each virtual object anchor 54 stores a location, such as in offset form, of each end user sensed proximate to that virtual object anchor 54. Thus, in the example embodiment, an end user 72 at location A sees a front view of virtual object 70 based upon an offset retrieved from virtual object anchor 54 at location A and a vector sensed from end user 72 to virtual object anchor 54. At the same time, end user 74 located at location B views a left side of virtual object 70 based upon the offset retrieved by end user 74 from virtual object anchor 54 at location B. End user 74's relative location to virtual object anchor 54 is captured with sensors of end user 74's head mounted display 32 and/or with sensors of virtual object anchor 54, and then stored in virtual object anchor 54 and virtual object anchor database 68. By knowing the relative position of end user 74 at location B, head mounted display 32 at location A is able to create a virtual person within a clear visor 76 that shows the virtual position of end user 74 to the left of virtual object 70. In summary, virtual object 70 is presented so that both end users view it from their own perspectives relative to the shared offset. End user 72 sees, for instance, a front view of virtual object 70, and end user 74, standing to the left, sees the left side view of virtual object 70. End user 72 is provided with intuitive knowledge of end user 74's position relative to virtual object anchor 54 and knows from the relative viewing perspective how end user 74 is viewing the virtual object 70. Based upon this shared viewing of virtual object 70 using the shared offset stored on each virtual object anchor 54, end users 72 and 74 can collaborate regarding virtual object 70.
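The multi-location case adds one layer: the server keeps virtual object anchor database 68 authoritative and pushes offset changes to every registered anchor. A toy in-memory relay that captures that push pattern is sketched below; the class and callback names are invented for illustration.

```python
class AnchorDatabaseRelay:
    """Toy stand-in for server 42 and virtual object anchor database 68."""

    def __init__(self):
        self.latest = {}        # object_id -> most recent offset
        self.anchors = []       # push callbacks, one per registered virtual object anchor 54

    def register_anchor(self, push_callback):
        """An anchor (at location A or B) registers to receive server pushes."""
        self.anchors.append(push_callback)

    def publish(self, object_id, offset, source=None):
        """Called when any anchor reports an updated offset; fan out to the others."""
        self.latest[object_id] = offset
        for push in self.anchors:
            if push is not source:
                push(object_id, offset)

# Usage sketch: two anchors mirror each other's offsets through the relay.
relay = AnchorDatabaseRelay()
location_a, location_b = {}, {}
relay.register_anchor(lambda oid, off: location_a.update({oid: off}))
relay.register_anchor(lambda oid, off: location_b.update({oid: off}))
relay.publish("virtual_object_70",
              {"position": [0, 0.2, 0], "rotation": [0, 0, 0], "scale": [1, 1, 1]})
```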
- Referring now to FIG. 4, a flow diagram depicts a process of an example embodiment for initial discovery of a virtual object anchor and virtual object rendering relative to the virtual object anchor. The process starts at step 78 with initiation of wireless module pairing, such as Bluetooth, between the virtual object anchor 54 and a head mounted display 32, such as by a button press or other user action. Pairing between the head mounted display and the virtual object anchor includes a transfer of information that aids each device in identifying the other. For instance, at step 80, with pairing, an infrared sensor initiates to emit light from virtual object anchor 54 that helps virtual object anchor 54 detect the position of head mounted display 32. At step 82, virtual object anchor 54 calculates a distance and angle of head mounted display 32 from virtual object anchor 54 and stores the calculation locally for access by head mounted display 32 and/or other devices that desire to know the spatial relationship of devices in the area of virtual object anchor 54. For example, virtual object anchor 54 in one embodiment continues to track head mounted display 32 as it moves and updates the positions stored in virtual object anchor 54. At step 84, pairing through head mounted display 32 is completed with cooperation of a host information handling system 10. At step 86, host information handling system 10 translates the distance and angle data from the virtual object anchor 54 position to real world coordinates, such as true north or an axis derived from a visually distinctive indication of a reference axis on the exterior of virtual object anchor 54, such as an LED. At step 88, based upon the infrared sensor position, if the virtual object is in the field of view of the head mounted display 32, the virtual object is presented in the head mounted display at a location indicated by the offset retrieved from virtual object anchor 54 during pairing. For instance, if the position indicated by the offset is over the top of the virtual object anchor 54, at step 90 the virtual object is presented in the head mounted display over virtual object anchor 54. In one example embodiment, the virtual object is placed initially over virtual object anchor 54 using the model database and oriented to the head mounted display until offset information is retrieved from virtual object anchor 54 and applied by the host information handling system.
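Step 86, translating the anchor-measured distance and angle into shared real world coordinates, is essentially a polar-to-Cartesian conversion referenced to true north (or to the LED-marked axis). A small sketch of that conversion, under the assumptions of a flat desktop plane and an azimuth measured clockwise from north, is shown below.

```python
import math

def distance_angle_to_world(distance_m, azimuth_deg,
                            anchor_world_xy=(0.0, 0.0), height_m=0.0):
    """Translate a (distance, angle) pair measured at the anchor into shared coordinates.

    Assumptions for the sketch: azimuth is measured clockwise from true north
    (the compass reference or LED axis), the desktop is treated as a plane at
    z = 0, and `height_m` carries any separately sensed elevation.
    """
    theta = math.radians(azimuth_deg)
    north = distance_m * math.cos(theta)   # +y toward true north
    east = distance_m * math.sin(theta)    # +x toward east
    ax, ay = anchor_world_xy
    return (ax + east, ay + north, height_m)

# Example: a display sensed 1.2 m away at 90 degrees (due east of the anchor).
print(distance_angle_to_world(1.2, 90.0))   # approximately (1.2, 0.0, 0.0)
```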
- Referring now to FIG. 5, a flow diagram depicts a process of an example embodiment for display object calibration of a virtual object location. The process starts at step 92 with transmission of position information from virtual object anchor 54 to host information handling system 10 with conventional wireless or wired communication. At step 94, the host information handling system monitors position data coming from virtual object anchor 54. In the example embodiment, at step 96, virtual object anchor 54's top surface starts to show a distinct and non-symmetrical identifier, such as one distinguishable by a camera of head mounted display 32. At step 98, head mounted display inside-out tracking locates virtual object anchor 54 if virtual object anchor 54 is within the field of view of a sensor of head mounted display 32. At step 100, head mounted display 32 monitors data coming through networked communications and from visual indications presented at virtual object anchor 54. At step 102, the host information handling system applies the position information derived from head mounted display 32 and virtual object anchor 54 to calibrate the position of the virtual object presentation within head mounted display 32. For example, an auto-correction calibration for drift in the virtual object anchor's inertial monitoring unit or infrared sensor position detection provides a correction that helps head mounted display 32 more precisely present the virtual object. At step 104, virtual object anchor 54 calculates the distance and angle of virtual object anchor 54 to head mounted display 32 to provide a basis for comparison between the resolution of sensors of virtual object anchor 54 and head mounted display 32. Continuous monitoring reduces drift and variation in the presentation of the virtual object, as multiple head mounted displays should have the virtual object presented with the same offset at the same position, orientation and scale.
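One common way to implement the drift correction in step 102 is to blend the anchor-reported position with the position the display's camera derives from the non-symmetrical identifier, trusting the optical fix more when it is available. The weighted-blend sketch below is only one such strategy and is not prescribed by the specification.

```python
def calibrate_position(anchor_reported, camera_observed, optical_weight=0.8):
    """Blend two estimates of the anchor's position to correct sensor drift.

    `anchor_reported` comes from the anchor's inertial and infrared tracking;
    `camera_observed` comes from the head mounted display spotting the
    non-symmetrical identifier. The weight is an assumed tuning value.
    """
    if camera_observed is None:
        return tuple(anchor_reported)       # no optical fix yet; use anchor data as-is
    return tuple(
        optical_weight * c + (1.0 - optical_weight) * a
        for a, c in zip(anchor_reported, camera_observed)
    )

# Example: inertial data has drifted 5 cm; the camera pulls it most of the way back.
print(calibrate_position((0.05, 0.00, 0.00), (0.00, 0.00, 0.00)))  # (0.01, 0.0, 0.0)
```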
- Referring now to FIG. 6, a flow diagram depicts a process of an example embodiment for tracking gesture inputs made at a virtual object. The process starts at step 104 by tracking virtual gestures performed by an end user of head mounted display 32, such as zoom, movement laterally, longitudinally and vertically, and changes to orientation relative to virtual object anchor 54. In one embodiment, an information handling system supporting the head mounted display may alter the virtual object responsive to the gestures or may wait until the offset stored in virtual object anchor 54 is updated. At step 106, virtual object anchor 54 collects gesture data from all of the users that are interacting with the virtual object. In one embodiment, sensors of virtual object anchor 54 may directly detect gestures rather than receiving gesture information from head mounted display gesture detection. At step 108, virtual object anchor 54 synchronizes gestures performed by the end users to the location, rotation and size of the virtual object relative to virtual object anchor 54, such as by resolving multiple gestures by multiple users to generate an appropriate input that is reflected in the offset stored on virtual object anchor 54. The offset that reflects the resolved position, orientation and scale of the virtual object is communicated to all information handling systems involved in the collaboration. At step 110, based upon continuous infrared or other sensor tracking data provided by virtual object anchor 54, information handling system 10 renders the virtual object in the head mounted display to present the virtual object at the location indicated by the offset received from virtual object anchor 54, such as according to a common coordinate system shared with other collaborating information handling systems relative to virtual object anchor 54. At step 112, all head mounted displays 32 participating in the collaboration present the virtual object in its new location/position relative to virtual object anchor 54.
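Step 108's resolution of concurrent gestures from multiple users into a single offset is left open by the specification; one straightforward reading is to apply gesture deltas in timestamp order on top of the current offset and bump a version counter. The sketch below illustrates that reading with invented field names.

```python
def resolve_gestures(current_offset, gestures):
    """Fold timestamp-ordered gesture deltas into one updated offset.

    `current_offset` and the return value use the nine-value layout
    (position, rotation, scale); each gesture is assumed to carry a
    "timestamp" and optional "d_position", "d_rotation", "d_scale" deltas.
    """
    position = list(current_offset["position"])
    rotation = list(current_offset["rotation"])
    scale = list(current_offset["scale"])

    for gesture in sorted(gestures, key=lambda g: g["timestamp"]):
        for i in range(3):
            position[i] += gesture.get("d_position", (0, 0, 0))[i]
            rotation[i] += gesture.get("d_rotation", (0, 0, 0))[i]
            scale[i] *= 1.0 + gesture.get("d_scale", (0, 0, 0))[i]

    return {"position": position, "rotation": rotation, "scale": scale,
            "version": current_offset.get("version", 0) + 1}

# Example: one user nudges the object up while another rotates it slightly.
offset = {"position": [0, 0, 0], "rotation": [0, 0, 0], "scale": [1, 1, 1], "version": 3}
gestures = [
    {"timestamp": 2, "d_rotation": (0, 0.1, 0)},
    {"timestamp": 1, "d_position": (0, 0.05, 0)},
]
print(resolve_gestures(offset, gestures))
```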
- Referring now to FIG. 7, a flow diagram depicts a process of an example embodiment for tracking virtual object position changes related to virtual object anchor movement. The process starts at step 114 with virtual object anchor 54 physically moved to a new location, such as to the left or right on a desktop. At step 116, virtual object anchor 54 calculates the new distance and angle of virtual object anchor 54 from head mounted displays 32, such as based upon the infrared sensor. At step 118, each host information handling system 10 translates the new distance and angle information received from virtual object anchor 54 to determine a position of the virtual object in shared coordinates. At step 120, based upon the new location of virtual object anchor 54 and the location of the virtual object relative to virtual object anchor 54, the host information handling systems render the virtual object in the head mounted display when the virtual object is within the field of view of the head mounted display. At step 122, all head mounted displays 32 collaborating through virtual object anchor 54 present the virtual object relative to virtual object anchor 54's new location and orientation. - Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/909,108 US10403047B1 (en) | 2018-03-01 | 2018-03-01 | Information handling system augmented reality through a virtual object anchor |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/909,108 US10403047B1 (en) | 2018-03-01 | 2018-03-01 | Information handling system augmented reality through a virtual object anchor |
Publications (2)
Publication Number | Publication Date |
---|---|
US10403047B1 US10403047B1 (en) | 2019-09-03 |
US20190272674A1 (en) | 2019-09-05 |
Family
ID=67768759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/909,108 Active 2038-03-22 US10403047B1 (en) | 2018-03-01 | 2018-03-01 | Information handling system augmented reality through a virtual object anchor |
Country Status (1)
Country | Link |
---|---|
US (1) | US10403047B1 (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018156809A1 (en) * | 2017-02-24 | 2018-08-30 | Masimo Corporation | Augmented reality system for displaying patient data |
EP3585254B1 (en) | 2017-02-24 | 2024-03-20 | Masimo Corporation | Medical device cable and method of sharing data between connected medical devices |
CN110809804B (en) | 2017-05-08 | 2023-10-27 | 梅西莫股份有限公司 | System for pairing a medical system with a network controller using an adapter |
US10810773B2 (en) * | 2017-06-14 | 2020-10-20 | Dell Products, L.P. | Headset display control based upon a user's pupil state |
US11120639B1 (en) | 2020-04-24 | 2021-09-14 | Microsoft Technology Licensing, Llc | Projecting telemetry data to visualization models |
US11756225B2 (en) * | 2020-09-16 | 2023-09-12 | Campfire 3D, Inc. | Augmented reality collaboration system with physical device |
US11176756B1 (en) | 2020-09-16 | 2021-11-16 | Meta View, Inc. | Augmented reality collaboration system |
KR102299943B1 (en) * | 2020-12-29 | 2021-09-09 | 주식회사 버넥트 | Method and system for augmented reality content production based on attribute information application |
US12028507B2 (en) * | 2021-03-11 | 2024-07-02 | Quintar, Inc. | Augmented reality system with remote presentation including 3D graphics extending beyond frame |
USD1029076S1 (en) | 2022-03-10 | 2024-05-28 | Campfire 3D, Inc. | Augmented reality pack |
US12056269B2 (en) * | 2022-12-23 | 2024-08-06 | Htc Corporation | Control device and control method |
CN116700693B (en) * | 2023-08-02 | 2023-10-27 | 北京格如灵科技有限公司 | Hololens anchor point positioning storage method, system, equipment and medium |
2018
- 2018-03-01 US US15/909,108 patent/US10403047B1/en active Active
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11562492B2 (en) * | 2018-05-18 | 2023-01-24 | Ebay Inc. | Physical object boundary detection techniques and systems |
US11830199B2 (en) | 2018-05-18 | 2023-11-28 | Ebay Inc. | Physical object boundary detection techniques and systems |
US20190385372A1 (en) * | 2018-06-15 | 2019-12-19 | Microsoft Technology Licensing, Llc | Positioning a virtual reality passthrough region at a known distance |
US11694390B2 (en) * | 2018-06-25 | 2023-07-04 | Koninklijke Philips N.V. | Apparatus and method for generating images of a scene |
US11087538B2 (en) | 2018-06-26 | 2021-08-10 | Lenovo (Singapore) Pte. Ltd. | Presentation of augmented reality images at display locations that do not obstruct user's view |
US20200020163A1 (en) * | 2018-07-11 | 2020-01-16 | The Boeing Company | Augmented Reality System with an Active Portable Anchor |
US10839604B2 (en) * | 2018-07-11 | 2020-11-17 | The Boeing Company | Augmented reality system with an active portable anchor |
US10896546B2 (en) * | 2018-07-11 | 2021-01-19 | The Boeing Company | Augmented reality system with an active portable anchor |
US11936733B2 (en) | 2018-07-24 | 2024-03-19 | Magic Leap, Inc. | Application sharing |
US11393170B2 (en) * | 2018-08-21 | 2022-07-19 | Lenovo (Singapore) Pte. Ltd. | Presentation of content based on attention center of user |
US11928784B2 (en) | 2018-09-25 | 2024-03-12 | Magic Leap, Inc. | Systems and methods for presenting perspective views of augmented reality virtual object |
US11094127B2 (en) * | 2018-09-25 | 2021-08-17 | Magic Leap, Inc. | Systems and methods for presenting perspective views of augmented reality virtual object |
US11651565B2 (en) | 2018-09-25 | 2023-05-16 | Magic Leap, Inc. | Systems and methods for presenting perspective views of augmented reality virtual object |
US11232635B2 (en) * | 2018-10-05 | 2022-01-25 | Magic Leap, Inc. | Rendering location specific virtual content in any location |
US11244511B2 (en) * | 2018-10-18 | 2022-02-08 | Guangdong Virtual Reality Technology Co., Ltd. | Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device |
US11276240B1 (en) * | 2019-01-31 | 2022-03-15 | Splunk Inc. | Precise plane detection and placement of virtual objects in an augmented reality environment |
US10891792B1 (en) * | 2019-01-31 | 2021-01-12 | Splunk Inc. | Precise plane detection and placement of virtual objects in an augmented reality environment |
US11657582B1 (en) * | 2019-01-31 | 2023-05-23 | Splunk Inc. | Precise plane detection and placement of virtual objects in an augmented reality environment |
US11341676B2 (en) * | 2019-02-05 | 2022-05-24 | Google Llc | Calibration-free instant motion tracking for augmented reality |
US11721039B2 (en) | 2019-02-05 | 2023-08-08 | Google Llc | Calibration-free instant motion tracking for augmented reality |
US12001751B2 (en) | 2019-04-18 | 2024-06-04 | Apple Inc. | Shared data and collaboration for head-mounted devices |
US11682206B2 (en) | 2019-06-27 | 2023-06-20 | Intel Corporation | Methods and apparatus for projecting augmented reality enhancements to real objects in response to user gestures detected in a real environment |
US11030459B2 (en) * | 2019-06-27 | 2021-06-08 | Intel Corporation | Methods and apparatus for projecting augmented reality enhancements to real objects in response to user gestures detected in a real environment |
US12002162B2 (en) * | 2019-08-14 | 2024-06-04 | Korea Institute Of Science And Technology | Method and apparatus for providing virtual contents in virtual space based on common coordinate system |
US20220207832A1 (en) * | 2019-08-14 | 2022-06-30 | Korea Institute Of Science And Technology | Method and apparatus for providing virtual contents in virtual space based on common coordinate system |
US11816757B1 (en) * | 2019-12-11 | 2023-11-14 | Meta Platforms Technologies, Llc | Device-side capture of data representative of an artificial reality environment |
US12079938B2 (en) | 2020-02-10 | 2024-09-03 | Magic Leap, Inc. | Dynamic colocation of virtual content |
US11861803B2 (en) | 2020-02-14 | 2024-01-02 | Magic Leap, Inc. | Session manager |
EP4103999A4 (en) * | 2020-02-14 | 2023-08-02 | Magic Leap, Inc. | Session manager |
US12100207B2 (en) | 2020-02-14 | 2024-09-24 | Magic Leap, Inc. | 3D object annotation |
US12112098B2 (en) | 2020-02-14 | 2024-10-08 | Magic Leap, Inc. | Tool bridge |
US11893301B2 (en) | 2020-09-10 | 2024-02-06 | Snap Inc. | Colocated shared augmented reality without shared backend |
US11909791B2 (en) | 2020-09-24 | 2024-02-20 | Apple Inc. | Synchronization in a multiuser experience |
WO2022066459A1 (en) * | 2020-09-24 | 2022-03-31 | Sterling Labs Llc | Synchronization in a multiuser experience |
US20220215342A1 (en) * | 2021-01-04 | 2022-07-07 | Polaris Industries Inc. | Virtual collaboration environment |
US11875080B2 (en) | 2021-02-08 | 2024-01-16 | Beijing SuperHexa Century Technology CO. Ltd. | Object sharing method and apparatus |
EP4040268A1 (en) * | 2021-02-08 | 2022-08-10 | Beijing SuperHexa Century Technology Co. Ltd. | Object sharing method and apparatus |
WO2023014618A1 (en) * | 2021-08-06 | 2023-02-09 | Apple Inc. | Object placement for electronic devices |
US12051163B2 (en) * | 2022-08-25 | 2024-07-30 | Snap Inc. | External computer vision for an eyewear device |
Also Published As
Publication number | Publication date |
---|---|
US10403047B1 (en) | 2019-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10403047B1 (en) | Information handling system augmented reality through a virtual object anchor | |
EP3997552B1 (en) | Virtual user interface using a peripheral device in artificial reality environments | |
US10565725B2 (en) | Method and device for displaying virtual object | |
EP3250983B1 (en) | Method and system for receiving gesture input via virtual control objects | |
EP2656181B1 (en) | Three-dimensional tracking of a user control device in a volume | |
US9591295B2 (en) | Approaches for simulating three-dimensional views | |
US10665014B2 (en) | Tap event location with a selection apparatus | |
TW201911133A (en) | Controller tracking for multiple degrees of freedom | |
US10685485B2 (en) | Navigation in augmented reality environment | |
US11727648B2 (en) | Method and device for synchronizing augmented reality coordinate systems | |
US20180032230A1 (en) | Information processing method and system for executing the information processing method | |
US20170322700A1 (en) | Haptic interface for population of a three-dimensional virtual environment | |
US11301198B2 (en) | Method for information display, processing device, and display system | |
US11321920B2 (en) | Display device, display method, program, and non-temporary computer-readable information storage medium | |
Piérard et al. | I-see-3d! an interactive and immersive system that dynamically adapts 2d projections to the location of a user's eyes | |
TWI766258B (en) | Method for selecting interactive objects on display medium of device | |
WO2022085395A1 (en) | Computer, method, and program | |
US11687309B2 (en) | Geospatial display configuration | |
US20240127006A1 (en) | Sign language interpretation with collaborative agents |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FEPP | Fee payment procedure |
Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., A Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:046366/0014 Effective date: 20180529 Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLAT Free format text: PATENT SECURITY AGREEMENT (CREDIT);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:046286/0653 Effective date: 20180529 Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA Free format text: PATENT SECURITY AGREEMENT (CREDIT);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:046286/0653 Effective date: 20180529 Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS COLLATERAL AGENT, TEXAS Free format text: PATENT SECURITY AGREEMENT (NOTES);ASSIGNORS:DELL PRODUCTS L.P.;EMC CORPORATION;EMC IP HOLDING COMPANY LLC;REEL/FRAME:046366/0014 Effective date: 20180529 |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., T Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223 Effective date: 20190320 Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223 Effective date: 20190320 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
AS | Assignment |
Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001 Effective date: 20200409 |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 046286 FRAME 0653;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058298/0093 Effective date: 20211101 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST AT REEL 046286 FRAME 0653;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058298/0093 Effective date: 20211101 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST AT REEL 046286 FRAME 0653;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058298/0093 Effective date: 20211101 |
|
AS | Assignment |
Owner name: EMC IP HOLDING COMPANY LLC, TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (046366/0014);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060450/0306 Effective date: 20220329 Owner name: EMC CORPORATION, MASSACHUSETTS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (046366/0014);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060450/0306 Effective date: 20220329 Owner name: DELL PRODUCTS L.P., TEXAS Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (046366/0014);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:060450/0306 Effective date: 20220329 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |