
WO2015066734A1 - Stereoscopic display - Google Patents

Stereoscopic display Download PDF

Info

Publication number
WO2015066734A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
stereoscopic
virtual
display
glasses
Prior art date
Application number
PCT/US2014/072419
Other languages
French (fr)
Inventor
David Woods
Original Assignee
David Woods
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/106,766 external-priority patent/US10116914B2/en
Priority claimed from US14/547,555 external-priority patent/US9883173B2/en
Application filed by David Woods filed Critical David Woods
Publication of WO2015066734A1 publication Critical patent/WO2015066734A1/en

Links

Classifications

    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 30/00 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B 30/20 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B 30/22 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type
    • G02B 30/23 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type using wavelength separation, e.g. using anaglyph techniques
    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 30/00 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B 30/20 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B 30/26 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type
    • G02B 30/30 — Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the autostereoscopic type involving parallax barriers

Definitions

  • the present invention relates to a stereoscopic image display technique by which a 3D image display may produce a stereoscopic image that takes the viewpoint into account.
  • a stereoscopic image may be created which appears to remain at approximately the same location in space as viewpoint changes.
  • a first image for the left eye and a second image for the right eye must arrive at the two eyes in a manner that allows them to be distinguished from each other.
  • various methods are explained as follows. These images shall be referred to as first or left image and second or right image.
  • Prior art displays which may be viewed as 3D images generally fall into one of four methods for display of 3D imagery.
  • the first method employs polarized light images in which the polarization planes for the left and right images are rotated by approximately 90 degrees. These polarized left and right images pass through polarized spectacles so that the corresponding image reaches the left and right eye. A viewer who tilts his or her head degrades the 3D stereoscopic image.
  • the second method employs liquid crystal shutter spectacles, which open and close left and right shutters so as to allow the corresponding image to reach the correct eye.
  • Prior art employing liquid crystal shutters does not account for a change in viewing location from one viewer to the next. Therefore a 3D image would appear to be at different locations in space when viewed from differing viewpoints. Thus if one viewer pointed at a 3D stereoscopic object, a viewer at a second viewing location would have difficulty determining what is being pointed at.
  • a third method employs a lenticular screen provided between a display and both eyes.
  • a propagating direction of light is refracted via lens on the lenticular screen, whereby different images arrive at both eyes, respectively.
  • a fourth method requires no spectacles and utilizes parallax barriers so that only the proper image is seen by each eye.
  • This technology shall be referred to as auto stereoscopic.
  • Prior art applying this method required that the viewer remain in an optimal location for 3D viewing. Other spectators may not be able to see 3D imagery clearly. When it does account for tilting of the head or differing viewpoints it is limited to only one viewer. It is not possible for a second viewer to obtain a 3D image as well unless the second viewpoint is closely aligned with the first viewpoint. This limits prior 3D auto stereoscopic displays to smaller devices of the handheld variety. This technology is also unable to provide for a second viewer to determine which 3D stereoscopic object is being pointed at by a first viewer.
  • parallax barriers at different locations of the display may have different pitch angles in relation to the display surface at the same time.
  • the parallax barriers shall also be referred to as electronically configurable light guiding louvers, or louvers.
  • a larger display may be viewed auto stereoscopically.
  • louvers on opposite sides of the display would guide the light from the display at angles of slightly differing directions. In this way light from each location of the display is guided towards the intended viewpoint.
  • another embodiment of the present invention may apply these electronically configurable light guiding louvers in more than one axis concurrently. Thus when one viewer tilts his head the light passing through the louvers is guided to the intended viewing location and is blocked or shielded from other viewing locations.
  • the instant invention employs a 3D stereoscopic method combined with position tracking technology to produce a 3D stereoscopic image which remains in approximately the same location in space even when viewed from various perspectives.
  • This provides a 3D image which appears to be in approximately the same location regardless of the viewing location. The viewer may move towards or away, up or down, left or right, yet the image remains in approximately the same location in space. It moves very little as the viewpoint changes. However, the 3D stereoscopic image will change to reflect how the viewer would see the 3D objects in the image from different perspectives.
  • This 3D stereoscopic image that remains approximately fixed in spatial location may also be referred to as a virtually real image, virtual real image or virtual image.
  • Position tracking or position sensing shall be used interchangeably and mean the same thing in this document.
  • the sensors on the display in combination with a computing device may detect when an external object or pointer is in close proximity to the virtual location of a 3D stereographic object whose position is stabilized in space. Thus stabilized, it becomes possible for viewers to interact with the 3D stereoscopic image.
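  • The proximity test just described can be sketched as follows; the coordinate convention (pointer and virtual object both tracked as (x, y, z) positions in the display's frame of reference), the function name, and the 2 cm threshold are illustrative assumptions, not details from the specification:

```python
import math

def near_virtual_object(pointer_xyz, object_xyz, threshold_cm=2.0):
    """Report whether a tracked pointer is within threshold_cm of the
    stabilized virtual location of a 3D stereographic object.
    All coordinates are assumed to share the display's frame of
    reference; the threshold is an illustrative value."""
    return math.dist(pointer_xyz, object_xyz) <= threshold_cm
```

A computing device could run such a check every frame against each tracked pointer or body part to decide when an interaction event has occurred.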
  • Prior art employing gestures or voice is limited in scope and does not allow the user to manipulate and interact with a stereoscopic 3D virtual image. This shall be further elucidated in the description of the instant invention. To accomplish this goal, the perspective position of the viewpoint must be sensed and measured. From this information an image is created which is what an observer located at this position would see if the real object were present.
  • This viewpoint is what one would expect to see if viewed in a monocular fashion (i.e. from one eye with the other closed). Therefore, for each viewing location (or eye location) a new image must be created. So the sensor must be able to calculate the position of each eye or lens of the viewer(s). The created images should take into account both the angular position of the viewpoint and the distance.
  • the viewing angle and distance shall be referred to as viewing perspective, or viewpoint.
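  • A minimal sketch of quantifying a sensed eye location as such a viewing perspective (angle plus distance) might look like the following; placing the display center at the origin and expressing angles in degrees are conventions chosen for illustration:

```python
import math

def viewing_perspective(eye_xyz, display_center=(0.0, 0.0, 0.0)):
    """Convert a sensed eye position into a viewing perspective:
    azimuth and elevation angles (in degrees) plus distance from the
    display center, the quantities each monocular image is created
    from."""
    dx = eye_xyz[0] - display_center[0]
    dy = eye_xyz[1] - display_center[1]
    dz = eye_xyz[2] - display_center[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    azimuth = math.degrees(math.atan2(dx, dz))          # left/right angle
    elevation = math.degrees(math.asin(dy / distance))  # up/down angle
    return azimuth, elevation, distance
```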
  • the viewer is able to interact with the stabilized virtual image. This enables many applications. Input devices such as keyboards, remote controllers, musical instruments, virtual caves, virtual simulators and interactive 3D gaming systems are a few such applications, however the instant invention is not meant to be limited to these systems or devices. Some of these systems or devices will be further described in the following detailed description.
  • Prior art that utilizes gestures to interact with a 2D image display is common. However such prior art does not allow interaction with a 3D virtual image.
  • the image may be completely computer generated or interpolated from photographic images taken from various perspectives.
  • This same technology may be adapted to create stereographic pairs of images, which when viewed by the stereographic methods of the instant invention may produce the desired stabilized 3D image.
  • current motion picture creation employs sensors which track location of body parts that are then used to create images. Such sensing technology could be used to track the eyes or lenses of glasses. Some location sensing methods employ small round objects that emit light, while others do not. These sensors may also be used to track the location of pointers, or body parts. They may also be used to track wearable devices to include, but not be limited to gloves, glasses, and hats.
  • Wearable devices may or may not include objects or markers, which may emit or reflect light to the sensors. The emitted or reflected light may be coded by pulses, frequency, or other methods that enable the sensors to differentiate locations. The light may be visible, infrared, or of other frequencies. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve tracking of objects with the intent of improving speed and accuracy.
  • anaglyph glasses are employed.
  • the left or first image is color coordinated to pass through the left or first lens of the anaglyph glasses.
  • the right or second image is color coordinated to pass through the right or second lens of the anaglyph glasses. In this way the viewer may see a 3D stereographic image.
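  • As an illustrative sketch (the pixel format and function name are assumptions, not details from the specification), a red-cyan anaglyph frame can be composed by taking the red channel from the first image and the green and blue channels from the second image, so that each lens passes only its intended image:

```python
def red_cyan_anaglyph(left_pixels, right_pixels):
    """Combine left and right images into one anaglyph frame.
    Images are equal-length flat lists of (r, g, b) tuples: the red
    channel carries the left image (passes the red lens) and the
    green/blue channels carry the right image (pass the cyan lens)."""
    return [(l[0], r[1], r[2]) for l, r in zip(left_pixels, right_pixels)]
```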
  • passively polarized glasses are employed.
  • the left or first image has polarization coordinated to pass through the left or first lens of the passively polarized glasses.
  • the second or right image has polarization coordinated to pass through the right or second lens of the passively polarized glasses. In this way the viewer may see a 3D stereographic image.
  • Another embodiment employs a combination of anaglyph and passively polarized glasses.
  • the instant invention may also display 3D stereographic images in the manner of prior art, whereby the first and second images do not use information from the sensors to vary the image based on viewpoint location.
  • This method shall be referred to as prior art 3D.
  • This method may be employed for viewing medium such as movies or games which have been created for prior art 3D.
  • the instant invention enables switching between 2D and 3D modes. In 2D mode multiple viewers may view multiple images. So two or more viewers may use the display to watch different things.
  • the display of the instant invention may be presented in portrait or landscape mode.
  • the landscape or portrait mode may be manually or automatically changed by means of an orientation sensor of various types. So a tablet, phone, or other handheld device may use the display of this invention.
  • a left or first viewing perspective is sensed and location quantified in space.
  • a left or first image is created corresponding to what would be seen by a viewer with said left or first perspective.
  • the left or first image is displayed in conjunction with technology that limits the viewing to the left or first perspective. This may be accomplished via anaglyph glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses.
  • a right or second viewing perspective is sensed and location quantified in space.
  • a right or second image is created corresponding to what would be seen by a viewer with said right or second perspective.
  • the right or second image is displayed in conjunction with technology that limits the viewing to the right or second perspective. This may be accomplished via anaglyph glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses.
  • the process is repeated for each viewer in sequence in a continuous loop.
  • the sequence may vary in order so long as the image is coordinated with the stereoscopic method so that the correct image reaches the intended eye.
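  • The continuous per-viewer loop can be sketched as a round-robin schedule over every (viewer, eye) viewpoint, with each display frame devoted to exactly one viewpoint; the names below are illustrative:

```python
from itertools import cycle

def frame_sequence(viewers, n_frames):
    """Time-multiplex the display: cycle through every (viewer, eye)
    pair so each frame carries the image created for exactly one
    viewpoint, repeating in a continuous loop."""
    viewpoints = [(v, eye) for v in viewers for eye in ("left", "right")]
    schedule = cycle(viewpoints)
    return [next(schedule) for _ in range(n_frames)]
```

In practice the stereoscopic method (for example, shutter glasses) must open the matching shutter in sync with each frame so the correct image reaches the intended eye.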
  • the display may be a liquid crystal display device, an electroluminescent display device, an organic light emitting display device, a plasma display device, or a projected display image.
  • this list of display types is for illustrative purposes only and is not intended to be limiting in any way. There are many ways of accomplishing this end. There are endless variations of placement of parts, methods of generating image patterns, orderings of parts, and/or display images which accomplish the same objective. One skilled in the art will be able to design and construct many variations, which include but are not limited to those above. Hence the invention is what is in the claims and includes more than the embodiments described below.
  • FIGURE 1 is a schematic diagram illustrating prior art in which the 3D stereoscopic image's virtual location moves as viewpoint shifts.
  • FIGURE 2 is a schematic diagram illustrating prior art in which the 3D virtual image is unable to be viewed stereoscopically when the viewer's head is angularly tilted in relation to the display.
  • FIGURE 3 is a schematic diagram illustrating prior art 3D auto stereoscopic displays which limit viewing location.
  • FIGURE 4 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed and viewable as the viewer's head is angularly tilted in relation to the display.
  • FIGURE 5 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed in space as the viewing location is moved closer to or farther from the display.
  • FIGURE 6 is a schematic diagram illustrating an embodiment where the 3D virtual object is seen from different viewpoints yet remains fixed in space.
  • FIGURE 7 is a schematic diagram illustrating a flow diagram of an embodiment.
  • FIGURE 8 is a schematic diagram illustrating an embodiment applying viewpoint sensors.
  • FIGURE 9 is a schematic diagram illustrating an embodiment where images may be displayed as time progresses.
  • FIGURE 10 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint location sensing where images may be displayed as time progresses.
  • FIGURE 11 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
  • FIGURE 12 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
  • FIGURE 13 is a schematic diagram illustrating an embodiment of anaglyph glasses.
  • FIGURE 14 is a schematic diagram illustrating an embodiment of passively polarized glasses.
  • FIGURE 15 is a schematic diagram illustrating an embodiment of passively polarized anaglyph glasses.
  • FIGURE 16 is a schematic diagram illustrating prior art directional louvers and also an embodiment applying directional louvers in both horizontal and vertical directions.
  • FIGURE 17 is a schematic diagram illustrating an embodiment applying directional louvers in both horizontal and vertical directions and applying viewpoint location sensing.
  • FIGURE 18 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
  • FIGURE 19 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
  • FIGURE 20 is a schematic diagram illustrating an embodiment applying louvers
  • FIGURE 21 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
  • FIGURE 22 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
  • FIGURE 23 is a schematic diagram illustrating an embodiment in portrait and landscape modes.
  • FIGURE 24 is a schematic diagram illustrating an embodiment applying position sensors.
  • FIGURE 25 is a schematic diagram illustrating an embodiment applying position sensors and illustrating user interaction with the virtual image.
  • FIGURE 26 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual gaming system.
  • FIGURE 27 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual cave.
  • FIGURE 28 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual simulator.
  • In FIG. 1 of the drawings there is shown an illustration of prior art.
  • a 3D stereoscopic image is presented to viewers positioned at A and B.
  • the locations of the left or first image (item 160) and the right or second image (item 170) are fixed on the image display (item 114) whether viewed from position A or position B.
  • the result is 3D image object locations 180 and 182, which differ in space. Each tends to be more in front of the respective viewer.
  • In FIG. 2 of the drawings there is shown an illustration of prior art. It is apparent that changing the viewing angle results in a less than optimal 3D image, or possibly failure of 3D imaging.
  • In Fig. 3 of the drawings there is shown an illustration of prior art, a 3D stereoscopic device which employs conventional louvers to aim or guide the light from an image to the viewer's eyes.
  • the limitation is because the louvers are fixed and not configurable based on viewing location. Therefore the viewing location is limited.
  • In Fig. 4 of the drawings there is shown an illustration of an embodiment of the instant invention.
  • Sensors or markers locate the viewpoint perspectives. These sensors may be passive receivers, or may be emissive and receptive of signals, or of other methods to determine viewpoint locations. Facial or object recognition may be used in lieu of sensors or markers to determine viewpoint locations. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve tracking of objects with the intent of improvement of speed and accuracy. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
  • the first or left displayed image is a function of the position of the left eye of viewer A.
  • the second or right displayed image is a function of the position of the right eye of viewer A.
  • the viewer located at B has his head tilted in relation to the display (item 114).
  • the first or left displayed image (item 162) is a function of the position of the left eye of viewer located at B.
  • the second or right displayed image is a function of the position of the right eye of viewer located at B.
  • the 3D stereoscopic object image (item 190) is now seen in approximately the same location in space from both viewpoints A and B.
  • the viewer located at B is able to see the 3D stereoscopic image in approximately the same location in space as when the viewer is located at A, even though his head is tilted with respect to the display.
  • the 3D stereographic image's location remains approximately fixed in space. This allows its fixed position coordinates to be determined. These may then be compared with the sensed location of a viewer's body part, wearable object or pointer. In this manner it becomes possible for one or more users to interact with the 3D stereographic objects or images.
  • Other position sensing or tracking technologies such as magnetic, accelerometers, inertial, or gravitation sensing may be employed with the intent of improvement of speed and accuracy.
  • In FIG. 5 of the drawings there is shown an illustration of an embodiment of the present invention. This illustrates the fact that, in addition to the viewing angle, the viewing distance is also measured in order to create the correct display image presentations (items 260 and 270). In this manner both viewpoint 1 (item 220) and viewpoint 2 (item 222) are able to see the virtual object image (item 292) in approximately the same location in space.
  • Sensors (item 116) locate the viewpoint perspectives. These sensors may be passive receivers, may be emissive and receptive of signals, or may use other methods to determine viewpoint locations. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
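  • The underlying geometry can be sketched as a projection: for the virtual object to stay fixed in space, each display image must place the object where the line of sight from the sensed eye position through the fixed virtual point crosses the display surface. Assuming the display plane lies at z = 0 and all coordinates share one frame (conventions chosen purely for illustration):

```python
def screen_point(eye, virtual_point):
    """Project a spatially fixed virtual point onto the display plane
    (z = 0) along the ray from the eye through the point, giving the
    on-screen location that keeps the virtual object stationary for
    this particular viewpoint."""
    ex, ey, ez = eye
    px, py, pz = virtual_point
    t = ez / (ez - pz)  # ray parameter where the line crosses z = 0
    return (ex + t * (px - ex), ey + t * (py - ey))
```

Moving the eye changes the computed screen point, which is why the displayed presentations differ per viewpoint while the virtual object appears fixed in space.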
  • In FIG. 6 of the drawings there is shown an illustration of an embodiment showing how an object might appear when viewed from different perspectives in the instant invention.
  • In Fig. 7 of the drawings a flow diagram of an embodiment of the instant invention is presented, which shows a process for creating 3D stereoscopic images which are seen in the same location in space when viewed from different perspectives.
  • One means to accomplish this is for the sensors to track an object or use facial recognition. Magnetic, acceleration, and gravitational data may also be employed to determine the first and second viewpoints.
  • the viewpoints correspond to the positions of the first or left and second or right eyes.
  • Other methods for locating these viewpoint locations include, but are not limited to, markers that may reflect or transmit light and/or sound, or create a magnetic field. These markers may be located on the face, on the body, or on a wearable object.
  • the methods given to recognize and locate a pair of eyes, glasses or facial feature viewpoints are for illustrative purposes only and are not meant to be limiting in any way.
  • In FIG. 8 of the drawings there is shown an illustration of an embodiment of the present invention.
  • An object image (item 128), in this case a cylinder, would be presented as different images to perspective viewing locations represented by items 108 and 120.
  • In FIG. 9 of the drawings there is shown an illustration of an embodiment of the present invention.
  • This illustration shows progression through time.
  • Item 200 shows how, as the viewing location is changed, the 3D stereoscopic image's location remains unchanged in space.
  • Item 240 shows how this is accomplished by enabling each prospective viewpoint to see an image created based on the viewpoint's perspective as the viewing location differs.
  • In FIG. 10 of the drawings there is shown an illustration of an embodiment of the present invention.
  • This illustration shows progression through time.
  • Item 200 shows how, as the viewing location is changed, the 3D stereoscopic image's location remains unchanged in space.
  • Item 240 shows how this is accomplished by enabling each prospective viewpoint to see an image created based on the viewpoint's perspective as the viewing location differs.
  • item 240 shows employment of shutter glasses to accomplish this effect.
  • In FIGS. 11 and 12 of the drawings there is shown an illustration of an embodiment of the present invention. Images are created based on the sensed perspective locations of the lenses (items 109, 110, ...).
  • the image is presented with correct optical association so that a 3D image will be seen.
  • Said 3D image is seen from various perspectives as it would be seen were the object immovable. Therefore the 3D object image appears in the same location no matter the viewing angle or distance. The viewing location is only limited by the size of the screen (item 114).
  • a first lens (item 204) allows light of a different color to pass than does a second lens (item 206).
  • a first lens (item 304) allows light of an opposing polarization direction to pass than does a second lens (item 306).
  • the polarization may be linear, circular, or elliptical.
  • In FIG. 15 of the drawings there is shown an illustration of passively polarized anaglyph glasses. In illustration A the plane of polarization is the same for both lenses of a pair of glasses, while the color of the lenses is different. Between glasses 802 and 812 the polarization orientation is different. The polarization may be linear, circular, or elliptical. In illustration B the polarization pattern is in opposition between lenses of the same pair of glasses. The color in the first and second lens of the glasses is the same. However the colors of one pair of glasses (item 852) differ from the colors of the second pair of glasses (item 862). These would allow two users to interact with different images.
  • Examples would be a game of scrabble or poker. However these examples are not intended to limit the use of this device in any way.
  • In FIG. 16 of the drawings there is shown an illustration of the prior art and also an embodiment of the present invention.
  • Part A shows louvers created by layers of liquid crystals which perform a blocking function in the shape of a "Z". Since they are created of liquid crystals they may be reconfigured frame by frame to allow light to pass to the left or right eye in correct optical association with a first or second image so that a 3D stereoscopic effect is achieved without the need for glasses.
  • the present invention improves upon this by enabling the louvers to vary position and rotational angle. Thereby a single viewer can see a 3D stereoscopic image in the same location in space as his viewing perspective changes and/or the head is tilted.
  • In part B the present invention improves upon the concept of louvers by using them in both vertical as well as horizontal planes.
  • the louvers may be configured along any combination of axes in any shape or pattern.
  • Several shapes or patterns of louvers will be illustrated further in the description and endless varieties are possible. The result is guiding or aiming light as if through straws.
  • the cross section of the guiding straws may be one of many shapes or patterns.
  • the aiming or guiding viewpoint location is the location picked up by the location sensors.
  • the louvers are created to optimize viewing at the correct perspective location. In the present invention they may be angled differently at different locations of the screen to optimize this effect. This allows the viewpoint to be in any plane or angle. In this configuration it is possible for two or more viewers to observe the intended 3D stereoscopic image in the same location in space.
  • In FIG. 17 of the drawings there is shown an illustration of an embodiment of the present invention. This shows how louvers may be employed to direct the correct image with optical association to the proper viewpoint as determined by sensors (item 116), so a 3D stereoscopic image is seen. Note the 3D stereoscopic object image does not change location in space as the viewpoint is changed.
  • In Fig. 18 of the drawings there is shown an illustration of an embodiment of the present invention. Louvers (item 217) with horizontal and vertical components are shown. These shall be referred to as electronically configurable light guiding louvers, or louvers. Using input from the position sensors a computer calculates the optimum configuration of the louvers.
  • louvers may have variable pitch in more than one axis; thereby they are able to guide light from the image display through imaginary tubes (item 219) towards the intended viewpoint.
  • In this case the light is guided to the eye at point B.
  • the eye at point A is not at an intended viewing location and therefore sees no light from the image when it is projected or guided to viewpoint B.
  • a first or left image may be viewed by the left eye and a second or right image may be viewed by the right eye.
  • the created images may be directed with correct optical association so that a 3D stereoscopic image is seen.
  • the location from which each image is seen is limited. This permits additional viewers to also receive 3D stereoscopic images which are different from the first viewer.
  • those images would be of the same 3D object image in the same location in space as viewed from each viewer's unique individual perspective.
  • In Fig. 19 of the drawings there is shown an illustration of an embodiment of the present invention.
  • the dual louver method is shown in time sequence from 1 to 4.
  • in sequence 1 the dual louvers direct a first image to viewpoint A.
  • in sequence 2 the dual louvers direct a second image to viewpoint B.
  • in sequence 3 the dual louvers direct a first image to viewpoint A.
  • in sequence 4 the dual louvers direct a second image to viewpoint B.
  • the viewer may be the same as in sequences 1 and 2, or may be a second viewer. In each case the image viewed has been created for the particular viewpoint. In this way multiple viewers may enjoy the 3D image regardless of their viewing orientation.
  • the louver patterns in sequences 3 and 4 are slightly different from those of sequences 1 and 2. This is a technique which may be used to prevent dark spots from occurring in the image where the same pixel would be blocked by dual louvers. By moving the louvers from frame to frame this problem can be alleviated.
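  • This frame-to-frame shift can be sketched as a simple alternating sub-pitch offset applied to the louver pattern; the pitch value and half-pitch step below are illustrative assumptions, not parameters from the specification:

```python
def louver_offsets(n_frames, pitch_px=4):
    """Offset the louver pattern by half a pitch on alternating frames
    so no display pixel is blocked by the same louver in every frame,
    preventing persistent dark spots in the image."""
    return [(frame % 2) * (pitch_px // 2) for frame in range(n_frames)]
```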
  • In Fig. 20 of the drawings there is shown an illustration of an embodiment of the present invention.
  • the display (item 114) has cross sections expanded so that louvers from various locations of the display may be further illustrated.
  • the viewpoint (item 530) is located directly in front of section 518 of the display (a location nearly centered in front of the display). So in order for the image to be seen from the viewpoint of item 530 the louvers of item 510 at the top left of the display must guide the light downward and towards the right.
  • the louvers of item 516 at the upper right corner must guide the light from the image downwards and to the left.
  • the louvers of item 512 must guide light upwards and to the right.
  • the louvers of item 514 must guide the light upwards.
  • Those of item 520 must guide the light upwards and to the left. Those located directly in front of the viewing location should guide the light mostly straight ahead.
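  • The guiding directions described for FIG. 20 can be sketched as a vector from each display location toward the sensed viewpoint, assuming the display plane lies at z = 0 with the viewpoint in front of it (a convention chosen for illustration):

```python
import math

def louver_direction(display_xy, viewpoint):
    """Unit vector in which louvers at a given (x, y) display location
    must guide light to reach the sensed viewpoint.  Louvers at the
    top left of the display aim down and to the right for a centered
    viewpoint; those directly in front of it aim straight ahead."""
    dx = viewpoint[0] - display_xy[0]
    dy = viewpoint[1] - display_xy[1]
    dz = viewpoint[2]
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)
```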
  • In FIG. 21 of the drawings there is shown an illustration of an embodiment of the present invention.
  • the vertical component of the louvers is larger than the horizontal.
  • the taller axis may be rotated to correct for a tilted head angle of the viewer.
  • the taller portion is intended to coincide with the vertical axis of a viewer's face. This has the advantage of allowing more light to pass through the louvers while allowing one of a pair of viewpoints to see the image while the image is blocked from the other of the pair. By a pair of viewpoints one may consider a left and right eye.
  • These illustrated louvers are not meant to limit the shape or pattern of the louvers.
  • Fig. 22 of the drawings there is shown an illustration of an embodiment of the present invention. This illustrates how electronically configurable louvers may be applied so that the intended viewing location receives the correct optical image while other viewing locations do not.
  • a small portion (item 602) of the display (item 114) is expanded (item 610).
  • item 610 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image.
  • Item 630 shows an approximate area where the light from a first image may strike the intended side of a viewer's face.
  • Item 650 shows an approximate area where the light from a second image may strike the other side of a viewer's face. In this way a large area of light from the image is able to pass through the louvers to the intended viewer's eye while limiting the light from the image which would be seen at another location.
  • in Fig. 22 another small portion (item 604) of the display (item 114) is expanded (item 620).
  • item 620 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image.
  • the viewer's head is tilted at an angle relative to the display (item 114).
  • the configurable louvers (item 620) now tilt to match the angle of tilt of the viewer's head.
  • Item 640 shows an approximate area where the light from a first image may strike the intended side of a viewer's face.
  • Item 660 shows an approximate area where the light from a second image may strike the other side of a viewer's face.
  • One means to accomplish this is for the sensors to sense features which enable facial recognition and therefore the location and pairing information of the eyes.
  • Another method may involve a computing device which compares locations of eyes and creates pairs via an algorithm based on distance between eyes or some other method.
  • Other methods for locating paired eye positions include, but are not limited to, sensing light reflective or light transmitting devices located on the face or on a wearable device such as glasses, a hat, a necklace, etc.
  • the means given to recognize a pair of eyes, viewpoints or facial features is for illustrative purposes only and is not meant to be limiting in any way.
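As a sketch of the distance-based pairing algorithm mentioned above, the following Python snippet greedily pairs detected eye positions whose separation is close to a typical interpupillary distance. The function name, the 6.3 cm figure, the tolerance, and the greedy strategy are all assumptions for illustration:

```python
import math
from itertools import combinations

TYPICAL_IPD_CM = 6.3  # assumed typical interpupillary distance

def pair_eyes(eye_points, tolerance_cm=1.5):
    """Pair sensed eye positions (x, y, z) in cm into (left, right) tuples
    whose separation is within tolerance of a typical eye spacing."""
    def spacing_error(i, j):
        return abs(math.dist(eye_points[i], eye_points[j]) - TYPICAL_IPD_CM)

    candidates = sorted(combinations(range(len(eye_points)), 2),
                        key=lambda ij: spacing_error(*ij))
    used, pairs = set(), []
    for i, j in candidates:
        if i in used or j in used or spacing_error(i, j) > tolerance_cm:
            continue
        # Order each pair left-to-right by x so the first image goes left.
        left, right = sorted((eye_points[i], eye_points[j]), key=lambda p: p[0])
        pairs.append((left, right))
        used.update((i, j))
    return pairs

# Four detections belonging to two viewers at different distances.
detections = [(0.0, 0.0, 100.0), (6.3, 0.0, 100.0),
              (50.0, 0.0, 90.0), (56.5, 0.0, 90.0)]
pairs = pair_eyes(detections)
```

The pairing step is what allows the display to associate each sensed eye with its partner before deciding which of the two stereoscopic images it should receive.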
  • the ability to guide the light from the display to a specific area allows a privacy mode.
  • This mode may use, but is not limited to, facial recognition computation, eye pattern recognition or other means such as proximity sensing to allow viewing by one person only.
  • the electronically configurable light guiding louvers of more than one axis function to channel the light from the displayed image to the eyes of a single viewer. If desired, the number of people who may view the displayed image in privacy mode may be manually increased.
  • FIG. 23 of the drawings there is shown an illustration of an embodiment of the present invention.
  • a handheld device is shown which may be used in both portrait and landscape modes.
  • configurable louvers are used to create an auto stereoscopic 3D image.
  • the method of shutter glasses may also be applied.
  • a display orientation sensor is applied. This sensor may be gravity sensing, motion or inertia sensing, but is not limited to these technologies.
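A gravity-sensing orientation switch like the one described can be sketched as below; the axis convention (x toward the device's right edge, y toward its top, values in m/s²) and the function name are assumptions for illustration:

```python
def display_orientation(accel_x, accel_y):
    """Classify portrait/landscape from the gravity components measured
    in the display plane; the dominant axis decides the mode."""
    if abs(accel_y) >= abs(accel_x):
        return "portrait" if accel_y <= 0 else "portrait-inverted"
    return "landscape" if accel_x <= 0 else "landscape-inverted"

# Device held upright, gravity pulling along -y:
mode_upright = display_orientation(0.0, -9.8)
# Device turned on its side, gravity now along -x:
mode_sideways = display_orientation(-9.8, 0.0)
```

The same classification could equally be driven by motion or inertia sensing; gravity sensing is simply the easiest to sketch.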
  • Fig. 33 of the drawings there is shown an illustration of an embodiment of the present invention.
  • a 3D stereoscopic image of a box is shown. The box is manipulated by use of a pointing tool (item 700).
  • This pointing tool may have a tip (item 704) of emissive material, reflective material or other means to make its location easily read by the sensors.
  • the pointer may also have one or more functional buttons (item 702). These buttons may operate in a similar fashion as buttons on a computer controller such as a mouse. By applying this pointer an object may be identified, grabbed and moved, sized or any number of functions commonly associated with the computer mouse. The difference is that the virtual objects and the pointer may be operated in three axes or dimensions.
  • a 3D stereoscopic image of a remote device is shown.
  • the virtual image of the remote device in space is approximately the same for most viewing locations.
  • its virtual location in space and the virtual location in space of each individual key on the remote device may be calculated by the device's computers.
  • By comparing the calculated fixed virtual location with that of real-world objects, interaction may take place.
  • a virtual keyboard, virtual touch screen, virtual pottery wheel, or virtual musical instrument may be employed.
  • a pointer, body part or wearable device may be located by the sensors and their position in space may likewise be calculated or quantified.
  • a wearable device such as a glove may contain position markers of reflective or emissive materials which enable sensors to accurately determine its location in space and, in the case of a glove, also that of the fingers.
  • An advanced sensor may be able to detect the location of fingers without the need for gloves with position markers.
  • either the method applying shutter glasses, or the method applying louvers may be used.
  • keyboard entries may be made. This is similar to what occurs on a 2D screen with touch sensing. The difference is that the typing takes place on a virtual image as opposed to a solid surface.
  • either the method applying shutter glasses, or the method applying louvers may be used.
  • the virtual keyboard and any other virtual object may be interacted with in a multitude of other ways. These include stretching and shrinking, twisting and turning and any other ways a 2D touch object could be manipulated.
  • the understanding is that for the 3D virtual touch object, three axes rather than two may be applied and manipulated. In this embodiment, either the method applying shutter glasses, or the method applying louvers may be used.
  • the virtual keyboard or any other virtual interactive device described may be brought forth and/or removed by user gestures sensed by the systems location sensors.
  • gestures sensed by the location sensors may be used for other functions, such as but not limited to turning the pages of an electronic book, changing stations on a television, or raising or lowering volume of the display system or other components.
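The comparison between a sensed fingertip (or pointer tip) and the computed fixed virtual location of each key can be sketched as a simple proximity test. The function name, the dictionary of key centres, and the 1 cm touch threshold are illustrative assumptions:

```python
import math

def touched_key(tip_xyz, key_centres, threshold_cm=1.0):
    """Return the label of the virtual key whose centre is nearest the
    sensed tip position, if it lies within the touch threshold."""
    best_label, best_dist = None, float("inf")
    for label, centre in key_centres.items():
        d = math.dist(tip_xyz, centre)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold_cm else None

# Two virtual keys held at fixed positions in space (cm coordinates),
# 30 cm in front of the display.
keys = {"A": (0.0, 0.0, 30.0), "B": (2.0, 0.0, 30.0)}
hit = touched_key((0.2, 0.1, 30.0), keys)   # fingertip near key "A"
miss = touched_key((5.0, 5.0, 40.0), keys)  # fingertip far from both keys
```

Because the stereoscopic image is held approximately fixed in space, the key centres are stable coordinates that the tracked fingertip can meaningfully be compared against.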
  • FIG. 26 of the drawings there is shown an illustration of an embodiment of the present invention.
  • a 3D stereoscopic image of a game (item 196) is shown.
  • the 3D virtual game pieces may be created and also manipulated by any of the methods previously described. All of the properties described in Fig. 25 apply.
  • the display system (item 114) may be made to lie flat so as to provide a better gaming surface. In this way board games and other types of games may be played and interacted with by the user or users. Virtual worlds may be created, viewed and/or interacted with. This embodiment of the present invention makes an excellent gaming system.
  • FIG. 27 of the drawings there is shown an illustration of an embodiment of the present invention.
  • a 3D stereoscopic virtual cave is shown which employs the technology previously illustrated.
  • the objects appear more real as they remain approximately fixed in space as the viewer and viewpoint location are changed.
  • the objects in the virtual cave may be interacted with in the manner which has been described above.
  • Fig. 28 of the drawings there is shown an illustration of an embodiment of the present invention.
  • Varying amounts of the simulator may be simulated depending on the wants of the user. It may be that only objects outside of the control environment are simulated. However it is possible for virtual controls, buttons, switches and other controlling devices to be simulated and interacted with, in the manner described above.
  • the interior environment of the simulator may be created virtually. This enables simulators whose configuration may be controlled by applying computer software. For example a virtual flight simulator could be used as a B-737 for one event and reconfigured as an A-320 for the next event. This would save money for the user as fewer simulators would be needed.
  • the present invention may be switched to other modes of operation. These include but are not limited to prior art 3D stereoscopic imaging where the 3D stereoscopic image location varies with viewer location. This may be a useful mode for viewing prior art technology 3D imagery such as 3D movies. Also, the display may be used to view 2D images in the manner of prior art. The switching among the various 3D and 2D modes may be automatic based on the format of the viewing material. In this embodiment, either the method applying shutter glasses, or the method applying louver technologies may be used.
  • the prior art in this area of technology encompasses displays of two types, one which produce a 3D stereoscopic effect when viewed through wearable shutter glasses, the second which produces a 3D stereoscopic image through the use of light guiding louvers.
  • This prior art is limited by viewing location.
  • the prior art is limited to 3D stereoscopic images which may not be seen in approximately the same location as viewpoint changes nor when viewed by different users. This does not allow users to communicate about a 3D stereoscopic image by gestures, for example pointing.
  • 3D stereoscopic images or virtual images may also be interacted with by the user(s). This is accomplished by applying location sensing technology and comparing the data with the computed 3D virtual object location.
  • Prior art utilizes parallax barriers to obtain 3D stereoscopic effects.
  • the prior art parallax barriers limit the eye placement of the viewer to a narrow range for large displays.
  • Because the louvers of prior art function in only one axis at a time, they have difficulty sharing the 3D imagery with other viewers.
  • Prior art is also limited to small devices for virtual 3D auto stereoscopic display systems.
  • the instant invention improves upon the prior art by improving upon the parallax barriers.
  • the electronically configurable light guiding louvers have the advantage of variable pitch and multiple axes of blocking or guiding the light from the display. This allows multiple viewers to view large screen devices and share in the 3D imagery.
  • a 3D stereoscopic image may be created which remains approximately fixed in space.
  • Such a virtual image may be pointed at by one or more viewers. Because the virtual image is nearly fixed in space its virtual location may be compared with a user's finger, other body parts or pointer. In this way a viewer may interact with a virtual 3D image by pointing or other gestures as sensed by the position sensors.
  • the position sensors may be used to interpret a variety of gestures which correspond to a variety of commands. By using the position sensors gestures may be made which cause the display device to react to the viewer. Examples include but are not limited to gestures which call for a virtual keyboard or remote to be displayed. They may also cause a station of a television to change or the volume to increase or decrease. There are many more possibilities and this list of gestures and results is not intended to be limiting in any way.
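As a sketch of how recognized gestures might be dispatched to commands like the ones listed above, consider the following; the gesture names and the state dictionary are hypothetical, for illustration only:

```python
def dispatch_gesture(gesture, state):
    """Apply a recognized gesture name to a simple display state."""
    if gesture == "summon_keyboard":
        state["keyboard_visible"] = True
    elif gesture == "dismiss_keyboard":
        state["keyboard_visible"] = False
    elif gesture == "next_station":
        state["station"] += 1
    elif gesture == "volume_up":
        state["volume"] = min(100, state["volume"] + 5)
    elif gesture == "volume_down":
        state["volume"] = max(0, state["volume"] - 5)
    return state

state = {"keyboard_visible": False, "station": 7, "volume": 50}
state = dispatch_gesture("summon_keyboard", state)
state = dispatch_gesture("volume_up", state)
```

In a real system the gesture names would come from whatever recognizer the position sensors feed; the dispatch table itself stays simple.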


Abstract

A stereoscopic display system and method includes an image display panel or screen, tracking sensors, and a means to create first and second stereoscopic images based on viewpoint. This allows the viewer to perceive the 3D stereoscopic image as approximately fixed in space. By comparing the tracked location of external objects with the fixed virtual location of the stereoscopic image, object interaction with the virtual image may be accomplished in real space.

Description

Patent Application of David P. Woods
for
STEREOSCOPIC DISPLAY
CROSS REFERENCE TO PRIORITY APPLICATION
This application claims the priority benefit of U.S. Provisional Patent Applications:
Serial No. U.S. 61/897,983, filed on October 31 , 2013; Serial No. U.S. 61/900,982, filed on November 06, 2013; Serial No. U.S. 61/920,755, filed on December 25, 2013; Serial No. U.S. 61/934,806, filed on February 02, 2014; Serial No. U.S. 62/035,477, filed on August 10, 2014; and also the priority benefit of U.S. Patent Applications: Serial No. U.S. 14/106,766, filed on December 15, 2013, and Serial No. U.S. 14/547,555 filed on November 19, 2014, the subject matters for which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a stereo image display technique, by which a 3D image display may produce a stereoscopic image which takes viewpoint into account. By taking viewpoint into account a stereoscopic image may be created which appears to remain at approximately the same location in space as viewpoint changes.
2. Description of Related Art
Methods of implementing a 3D stereoscopic image are described as follows:
First of all, as mentioned in the following description, in order to implement a 3D stereoscopic image, a first image for a left eye and a second image for a right eye need to arrive at both eyes in a manner of being discriminated from each other. For this, various methods are explained as follows. These images shall be referred to as first or left image and second or right image.
Prior art displays which may be viewed as 3D images generally fall into four methods for display of 3D imagery. The first method employs polarized light images where the planes for left and right images are rotated by approximately 90 degrees. These polarized left and right images pass through polarized spectacles so that the corresponding image reaches the left and right eye. A viewer who tilted their head would degrade the 3D stereoscopic image.
Another similar method employs liquid crystal shutter spectacles which open and close left and right shutters so as to allow the corresponding image to reach the correct eye. Prior art employing liquid crystal shutters do not account for a change in viewing location from one viewer to the next. Therefore a 3D image would appear to be at different locations in space when viewed from differing viewpoints. Thus if one viewer pointed at a 3D stereoscopic object, a viewer at a second viewing location would have difficulty determining what is being pointed at.
A third method employs a lenticular screen provided between a display and both eyes. In particular, a propagating direction of light is refracted via lens on the lenticular screen, whereby different images arrive at both eyes, respectively.
A fourth method requires no spectacles and utilizes parallax barriers so that only the proper image is seen by each eye. This technology shall be referred to as auto stereoscopic.
Prior art applying this method required that the viewer remain in an optimal location for 3D viewing. Other spectators may not be able to see 3D imagery clearly. When it does account for tilting of the head or differing viewpoints it is limited to only one viewer. It is not possible for a second viewer to obtain a 3D image as well unless the second viewpoint is closely aligned with the first viewpoint. This limits prior art 3D auto stereoscopic displays to smaller devices of the handheld variety. This technology is also unable to provide for a second viewer to determine which 3D stereoscopic object is being pointed at by a first viewer.
In one embodiment of the present invention parallax barriers at different locations of the display may have different pitch angles in relation to the display surface at the same time. The parallax barriers shall also be referred to as electronically configurable light guiding louvers, or louvers.
By varying the pitch angle of the louvers in relation to viewpoint for multiple locations of the display, a larger display may be viewed auto stereoscopically. Thus for a viewpoint centered in front of the display louvers on opposite sides of the display would guide the light from the display at angles of slightly differing directions. In this way light from each location of the display is guided towards the intended viewpoint.
In addition another embodiment of the present invention may apply these electronically configurable light guiding louvers in more than one axis concurrently. Thus when one viewer tilts his head the light passing through the louvers is guided to the intended viewing location and is blocked or shielded from other viewing locations.
In addition, prior art employing parallax barriers does not account for a change in viewing location from one viewer to the next. Therefore a 3D image would appear to be at different locations in space when viewed from differing viewpoints. Thus if one viewer pointed at a 3D stereoscopic object, a viewer at a second viewing location would have difficulty determining what is being pointed at. Thus the prior art is limited in interaction with the viewer(s).
SUMMARY OF THE INVENTION
The instant invention employs a 3D stereoscopic method combined with position tracking technology to produce a 3D stereoscopic image which remains in approximately the same location in space even when viewed from various perspectives. This provides a 3D image which appears to be in approximately the same location despite the viewing location. The viewer may move towards or away, up or down, left or right yet the image remains in approximately the same location in space. It moves very little as the viewpoint changes. However, the 3D stereoscopic image will change to reflect how the viewer would see the 3D objects in the image from different perspectives. This 3D stereoscopic image that remains approximately fixed in spatial location may also be referred to as a virtually real image, virtual real image or virtual image. Position tracking or position sensing shall be used interchangeably and mean the same thing in this document.
The sensors on the display in combination with a computing device may detect when an external object or pointer is in close proximity to the virtual location of a 3D stereographic object whose position is stabilized in space. Thus stabilized, it becomes possible for viewers to interact with the 3D stereoscopic image. Prior art employing gestures or voice is limited in scope and does not allow the user to manipulate and interact with a stereoscopic 3D virtual image. This shall be further elucidated in the description of the instant invention. To accomplish this goal, the perspective position of the viewpoint must be sensed and measured. In this document position tracking and position sensing shall be understood to mean the same thing. From this information an image is created which is what an observer located at this position would see if the real object were present. This viewpoint is what one would expect to see if viewed in a monocular fashion (i.e. from one eye with the other closed). Therefore, for each viewing location (or eye location) a new image must be created. So the sensor must be able to calculate the position of each eye or lens of the viewer(s). The created images should take into account both the angular position of the viewpoint and the distance. The viewing angle and distance shall be referred to as viewing perspective, or viewpoint. In the instant invention, the viewer is able to interact with the stabilized virtual image. This enables many applications. Input devices such as keyboards, remote controllers, musical instruments, virtual caves, virtual simulators and interactive 3D gaming systems are a few such applications, however the instant invention is not meant to be limited to these systems or devices. Some of these systems or devices will be further described in the following detailed description. Prior art that utilizes gestures to interact with a 2D image display are common. However these do not allow interaction with a 3D virtual image.
Prior art exists in which images are created based on viewpoint perspective in real time on 2D displays. Many video games at this time employ said technology. The image may be completely computer generated or interpolated from photographic images taken from various perspectives. This same technology may be adapted to create stereographic pairs of images, which when viewed by the stereographic methods of the instant invention may produce the desired stabilized 3D image. In addition, current motion picture creation employs sensors which track location of body parts that are then used to create images. Such sensing technology could be used to track the eyes or lenses of glasses. Some location sensing methods employ small round objects that emit light, while others do not. These sensors may also be used to track the location of pointers, or body parts. They may also be used to track wearable devices to include, but not be limited to gloves, glasses, and hats. By tracking these wearable objects or by tracking body parts the viewpoint may be calculated or inferred. Wearable devices may or may not include objects or markers, which may emit or reflect light to the sensors. The emitted or reflected light may be coded by pulses, frequency, or other methods to enable the sensors to differentiate locations. The light may be visible, infrared, or of other frequencies. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve tracking of objects with the intent of improvement of speed and accuracy.
Finally, the correct image must reach the correct lens or eye. One of several methods is used to achieve this. In the first embodiment anaglyph glasses are employed. The left or first image is color coordinated to pass through the left or first lens of the anaglyph glasses. The right or second image is color coordinated to pass through the right or second lens of the anaglyph glasses. In this way the viewer may see a 3D stereographic image. In a second embodiment passively polarized glasses are employed. The left or first image has polarization coordinated to pass through the left or first lens of the passively polarized glasses. The second or right image has polarization coordinated to pass through the right or second lens of the passively polarized glasses. In this way the viewer may see a 3D stereographic image. Another embodiment employs a combination of anaglyph and passively polarized glasses.
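The anaglyph embodiment can be illustrated with a minimal channel merge: the left image supplies the red channel and the right image the green and blue (cyan) channels, so red/cyan lenses route each image to the intended eye. The nested-list pixel format below is an assumption made only for this sketch:

```python
def compose_anaglyph(left_img, right_img):
    """Merge two equal-size RGB images (rows of (r, g, b) tuples) into a
    single red/cyan anaglyph frame: red from the left image, green and
    blue from the right image."""
    return [[(lr, rg, rb)
             for (lr, _lg, _lb), (_rr, rg, rb) in zip(lrow, rrow)]
            for lrow, rrow in zip(left_img, right_img)]

# One-row example images for the left-eye and right-eye views.
left = [[(200, 10, 10), (100, 0, 0)]]
right = [[(0, 150, 150), (0, 80, 80)]]
frame = compose_anaglyph(left, right)
```

The polarized embodiments work analogously, except that the two images are separated by polarization state rather than by color channel.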
The instant invention may also display 3D stereographic images in the manner of prior art whereby the first and second image do not use information from the sensors to vary the image based on viewpoint location. This method shall be referred to as prior art 3D.
This method may be employed for viewing medium such as movies or games which have been created for prior art 3D. Furthermore, the instant invention enables switching between 2D and 3D modes. In 2D mode multiple viewers may view multiple images. So two or more viewers may use the display to watch different things.
Also, the display of the instant invention may be presented in portrait or landscape mode. The landscape or portrait mode may be manually or automatically changed by means of an orientation sensor of various types. So a tablet, phone, or other handheld device may use the display of this invention.
To sum up the process, method, or system, of creating and viewing the virtual image is as follows:
A left or first viewing perspective is sensed and location quantified in space. A left or first image is created corresponding to what would be seen by a viewer with said left or first perspective.
The left or first image is displayed in conjunction with technology, which limits the viewing to the left or first perspective. This may be accomplished via anaglyph glasses, passively polarized, or a combination of anaglyph and passively polarized glasses.
A right or second viewing perspective is sensed and location quantified in space.
A right or second image is created corresponding to what would be seen by a viewer with said right or second perspective.
The right or second image is displayed in conjunction with technology, which limits the viewing to the right or second perspective. This may be accomplished via anaglyph glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses.
The process is repeated for each viewer in sequence in a continuous loop. However, the sequence may vary in order so long as the image is coordinated with the stereoscopic method so that the correct image reaches the intended eye.
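The looped process above can be sketched as a per-viewer, per-eye loop. The sensor, renderer, and display objects below are hypothetical stubs standing in for the tracking, image-creation, and eye-gating stages; none of their names come from the specification:

```python
class StubSensor:
    """Stands in for viewpoint tracking: viewer id -> (left eye, right eye)."""
    def eye_positions(self):
        return {"A": ((-3.2, 0.0, 100.0), (3.2, 0.0, 100.0))}

class StubRenderer:
    """Stands in for creating the monocular image for one eye position."""
    def render_from(self, eye_pos):
        return f"image@{eye_pos}"

class StubDisplay:
    """Stands in for presenting an image gated to one viewer's eye."""
    def present(self, image, viewer_id, eye_label):
        pass

def run_display_loop(sensor, renderer, display, frames):
    """Each frame: sense every viewpoint, create its image, and present it
    gated to that eye, so each eye sees its own perspective."""
    log = []
    for _ in range(frames):
        for viewer_id, (left_eye, right_eye) in sensor.eye_positions().items():
            for eye_label, eye_pos in (("left", left_eye), ("right", right_eye)):
                image = renderer.render_from(eye_pos)
                display.present(image, viewer_id, eye_label)
                log.append((viewer_id, eye_label))
    return log

log = run_display_loop(StubSensor(), StubRenderer(), StubDisplay(), frames=2)
```

The inner ordering may vary, as the text notes, so long as each image is coordinated with the stereoscopic method so the correct image reaches the intended eye.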
In this manner a 3D stereoscopic image may be seen whose location remains approximately fixed in space when viewed from different perspectives. The display may be a liquid crystal display device, an electroluminescent display device, an organic light emitting display device, a plasma display device, or a projected display image. However, this list of display types is for illustrative purposes only and is not intended to be limiting in any way. There are many ways of accomplishing this end. There are endless variations of placement of parts, methods of generating image patterns, different ordering of parts, and/or display images which accomplish the same objective. Someone practiced in the art will be able to design and construct many variations, which include but are not limited to those above. Hence the invention is what is in the claims and includes more than the embodiments described below.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGURE 1 is a schematic diagram illustrating prior art in which the 3D stereoscopic image's virtual location moves as viewpoint shifts.
FIGURE 2 is a schematic diagram illustrating prior art in which the 3D virtual image is unable to be viewed stereoscopically when the viewer's head is angularly tilted in relation to the display.
FIGURE 3 is a schematic diagram illustrating prior art 3D auto stereoscopic displays which limit viewing location.
FIGURE 4 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed and viewable as the viewer's head is angularly tilted in relation to the display.
FIGURE 5 is a schematic diagram illustrating an embodiment where the 3D stereoscopic image remains fixed in space as the viewing location is moved closer or farther from the display.
FIGURE 6 is a schematic diagram illustrating an embodiment where the 3D virtual object is seen from different viewpoints yet remains fixed in space.
FIGURE 7 is a schematic diagram illustrating a flow diagram of an embodiment.
FIGURE 8 is a schematic diagram illustrating an embodiment applying viewpoint sensors and shutter glasses.
FIGURE 9 is a schematic diagram illustrating an embodiment where images may be displayed as time progresses.
FIGURE 10 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint location sensing where images may be displayed as time progresses.
FIGURE 11 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
FIGURE 12 is a schematic diagram illustrating an embodiment applying shutter glasses and viewpoint position sensing where images may be displayed as time progresses.
FIGURE 13 is a schematic diagram illustrating an embodiment of anaglyph glasses.
FIGURE 14 is a schematic diagram illustrating an embodiment of passively polarized glasses.
FIGURE 15 is a schematic diagram illustrating an embodiment of passively polarized anaglyph glasses.
FIGURE 16 is a schematic diagram illustrating prior art directional louvers and also an embodiment applying directional louvers in both horizontal and vertical directions.
FIGURE 17 is a schematic diagram illustrating an embodiment applying directional louvers in both horizontal and vertical directions and applying viewpoint location sensing.
FIGURE 18 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
FIGURE 19 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
FIGURE 20 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
FIGURE 21 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
FIGURE 22 is a schematic diagram illustrating an embodiment applying louvers and position sensors.
FIGURE 23 is a schematic diagram illustrating an embodiment in portrait and landscape modes.
FIGURE 24 is a schematic diagram illustrating an embodiment applying position sensors and illustrating user interaction with a virtual image.
FIGURE 25 is a schematic diagram illustrating an embodiment applying position sensors and illustrating user interaction with the virtual image.
FIGURE 26 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual gaming system.
FIGURE 27 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual cave.
FIGURE 28 is a schematic diagram illustrating an embodiment applying position sensors and illustrating a virtual simulator.
DETAILED DESCRIPTION OF THE INVENTION
With reference now to Fig. 1 of the drawings, there is shown an illustration of prior art. A 3D stereoscopic image is presented to viewers positioned at A and B. The left or first image (item 160) as well as the right or second image (item 170) locations are fixed on the image display (item 114) for either viewing from position A or B. The result is 3D image object locations 180 and 182 which differ in space. Each tends to be more in front of the viewing position.
With reference now to Fig. 2 of the drawings, there is shown an illustration of prior art. It is apparent that changing viewing angle results in less than optimal 3D image or possibly failure of 3D imaging.
With reference now to Fig. 3 of the drawings, there is shown an illustration of prior art, a 3D stereoscopic device which employs current louvers to aim or guide the light from an image to the viewer's eyes. The limitation arises because the louvers are fixed and not configurable based on viewing location. Therefore the viewing location is limited.
With reference now to Fig. 4 of the drawings, there is shown an illustration of an embodiment of the instant invention. Sensors or markers (item 116) locate the viewpoint perspectives. These sensors may be passive receivers, or may be emissive and receptive of signals, or of other methods to determine viewpoint locations. Facial or object recognition may be used in lieu of sensors or markers to determine viewpoint locations. Other position sensing technologies that employ magnetism, accelerometers, or gravitation sensing may be employed to improve tracking of objects with the intent of improvement of speed and accuracy. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
For a viewer located at A, the first or left displayed image (item 160) is a function of the position of the left eye of viewer A. The second or right displayed image (item 170) is a function of the position of the right eye of viewer A.
In this illustration, the viewer located at B has his head tilted in relation to the display (item 114). For the viewer located at B, the first or left displayed image (item 162) is a function of the position of the left eye of viewer located at B. The second or right displayed image (item 172) is a function of the position of the right eye of viewer located at B. As a result, the 3D stereoscopic object image (item 190) is now seen in approximately the same location in space from both viewpoints A and B. The viewer located at B is able to see the 3D stereoscopic image in approximately the same location in space as when the viewer is located at A, even though his head is tilted with respect to the display.
The 3D stereographic image's location remains approximately fixed in space. This allows its fixed position coordinates to be determined. These may then be compared with the sensed location of a viewer's body part, wearable object or pointer. In this manner it becomes possible for one or more users to interact with the 3D stereographic objects or images. Other position sensing or tracking technologies such as magnetic, accelerometer, inertial, or gravitation sensing may be employed with the intent of improving speed and accuracy.
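Because the stereographic image's coordinates are approximately fixed, interaction reduces to a distance test between those coordinates and a sensed body part or pointer. A minimal sketch (the names and the 2 cm threshold are illustrative assumptions):

```python
def intersects(virtual_pos, sensed_pos, radius=0.02):
    """True when a tracked fingertip or pointer tip comes within
    `radius` metres of a virtual object's fixed world coordinates."""
    dist2 = sum((v - s) ** 2 for v, s in zip(virtual_pos, sensed_pos))
    return dist2 <= radius ** 2

button = (0.10, 0.05, 0.30)        # virtual object, fixed in space
assert intersects(button, (0.11, 0.05, 0.30))      # within 2 cm: touch
assert not intersects(button, (0.25, 0.05, 0.30))  # too far away
```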
With reference now to Fig. 5 of the drawings, there is shown an illustration of an embodiment of the present invention. This illustrates the fact that in addition to viewing angle, the viewing distance is also measured in order to create the correct display image presentations (items 260 and 270). In this manner both viewpoint 1 (item 220) and viewpoint 2 (item 222) are able to see the virtual object image (item 292) in approximately the same location in space. Sensors (item 116) locate the viewpoint perspectives. These sensors may be passive receivers, or may be emissive and receptive of signals, or may use other methods to determine viewpoint locations. Based on where the viewpoint perspective is sensed, an image is created corresponding to how the intended image would be seen from that viewpoint.
With reference now to Fig. 6 of the drawings, there is shown an illustration of an embodiment showing how an object might appear when viewed from different perspectives in the instant invention.
With reference now to Fig. 7 of the drawings, a flow diagram of an embodiment of the instant invention is presented which shows a process for creating 3D stereoscopic images which are seen in the same location in space when viewed from different perspectives. One means to accomplish this is for the sensors to track an object or to use facial recognition. Magnetic, acceleration, and gravitational data may also be employed to determine the first and second viewpoints.
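The process of Fig. 7 can be summarized as a per-frame loop: locate each viewpoint, render the scene from that viewpoint, and present each image in correct optical association. The sketch below uses stub render/present callables to show the control flow only; all names are illustrative assumptions:

```python
def render_frame(tracked_eyes, render, present):
    """One pass of the process: for every tracked eye, render the
    scene from that eye's perspective and present it in correct
    optical association (e.g. the matching shutter-glasses phase)."""
    frames = []
    for viewer_id, (left_eye, right_eye) in tracked_eyes.items():
        frames.append(present(viewer_id, "L", render(left_eye)))
        frames.append(present(viewer_id, "R", render(right_eye)))
    return frames

# Stub renderer/presenter, standing in for the real pipeline.
render  = lambda eye: f"image@{eye}"
present = lambda vid, side, img: (vid, side, img)
out = render_frame({"A": ((-0.03, 0, 0.6), (0.03, 0, 0.6))}, render, present)
```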
The viewpoints correspond to the positions of the first or left and second or right eye. Other methods for locating these viewpoint locations include, but are not limited to, markers that may reflect or transmit light and/or sound, or create a magnetic field. These markers may be located on the face, body or on a wearable object. The methods given to recognize and locate a pair of eyes, glasses or facial feature viewpoints are for illustrative purposes only and are not meant to be limiting in any way.
With reference now to Fig. 8 of the drawings, there is shown an illustration of an embodiment of the present invention. An object image (item 128), in this case a cylinder, would be presented as different images to perspective viewing locations represented by items 108 and 120.
With reference now to Fig. 9 of the drawings, there is shown an illustration of an embodiment of the present invention. This illustration shows progression through time. Item 200 shows how, as viewing location is changed, the 3D stereoscopic image's location remains unchanged in space. Item 240 shows how this is accomplished by enabling each perspective viewpoint to see an image created based on that viewpoint's perspective as the viewing location differs.
With reference now to Fig. 10 of the drawings, there is shown an illustration of an embodiment of the present invention. This illustration shows progression through time. Item 200 shows how, as viewing location is changed, the 3D stereoscopic image's location remains unchanged in space. Item 240 shows how this is accomplished by enabling each perspective viewpoint to see an image created based on that viewpoint's perspective as the viewing location differs. In this illustration, item 240 shows employment of shutter glasses to accomplish this effect.
With reference now to Figs. 11 and 12 of the drawings, there is shown an illustration of an embodiment of the present invention. Images are created based on the perspective locations sensed of the lenses (items 109, 110,
122 and 124) of the glasses (items 108 and 120) through which the image will be viewed. As the shutters open and close
the image is presented with correct optical association so that a 3D image will be seen. Said 3D image is seen from various perspectives as it would be seen were the object immovable. Therefore the 3D object image appears in the same location no matter the viewing angle or distance. The viewing location is only limited by the size of the screen (item 114).
With reference now to Fig. 13 of the drawings, there is shown an illustration of anaglyph glasses. A first lens (item 204) allows light of a different color to pass than that of a second lens (item 206).
With reference now to Fig. 14 of the drawings, there is shown an illustration of passively polarized glasses. A first lens (item 304) allows light of an opposing polarization direction to pass than that of a second lens (item 306). The polarization may be linear, circular, or elliptical.
With reference now to Fig. 15 of the drawings, there is shown an illustration of passively polarized
anaglyph glasses. In illustration A the plane of polarization is the same for both lenses of a pair of glasses, while the color of the lenses is different. Between glasses 802 and 812 the polarization orientation is different. The polarization may be linear, circular, or elliptical. In illustration B the polarization pattern is in opposition between lenses of the same pair of glasses. The color in the first and second lens of the glasses is the same. However the colors of one pair of glasses (item 852) differ from the colors of the second pair of glasses (item 862). These would allow two users to interact with different images.
Examples would be a game of scrabble or poker. However these examples are not intended to limit the use of this device in any way.
With reference now to Fig. 16 of the drawings, there is shown an illustration of the prior art and also an embodiment of the present invention.
In part A prior art is shown which uses louvers created by layers of liquid crystals which have a blocking function in the position of a "Z" shape. Since it is created of liquid crystals it may be reconfigured frame by frame to allow light to pass to the left or right eye in correct optical association with a first or second image so that a 3D stereoscopic effect is achieved without the need for glasses. The present invention improves upon this by enabling the louvers to vary position and rotational angle. Thereby a single viewer can see a 3D stereoscopic image in the same location in space as his viewing perspective changes and/or the head is tilted.
In part B the present invention improves upon the concept of louvers by using them in both vertical as well as horizontal planes. However, the louvers may be configured along
any combination of axes in any shape or pattern. Several shapes or patterns of louvers will be illustrated further in the description, and endless varieties are possible. The result is guiding or aiming light as if through straws.
The cross section of the guiding straws may be one of many shapes or patterns. The aiming or guiding viewpoint location is the location picked up by the location sensors. The louvers are created to optimize viewing at the correct perspective location. In the present invention they may be angled differently at different locations of the screen to optimize this effect. This allows the viewpoint to be in any plane or angle. In this configuration it is possible for two or more viewers to observe the intended 3D stereoscopic image in the same location in space.
With reference now to Fig. 17 of the drawings, there is shown an illustration of an embodiment of the present invention. This shows how louvers may be employed to direct the correct image with optical association to the proper viewpoint as determined by sensors (item 116), so that a 3D stereoscopic image is seen. Note the 3D stereoscopic object image does not change location in space as the viewpoint is changed. With reference now to Fig. 18 of the drawings, there is shown an illustration of an embodiment of the present invention. Louvers (item 217) with horizontal and vertical components are shown. These shall be referred to as electronically configurable light guiding louvers, or louvers. Using input from the position sensors, a computer calculates the optimum configuration of the louvers. These louvers may have variable pitch in more than one axis; thereby they are able to guide light from the image display through imaginary tubes (item 219) towards the intended viewpoint, in this case the eye at point B. It should be noted that in this illustration the eye at point A is not at an intended viewing location and therefore sees no light from the image when it is projected or guided to viewpoint B. In this way a first or left image may be viewed by the left eye and a second or right image may be viewed by the right eye. In this way the created images may be directed with correct optical association so that a 3D stereoscopic image is seen. In addition, the location from which each image is seen is limited. This permits additional viewers to also receive 3D stereoscopic images which are different from the first viewer's. In the case of this invention, those images would be of the same 3D object image in the same location in space as viewed from each viewer's unique individual perspective.
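Steering a louver cell toward a sensed viewpoint, as in Fig. 18, amounts to computing the horizontal and vertical angles from the cell's position on the display plane to the viewpoint. A sketch under that assumption (function and parameter names are illustrative):

```python
import math

def louver_angles(pixel, viewpoint):
    """Yaw/pitch (degrees) a louver cell at `pixel` (x, y on the
    display plane, z = 0) needs so its channel points at `viewpoint`."""
    dx = viewpoint[0] - pixel[0]
    dy = viewpoint[1] - pixel[1]
    dz = viewpoint[2]                  # distance out from the display
    yaw   = math.degrees(math.atan2(dx, dz))   # horizontal steer
    pitch = math.degrees(math.atan2(dy, dz))   # vertical steer
    return yaw, pitch

# A cell at the top-left of the display must steer down and to the
# right to reach a viewer centred 0.6 m in front of the screen.
print(louver_angles(pixel=(-0.26, 0.16), viewpoint=(0.0, 0.0, 0.6)))
```

Evaluating this per cell across the panel reproduces the behaviour described for Fig. 20: corner cells steer inward, while cells directly in front of the viewer steer nearly straight ahead.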
With reference now to Fig. 19 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration, we see the dual louver method in time sequence from 1 to 4. In the first sequence the dual louvers direct a first image to viewpoint A. In the second sequence the dual louvers direct a second image to viewpoint B. In the third sequence the dual louvers direct a first image to viewpoint A. In the fourth sequence the dual louvers direct a second image to viewpoint B. In sequences 3 and 4 the viewer may be the same as in sequences 1 and 2, or they may be a second viewer. In each case the image viewed has been created for the particular viewpoint. In this way multiple viewers may enjoy the 3D image regardless of their viewing orientation. Furthermore, in this illustration the louver patterns in sequences 3 and 4 are slightly different from those of sequences 1 and 2. This is a technique which may be used to eliminate dark spots from occurring in the image where the same pixel would be blocked by dual louvers. By moving the louvers from frame to frame this problem can be alleviated.
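The frame-to-frame louver shifting described above can be sketched as a simple periodic offset of the louver grid, so that no pixel sits behind a louver wall in consecutive frames. The half-cell step and two-frame period are illustrative assumptions:

```python
def louver_offset(frame_index, period=2, step=0.5):
    """Shift the louver grid by `step` of a cell width on alternate
    frames, so a pixel blocked in one frame is visible in the next
    (avoiding the persistent dark spots noted for Fig. 19)."""
    return (frame_index % period) * step

# Alternating offsets across four frames:
assert [louver_offset(f) for f in range(4)] == [0.0, 0.5, 0.0, 0.5]
```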
With reference now to Fig. 20 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration, the relationship between the viewpoint position sensed by the sensor (item 116) and the electronically configurable louver patterns is explained in further detail. The display (item 114) has cross sections expanded so that louvers from various locations of the display may be further illustrated. The viewpoint (item 530) is located directly in front of section 518 of the display (a location nearly centered in front of the display). So in order for the image to be seen from the viewpoint of item 530, the louvers of item 510 at the top left of the display must guide the light downward and towards the right. Likewise the louvers of item 516 at the upper right corner must guide the light from the image downwards and to the left. The louvers of item 512 must guide light upwards and to the right. The louvers of item 514 must guide the light upwards. Those of item 520 must guide the light upwards and to the left. Those located directly in front of the viewing location should guide the light mostly straight ahead.
It must be noted that as the viewing location changes, both the perspective image and the angle of the louvers at all locations across the viewing display must be changed. The viewing location is sensed and a correct image creation and sequence is produced for viewing by the left and right eyes of one or more viewers, in correct optical association with the electronically configurable louvers, so that a 3D stereoscopic image is seen by one or more viewers. With reference now to Fig. 21 of the drawings, there is shown an illustration of an embodiment of the present invention. This illustrates sample electronically configurable louver patterns. In these examples the vertical component of the louvers is larger than the horizontal. The taller axis may be rotated to correct for a tilted head angle of the viewer. The taller portion is intended to coincide with the vertical axis of a viewer's face. This has the advantage of allowing more light to pass through the louvers while allowing one of a pair of viewpoints to see the image while the image is blocked from the other of the pair. By a pair of viewpoints one may consider a left and right eye. These examples
are not meant to limit the shape or pattern of the louvers.
With reference now to Fig. 22 of the drawings, there is shown an illustration of an embodiment of the present invention. This illustrates how electronically configurable louvers may be applied so that the intended viewing location receives the correct optical image while other viewing locations do not.
In this illustration, a small portion (item 602) of the display (item 114) is expanded (item 610). In item 610 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image. Item 630 shows an approximate area where the light from a first image may strike the intended side of a viewer's face. Item 650 shows an approximate area where the light from a second image may strike the other side of a viewer's face. In this way a large area of light from the image is able to pass through the louvers to the intended viewer's eye while limiting the light from the image which would be seen at another location.
In this illustration, another small portion (item 604) of the display (item 114) is expanded (item 620). In item 620 we see configurable louvers which operate in both the vertical and horizontal directions to guide the light from the display image. In this case the viewer's head is tilted at an angle relative to the display (item 114). The configurable louvers (item 620) now tilt to match the angle of tilt of the viewer's head. Item 640 shows an approximate area where the light from a first image may strike the intended side of a viewer's face. Item 660 shows an approximate area where the light from a second image may strike the other side of a viewer's face. By applying louvers which are taller than they are wide, a larger amount of light from the image is able to pass through the louvers to the intended viewer's eye while limiting the light from the image which would be seen at another location.
One means to accomplish this is for the sensors to sense objects which enable facial recognition, and therefore location and pairing information for the eyes. Another method may involve a computing device which compares locations of eyes and creates pairs via an algorithm based on the distance between eyes or some other method. Other methods for locating paired eye positions include, but are not limited to, sensing light reflective or light transmitting devices located on the face or on a wearable device such as glasses, a hat, a necklace, etc. The means given to recognize a pair of eyes, viewpoints or facial features are for illustrative purposes only and are not meant to be limiting in any way.
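One pairing algorithm of the kind described, which groups detected eye positions by their separation relative to a typical interocular distance, might be sketched as follows (the distance and tolerance values, and all names, are illustrative assumptions):

```python
import math
from itertools import combinations

def pair_eyes(points, ipd=0.063, tol=0.015):
    """Group detected eye positions into (left, right) pairs whose
    separation is near a typical interocular distance (~63 mm).
    A toy stand-in for the pairing algorithm the text describes."""
    pairs, used = [], set()
    for a, b in combinations(range(len(points)), 2):
        if a in used or b in used:
            continue
        if abs(math.dist(points[a], points[b]) - ipd) <= tol:
            l, r = sorted((points[a], points[b]))  # leftmost first
            pairs.append((l, r))
            used.update((a, b))
    return pairs

# Two eyes 60 mm apart pair up; a stray detection is left unpaired.
eyes = [(-0.03, 0.0, 0.6), (0.03, 0.0, 0.6), (0.50, 0.1, 0.8)]
assert pair_eyes(eyes) == [((-0.03, 0.0, 0.6), (0.03, 0.0, 0.6))]
```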
Additionally, the ability to guide the light from the display to a specific area allows a privacy mode. This mode may use, but is not limited to, facial recognition computation, eye pattern recognition, or other means such as proximity sensing to allow viewing by one person only. The electronically configurable light guiding louvers of more than one axis function to channel the light from the displayed image to the eyes of a single viewer. If desired, the number of people who may view the displayed image in privacy mode may be manually increased.
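In privacy mode, the set of louver targets is simply restricted to the recognized viewer(s). A minimal sketch of that selection step (names and structure are illustrative assumptions):

```python
def target_viewpoints(tracked, privacy=True, allowed=1):
    """In privacy mode, only the first `allowed` recognised viewers'
    eye positions become louver targets; light from the displayed
    image is not steered toward anyone else's viewpoint."""
    return tracked[:allowed] if privacy else tracked

viewers = [("owner", (0.0, 0.0, 0.5)), ("bystander", (0.4, 0.0, 0.7))]
assert target_viewpoints(viewers) == [("owner", (0.0, 0.0, 0.5))]
assert target_viewpoints(viewers, privacy=False) == viewers
```

Raising `allowed` corresponds to manually increasing the number of permitted viewers, as the text describes.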
With reference now to Fig. 23 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration, a handheld device is shown which may be used in both portrait and landscape modes. In this illustration configurable louvers are used to create an autostereoscopic 3D image. However, the method of shutter glasses may also be applied. To accomplish this end a display orientation sensor is applied. This sensor may be gravity sensing, motion or inertia sensing, but is not limited to these technologies. With reference now to Fig. 24 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration a 3D stereoscopic image of a box is shown. The box is manipulated by use of a pointing tool (item 700). This pointing tool may have a tip (item 704) of emissive material, reflective material or other means to make its location easily read by the sensors. The pointer may also have one or more functional buttons (item 702). These buttons may operate in a similar fashion to buttons on a computer controller such as a mouse. By applying this pointer an object may be identified, grabbed and moved, sized, or subjected to any number of functions commonly associated with the computer mouse, the difference being that the virtual objects and the pointer may be operated in three axes or dimensions.
With reference now to Fig. 25 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration a 3D stereoscopic image of a remote device is shown. The virtual image of the remote device in space is approximately the same for most viewing locations. As such, its virtual location in space, and the virtual location in space of each individual key on the remote device, may be calculated by the device's computer. By comparing the calculated fixed virtual location with real world objects, interaction may take place. In the same manner a virtual keyboard, virtual touch screen, virtual pottery wheel, or virtual musical instrument may be employed. In addition a pointer, body part or wearable device may be located by the sensors and its position in space may likewise be calculated or quantified. A wearable device such as a glove may contain position markers of reflective or emissive materials which enable sensors to accurately determine its location in space and, in the case of a glove, also that of the fingers. An advanced sensor may be able to detect the location of fingers without the need for gloves with position markers. In this embodiment, either the method applying shutter glasses or the method applying louvers may be used.
As the location of the 3D stereoscopic keyboard, and also of a pointer or pointers, is known, it may now be possible through computation to determine when the body part or pointer is in proximity to places on the keyboard. In this manner keyboard entries may be made. This is similar to what occurs on a 2D screen with touch sensing, the difference being that the typing takes place on a virtual image as opposed to a solid surface. In this embodiment, either the method applying shutter glasses or the method applying louvers may be used.
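The proximity test for virtual keyboard entry can be sketched as a point-in-box check between the tracked fingertip and each key's computed fixed-in-space bounds. The box layout, touch depth, and names below are illustrative assumptions:

```python
def key_pressed(fingertip, keys, touch_depth=0.01):
    """Return the label of the virtual key whose fixed-in-space
    bounding box the tracked fingertip has entered, else None.
    Key boxes: label -> (xmin, xmax, ymin, ymax, zmin, zmax), metres."""
    x, y, z = fingertip
    for label, (x0, x1, y0, y1, z0, z1) in keys.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1 + touch_depth:
            return label
    return None

# Two virtual keys floating ~30 cm in front of the display plane.
keys = {"A": (0.00, 0.02, 0.00, 0.02, 0.30, 0.31),
        "B": (0.03, 0.05, 0.00, 0.02, 0.30, 0.31)}
assert key_pressed((0.01, 0.01, 0.305), keys) == "A"
assert key_pressed((0.10, 0.01, 0.305), keys) is None
```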
The virtual keyboard and any other virtual object may be interacted with in a multitude of other ways. These include stretching and shrinking, twisting and turning, and any other ways a 2D touch object could be manipulated. The understanding is that for the 3D virtual touch object, three axes rather than two may be applied and manipulated. In this embodiment, either the method applying shutter glasses or the method applying louvers may be used.
In addition, the virtual keyboard or any other virtual interactive device described may be brought forth and/or removed by user gestures sensed by the system's location sensors. Gestures sensed by the location sensors may also be used for other functions, such as, but not limited to, turning the pages of an electronic book, changing stations on a television, or raising or lowering the volume of the display system or other components.
With reference now to Fig. 26 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration a 3D stereoscopic image of a game (item 196) is shown. The 3D virtual game pieces may be created and also manipulated by any of the methods previously described. All of the properties described for Fig. 25 apply. The display system (item 114) may be made to lie flat so as to provide a better gaming surface. In this way board games and other types of games may be played and interacted with by the user or users. Virtual worlds may be created, viewed and/or interacted with. This embodiment of the present invention makes an excellent gaming system.
With reference now to Fig. 27 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration a 3D stereoscopic virtual cave is shown which employs the technology previously illustrated. In such a cave the objects appear more real as they remain approximately fixed in space as the viewer and viewpoint location are changed. The objects in the virtual cave may be interacted with in the manner which has been described above.
With reference now to Fig. 28 of the drawings, there is shown an illustration of an embodiment of the present invention. In this illustration a 3D stereoscopic image of an aircraft simulator is shown. Varying amounts of the simulator may be simulated depending on the wants of the user. It may be that only objects outside of the control environment are simulated. However it is possible for virtual controls, buttons, switches and other controlling devices to be simulated and interacted with, in the manner described above. In addition the interior environment of the simulator may be created virtually. This enables simulators whose configuration may be controlled by applying computer software. For example a virtual flight simulator could be used as a B-737 for one event and reconfigured as an A-320 for the next event. This would save money for the user as fewer simulators would be needed.
Other virtual simulations lend application to, but are not limited to, law enforcement and the military. In this embodiment, either the method applying shutter glasses, or the method applying louvers may be used.
The present invention may be switched to other modes of operation. These include but are not limited to prior art 3D stereoscopic imaging where the 3D stereoscopic image location varies with viewer location. This may be a useful mode for viewing prior art technology 3D imagery such as 3D movies. Also, the display may be used to view 2D images in the manner of prior art. The switching among the various 3D and 2D modes may be automatic based on the format of the viewing material. In this embodiment, either the method applying shutter glasses, or the method applying louver technologies may be used.
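The automatic switching among modes based on the format of the viewing material might be sketched as a simple dispatch on the content format. The format labels and mode names below are illustrative assumptions, not terms from the disclosure:

```python
def display_mode(content_format):
    """Pick a display mode from the viewing material's format,
    mirroring the automatic 2D/3D switching described in the text."""
    if content_format in ("stereo_pair", "frame_sequential", "side_by_side"):
        return "3D_fixed_in_space"   # the instant invention's mode
    if content_format == "legacy_3d_movie":
        return "3D_prior_art"        # image location follows the viewer
    return "2D"

assert display_mode("side_by_side") == "3D_fixed_in_space"
assert display_mode("legacy_3d_movie") == "3D_prior_art"
assert display_mode("mono_video") == "2D"
```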
By way of conclusion, the prior art in this area of technology encompasses displays of two types: one which produces a 3D stereoscopic effect when viewed through wearable shutter glasses, and a second which produces a 3D stereoscopic image through the use of light guiding louvers. This prior art, such as the references cited herein, is limited by viewing location. In addition, the prior art is limited to 3D stereoscopic images which are not seen in approximately the same location as the viewpoint changes, nor when viewed by different users. This does not allow users to communicate about a 3D stereoscopic image by gestures, for example pointing. In the present invention, 3D stereoscopic images or virtual images may also be interacted with by the user(s). This is accomplished by applying location sensing technology and comparing the data with the computed 3D virtual object location.
Additional prior art utilizes parallax barriers to obtain 3D stereoscopic effects. There is prior art which enables the parallax barriers to function in different display orientations. However, the prior art parallax barriers limit the eye placement of the viewer to a narrow range for large displays. In addition, since the louvers of prior art function in only one axis at a time they have difficulties sharing the 3D imagery with other viewers. Prior art is also limited to small devices for virtual 3D auto stereoscopic display systems.
The instant invention improves upon the prior art by improving upon the parallax barriers. The electronically configurable light guiding louvers have the advantage of variable pitch and multiple axes of blocking or guiding the light from the display. This allows multiple viewers to view large screen devices and share in the 3D
experience. It also allows a privacy mode.
In addition, a 3D stereoscopic image may be created which remains approximately fixed in space. Such a virtual image may be pointed at by one or more viewers. Because the virtual image is nearly fixed in space, its virtual location may be compared with a user's finger, other body parts, or a pointer. In this way a viewer may interact with a virtual 3D image by pointing or other gestures as sensed by the position sensors. In addition, the position sensors may be used to interpret a variety of gestures which correspond to a variety of commands. By using the position sensors, gestures may be made which cause the display device to react to the viewer. Examples include but are not limited to gestures which call for a virtual keyboard or remote to be displayed. They may also cause a station of a television to change or the volume to increase or decrease. There are many more possibilities and this list of gestures and results is not intended to be limiting in any way.
These and other advantages are readily apparent to one who has viewed the accompanying figures and read the descriptions.
Exemplary embodiments have been disclosed herein, and although specific terms are employed, they are used and are to be interpreted in a generic and descriptive sense only and not for purpose of limitation. Accordingly, it will be understood by those of ordinary skill in the art that various changes in form and details may be made without departing from the spirit and scope of the present invention as set forth in the following claims.
While the foregoing written description of the invention enables one of ordinary skill to make and use what is considered presently to be the best mode thereof, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the
invention as defined by the appended claims. Therefore, the invention is not to be limited by the above described embodiment, methods and examples, but by all embodiments and methods within the scope and spirit of the invention as claimed.

Claims

CLAIMS: I Claim:
1. A stereoscopic image display system comprising: one or more sensors defined to track positions of objects in relation to the display; and an image generating system which can create first and second images, one for each eye, based on viewpoint perspectives; a display panel on which first and second images are displayed; and a stereoscopic means to coordinate which image is seen by each eye;
Whereby first and second images that together form a 3D stereoscopic image, which remains approximately fixed in space as said viewpoint perspectives vary, may be seen when said display is viewed through said stereoscopic means.
2. The system of claim 1 wherein said display panel is one of a liquid crystal display device, an electroluminescent display device, an organic light emitting display device, a plasma display device, and a projected image display device.
3. The system of claim 1 wherein said sensors apply object recognition, facial recognition
technology, gyroscopic, acceleration sensing, gravitational sensing, magnetic fields, light emitting or reflecting markers, or a combination of these methods to track objects.
4. The system of claim 1 wherein said sensors sense generated or reflected light.
5. The system of claim 4 wherein said light is infrared.
6. The system of claim 1 wherein said stereoscopic means employ anaglyph glasses, shutter glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses, or other combination of glasses.
7. The system of claim 1 wherein said stereoscopic means consists of electronically configurable louvers that channel light in one or more axes from the image display panel.
8. The system of claim 1 wherein 2D or 3D mode is switched automatically based on the 2D or 3D format of the image displayed.
9. The system of claim 1 wherein the viewer(s) may interact with said 3D stereographic image by means of a point, a touch, a gesture, or a sound.
10. The system of claim 9 wherein said 3D stereographic image may be manipulated by applying a pointing device, a glove or other wearable device, which has position locator markings, with or without control buttons.
11. The system of claim 9 wherein said 3D stereographic image is a virtual keyboard, virtual remote controller, virtual musical instrument, virtual pottery wheel, virtual cave, simulator, or a game.
12. A stereoscopic image display method comprising: one or more sensors defined to track positions of objects in relation to the display; and an image generating system which can create first and second images, one for each eye, based on viewpoint perspectives; a display panel on which first and second images are displayed; and a stereoscopic means to coordinate which image is seen by each eye;
Whereby first and second images that together form a 3D stereoscopic image, which remains approximately fixed in space as said viewpoint perspectives vary, may be seen when said display is viewed through said stereoscopic means.
13. The method of claim 12 wherein said display panel is one of a liquid crystal display device, an electroluminescent display device, an organic light emitting display device, a plasma display device, and a projected image display device.
14. The method of claim 12 wherein said sensors apply object recognition, facial recognition technology, gyroscopic, acceleration sensing, gravitational sensing, magnetic fields, light emitting or reflecting markers, or a combination of these methods to track objects.
15. The method of claim 12 wherein said sensors sense generated or reflected light.
16. The method of claim 15 wherein said light is infrared.
17. The method of claim 12 wherein said stereoscopic means employ anaglyph glasses, shutter
glasses, passively polarized glasses, or a combination of anaglyph and passively polarized glasses, or other combination of glasses.
18. The method of claim 12 wherein said stereoscopic means consists of electronically configurable louvers that channel light in one or more axes from the image display panel.
19. The method of claim 12 wherein 2D or 3D mode is switched automatically based on the 2D or 3D format of the image displayed.
20. The method of claim 12 wherein the viewer(s) may interact with said 3D stereographic image by means of a point, a touch, a gesture, or a sound.
21. The method of claim 20 wherein said 3D stereographic image may be manipulated by applying a pointing device, a glove or other wearable device, which has position locator markings, with or without control buttons.
22. The method of claim 20 wherein the said 3D stereographic image is a virtual keyboard, virtual remote controller, virtual musical instrument, virtual pottery wheel, virtual cave, simulator, or a game.
PCT/US2014/072419 2013-10-31 2014-12-26 Stereoscopic display WO2015066734A1 (en)

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US201361897983P 2013-10-31 2013-10-31
US61/897,983 2013-10-31
US201361900982P 2013-11-06 2013-11-06
US61/900,982 2013-11-06
US14/106,766 US10116914B2 (en) 2013-10-31 2013-12-15 Stereoscopic display
US14/106,766 2013-12-15
US201361920755P 2013-12-25 2013-12-25
US61/920,755 2013-12-25
US201461934806P 2014-02-02 2014-02-02
US61/934,806 2014-02-02
US201462035477P 2014-08-10 2014-08-10
US62/035,477 2014-08-10
US14/547,555 2014-11-19
US14/547,555 US9883173B2 (en) 2013-12-25 2014-11-19 Stereoscopic display

Publications (1)

Publication Number Publication Date
WO2015066734A1 true WO2015066734A1 (en) 2015-05-07

Family

ID=53005320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/072419 WO2015066734A1 (en) 2013-10-31 2014-12-26 Stereoscopic display

Country Status (1)

Country Link
WO (1) WO2015066734A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100103516A1 (en) * 2008-10-27 2010-04-29 Real D Head-tracking enhanced stereo glasses
US20100149182A1 (en) * 2008-12-17 2010-06-17 Microsoft Corporation Volumetric Display System Enabling User Interaction
WO2012044272A1 (en) * 2010-09-29 2012-04-05 Thomson Licensing Automatically switching between three dimensional and two dimensional contents for display

Similar Documents

Publication Publication Date Title
US10116914B2 (en) Stereoscopic display
US10469834B2 (en) Stereoscopic display
US9864495B2 (en) Indirect 3D scene positioning control
US6084594A (en) Image presentation apparatus
US10739936B2 (en) Zero parallax drawing within a three dimensional display
US9986228B2 (en) Trackable glasses system that provides multiple views of a shared display
EP3106963B1 (en) Mediated reality
US20170150108A1 (en) Autostereoscopic Virtual Reality Platform
US11051006B2 (en) Superstereoscopic display with enhanced off-angle separation
JP4413203B2 (en) Image presentation device
US10866820B2 (en) Transitioning between 2D and stereoscopic 3D webpage presentation
CN114402589A (en) Smart stylus beam and secondary probability input for element mapping in 2D and 3D graphical user interfaces
US9703400B2 (en) Virtual plane in a stylus based stereoscopic display system
US10652525B2 (en) Quad view display system
WO2007100204A1 (en) Stereovision-based virtual reality device
JPH075978A (en) Input device
WO2017009529A1 (en) Mediated reality
CN111566596A (en) Real world portal for virtual reality display
US10216357B2 (en) Apparatus and method for controlling the apparatus
US9696842B2 (en) Three-dimensional cube touchscreen with database
US20180053338A1 (en) Method for a user interface
EP3260950A1 (en) Mediated reality
US11443487B2 (en) Methods, apparatus, systems, computer programs for enabling consumption of virtual content for mediated reality
US20170372522A1 (en) Mediated reality
US20230071571A1 (en) Image display method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 14858877; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 14858877; Country of ref document: EP; Kind code of ref document: A1)