US20210012557A1 - Systems and associated methods for creating a viewing experience - Google Patents
Systems and associated methods for creating a viewing experience
- Publication number
- US20210012557A1 (application US17/033,496)
- Authority
- US
- United States
- Prior art keywords
- dimensional model
- participant
- spectator
- viewing experience
- viewpoint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
- G06T15/205—Image-based rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/016—Input arrangements with force or tactile feedback as computer generated output to the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- G06T5/002—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/292—Multi-camera tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30221—Sports video; Sports image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/16—Using real world measurements to influence rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/024—Multi-user, collaborative environment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H04N5/247—
Definitions
- Viewing a sport using a feed from a camera positioned and/or controlled to capture action of the sport provides a perspective of the action limited by camera position. Multiple cameras may be used to provide multiple perspectives, but each of these perspectives is still limited by the individual camera position.
- a process generates a viewing experience by determining location data and movement data of (a) at least one object and (b) at least one participant within an event area; determining a three-dimensional model of the event area, the participant and the object based upon the location data and the movement data; determining a viewpoint of a spectator, the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience; and generating the viewing experience for the viewpoint at least in part from the three-dimensional model.
- generating includes blurring parts of the viewing experience that are less important to reduce latency of generating the viewing experience.
- determining the location data and the movement data further includes capturing light-field data relative to the object and the participant to enhance the three-dimensional model.
- determining the viewpoint further includes capturing light-field data relative to the viewpoint to enhance the three-dimensional model, wherein the light-field data comprises light intensity, light direction, and light color.
- determining the location data and the movement data further includes determining relative location data, of the object and the participant, with respect to one or more of (i) a permanent object location at the event area, (ii) a second object at the event area, (iii) a second participant at the event area, and (iv) a secondary grid at the event area.
- determining the viewpoint further includes determining relative location data of the viewpoint, with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena.
- the secondary grid is a secondary virtual grid positioned between the viewpoint and the object or the participant.
- Certain embodiments further include receiving primary images from a plurality of cameras positioned at the event area; and mapping at least one of the primary images to the three-dimensional model.
- mapping further includes mapping light field data to the three-dimensional model.
- determining the viewpoint further includes determining, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience.
- Certain embodiments further include: generating a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera; and sending one or both of (i) the three-dimensional model and (ii) at least a portion of the virtual image to a viewing device configured to provide the viewing experience.
- Certain embodiments further include: determining when an obstruction is located between (i) one of the viewpoint and the virtual camera and (ii) one of the object and the participant; and adding at least a portion of the virtual image, corresponding to a location of the obstruction, to the viewing experience to at least partially remove the obstruction from the viewing experience.
- mapping further includes mapping at least a portion of one of the primary images identified by a section of a secondary grid corresponding to the participant or object.
- Certain embodiments further include adding visual special effects and audible special effects to the viewing experience, the special effects being generated based upon one or both of (i) the location data and the movement data of the object and/or the participant and (ii) an occurrence of interest detected within the event area.
- Certain embodiments further include: receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and generating the viewing experience to include sounds based upon the three-dimensional model.
- Certain embodiments further include providing haptic feedback to the spectator based at least in part upon one or more of (a) the virtual camera and the location data of the object and the participant, (b) an occurrence of interest detected within the event area and the visual and audio special effects, and (c) feedback from other spectators sharing the viewing experience.
- a system generates a free-viewpoint experience for a spectator.
- the system includes a plurality of cameras positioned at an event area to capture primary images of the event area; tracking apparatus configured to determine location data and movement data of (a) at least one object and (b) at least one participant within the event area; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: receiving the primary images from the plurality of cameras; determining a three-dimensional model of the event area, the participant and the object based upon the location data and the movement data of the participant and the object; and sending an output to a viewing device for providing the free-viewpoint experience, having at least one virtual image, to the spectator.
- the system further includes machine readable instructions that, when processed by the server, are capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the at least one virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- the output includes one or both of the virtual image, and the three-dimensional model.
- a process generates a viewing experience.
- the process determines location data and movement data of (a) at least one object and (b) at least one participant within an event area; determines a three-dimensional model of the event area, the participant and the object based upon the location data and movement data; determines a viewpoint of the spectator, the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience; and generates the viewing experience at least in part from the three-dimensional model.
- the generating includes blurring parts of the viewing experience that are less important to reduce latency of generating the viewing experience.
- determining location data and movement data further includes capturing light-field data relative to the object and the participant to enhance the three-dimensional model.
- determining a viewpoint further includes capturing light-field data relative to the viewpoint to enhance the three-dimensional model.
- the light field data includes light intensity, light direction, and light color.
- determining location data and movement data further includes determining relative location data, of the object and the participant, with respect to one or more of (i) a permanent object location at the arena, (ii) a second object at the arena, (iii) a second participant at the arena, and (iv) a secondary grid at the arena.
- determining a viewpoint further includes determining relative location data, of the viewpoint, with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena.
- the secondary grid is a secondary virtual grid positioned between the viewpoint and the object or the participant.
- determining location data and movement data further includes triangulating signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- determining a viewpoint further includes triangulating signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- Certain embodiments further include: receiving primary images from a plurality of cameras positioned at an event area; and mapping at least one of the primary images to the three-dimensional model.
- mapping further includes mapping light field data to the three-dimensional model.
- determining the viewpoint further includes: determining, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience.
- Certain embodiments further include: within the three-dimensional model, determining, around each of the participant and the object, a virtual grid having a plurality of cells; and the step of mapping further comprising: mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object.
- mapping further includes: mapping at least a portion of one of the primary images identified by a section of the secondary grid corresponding to the participant or object.
- mapping further includes interpolating between any two of the primary images.
- Certain embodiments further include generating a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera.
- Certain embodiments further include sending the location data of the object and the participant to a viewing device configured to provide the viewing experience.
- Certain embodiments further include sending one or both of (i) the three-dimensional model and (ii) at least a portion of the virtual image to a viewing device configured to provide the viewing experience.
- Certain embodiments further include: determining an occurrence of interest; and adding visual special effects and audible special effects to the viewing experience, the special effects based on (i) the location data and movement data of the object and/or the participant and (ii) the occurrence of interest.
- Certain embodiments further include determining when an obstruction is located between (i) the viewpoint and (ii) the object or the participant.
- Certain embodiments further include determining when an obstruction is located between (i) the virtual camera and (ii) the object or the participant.
- Certain embodiments further include adding at least a portion of the virtual image, corresponding to the location of the obstruction, to the viewing experience.
- Certain embodiments further include removing the obstruction from the viewing experience.
- Certain embodiments further include receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and determining the viewing experience to include sounds based upon the three-dimensional model.
- Certain embodiments further include providing haptic feedback to the spectator based on the virtual camera and the location data of the object and the participant.
- Certain embodiments further include providing haptic feedback to the spectator based on the occurrence of interest and the visual and audio special effects.
- Certain embodiments further include providing haptic feedback to the spectator based on feedback from other spectators sharing the viewing experience.
- a system generates a viewing experience for a spectator.
- the system includes event tracking apparatus configured to determine location data and movement data of (i) an object and (ii) a participant within an event area; spectator tracking apparatus configured to determine spectator location data and spectator viewing direction data; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: determining a three-dimensional model of the event area, the model having the participant and the object based upon the location data and movement data of the participant and the object; and determining a spectator viewpoint based on the spectator location data and spectator viewing direction data; the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience.
- Certain embodiments further include a plurality of cameras positioned at an event area to capture primary images of the event area.
- the event tracking apparatus determines location data and movement data of the participant and the object using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the object location unit and the participant location unit attached to the object and to the participant, respectively; the signals being selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- the event tracking apparatus determines location data and movement data of the participant and the object using light field data captured by one or the combination of (i) the event tracking apparatus and (ii) the object location unit and the participant location unit.
- the spectator tracking apparatus determines spectator location data and spectator viewing direction data using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- the spectator tracking apparatus determines spectator location data and spectator viewing direction data using light field data captured by one or the combination of (i) the spectator tracking apparatus and (ii) the spectator location unit.
- the light field data includes light intensity, light direction, and light color.
- the machine-readable instructions when processed by the server, are further capable of: determining an occurrence of interest based on the three-dimensional model and the spectator viewpoint; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- Certain embodiments further include a software module that, when executed by a processor of a viewing device, is capable of: augmenting the viewing experience for the spectator based on the occurrence of interest received from the server.
- the software module augments the viewing experience via providing visual special effects and audible special effects.
- the machine readable instructions when processed by the server, are further capable of: receiving the primary images from the plurality of cameras; determining, within the three-dimensional model, around each of the participant and the object, a virtual grid having a plurality of cells; mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object; and generating a virtual image having a portion of the three-dimensional model corresponding to the participant or the object based on the spectator viewpoint.
- the machine-readable instructions when processed by the server, are further capable of: correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the viewpoint and the participant or object.
- the secondary grid is a virtual secondary grid.
- the machine-readable instructions when processed by the server, are further capable of: interpolating between portions of the primary images.
- the software module further augments the viewing experience via providing the virtual image received from the server.
- the machine-readable instructions when processed by the server, are further capable of: determining, based on the three-dimensional model and the spectator viewpoint, when an obstruction is located between (i) the viewpoint and (ii) the object or the participant; and sending directives to the software module to display at least a portion of the virtual image corresponding to the obstruction.
- the machine-readable instructions when processed by the server, are further capable of: receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and generating a sound output based on one or the combination of (i) the spectator viewpoint and (ii) the occurrence of interest.
- the software module further augments the viewing experience via providing the sound output received from the server.
- the software module further augments the viewing experience via providing haptic feedback based on the occurrence of interest.
- a system generates a free-viewpoint experience for a spectator.
- the system includes a plurality of cameras positioned at an event area to capture primary images of the event area; tracking apparatus configured to determine location data and movement data of (a) at least one object and (b) at least one participant within the event area; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: receiving the primary images from the plurality of cameras; determining a three-dimensional model of the event area, the participant and the object based upon the location data and movement data of the participant and the object; and sending an output to a viewing device for providing the free-viewpoint experience, having at least one virtual image, to the spectator.
- the tracking apparatus determines location data and movement data of the participant and the object using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the object location unit and the participant location unit attached to the object and to the participant, respectively; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- the tracking apparatus determines location data and movement data of the participant and the object using light field data captured by one or the combination of (i) the tracking apparatus and (ii) the object location unit and the participant location unit.
- the machine-readable instructions when processed by the server, are further capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- the output is the virtual image.
- the output is the three-dimensional model.
- Certain embodiments further include a software module, having machine readable instructions, that when executed by a processor of the viewing device is capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- the machine-readable instructions when processed by the server, are further capable of: determining, within the three-dimensional model, around each of the participant and the object, a virtual grid having a plurality of cells; and mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object.
- the machine-readable instructions when processed by the server, are further capable of: correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the virtual camera and the participant or object.
- the software module is further capable of correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the virtual camera and the participant or object.
- the secondary grid is a virtual secondary grid.
- the machine-readable instructions are further capable of: interpolating between any two of the primary images.
- the software module is further capable of interpolating between any two of the primary images.
- the machine-readable instructions when processed by the server, are further capable of: determining, based on the three-dimensional model and the virtual camera, when an obstruction is located between (i) the virtual camera and (ii) the object or the participant; and sending directives to the software module to display at least a portion of the virtual image corresponding to the obstruction.
- the software module is further capable of removing an obstruction from the virtual image, the obstruction located between the virtual camera and the participant or object within the virtual image.
- the machine-readable instructions when processed by the server, are further capable of determining an occurrence of interest based on the three-dimensional model and the virtual camera; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- the machine-readable instructions when processed by the server, are further capable of sending directives to the software module to provide visual special effects and audible special effects, within the free-viewpoint experience, based on the three-dimensional model, virtual camera, and occurrence of interest.
- the machine-readable instructions when processed by the server, are further capable of adding, to the virtual image, visual special effects and audible special effects based on the three-dimensional model, virtual camera, and occurrence of interest.
- the software module is further capable of determining an occurrence of interest based on the three-dimensional model and the virtual camera; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- the software module is further capable of providing visual special effects and audible special effects, within the free-viewpoint experience, based on the three-dimensional model, virtual camera, and occurrence of interest.
- the machine-readable instructions when processed by the server, are further capable of: receiving sound feeds from a plurality of microphones positioned at the event area; and mapping the sound feeds to the three-dimensional model.
- the output, of the server further includes sounds based on the three-dimensional model and the virtual camera.
- the software module is further capable of providing sounds, within the free-viewpoint experience, based on the three-dimensional model and the virtual camera.
- the software module is further capable of providing haptic feedback, within the free-viewpoint experience, based on the virtual camera and the occurrence of interest.
- FIG. 1 is a schematic diagram illustrating one example system for creating a viewing experience, according to an embodiment.
- FIG. 2 illustrates one example viewing experience created by the system of FIG. 1 , according to an embodiment.
- FIG. 3 shows the system of FIG. 1 in further example detail, for creating a viewing experience from a 3D model based upon a spectator controlled viewpoint, according to an embodiment.
- FIG. 4 shows the system of FIG. 3 further including a spectator tracking apparatus, according to an embodiment.
- FIG. 5 shows the viewing device of FIGS. 3 and 4 in further example detail, according to an embodiment.
- FIG. 6 shows the system of FIG. 5 further including at least one microphone and illustrating a virtual camera, according to an embodiment.
- FIG. 7 shows the system of FIG. 6 further illustrating generation of special effects to enhance the viewing experience, according to an embodiment.
- FIG. 8 shows the system of FIG. 7 further illustrating generation of haptic feedback with the viewing experience, according to an embodiment.
- FIG. 9 shows the system of FIGS. 1-8 with a plurality of virtual cameras illustratively shown within the event arena.
- FIG. 10 shows one example participant configured with one of the microphones and one of the cameras of the system of FIGS. 1-9 , and further configured with a plurality of participant location units, in an embodiment.
- FIGS. 11A-11C depict a virtual grid around a participant, in an embodiment.
- FIGS. 12A-12C show a portion of the event arena of FIG. 1 having a surrounding border forming a secondary grid, in an embodiment.
- FIGS. 13A and 13B show an obstruction positioned between a spectator viewpoint or virtual camera and a participant.
- FIG. 14A shows a portion of a viewing experience where an obstruction blocks part of the participant, and FIG. 14B shows the virtual experience where the participant is displayed through the obstruction, in embodiments.
- FIGS. 15A-19B are flowcharts illustrating a method for creating a viewing experience, according to certain embodiments.
- FIG. 20 is a schematic overview of the systems of FIGS. 1 and 3-14, in embodiments.
- FIG. 21 is a playout workflow of the systems of FIGS. 1 and 3-14, in embodiments.
- a spectator of an event has a view that is limited in perspective either because of a location of the spectator relative to the action in the event, or by the location of cameras capturing images of the event.
- Systems and associated methods disclosed herein create an enhanced viewing experience for a spectator that includes one or more of augmented reality, mixed reality, extended reality, and virtual reality. These viewing experiences may be uniquely created by the spectator and shared socially.
- FIG. 1 is a schematic diagram illustrating one example system 100 for creating a viewing experience.
- FIG. 2 illustrates an example viewing experience 200 generated by system 100 of FIG. 1 .
- FIGS. 1 and 2 are best viewed together with the following description.
- System 100 includes a plurality of cameras 106 , an event tracking apparatus 108 , and a server 110 .
- Event tracking apparatus 108 tracks the position (location, orientation, movements, etc.) of participants 102 and objects 104 (e.g., a ball, player equipment, and so on) within an event area 103 .
- Event area 103 is any area that may be tracked by system 100, such as a soccer field where the event is a soccer game, an American football field where the event is American football, an ice rink where the event is an ice hockey game, a stage where the event is a concert, an office where the event is a conference, and so on.
- Cameras 106(1)-(4) are positioned around, above and within event area 103 to capture live images of an event within event area 103. Captured images may be streamed to server 110 as image feeds (see, e.g., image feeds F1-F4, FIG. 3) and stored in a database 113. Although shown with four cameras 106, system 100 may include more or fewer cameras without departing from the scope hereof. One or more of cameras 106 may be configured to capture infrared images, or images using other wavelengths, without departing from the scope hereof.
- Tracking information, which may include occurrences of interest, sensor data, and other information, is also sent from the event tracking apparatus 108 to server 110 (e.g., see feed F5, FIG. 3), where it may be stored together with information of image feeds F1-F4 in database 113.
- database 113 may be part of server 110 .
- Tracked events, or portions thereof, may be given a unique identifier (also referred to as a "Tag") that is tracked within database 113 and/or provided via an external blockchain ledger, for example, to allow the event (or portion thereof) to be referenced by internal and external systems.
- spectators 101 may trade access records (tags) identifying the specific events, or portions thereof, that they have watched. Such tags may allow other spectators to replay these identified events, or portions thereof, based upon the tag.
- Server 110 uses information stored within database 113 to replay content of recorded events, or portions thereof; server 110 generates a three-dimensional model 111 (FIG. 3) from corresponding data in database 113 for this replay.
- this replay of events allows spectator 101 to review actions and events from different viewpoints at a later time, as compared to the viewpoint he or she had when watching the event live, for example.
- spectator 101 may adjust timing of replayed action. For example, when watching a scene with several participants 102, spectator 101 may adjust the replay speed of one of the participants such that scene dynamics are changed. A trainer, for instance, may use replay of a captured scenario and change the speed of different participants to illustrate possible results that might have occurred had one of the participants moved 20% slower or faster. Such adjustment of replay timing to see alternative results may, for example, be implemented through telestration with a vision cone.
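- The per-participant replay-speed adjustment just described could, for example, be approximated by re-sampling each participant's recorded track on its own time scale before recomposing the replay frame. The following Python sketch is illustrative only; the function names (resample_track, recompose_scene) and the track data layout are assumptions, not part of the patent.

```python
from bisect import bisect_left

def resample_track(track, t, speed=1.0):
    """Return the (x, y, z) position of one recorded track at replay time t.

    track -- list of (timestamp, (x, y, z)) samples, sorted by timestamp
    speed -- per-participant replay-speed factor (0.8 = 20% slower)
    """
    # Scale replay time into the participant's own recorded timeline.
    src_t = track[0][0] + (t - track[0][0]) * speed
    times = [s[0] for s in track]
    i = bisect_left(times, src_t)
    if i == 0:
        return track[0][1]
    if i >= len(track):
        return track[-1][1]
    (t0, p0), (t1, p1) = track[i - 1], track[i]
    a = (src_t - t0) / (t1 - t0)          # linear interpolation weight
    return tuple(c0 + a * (c1 - c0) for c0, c1 in zip(p0, p1))

def recompose_scene(tracks, speeds, t):
    """Rebuild one replay frame with a different speed factor per participant."""
    return {pid: resample_track(trk, t, speeds.get(pid, 1.0))
            for pid, trk in tracks.items()}

# Example: replay participant "P2" at 80% speed while "P1" runs in real time.
tracks = {
    "P1": [(0.0, (0.0, 0.0, 0.0)), (1.0, (1.0, 0.0, 0.0)), (2.0, (2.0, 0.0, 0.0))],
    "P2": [(0.0, (0.0, 5.0, 0.0)), (1.0, (0.0, 4.0, 0.0)), (2.0, (0.0, 3.0, 0.0))],
}
print(recompose_scene(tracks, {"P2": 0.8}, t=1.5))
```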
- event tracking apparatus 108 determines location data and movement data of each participant 102 and/or each object 104 within the event area 103 using triangulation.
- event tracking apparatus 108 may include three or more receivers positioned around the event area 103 to receive signals from one or more location units (see location unit 1002 of FIG. 10 ) positioned on each participant 102 and/or object 104 .
- event tracking apparatus 108 may determine a location of each participant 102 and/or object 104 based upon signals received from the location units.
- the signals used for triangulation may for example be sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, or any combinations thereof.
- the location units on object 104 and/or participant 102 each include transponders emitting radio wave signals that are triangulated to determine location by event tracking apparatus 108 .
- each of the location units may periodically and/or aperiodically determine and report its location to the event tracking apparatus 108 .
- the location units separately include capability (for example, triangulation determined on board based on fixed transponders around event area 103) to determine and repetitively report a unique position.
- such location units may even employ GPS
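- A minimal sketch of the triangulation step, assuming time-of-arrival ranging from a transponder to three fixed receivers; the receiver coordinates, the signal speed constant, and the trilaterate function are illustrative assumptions rather than the patent's method.

```python
import numpy as np

# Illustrative fixed receiver positions around an event area (metres).
RECEIVERS = np.array([[0.0, 0.0], [100.0, 0.0], [50.0, 60.0]])
SPEED_OF_SIGNAL = 3.0e8  # radio-wave propagation speed (m/s)

def trilaterate(arrival_delays):
    """Estimate a 2-D transponder position from time-of-arrival delays.

    arrival_delays -- propagation delay (s) measured at each receiver.
    Linearises the three range equations and solves the resulting 2x2 system.
    """
    d = np.asarray(arrival_delays) * SPEED_OF_SIGNAL   # ranges to each receiver
    (x1, y1), (x2, y2), (x3, y3) = RECEIVERS
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d[0]**2 - d[1]**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d[0]**2 - d[2]**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: a location unit at (30, 20) m produces these delays.
true_pos = np.array([30.0, 20.0])
delays = np.linalg.norm(RECEIVERS - true_pos, axis=1) / SPEED_OF_SIGNAL
print(trilaterate(delays))   # approximately [30. 20.]
```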
- Event tracking apparatus 108 may also receive sensor data from location devices (e.g., location devices 1002 ) attached to each of the participants 102 and/or objects 104 .
- the location devices may include one or more sensors (e.g., accelerometers) that detect movement of participants 102 and/or objects 104 .
- Location devices may be positioned on the participants 102 and/or objects 104 to detect particular movement.
- a head mounted location device may detect head movement of participant 102
- a hand mounted location device may detect hand movement of participant 102 , and so on.
- These physical sensors may also be configured to detect specific posture moves of participant 102 , such as reaching, squatting, laying, bending, and so on.
- server 110 may thus determine the location, orientation, and posture of participants 102 , based on the location devices, such that three-dimensional model 111 accurately portrays the event within event area 103 .
- event tracking apparatus 108 may send this information to server 110 to generate, in real-time, a three-dimensional model 111 of the event area 103 , along with the participants 102 and the objects 104 .
- Server 110 may also use images from the cameras 106 ( 1 )-( 4 ) to enhance three-dimensional model 111 , as described in detail below.
- cameras 106 may be positioned around, within, and above event area 103 to capture primary images of the event occurring within event area 103 .
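- A simplified sketch of how tracking updates (feed F5) from the location units and sensors described above might be applied to a participant's entry in the real-time three-dimensional model; the ParticipantPose and EventModel classes and the update message format are hypothetical stand-ins for model 111, not the patent's data structures.

```python
from dataclasses import dataclass, field

@dataclass
class ParticipantPose:
    """One participant's entry in the three-dimensional model.

    Keyed joint positions come from body-mounted location units (e.g. head,
    hands); the posture label is derived from those positions and from
    accelerometer data.
    """
    participant_id: str
    joints: dict = field(default_factory=dict)   # joint name -> (x, y, z)
    posture: str = "standing"                    # e.g. reaching, squatting, ...

class EventModel:
    """Minimal stand-in for three-dimensional model 111 (illustrative only)."""

    def __init__(self):
        self.participants = {}

    def apply_tracking_update(self, update):
        """Apply one tracking-feed message (feed F5 in the figures).

        update -- {"id": str, "joints": {name: (x, y, z)}, "posture": str}
        """
        pose = self.participants.setdefault(
            update["id"], ParticipantPose(update["id"]))
        pose.joints.update(update["joints"])
        pose.posture = update.get("posture", pose.posture)

model = EventModel()
model.apply_tracking_update({
    "id": "player-7",
    "joints": {"head": (12.0, 3.1, 1.8), "left_hand": (11.7, 3.4, 1.2)},
    "posture": "reaching",
})
print(model.participants["player-7"])
```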
- system 100 generates the viewing experience 200 on a viewing device 112 for each spectator 101 of an event in real time (i.e., live).
- the viewing experience 200 may be based upon one or more of augmented, mixed, virtual, and extended reality, and may include visual and/or audible special effects 202 (FIG. 2) that are generated by system 100 to enhance the viewing experience 200 for spectators 101.
- system 100 may generate viewing experience 200 to include real-time visual and/or audible special effects 202 of a dazzling fireworks display with corresponding sounds.
- system 100 generates viewing experience 200 for spectator 101(1) based upon a viewpoint 320 that is freely selectable by spectator 101(1) and may resemble a viewpoint captured by a virtual camera (see FIGS. 6 and 9) that is virtually positioned anywhere by the spectator 101(1).
- spectator 101(1) may, for example, position the virtual camera 606 near the soccer player or aim its view towards the goal as the soccer player kicks the ball, thereby having a previously unobtainable viewpoint 320 of live action.
- FIG. 3 shows system 100 of FIG. 1 in further example detail.
- system 100 creates a viewing experience from a 3D model based upon a spectator viewpoint 320 that may be controlled by the spectator 101 .
- Server 110 includes at least one processor 302 and memory 304 storing machine-readable instructions 306 that, when executed by the processor 302 , control the at least one processor 302 to generate three-dimensional model 111 of the event area 103 , participants 102 and objects 104 based upon the location data and movement data captured by the event tracking apparatus 108 .
- Instructions 306 may also control the at least one processor 302 to determine spectator viewpoint 320 based on the spectator location data and spectator viewing direction data.
- the spectator viewpoint 320 may define a location of spectator 101 , relative to the three-dimensional model 111 , and a direction of view of the spectator 101 such that the server 110 then generates viewing experience 200 from the three-dimensional model 111 based upon the spectator viewpoint.
- Location units 1002 may be placed with spectator 101 to determine location of spectator 101 ; or cameras 106 may be used to determine location of spectator 101 ; or viewing device 112 may have its own location capability to determine spectator location, for example.
- the viewing device 112 includes user controls 310 that allow spectator 101 to control the spectator viewpoint 320 , and thereby the spectator viewing experience 200 displayed on a display 312 of viewing device 112 .
- the spectator viewpoint 320 may include spectator coordinate information based upon a grid used by the three-dimensional model 111, wherein the user controls 310 allow spectator 101 to reposition the spectator viewpoint 320 within three-dimensional model 111 such that spectator 101 watches the event from other desired perspectives.
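- A minimal sketch of a spectator viewpoint defined by an origin and a viewing direction relative to the model, with a simple viewing-cone test standing in for full rendering; the Viewpoint and render_view names and the field-of-view default are assumptions for illustration only.

```python
import math

class Viewpoint:
    """Spectator viewpoint: an origin and a viewing direction in model space."""

    def __init__(self, origin, direction, fov_deg=90.0):
        self.origin = origin
        norm = math.sqrt(sum(c * c for c in direction)) or 1.0
        self.direction = tuple(c / norm for c in direction)
        self.cos_half_fov = math.cos(math.radians(fov_deg) / 2.0)

    def can_see(self, point):
        """True if a model-space point lies inside the viewing cone."""
        v = tuple(p - o for p, o in zip(point, self.origin))
        dist = math.sqrt(sum(c * c for c in v)) or 1.0
        cos_angle = sum(a * b for a, b in zip(v, self.direction)) / dist
        return cos_angle >= self.cos_half_fov

def render_view(model_points, viewpoint):
    """Select the model entities that the viewpoint's image would contain."""
    return {name: p for name, p in model_points.items() if viewpoint.can_see(p)}

# Repositioning the viewpoint via user controls is just a new origin/direction.
vp = Viewpoint(origin=(0.0, -10.0, 2.0), direction=(0.0, 1.0, 0.0))
points = {"player-7": (1.0, 5.0, 1.0), "ball": (0.0, 3.0, 0.2), "bench": (0.0, -20.0, 0.5)}
print(render_view(points, vp))   # the bench is behind the viewpoint, so it is excluded
```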
- instructions 306, when executed by processor 302, control processor 302 to implement artificial intelligence to estimate images needed to complete viewing experience 200, by learning how to provide data that might be missing from feeds F1-F7 (see, e.g., FIGS. 6 and 7).
- system 100 may learn to store portions of images and information that may be used to correct and/or complete three-dimensional model 111 under certain conditions when such information may be missing from feeds F1-F7. For example, based upon positioning of cameras 106 and/or obstruction of one participant 102 by another participant or by building structure, if image feeds F1-F5 (FIGS. 3-5) do not include certain portions of participant 102 or object 104, system 100 may use images and/or data from database 113 to complete three-dimensional model 111 so that the spectator can replay the event without obstruction.
- FIG. 4 shows the system of FIG. 3 further including a spectator tracking apparatus 402 that may be configured to determine spectator location data and spectator viewing direction data for each spectator 101 , illustratively shown as a spectator location and viewing direction data feed F 6 to server 110 .
- location for the spectator 101 may be derived in various ways for inclusion in feed F 6 .
- FIG. 5 shows viewing device 112 of FIGS. 3 and 4 in further example detail.
- machine readable instructions 306 when processed by the server 110 , are capable of determining an occurrence of interest 130 (e.g., an action, situation, etc., within the event area, as shown in FIGS. 1 and 2 ) based on the three-dimensional model 111 and the spectator viewpoint 320 .
- the occurrence of interest 130 may have, at least, an identity and coordinates relative to the three-dimensional model 111 .
- viewing device 112 includes a processor 502 and memory 504 storing a software module 506 that, when executed by processor 502 , controls processor 502 to augment viewing experience 200 for spectator 101 when instructed by server 110 .
- Viewing device 112 may for example be a screen held by, or positioned in front of, spectator 101, or a device worn by spectator 101, such as a helmet, goggles, glasses, or contact lenses. Viewing device 112 thereby positions viewing experience 200 in front of the spectator's eye(s), projects viewing experience 200 into the spectator's field of vision, or projects viewing experience 200 into the spectator's eye(s).
- viewing device 112 may be a tablet, a computer, or a mobile device (e.g., a smartphone).
- viewing device 112 may be an Oculus Go™ device, an iPad™, an augmented reality display, and so on.
- Viewing device 112 may include one or more sensors that sense input, such as movement, noise, location, selection, and so on, by spectator 101 . This input may be used to direct spectator viewpoint 320 and/or the virtual camera for example.
- FIG. 7 shows the system of FIG. 6 further illustrating generation of special effects to enhance the viewing experience 200 , according to an embodiment.
- server 110 generates visual and/or audible special effects 202 that are added to 3D model 111 .
- Visual and/or audible special effects 202 may be added to three-dimensional model 111 as if they are part of live action, wherein the viewing experience 200 generated from three-dimensional model 111 includes the special effects 202 .
- Visual and/or audible special effects 202 may be included within three-dimensional model 111 as codes and/or instructions that may be sent to viewing device 112 when the corresponding viewpoint 320 includes the visual and/or audible special effects 202 .
- software module 506 is configured to control processor 502 to augment viewing experience 200 by providing visual and/or audible special effects 202 .
- Visual and/or audible special effects 202 may include, for example, one or more of fireworks, an explosion, and a comet tail and may be associated with images of participants 102 , objects 104 , or other computer-generated images.
- Visual and/or audible special effects 202 may also include outlining one or more participants 102 and/or object 104 in viewing experience 200 .
- Visual and/or audible special effects 202 may also include visual manipulation of images of participants 102 and/or objects 104 .
- Visual and/or audible special effects 202 may further include annotating information that provides spectator 101 with additional information on the event within event area 103 .
- the annotation information may be selected based at least in part upon one or more of the event, occurrence of interest, participant 102 , and/or object 104 .
- viewing device 112 may display annotation data that includes scoring statistics of a basketball player during a live match.
- any audible portion of visual and/or audible special effects 202 may correspond to the visual portion of the visual and/or audible special effects 202 .
- the audible portion may include the sound of an explosion corresponding to the visual portion that shows an explosion.
- the visual and audible portions of the visual and/or audible special effects 202 may be independent of each other.
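- One way the visual and/or audible special effects 202 could be represented as codes anchored to model coordinates and delivered only for viewpoints that include them is sketched below; the SpecialEffect record, the range-only visibility test, and the payload strings are illustrative assumptions, not the patent's encoding.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SpecialEffect:
    """Visual/audible effect anchored to coordinates in the three-dimensional model."""
    effect_id: str
    position: Tuple[float, float, float]   # (x, y, z) in model space
    visual: str                            # e.g. "fireworks", "comet_tail", "outline"
    audio: Optional[str] = None            # e.g. "explosion.wav"; None for silent effects

def effects_for_viewpoint(effects: List[SpecialEffect], viewpoint_origin, max_range=150.0):
    """Pick the effect codes to send to a viewing device for one viewpoint.

    A full system would test the view frustum and the occurrence of interest;
    this sketch keeps only a range test so the example stays short.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [e for e in effects if dist(e.position, viewpoint_origin) <= max_range]

effects = [
    SpecialEffect("goal-fireworks", (50.0, 100.0, 10.0), "fireworks", "cheer.wav"),
    SpecialEffect("far-banner", (500.0, 900.0, 10.0), "outline"),
]
print(effects_for_viewpoint(effects, viewpoint_origin=(40.0, 60.0, 2.0)))
```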
- microphones 602 placed at the event area 103 may provide direct audio data as sound feeds 604 collected at feed F 7 .
- Software module 506 may receive visualization data from server 110 such that the software module 506 augments the viewing experience 200 by applying the visualizations to the body of participant 102 and/or object 104.
- the server 110 may apply the visualizations to three-dimensional model 111 such that viewing experience 200 is generated by server 110 with the applied visualizations.
- These visualizations may for example indicate a status of participant 102 and/or object 104 within the game play, such as one or more of health, power, weapon status, and so on.
- FIG. 10 shows one example participant 102 configured with one of the microphones 602 and one of the cameras 106 of the system of FIGS. 1-9 , and further configured with a plurality of participant location units 1002 . More than one camera 106 and more than one microphone 602 may be affixed to the participant 102 without departing from the scope hereof. Similarly, one (or a plurality of) cameras 106 may be affixed to one or more objects 104 . When attached to participants 102 and/or objects 104 that move, event tracking apparatus 108 may also track location and movement of the attached camera 106 and/or microphone 602 .
- participant 102 wears a suit that is configured with a combination of location units 1002 , cameras 106 , microphones 602 , and other sensors (e.g., biometric sensors) that provide data to server 110 .
- One or more of these sensors may be inside the body or attached to the body of participant 102.
- the event tracking apparatus 108 and/or the spectator tracking apparatus 402 may be integrated, at least in part, with server 110.
- Event tracking apparatus 108 and/or spectator tracking apparatus 402 may instead be a computer based server (like server 110 ) that includes a processor and memory storing instructions that control the server to use sensor data to track location of the participants 102 , objects 104 and/or spectators 101 .
- These servers, and server 110 may be a video processing server, for example.
- event area 103 may be a sporting arena, a stage, an outdoor field, a street, or a room, for example.
- the event occurring within event area 103 may thus be a sporting event, a concert, a play, an opera, a march, or other event (such as a conference in a conference room) that may have spectators 101 .
- system 100 provides multiple viewing experiences 200 to the spectators 101 .
- Instructions 306 when executed by processor 302 , may control processor 302 to generate three-dimensional model 111 based upon event area 103 , wherein three-dimensional model 111 may represent physical construction of event area 103 .
- three-dimensional model 111 may alternatively have a representation that differs from event area 103 .
- three-dimensional model 111 may be generated to represent certain structure that is not present within the actual event area 103 , and is therefore unrelated to physical structure at event area 103 .
- three-dimensional model 111 may in part be generated from images and data stored within a database (e.g., database 113 ) that define structure unconnected with event area 103 .
- Three-dimensional model 111 may for example represent multiple adjoining event areas whereas the actual event area 103 does not physically adjoin these other event areas represented within three-dimensional model 111 .
- representation of event area 103 by three-dimensional model 111 may be selected by one or more of spectator 101 , participant 102 , and/or crowd-sourced selection (e.g., multiple spectators 101 ).
- spectator 101 may control three-dimensional model 111 to represent event area 103 as a mountain top, even though the actual event area 103 is a room.
- spectator 101 may change the representation of the stage to be on a mountain top, wherein the participants 102 and objects 104 (e.g., performers and instruments) are shown within viewing experience 200 and being on the mountain top.
- Server 110 may provide multiple functions. For example, in FIGS. 3-9 , event tracking apparatus 108 may provide location and movement data (shown as data stream F 5 ) to server 110 , while the plurality of cameras 106 may also provide images (shown as image streams F 1 -F 4 ) to server 110 .
- FIG. 10 shows a plurality of location units 1002 positioned on one participant 102; these location units 1002 may also be positioned on, or configured with, objects 104. Tracking of objects 104 and/or participants 102 may further include multiple input, multiple output (MIMO) protocols understood by server 110.
- Event tracking apparatus 108 may for example use image analysis to identify a location of, and a position of, participant 102 and/or object 104 .
- Event tracking apparatus 108 may use images captured by at least two cameras 106 at event area 103 to triangulate location of participants 102 and/or objects 104 .
- the location units 1002 may include reflective and/or emissive visual markers that may be detected within the images captured by cameras 106 .
- Event tracking apparatus 108 may alternatively determine location data and movement data of participants 102 and/or objects 104 using light field data captured by one or more of (i) event tracking apparatus 108 (e.g., using cameras 106 and/or other cameras) and (ii) location units 1002 at the object 104 and/or the participant 102 .
- event tracking apparatus 108 may include or be connected to one or more light-field cameras positioned to capture light-field data of the event area 103 .
- the light-field data may include one or more of light intensity, light direction, and light color.
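- A small sketch of how captured light-field samples carrying the listed attributes (light intensity, light direction, and light color) might be represented and binned spatially before being mapped onto the three-dimensional model; the LightFieldSample fields and the cell size are assumptions for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class LightFieldSample:
    """One captured light-field sample: where the ray hits the scene, plus the
    listed attributes (light intensity, light direction, and light color)."""
    position: tuple    # (x, y, z) hit point in model space
    direction: tuple   # unit ray direction
    intensity: float
    color: tuple       # (r, g, b), each 0..1

def bin_samples_by_cell(samples, cell_size=0.05):
    """Group samples into coarse spatial cells so they can be mapped onto the
    three-dimensional model (the cell size here is illustrative)."""
    cells = defaultdict(list)
    for s in samples:
        key = tuple(int(c // cell_size) for c in s.position)
        cells[key].append(s)
    return cells

samples = [
    LightFieldSample((1.02, 0.40, 1.78), (0.0, -1.0, 0.0), 0.9, (1.0, 0.9, 0.8)),
    LightFieldSample((1.03, 0.41, 1.79), (0.1, -0.9, 0.0), 0.8, (1.0, 0.9, 0.7)),
]
print({k: len(v) for k, v in bin_samples_by_cell(samples).items()})
```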
- spectator tracking apparatus 402 may include components, features, and functionality similar to event tracking apparatus 108 to track location and/or viewing direction of each spectator 101 .
- Spectator tracking apparatus 402 may determine spectator location data and spectator viewing direction data using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit (similar to the location unit 1002 ).
- the signals may for example be sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and combinations thereof.
- the spectator tracking apparatus 402 may determine spectator location data and spectator viewing direction data using light field data captured by one or the combination of (i) the spectator tracking apparatus and (ii) the spectator location unit 1002 .
- Spectator location and viewing direction may also be determined by image analysis of images captured by a camera on the viewing device 112 used by the spectator 101 . Spectator location and viewing direction may be determined through image analysis of images captured by multiple viewing devices 112 , each from a different spectator 101 .
- FIGS. 4-8 illustrate spectator tracking apparatus 402 providing information of spectator viewpoint 320 to server 110 as feed F 6 .
- event tracking apparatus 108 may include functionality to track spectators 101 and generate spectator location and viewing direction data as feed F6; in this case, spectator tracking apparatus 402 need not be used, and system 100 retains all functionality.
- Sharing of viewing experiences may be accomplished in several ways. For example, as shown in FIGS. 1 and 9 , spectator 101 ( 1 ) may share viewing experience 200 with another spectator 101 ( 6 ). In another example, a celebrity (e.g., famous player, movie star, etc.) may create and share viewing experience 200 with other spectators 101 (followers). In yet another example, as a participant 102 at a conference, spectator 101 ( 1 ) may share viewing experience 200 with many other spectators 101 not at the conference.
- machine readable instructions 306, when executed by processor 302, control processor 302 to: (a) receive the primary images (e.g., image feeds F1-F4 in the example of FIGS. 3-9); (b) determine, within three-dimensional model 111 and around each of participant 102 and object 104, a virtual grid 1102 having a plurality of cells 1104; and (c) map at least a portion of one of the primary images identified by at least one cell 1104 of virtual grid 1102 corresponding to participant 102 or object 104.
- Virtual grid 1102 may be used to enhance three-dimensional model 111 and/or viewing experience 200 that is based on the three-dimensional model 111 by more accurately, and with higher resolution, rendering images of participant 102 and/or object 104 .
- virtual grid 1102 has a longitudinal direction and appears multi-sided when viewed in the longitudinal direction.
- virtual grid 1102 may be hexagonal in shape when viewed in the longitudinal direction, as illustrated in FIGS. 11A-11C .
- FIGS. 11A-11C further illustrate mapping of portions of primary images (e.g., from image feeds F 1 -F 4 ) to cells 1104 of virtual grid 1102 .
- FIG. 11A illustrates virtual grid 1102 around participant 102 with no mapping, and FIGS. 11B and 11C show different amounts of virtual grid cells 1104 mapped with portions of primary images from image feeds F1-F4.
- virtual grid cells that do not correspond to a portion of participant 102 may be unmapped, for example to save processing time.
- cell 1104 may correspond to a real-world dimension of between one and ten centimeters.
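- The sketch below illustrates one way portions of primary images might be assigned to cells of a virtual grid such as virtual grid 1102, leaving cells that do not correspond to the participant unmapped. The 5 cm cell size, the pinhole project_to_image helper, and the occupancy mask are assumptions for illustration; they stand in for the camera calibration and image analysis an actual implementation would need.

```python
import numpy as np

CELL_SIZE_M = 0.05   # assumed cell edge of 5 cm, within the 1-10 cm range above

def project_to_image(cell_center, camera):
    """Hypothetical pinhole projection of a 3D cell center into image pixels."""
    rel = camera["rotation"] @ (cell_center - camera["position"])
    if rel[2] <= 0:
        return None                      # cell is behind the camera
    u = camera["fx"] * rel[0] / rel[2] + camera["cx"]
    v = camera["fy"] * rel[1] / rel[2] + camera["cy"]
    return int(round(u)), int(round(v))

def map_image_to_grid(image, camera, cell_centers, occupied_mask, patch=4):
    """Map small image patches onto occupied grid cells; skip empty cells."""
    mapped = {}
    for idx, (center, occupied) in enumerate(zip(cell_centers, occupied_mask)):
        if not occupied:
            continue                     # unmapped cell: saves processing time
        pix = project_to_image(center, camera)
        if pix is None:
            continue
        u, v = pix
        h, w = image.shape[:2]
        if 0 <= v < h - patch and 0 <= u < w - patch:
            mapped[idx] = image[v:v + patch, u:u + patch].copy()
    return mapped
```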
- instructions 306 when executed by processor 302 , may control processor 302 to generate a virtual image based at least in part upon the primary images (e.g., from feeds F 1 -F 4 ) identified by at least one cell 1104 ( FIG. 11A-C ) or portion of a secondary grid 1202 corresponding to participant 102 or object 104 .
- secondary grid 1202 may be positioned between spectator 101 and participant 102 and/or object 104 .
- When system 100 generates viewing experience 200 in real-time based upon image feeds F1-F4, location and movement data feed F5, spectator location and viewing direction data feed F6, and sound feeds F7, latency of system 100 should be kept low to maintain integrity of viewing experience 200, particularly where viewing experience 200 shows augmented reality or extended reality. Accordingly, the amount of processing required to generate viewing experience 200 may be reduced by determining spectator viewpoint 320 based upon a location of spectator 101 relative to event area 103 and by providing only the information needed by viewing device 112 to generate viewing experience 200.
- Since three-dimensional model 111 represents event area 103, and spectator viewpoint 320 may not include all of event area 103, only part of three-dimensional model 111 may actually be needed to generate viewing experience 200.
- the use of secondary grid 1202 may further reduce processing necessary to generate viewing experience 200 , by identifying cells 1104 of virtual grid 1102 that are needed to generate viewing experience 200 ; cells that are not needed do not require intensive image processing, thereby reducing latency in system 100 .
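- A minimal sketch of such culling, assuming a simple conical field of view for spectator viewpoint 320, is shown below; only the returned cells would receive image mapping and rendering work. The cone test and its parameters are illustrative, not a prescribed frustum model.

```python
import numpy as np

def visible_cells(cell_centers, viewpoint_origin, view_dir, fov_deg=60.0, max_range=100.0):
    """Return indices of grid cells inside a simple conical field of view.

    Cells outside the cone (or beyond max_range) are skipped entirely, so no
    image mapping or rendering work is spent on them.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    cos_half_fov = np.cos(np.radians(fov_deg) / 2.0)
    keep = []
    for idx, center in enumerate(cell_centers):
        offset = center - viewpoint_origin
        dist = np.linalg.norm(offset)
        if dist == 0.0 or dist > max_range:
            continue
        if (offset / dist) @ view_dir >= cos_half_fov:
            keep.append(idx)
    return keep
```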
- Latency may be further reduced by implementing bokeh within one or both of instructions 306 of server 110 and software module 506 of viewing device 112 .
- Bokeh causes blurring of less important portions of an image (e.g., background and/or foreground), which reduces the required resolution for those portions of viewing experience 200 . Accordingly, fewer pixels need be rendered to generate viewing experience 200 based upon three-dimensional model 111 , thereby reducing latency of system 100 .
- Bokeh may also highlight the portion of interest (e.g., occurrence of interest 130 ) to the user within viewing experience 200 since this portion appears in more detail and attracts the attention of the eye of spectator 101 , whereas the blurred foreground/background has reduced detail that does not attract the eye's attention.
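- The following sketch approximates such bokeh-style blurring with a depth-dependent Gaussian filter, assuming a per-pixel depth map rendered from the three-dimensional model is available; the banded blur levels and the SciPy dependency are implementation choices for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bokeh_blur(image, depth, focus_depth, blur_per_meter=1.5, max_sigma=8.0):
    """Blur pixels in proportion to their distance from the focal plane.

    `image` is an H x W grayscale array and `depth` an H x W array of
    per-pixel distances; both are assumed inputs. Regions near `focus_depth`
    (e.g. the occurrence of interest) stay sharp, while foreground and
    background detail is reduced.
    """
    sigma = np.clip(np.abs(depth - focus_depth) * blur_per_meter, 0.0, max_sigma)
    levels = [0.0, 2.0, 4.0, 8.0]                 # quantized blur strengths
    band_id = np.digitize(sigma, levels[1:])      # 0 = in focus, 3 = strongest blur
    out = image.astype(float).copy()
    for i, s in enumerate(levels):
        if i == 0:
            continue                              # band 0 keeps original pixels
        mask = band_id == i
        if mask.any():
            out[mask] = gaussian_filter(image.astype(float), sigma=s)[mask]
    return out
```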
- Secondary grid 1202 may be a physical grid such as a net, a windshield, or a border (collectively referred to as border 1206) positioned around event area 103, as shown in FIG. 12A.
- event area 103 may have an upright border that contains a grid, and which may be visible to the human eye.
- the grid may be undetectable to the human eye but may be detected by features (e.g., sensors) of viewing device 112 , such as a camera of viewing device 112 , wherein the border 1206 may allow viewing device 112 to determine its orientation and/or location relative to event area 103 .
- the border grid may comprise features and/or components capable of emitting, reflecting, or detecting visible light, infrared light, ultraviolet light, microwaves, and/or radio waves.
- secondary grid 1202 may be worn by the spectator 101 over the spectator's eye(s) and/or integrated with viewing device 112 such that the secondary grid 1202 appears within the spectator's viewpoint (e.g., in front of the spectator's eyes) and thus over participant 102 and/or object 104 .
- secondary grid 1202 may be virtual and determined by server 110 or viewing device 112 .
- secondary grid 1202 may be generated based upon virtual camera 606 .
- the secondary grid 1202 may be positioned perpendicular to the viewing direction of spectator 101 .
- secondary grid 1202 may move and/or rotate as the location and/or viewing direction of spectator 101 changes.
- cells of secondary grid 1202 may provide references, in combination with the virtual grid 1102 , to enhance three-dimensional model 111 and/or viewing experience 200 based on three-dimensional model 111 , and to render participant 102 and/or object 104 in more detail.
- instructions 306 when executed by processor 302 , may control processor 302 to interpolate between portions of the primary images (e.g., feeds F 1 -F 4 ) to generate viewing experience 200 .
- instructions 306 when executed by processor 302 , may control processor 302 to augment viewing experience 200 provided to the spectator 101 by providing the virtual image received from server 110 .
- server 110 may send one or more virtual images, generated from three-dimensional model 111 , to viewing device 112 such that viewing device 112 may selectively enhance viewing experience 200 .
- instructions 306 when executed by processor 302 , may control processor 302 to: (a) determine, based on three-dimensional model 111 and spectator viewpoint 320 , when an obstruction is located between (i) the spectator viewpoint 320 and (ii) object 104 or participant 102 ; and (b) send directives to software module 506 to display at least a portion of the virtual image corresponding to the desired view without the obstruction.
- an obstruction 1302 is located in event area 103 and positioned between spectator viewpoint 320 and participant 102 .
- obstruction 1302 may be a partial wall that obstructs a conventional view of participant 102 by spectator 101 .
- FIG. 14B illustrates viewing experience 200, generated by server 110, in which participant 102 is displayed through obstruction 1302 such that spectator 101 may still fully view participant 102.
- FIG. 6 shows system 100 of FIG. 5 further including at least one microphone 602 positioned around and/or within event area 103 .
- instructions 306 when executed by processor 302 , may control processor 302 to: (a) receive sound feeds F 7 from at least one of microphones 602 positioned at and/or within event area 103 ; (b) map sound feeds F 7 to three-dimensional model 111 ; and (c) generate viewing experience 200 to include sound based on one or more of (i) the spectator viewpoint 320 and (ii) occurrence of interest 130 .
- any number of microphones 602 may be positioned within, around, and/or above event area 103 .
- one or more microphones 602 may be positioned on participant 102 , such as shown in FIG. 10 , and/or object 104 .
- sound feeds F 7 from microphones 602 are input to server 110 .
- software module 506, when executed by processor 502, controls processor 502 to augment viewing experience 200 by providing at least part of sound feed F7 as provided by server 110.
- viewing device 112 may generate viewing experience 200 to include sounds associated with the event, such as when participant 102 scores a goal in a sporting event.
- spectator 101 may hear words as they are spoken by participant 102 .
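- One simple way to produce viewpoint-dependent sound, sketched below under assumed inputs, is to weight each microphone feed by its inverse distance from the spectator viewpoint within the three-dimensional model. Real spatial audio would also account for direction, delay, and occlusion; the weighting scheme here is purely illustrative.

```python
import numpy as np

def mix_for_viewpoint(sound_feeds, mic_positions, viewpoint_origin, ref_dist=1.0):
    """Mix microphone feeds with inverse-distance weights for one viewpoint.

    `sound_feeds` is an M x N array (M microphones, N samples) and
    `mic_positions` an M x 3 array of microphone locations mapped into the
    three-dimensional model; both are assumed inputs.
    """
    dists = np.linalg.norm(mic_positions - viewpoint_origin, axis=1)
    weights = ref_dist / np.maximum(dists, ref_dist)   # closer mics dominate
    weights = weights / weights.sum()
    return weights @ sound_feeds                       # N-sample mono mix

# Hypothetical usage: three mics around an event area, 1 second at 48 kHz.
feeds = np.random.randn(3, 48000)
mics = np.array([[0.0, 0.0, 2.0], [10.0, 0.0, 2.0], [5.0, 8.0, 2.0]])
mono = mix_for_viewpoint(feeds, mics, viewpoint_origin=np.array([2.0, 1.0, 1.7]))
```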
- FIG. 8 shows the system of FIG. 7 further illustrating generation of haptic feedback with viewing experience 200 .
- Software module 506, when executed by processor 502, controls processor 502 to further augment viewing experience 200 by providing haptic feedback based at least in part upon occurrence of interest 130.
- Viewing device 112 may include a haptic feedback actuator 802 that includes vibration-generating components.
- occurrence of interest 130 may occur when two participants 102 hit each other, wherein haptic feedback actuator 802 is controlled such that spectator 101 feels a vibration.
- spectator 101 may receive haptic feedback, via haptic feedback actuator 802 , from the other spectator.
- the other spectator may applaud or cheer, causing the feedback to be received and output by viewing device 112 of spectator 101.
- FIG. 9 illustrates event area 103 of FIGS. 1 and 3-8 with spectators 101 located around event area 103 with free-viewpoint experiences.
- system 100 may create a free-viewpoint experience for spectator 101 by generating viewing experience 200 as an entirely virtual reality (as opposed to augmented reality based upon adding virtual images to an image of reality).
- server 110 may generate viewing experience 200 based upon at least one virtual image generated from three-dimensional model 111 and send viewing experience 200 to viewing device 112 to provide the free-viewpoint experience to spectator 101 .
- instructions 306 when executed by processor 302 , may control processor 302 to generate viewing experience 200 as a virtual image based at least in part upon at least a portion of three-dimensional model 111 .
- spectator 101 receives viewing experience 200 as a real-time virtual experience generated from three-dimensional model 111 of the event occurring within the event area 103 .
- spectator 101 may control a virtual camera 606 to create a free-viewpoint that is similar to spectator viewpoint 320 , but need not be based upon a location of spectator 101 relative to event area 103 .
- instructions 306 when executed by processor 302 , control processor 302 to: (a) determine, based on viewing directives received from viewing device 112 through interaction of spectator 101 with user controls 310 , a virtual camera 606 defining an origin within three-dimensional model 111 and a corresponding viewing direction; and (b) generate viewing experience 200 as a virtual image based at least in part upon three-dimensional model 111 and corresponding virtual camera 606 .
- spectator 101 ( 3 ) controls virtual camera 606 ( 1 ), via a virtual link 904 ( 1 ), spectator 101 ( 4 ) controls virtual camera 606 ( 2 ), via a virtual link 904 ( 2 ), and spectator 101 ( 5 ) controls virtual camera 606 ( 3 ), via a virtual link 904 ( 3 ).
- Virtual camera 606 and virtual link 904 are terms used to define the free-viewpoint as controlled by spectator 101 to create the desired viewing experience 200 .
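- A minimal representation of such a free-viewpoint camera, an origin plus a viewing direction in model space updated from spectator controls, might look like the sketch below. The class name, yaw/pitch parameterization, and control mapping are assumptions, not the patent's prescribed data structure.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class VirtualCamera:
    """Free-viewpoint camera: an origin and viewing direction in model space."""
    position: np.ndarray = field(default_factory=lambda: np.zeros(3))
    yaw_deg: float = 0.0     # rotation about the vertical axis
    pitch_deg: float = 0.0   # up/down tilt

    def direction(self) -> np.ndarray:
        """Unit viewing direction derived from yaw and pitch."""
        yaw, pitch = np.radians(self.yaw_deg), np.radians(self.pitch_deg)
        return np.array([np.cos(pitch) * np.cos(yaw),
                         np.cos(pitch) * np.sin(yaw),
                         np.sin(pitch)])

    def apply_controls(self, move=(0.0, 0.0, 0.0), d_yaw=0.0, d_pitch=0.0):
        """Update the free viewpoint from spectator input (e.g. user controls)."""
        self.position = self.position + np.asarray(move, dtype=float)
        self.yaw_deg += d_yaw
        self.pitch_deg = float(np.clip(self.pitch_deg + d_pitch, -89.0, 89.0))

# Hypothetical usage: place the camera near a performer and turn toward the stage.
cam = VirtualCamera(position=np.array([2.0, 3.0, 1.6]))
cam.apply_controls(move=[0.5, 0.0, 0.0], d_yaw=15.0)
print(cam.position, cam.direction())
```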
- spectator 101 ( 3 ) may have a seat that is distant from event area 103 , but may interact with server 110 (using user controls 310 of viewing device 112 ) to position virtual camera 606 ( 1 ) in a desired location to generate viewing experience 200 with a more favorable view of, or within, event area 103 .
- spectator 101 ( 3 ) may position virtual camera 606 ( 1 ) in front of the drum player on the stage.
- Spectator 101(3) thus receives and views viewing experience 200 based upon the defined free-viewpoint, which differs from his/her physical location.
- In this example, the drum player is participant 102, the drums are object 104, the stage is event area 103, and the concert is the event being performed within event area 103.
- Server 110 may simultaneously provide a different viewing experience 200 to each spectator 101 , where certain ones of the viewing experiences may be based upon spectator viewpoints 320 derived from location of the spectator as determined by spectator tracking apparatus 402 , and certain other of the viewing experiences are based upon virtual cameras 606 controlled by the corresponding spectator. Spectators 101 may switch between these different types of viewing experiences.
- spectator 101 watching a performer on a stage uses a mobile device (e.g., an iPad, or similar device), to position virtual camera 606 near the singer such that the mobile device displays viewing experience 200 with a close-up view of the singer.
- system 100 may allow a first spectator 101 to view viewing experience 200 controlled by a second spectator 101 , wherein the first spectator does not control, manipulate, or influence the viewing experience 200 , since this viewing experience 200 is controlled by the second spectator.
- software module 506 within viewing device 112 may include instructions that, when executed by processor 502, control processor 502 to: (a) based on viewing directives received from the spectator via user controls 310, interact with server 110 to create and control virtual camera 606, which defines an origin within the three-dimensional model and a viewing direction of the free-viewpoint experience; and (b) generate viewing experience 200 as virtual images showing at least a portion of three-dimensional model 111, based at least in part upon the corresponding virtual camera 606.
- server 110 may send at least a portion of three-dimensional model 111 to viewing device 112 , wherein virtual camera 606 may be implemented within viewing device 112 and software module 506 generates viewing experience 200 using the three-dimensional model and the free-viewpoint defined by the virtual camera.
- instructions 306 and/or software module 506 may correct generation of viewing experience 200 (e.g., the virtual image) using primary images of video feeds F1-F4 identified by at least one cell of secondary grid 1202 corresponding to participant 102 and/or object 104 within viewing experience 200.
- server 110 may send at least part of three-dimensional model 111 , and/or virtual images thereof, to viewing device 112 , which may enhance and/or correct the virtual image and/or three-dimensional model 111 based on secondary grid 1202 .
- software module 506 may also interpolate between any two of the primary images, for example when correcting the virtual image.
- instructions 306 when executed by processor 302 , control processor 302 to: (a) determine, based on three-dimensional model 111 and virtual camera 606 , when an obstruction is located between virtual camera 606 and object 104 and/or participant 102 ; and (b) send directives to software module 506 to display at least a portion of the virtual image corresponding to the obstruction.
- instructions 306 when executed by processor 302 , control processor 302 to remove an obstruction from the virtual image, when the obstruction is located between virtual camera 606 and participant 102 and/or object 104 within the virtual image. In the example of FIG. 13B , an obstruction 1302 is between participant 102 and virtual camera 606 .
- FIG. 14A shows that a portion of participant 102 is hidden by obstruction 1302; in viewing experience 200, obstruction 1302 is removed, at least in part, from the corresponding virtual image.
- participant 102 may be overlaid, using the corresponding virtual image, over obstruction 1302 , to generate viewing experience 200 to show the participant to the spectator 101 .
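- The sketch below shows one simplified way to detect such an obstruction and switch to the virtual overlay: sample the line of sight between the camera (or viewpoint) origin and the participant against an axis-aligned box standing in for obstruction 1302. The box representation, sampling test, and render callbacks are hypothetical simplifications.

```python
import numpy as np

def segment_hits_box(start, end, box_min, box_max, samples=64):
    """Return True if the straight segment from `start` to `end` passes through
    an axis-aligned box, a stand-in for an obstruction in the three-dimensional
    model. Sampling is a simplification of an exact ray/box intersection test."""
    for t in np.linspace(0.0, 1.0, samples):
        p = start + t * (end - start)
        if np.all(p >= box_min) and np.all(p <= box_max):
            return True
    return False

def compose_view(camera_pos, participant_pos, obstruction_box, render_virtual, render_real):
    """Use real imagery when the view is clear, or the virtual image of the
    participant when an obstruction blocks the line of sight."""
    blocked = segment_hits_box(camera_pos, participant_pos, *obstruction_box)
    return render_virtual() if blocked else render_real()

# Hypothetical usage with placeholder render callbacks.
box = (np.array([4.0, 0.0, 0.0]), np.array([5.0, 3.0, 2.5]))
view = compose_view(np.array([0.0, 1.0, 1.7]), np.array([8.0, 1.0, 1.0]), box,
                    render_virtual=lambda: "virtual overlay of participant 102",
                    render_real=lambda: "unmodified camera view")
print(view)
```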
- instructions 306 when executed by processor 302 , control processor 302 to determine occurrence of interest 130 ( FIG. 1 ) based at least in part upon three-dimensional model 111 and virtual camera 606 , determining at least an identity and coordinates, relative to three-dimensional model 111 , for occurrence of interest 130 .
- occurrence of interest 130 may be tagged such that it may be selected and viewed by spectators 101 .
- server 110 may send directives to software module 506 to provide visual and/or audible special effects 202 , within viewing experience 200 , based at least in part upon three-dimensional model 111 , virtual camera 606 , and occurrence of interest 130 .
- instruction 306 when executed by processor 302 , may control processor 302 of server 110 to add, to the virtual image, visual and/or audible special effects 202 based at least in part upon three-dimensional model 111 , virtual camera 606 , and occurrence of interest 130 .
- software module 506 may control processor 502 to determine occurrence of interest 130 based at least in part upon three-dimensional model 111 and virtual camera 606 , determining at least an identity and coordinates, relative to three-dimensional model 111 , for occurrence of interest 130 .
- Software module 506 may also control processor 502 to generate visual and/or audible special effects 202 within viewing experience 200 , based at least in part upon three-dimensional model 111 , virtual camera 606 , and occurrence of interest 130 .
- server 110 may be configured to generate viewing experience 200 with sounds based at least in part upon three-dimensional model 111 and virtual camera 606 . These sounds may be determined by processing and mapping sound feeds F 7 received from microphones 602 at event area 103 . For example, sound feeds F 7 may be processed and mapped based upon the location of virtual camera 606 within three-dimensional model 111 , such that viewing experience 200 has sounds according to that location.
- software module 506 when executed by processor 502 , may control processor 502 to process sounds stored within three-dimensional model 111 , and/or sounds of sound feeds F 7 , to generate sounds within viewing experience 200 based at least in part upon three-dimensional model 111 and virtual camera 606 .
- software module 506 may also control processor 502 to generate haptic feedback, using haptic feedback actuator 802 , to further enhance viewing experience 200 , based at least in part upon virtual camera 606 and occurrence of interest 130 .
- the haptic feedback may be generated based at least in part upon a location of virtual camera 606 within three-dimensional model 111 relative to participant 102 , object 104 , a border of event area 103 , and/or one or more permanent objects within event area 103 .
- software module 506 may control haptic feedback actuator 802 to generate the haptic feedback (e.g., a vibration) to indicate that the location of virtual camera 606 is not valid.
- software module 506 may control haptic feedback actuator 802 to generate the haptic feedback (e.g., a vibration) when spectator 101 maneuvers virtual camera 606 to virtually “bump” into participant 102 and/or object 104 .
- participant 102 is a quarterback and object 104 is an American football.
- the quarterback throws the football to a point in space.
- System 100 generates viewing experience 200 based upon virtual camera 606 positioned at the point in space and facing the quarterback.
- Spectator 101 appears to receive the football from the quarterback using viewing experience 200 viewed on viewing device 112 (e.g., an iPad or similar device).
- Accelerometers, gyroscopes, and/or other sensors within viewing device 112 may sense movement of viewing device 112 by spectator 101; this sensed movement may manipulate virtual camera 606, such that spectator 101 may attempt to maneuver virtual camera 606 into the path of the ball.
- system 100 may generate haptic feedback on viewing device 112 to simulate the ball being caught by spectator 101.
- Viewing experience 200 (of the attempted catch) may be shared with followers of spectator 101 , wherein the followers may also cause haptic feedback on viewing device 112 in an attempt to distract spectator 101 from making the catch.
- viewing experience 200 may be shared through social media networks, wherein messaging of the social media networks may be used for the feedback from the followers.
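- A toy sketch of the catch interaction, under heavy simplification, is shown below: sensed device motion (passed in as a displacement vector) nudges the virtual camera, and a haptic callback fires when the modeled ball position comes within an assumed catch radius. The motion vector and the trigger_haptic callback are placeholders for platform sensor and haptic APIs.

```python
import numpy as np

CATCH_RADIUS_M = 0.5   # assumed reach within which the catch "succeeds"

def update_catch_attempt(camera_pos, device_motion, ball_pos, trigger_haptic):
    """Nudge the virtual camera by the sensed device motion and fire haptic
    feedback if the modeled ball position is within catching range.

    `device_motion` is a 3-vector displacement derived from accelerometer and
    gyroscope data (integration and filtering omitted here); `trigger_haptic`
    stands in for the viewing device's haptic actuator.
    """
    camera_pos = camera_pos + np.asarray(device_motion, dtype=float)
    if np.linalg.norm(ball_pos - camera_pos) <= CATCH_RADIUS_M:
        trigger_haptic(strength=1.0)       # simulate the ball being caught
    return camera_pos

# Hypothetical usage over a few tracking frames.
cam = np.array([30.0, 10.0, 1.5])
for ball in [np.array([25.0, 10.0, 3.0]),
             np.array([28.0, 10.0, 2.2]),
             np.array([30.1, 10.0, 1.6])]:
    cam = update_catch_attempt(cam, device_motion=[0.05, 0.0, 0.0], ball_pos=ball,
                               trigger_haptic=lambda strength: print("buzz", strength))
```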
- rendering of three-dimensional model 111 may be enhanced by mapping light-field data onto at least a portion of three-dimensional model 111 , in addition to mapping of portions of the image feeds F 1 -F 4 onto three-dimensional model 111 .
- Capture and mapping of light-field data may also include capturing and mapping of light data corresponding to reflections, as noted previously.
- FIG. 20 shows a high level operational overview 2000 of system 100 of FIGS. 1, and 3-14 .
- Overview 2000 shows five stages of operation of system 100 .
- In the first stage, system 100 tracks and captures data from event area 103: cameras 106, event tracking apparatus 108, spectator tracking apparatus 402, and microphones 602 generate data feeds F1-F7 of movement and activity of participants 102 and objects 104 within event area 103.
- In the second stage, system 100 catalogs the data feeds F1-F7 and stores them within collective database 113.
- In the third stage, system 100 generates three-dimensional model 111 as at least part of the computer-generated image graphic engine.
- In the fourth stage, system 100 tags the event, portions thereof, and occurrences of interest 130 within database 113 and/or a blockchain ledger.
- In the fifth stage, system 100 uses viewing devices 112 to display viewing experiences 200 generated from three-dimensional model 111.
- FIGS. 15-19 are flowcharts that collectively show one example method 1500 for creating a viewing experience.
- Method 1500 includes steps 1502 - 1506 , as shown in FIG. 15A , and may further include any combination of steps 1508 - 1560 shown in FIGS. 15B, 16A, 16B, 17A, 17B, 18A, 18B, 19A, and 19B .
- In step 1502, method 1500 determines location and movement data. In one example of step 1502, event tracking apparatus 108 determines location data and movement data of participants 102 and objects 104.
- In step 1504, method 1500 determines a three-dimensional model. In one example of step 1504, server 110 generates three-dimensional model 111 based upon location and event data feed F5 and image feeds F1-F4.
- In step 1506, method 1500 determines a spectator viewpoint. In one example of step 1506, instructions 306, when executed by processor 302, control processor 302 to determine spectator viewpoint 320 defining an origin, relative to three-dimensional model 111, and a direction for viewing experience 200.
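- Steps 1502-1506 can be summarized, purely as a sketch, as the small pipeline below; the build_model, derive_viewpoint, and render callables are placeholders for the server 110 and viewing device 112 functionality described throughout this section, not actual APIs of system 100.

```python
def create_viewing_experience(tracking_data, primary_images, spectator_state,
                              build_model, derive_viewpoint, render):
    """Minimal pipeline mirroring steps 1502-1506: determine location and
    movement data, build the three-dimensional model, determine the spectator
    viewpoint, and generate the viewing experience."""
    location_and_movement = tracking_data                          # step 1502 (feed F5)
    model = build_model(location_and_movement, primary_images)     # step 1504
    viewpoint = derive_viewpoint(spectator_state, model)           # step 1506
    return render(model, viewpoint)                                # viewing experience 200
```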
- In step 1508 (FIG. 15B), method 1500 captures light-field data relative to the object and the participant. In one example of step 1508, server 110 processes image feeds F1-F4 and other sensed data (e.g., feeds F5, F6, F7) to determine light-field data for one or more of object 104, participant 102, and even spectator 101.
- In step 1510, method 1500 determines relative location data of the object and the participant. In one example of step 1510, server 110 determines relative location data for each of objects 104 and participants 102 with respect to one or more of (i) a permanent object location at the arena, (ii) other objects 104 within event area 103, (iii) other participants 102 within event area 103, and (iv) a secondary grid at the arena.
- In step 1512, method 1500 triangulates signals from and/or at location units. In one example of step 1512, event tracking apparatus 108 triangulates signals from location units 1002, where the signals are selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- FIG. 16A shows steps 1514 - 1518 .
- In step 1514, method 1500 captures light-field data relative to the spectator viewpoint. In one example of step 1514, server 110 determines light-field data from image feeds F1-F4 with respect to spectator viewpoint 320.
- In step 1516, method 1500 determines relative location data of the viewpoint with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena. In one example of step 1516, server 110 determines relative locations of spectator viewpoint 320 with respect to one or more of three-dimensional model 111, object 104, participant 102, and/or secondary grid 1202.
- In step 1518, method 1500 triangulates signals that are emitted from and/or received by a location unit configured with the spectator. In one example of step 1518, spectator tracking apparatus 402 triangulates signals received from location unit 1002 attached to spectator 101, where the signals are selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- FIG. 16B shows steps 1520 - 1524 .
- In step 1520, method 1500 receives primary images from a plurality of cameras positioned at an event area. In one example of step 1520, server 110 receives image feeds F1-F4 from cameras 106.
- In step 1522, method 1500 maps at least one of the images, from cameras 106, to the three-dimensional model. In one example of step 1522, server 110 maps at least part of the images from image feeds F1-F4 to three-dimensional model 111.
- In step 1524, method 1500 maps light-field data to the three-dimensional model. In one example of step 1524, server 110 maps light-field data to three-dimensional model 111.
- FIG. 17A shows step 1526 , where method 1500 determines, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience.
- In one example of step 1526, instructions 306, when executed by processor 302, control processor 302 to receive input from viewing device 112 and to manipulate virtual camera 606 within three-dimensional model 111 to have a particular location and viewing direction, such that server 110 and/or viewing device 112 generates the desired viewing experience 200.
- FIG. 17B shows steps 1528 - 1540 .
- In step 1528, method 1500 determines, within the three-dimensional model and around each of the participant and the object, a virtual grid having a plurality of cells. In one example of step 1528, instructions 306, when executed by processor 302, control processor 302 to determine virtual grid 1102 around participant 102 within three-dimensional model 111.
- In step 1530, method 1500 maps at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object. In one example of step 1530, instructions 306, when executed by processor 302, control processor 302 to map corresponding portions of images from image feeds F1-F4 to virtual grid cells 1104 within three-dimensional model 111.
- In step 1532, method 1500 maps at least a portion of one of the primary images identified by a section of the secondary grid corresponding to the participant or object. In one example of step 1532, instructions 306, when executed by processor 302, control processor 302 to map, based upon secondary grid 1202 corresponding to participant 102 and/or object 104, at least a portion of the primary images from primary image feeds F1-F4 to participant 102 and/or object 104.
- In step 1534, method 1500 interpolates between any two of the primary images. In one example of step 1534, instructions 306, when executed by processor 302, control processor 302 to interpolate between at least two images of image feeds F1-F4 when mapping.
- In step 1536, method 1500 generates a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera. In one example of step 1536, instructions 306, when executed by processor 302, control processor 302 to generate a virtual image from three-dimensional model 111 based upon spectator viewpoint 320 and/or virtual camera 606.
- In step 1538, method 1500 sends the location data of the object and the participant to a viewing device configured to provide the viewing experience. In one example of step 1538, instructions 306, when executed by processor 302, control processor 302 to send the location of object 104 and/or participant 102 to viewing device 112.
- In step 1540, method 1500 sends one or both of (i) at least a portion of the three-dimensional model and (ii) at least a portion of the virtual image, to a viewing device configured to provide the viewing experience. In one example of step 1540, instructions 306, when executed by processor 302, control processor 302 to send at least part of three-dimensional model 111 and/or at least part of viewing experience 200 to viewing device 112.
- FIG. 18A shows steps 1542 and 1544 .
- In step 1542, method 1500 determines an occurrence of interest. In one example of step 1542, instructions 306, when executed by processor 302, control processor 302 to determine occurrence of interest 130 within three-dimensional model 111 based upon one or more of participant 102, object 104, spectator viewpoint 320, and virtual camera 606.
- In step 1544, method 1500 adds visual special effects and audible special effects to the viewing experience. In one example of step 1544, instructions 306, when executed by processor 302, control processor 302 to generate visual and/or audible special effects 202 for viewing experience 200 based at least in part upon (i) the location data and movement data of object 104 and/or participant 102 and/or (ii) occurrence of interest 130.
- FIG. 18B shows steps 1546 - 1552 .
- In step 1546, method 1500 determines when an obstruction is located between (i) the viewpoint and (ii) the object or the participant. In one example of step 1546, instructions 306, when executed by processor 302, control processor 302 to process three-dimensional model 111 to determine when obstruction 1302 is between spectator viewpoint 320 and participant 102 and/or object 104.
- In step 1548, method 1500 determines when an obstruction is located between (i) the virtual camera and (ii) the object or the participant. In one example of step 1548, instructions 306, when executed by processor 302, control processor 302 to process three-dimensional model 111 to determine when obstruction 1302 is between virtual camera 606 and object 104 and/or participant 102.
- In step 1550, method 1500 adds at least a portion of the virtual image, corresponding to the location of the obstruction, to the viewing experience. In one example of step 1550, instructions 306, when executed by processor 302, control processor 302 to generate viewing experience 200 from at least one virtual image created from three-dimensional model 111 based upon the location of the obstruction.
- In step 1552, method 1500 removes the obstruction from the viewing experience. In one example of step 1552, instructions 306, when executed by processor 302, control processor 302 to remove at least part of obstruction 1302 from viewing experience 200.
- FIG. 19A shows steps 1554 - 1558 .
- In step 1554, method 1500 receives sound feeds from a plurality of microphones positioned at the event area. In one example of step 1554, server 110 receives sound feeds F7 from microphones 602 positioned around and within event area 103.
- In step 1556, method 1500 maps the sound feeds to the three-dimensional model. In one example of step 1556, instructions 306, when executed by processor 302, control processor 302 to map sounds from sound feeds F7 to three-dimensional model 111 based upon the location of microphones 602 relative to event area 103.
- In step 1558, method 1500 generates the viewing experience to include sounds based upon the three-dimensional model. In one example of step 1558, instructions 306, when executed by processor 302, control processor 302 to generate viewing experience 200 to include sounds based upon three-dimensional model 111.
- FIG. 19B shows step 1560 .
- In step 1560, method 1500 provides haptic feedback to the spectator based on the virtual camera and the location data of the object and the participant. In one example of step 1560, server 110 and viewing device 112 cooperate to control haptic feedback actuator 802 to provide haptic feedback to spectator 101 based at least in part upon one or more of a location of the corresponding virtual camera 606 within three-dimensional model 111, a location of participant 102 within three-dimensional model 111, and a location of object 104 within three-dimensional model 111. In another example of step 1560, server 110 and viewing device 112 cooperate to control haptic feedback actuator 802 to provide haptic feedback to spectator 101 based at least in part upon occurrence of interest 130 and/or visual and/or audible special effects 202.
- Example 1 Triangulation of Known and Estimated Points to Create Viewpoint for a Virtual Camera
- Example 1 describes systems and methods for creating a viewpoint that includes a model of a designated geometric shape, where data is derived from multiple known and estimated points, resulting in multiple data registries to be used in perceived and actual reality.
- the result of this method is a set of data points capable of augmenting and re-creating particular moments in time in a defined multi-dimensional space.
- the present disclosure relates to systems and methods configured to facilitate live and recorded mixed, augmented reality, virtual reality, and extended reality environments.
- a viewpoint is created by solving for the human condition of defining when and where a spectator is viewing an event within an area, by accounting for ocular device(s) and spatially separated equilibrium/sound input device(s) inside a determined area (devices may include, but are not limited to, cameras, microphones, and pressure sensors).
- a virtual logarithmic netting is determined around each key individual area (see, e.g., FIGS. 11-12 ).
- the MAVR objects are placed into a multidimensional landscape using spatial X, Y, and Z coordinates plus time for each MAVR object to create the MAVR core.
- This core netting provides multiple specific data points to see what is happening in relation to the experiencer (A), the object(s) of focus (C), and the logarithmic net (B). These three points create a very specific range.
- When a spectator is in the stands, the spectator knows his/her location and where the pitcher is, but more accuracy is gained from having an intermediate reference point. If the spectator is behind home plate, the spectator may be looking through a net.
- the net acts as a logarithmic medium for which to segment the viewing experience into small micro-chambers.
- the net is for example used as an X/Y graph.
- the X/Y graph is applied to that of the spectator's right eye and the spectator's left eye, and because of the offset, the spectator's brain determines the spatial relationship and occludes the net from the spectator's sight.
- a game may be played wherein the entire arena is enclosed in a large plexiglass cage. Where the cage is joined for each panel there is a sensor capable of being a grid marker for MAVR devices.
- Each player in the game wears an array of sensors and cameras.
- Each physical structure in the game has an array of sensors and cameras and each has known, fixed values.
- Each flying ball has a tracking device in it. All of these features have live data sets captured at fifteen times a second or more. In an embodiment, at least a portion of the live data sets are captured periodically at a certain speed (e.g., one hundred and twenty times per second, although other speeds may be used).
- the location and movement data is exported into a server to model the data and determine unknowns based on the model. Missing data points are determined using known data points.
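- As an illustration of determining missing data points from known ones, the sketch below linearly interpolates a dropped tracking sample of a single tracked position over time; a real system might instead use a motion model or filter, and the 120 Hz timestamps and NaN convention here are assumptions.

```python
import numpy as np

def fill_missing_positions(timestamps, positions):
    """Fill dropped tracking samples by interpolating each coordinate over time.

    `positions` is an N x 3 array with NaN rows where a sample was missed
    (e.g. a marker briefly occluded); known samples bracket the gaps.
    """
    positions = positions.astype(float)
    known = ~np.isnan(positions[:, 0])
    for axis in range(3):
        positions[~known, axis] = np.interp(
            timestamps[~known], timestamps[known], positions[known, axis])
    return positions

# Hypothetical 120 Hz track with one dropped sample.
t = np.arange(5) / 120.0
track = np.array([[0.0, 0.0, 1.0],
                  [0.1, 0.0, 1.0],
                  [np.nan, np.nan, np.nan],
                  [0.3, 0.0, 1.1],
                  [0.4, 0.0, 1.1]])
print(fill_missing_positions(t, track))
```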
- Once the model of the game is fixed into a known space, the existing cameras are used to create visual mesh models of the objects that are moving and not moving based on lighting conditions (e.g., live texture mapping of real and estimated real objects).
- the grid space (or, virtual grid) is cross cut horizontally to create the start of a grid system for the object.
- the space is layered vertically to create virtual/assumed sections of volumetric space where the object may be sliced into smaller data sets. Nothing changes physically about the object inside the grid.
- the grid space is used to measure and render the object in relation to the data intersections at all dimensions within the grid. In using this model only a small portion of what is actually visible may be visible to the optical (viewing) device using the MAVR core.
- a combination of multiple ocular devices using the core capture different cross sections of the grid and send the location data back to the server to create a virtualized and accurate model of the subject inside the grid space while the primary camera only has limited view.
- the estimated and the real images are layered into a depth map that matches true reality.
- a further aspect of the systems and methods of this example is establishing an intermediary (secondary) grid between the spectator and the object or participant.
- This increases accuracy at distance by increasing the points of data through adding a secondary grid system.
- This system is three dimensional yet flat in its presentation to the viewing camera as opposed to the gridagonal approach.
- Adding the secondary grid to the already created model gives the second layer of accuracy and can be at any angle to the original grid model. This is relevant to accuracy at a distance.
- a spectator behind home plate, for example, looking through the net has a different viewing angle than a spectator sitting a few feet away.
- the secondary grid is used to increase model accuracy.
- the secondary grid is flat to the eyes and stays fixed to the head's rotation.
- Having two layers of grids allows more points of data to increase the accuracy of tracking movement in the pixel grid.
- the distance between the intermediary grid and the virtual grid helps delineate movement with greater accuracy inside the virtual grid. Layering the two grid systems on top of each other increases the accuracy and the ability to create a free viewpoint camera system.
- a further embodiment of this example is triangulation of different angles to create the grid model of objects.
- a spectator off to a side of the event area views the event through the virtual and secondary grids, and from these additional angles the model can be filled in.
- FIG. 21 is a playout workflow 2100 of the systems of FIGS. 1, and 3-14 .
- Workflow 2100 illustrates a live CGI producer application 2102 generating a playout that may be viewed by a live CGI application 2104 .
- Live CGI producer application 2102 may be implemented within server 110 of FIG. 1, and live CGI application 2104 may be implemented within viewing device 112.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/994,840, filed May 31, 2018, which claims priority to U.S. Patent Application Ser. No. 62/513,198, filed May 31, 2017, both of which are incorporated herein by reference in their entirety.
- Viewing a sport using a feed from a camera positioned and/or controlled to capture action of the sport provides a perspective of the action limited by camera position. Multiple cameras may be used to provide multiple perspectives, but each of these perspectives is still limited by the individual camera position.
- In one embodiment, a process generates a viewing experience by determining location data and movement data of (a) at least one object and (b) at least one participant within an event area; determining a three-dimensional model of the event area, the participant, and the object based upon the location data and the movement data; determining a viewpoint of a spectator, the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience; and generating the viewing experience for the viewpoint at least in part from the three-dimensional model.
- In certain embodiments, generating includes blurring parts of the viewing experience that are less important to reduce latency of generating the viewing experience.
- In certain embodiments, determining the location data and the movement data further includes capturing light-field data relative to the object and the participant to enhance the three-dimensional model.
- In certain embodiments, determining the viewpoint further includes capturing light-field data relative to the viewpoint to enhance the three-dimensional model, wherein the light-field data comprises light intensity, light direction, and light color.
- In certain embodiments, determining the location data and the movement data further includes determining relative location data, of the object and the participant, with respect to one or more of (i) a permanent object location at the event area, (ii) a second object at the event area, (iii) a second participant at the event area, and (iv) a secondary grid at the event area.
- In certain embodiments, determining the viewpoint further includes determining relative location data of the viewpoint, with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena.
- In certain embodiments, the secondary grid is a secondary virtual grid positioned between the viewpoint and the object or the participant.
- Certain embodiments further include receiving primary images from a plurality of cameras positioned at the event area; and mapping at least one of the primary images to the three-dimensional model.
- In certain embodiments, mapping further includes mapping light field data to the three-dimensional model.
- In certain embodiments, determining the viewpoint further includes determining, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience.
- Certain embodiments further include: generating a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera; and sending one or both of (i) the three-dimensional model and (ii) at least a portion of the virtual image to a viewing device configured to provide the viewing experience.
- Certain embodiments further include: determining when an obstruction is located between (i) one of the viewpoint and the virtual camera and (ii) one of the object and the participant; and adding at least a portion of the virtual image, corresponding to a location of the obstruction, to the viewing experience to at least partially remove the obstruction from the viewing experience.
- Certain embodiments further include: determining, within the three-dimensional model and around each of the participant and the object, a virtual grid having a plurality of cells; and mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object.
- In certain embodiments, mapping further includes mapping at least a portion of one of the primary images identified by a section of a secondary grid corresponding to the participant or object.
- Certain embodiments further include adding visual special effects and audible special effects to the viewing experience, the special effects being generated based upon one or both of (i) the location data and the movement data of the object and/or the participant and (ii) an occurrence of interest detected within the event area.
- Certain embodiments further include: receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and generating the viewing experience to include sounds based upon the three-dimensional model.
- Certain embodiments further include providing haptic feedback to the spectator based at least in part upon one or more of (a) the virtual camera and the location data of the object and the participant, (b) an occurrence of interest detected within the event area and the visual and audio special effects, and (c) feedback from other spectators sharing the viewing experience.
- In another embodiment, a system generates a free-viewpoint experience for a spectator. The system includes a plurality of cameras positioned at an event area to capture primary images of the event area; tracking apparatus configured to determine location data and movement data of (a) at least one object and (b) at least one participant within the event area; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: receiving the primary images from the plurality of cameras; determining a three-dimensional model of the event area, the participant and the object based upon the location data and the movement data of the participant and the object; and sending an output to a viewing device for providing the free-viewpoint experience, having at least one virtual image, to the spectator.
- In certain embodiments, the system further includes machine readable instructions that, when processed by the server, are capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the at least one virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- In certain embodiments, the output includes one or both of the virtual image, and the three-dimensional model.
- In another embodiment, a process generates a viewing experience. The process: determines location data and movement data of (a) at least one object and (b) at least one participant within an event area; determines a three-dimensional model of the event area, the participant and the object based upon the location data and movement data; determines a viewpoint of the spectator, the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience; and generates the viewing experience at least in part from the three-dimensional model.
- In certain embodiments, the generating includes blurring parts of the viewing experience that are less important to reduce latency of generating the viewing experience.
- In certain embodiments, determining location data and movement data further includes capturing light-field data relative to the object and the participant to enhance the three-dimensional model.
- In certain embodiments, determining a viewpoint further includes capturing light-field data relative to the viewpoint to enhance the three-dimensional model.
- In certain embodiments, the light field data includes light intensity, light direction, and light color.
- In certain embodiments, determining location data and movement data further includes determining relative location data, of the object and the participant, with respect to one or more of (i) a permanent object location at the arena, (ii) a second object at the arena, (iii) a second participant at the arena, and (iv) a secondary grid at the arena.
- In certain embodiments, determining a viewpoint further includes determining relative location data, of the viewpoint, with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena.
- In certain embodiments, the secondary grid is a secondary virtual grid positioned between the viewpoint and the object or the participant.
- In certain embodiments, determining location data and movement data further includes triangulating signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- In certain embodiments, determining a viewpoint further includes triangulating signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- Certain embodiments further include: receiving primary images from a plurality of cameras positioned at an event area; and mapping at least one of the primary images to the three-dimensional model.
- In certain embodiments, mapping further includes mapping light field data to the three-dimensional model.
- In certain embodiments, determining the viewpoint further includes: determining, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience.
- Certain embodiments further include: within the three-dimensional model, determining, around each of the participant and the object, a virtual grid having a plurality of cells; and the step of mapping further comprising: mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object.
- In certain embodiments, mapping further includes: mapping at least a portion of one of the primary images identified by a section of the secondary grid corresponding to the participant or object.
- In certain embodiments, mapping further includes interpolating between any two of the primary images.
- Certain embodiments further include generating a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera.
- Certain embodiments further include sending the location data of the object and the participant to a viewing device configured to provide the viewing experience.
- Certain embodiments further include sending one or both of (i) the three-dimensional model and (ii) at least a portion of the virtual image to a viewing device configured to provide the viewing experience.
- Certain embodiments further include: determining an occurrence of interest; and adding visual special effects and audible special effects to the viewing experience, the special effects based on (i) the location data and movement data of the object and/or the participant and (ii) the occurrence of interest.
- Certain embodiments further include determining when an obstruction is located between (i) the viewpoint and (ii) the object or the participant.
- Certain embodiments further include determining when an obstruction is located between (i) the virtual camera and (ii) the object or the participant.
- Certain embodiments further include adding at least a portion of the virtual image, corresponding to the location of the obstruction, to the viewing experience.
- Certain embodiments further include removing the obstruction from the viewing experience.
- Certain embodiments further include receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and determining the viewing experience to include sounds based upon the three-dimensional model.
- Certain embodiments further include providing haptic feedback to the spectator based on the virtual camera and the location data of the object and the participant.
- Certain embodiments further include providing haptic feedback to the spectator based on the occurrence of interest and the visual and audio special effects.
- Certain embodiments further include providing haptic feedback to the spectator based on feedback from other spectators sharing the viewing experience.
- In another embodiment, a system generates a viewing experience for a spectator. The system includes event tracking apparatus configured to determine location data and movement data of (i) an object and (ii) a participant within an event area; spectator tracking apparatus configured to determine spectator location data and spectator viewing direction data; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: determining a three-dimensional model of the event area, the model having the participant and the object based upon the location data and movement data of the participant and the object; and determining a spectator viewpoint based on the spectator location data and spectator viewing direction data; the viewpoint defining an origin, relative to the three-dimensional model, and a direction of the viewing experience.
- Certain embodiments further include a plurality of cameras positioned at an event area to capture primary images of the event area.
- In certain embodiments, the event tracking apparatus determines location data and movement data of the participant and the object using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the object location unit and the participant location unit attached to the object and to the participant, respectively; the signals being selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- In certain embodiments, the event tracking apparatus determines location data and movement data of the participant and the object using light field data captured by one or the combination of (i) the event tracking apparatus and (ii) the object location unit and the participant location unit.
- In certain embodiments, the spectator tracking apparatus determines spectator location data and spectator viewing direction data using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- In certain embodiments, the spectator tracking apparatus determines spectator location data and spectator viewing direction data using light field data captured by one or the combination of (i) the spectator tracking apparatus and (ii) the spectator location unit.
- In certain embodiments, the light field data includes light intensity, light direction, and light color.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: determining an occurrence of interest based on the three-dimensional model and the spectator viewpoint; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- Certain embodiments further include a software module that, when executed by a processor of a viewing device, is capable of: augmenting the viewing experience for the spectator based on the occurrence of interest received from the server.
- In certain embodiments, the software module augments the viewing experience via providing visual special effects and audible special effects.
- In certain embodiments, the machine readable instructions, when processed by the server, are further capable of: receiving the primary images from the plurality of cameras; determining, within the three-dimensional model, around each of the participant and the object, a virtual grid having a plurality of cells; mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object; and generating a virtual image having a portion of the three-dimensional model corresponding to the participant or the object based on the spectator viewpoint.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the viewpoint and the participant or object.
- In certain embodiments, the secondary grid is a virtual secondary grid.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: interpolating between portions of the primary images.
- In certain embodiments, the software module further augments the viewing experience via providing the virtual image received from the server.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: determining, based on the three-dimensional model and the spectator viewpoint, when an obstruction is located between (i) the viewpoint and (ii) the object or the participant; and sending directives to the software module to display at least a portion of the virtual image corresponding to the obstruction.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: receiving sound feeds from a plurality of microphones positioned at the event area; mapping the sound feeds to the three-dimensional model; and generating a sound output based on one or the combination of (i) the spectator viewpoint and (ii) the occurrence of interest.
- In certain embodiments, the software module further augments the viewing experience via providing the sound output received from the server.
- In certain embodiments, the software module further augments the viewing experience via providing haptic feedback based on the occurrence of interest.
- In another embodiment, a system generates a free-viewpoint experience for a spectator. The system includes a plurality of cameras positioned at an event area to capture primary images of the event area; tracking apparatus configured to determine location data and movement data of (a) at least one object and (b) at least one participant within the event area; and a server having a processor and memory storing machine readable instructions that when executed by the processor are capable of: receiving the primary images from the plurality of cameras; determining a three-dimensional model of the event area, the participant and the object based upon the location data and movement data of the participant and the object; and sending an output to a viewing device for providing the free-viewpoint experience, having at least one virtual image, to the spectator.
- In certain embodiments, the tracking apparatus determines location data and movement data of the participant and the object using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by an object location unit and a participant location unit; the object location unit and the participant location unit attached to the object and to the participant, respectively; the signals selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- In certain embodiments, the tracking apparatus determines location data and movement data of the participant and the object using light field data captured by one or the combination of (i) the tracking apparatus and (ii) the object location unit and the participant location unit.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- In certain embodiments, the output is the virtual image.
- In certain embodiments, the output is the three-dimensional model.
- Certain embodiments further include a software module, having machine readable instructions, that when executed by a processor of the viewing device is capable of: determining, based on viewing directives received from the spectator, a virtual camera defining an origin within the three-dimensional model and a direction of the free-viewpoint experience; and generating the virtual image having a portion of the three-dimensional model, based upon the virtual camera.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: determining, within the three-dimensional model, around each of the participant and the object, a virtual grid having a plurality of cells; and mapping at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the virtual camera and the participant or object.
- In certain embodiments, the software module is further capable of correcting the virtual image based on at least a portion of one of the primary images identified by at least one cell of a secondary grid corresponding to the participant or object; the secondary grid positioned between the virtual camera and the participant or object.
- In certain embodiments, the secondary grid is a virtual secondary grid.
- In certain embodiments, the machine-readable instructions are further capable of: interpolating between any two of the primary images.
- In certain embodiments, the software module is further capable of interpolating between any two of the primary images.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: determining, based on the three-dimensional model and the virtual camera, when an obstruction is located between (i) the virtual camera and (ii) the object or the participant; and sending directives to the software module to display at least a portion of the virtual image corresponding to the obstruction.
- In certain embodiments, the software module is further capable of removing an obstruction from the virtual image, the obstruction located between the virtual camera and the participant or object within the virtual image.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of determining an occurrence of interest based on the three-dimensional model and the virtual camera; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of sending directives to the software module to provide visual special effects and audible special effects, within the free-viewpoint experience, based on the three-dimensional model, virtual camera, and occurrence of interest.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of adding, to the virtual image, visual special effects and audible special effects based on the three-dimensional model, virtual camera, and occurrence of interest.
- In certain embodiments, the software module is further capable of determining an occurrence of interest based on the three-dimensional model and the virtual camera; the occurrence having at least an identity and coordinates relative to the three-dimensional model.
- In certain embodiments, the software module is further capable of providing visual special effects and audible special effects, within the free-viewpoint experience, based on the three-dimensional model, virtual camera, and occurrence of interest.
- In certain embodiments, the machine-readable instructions, when processed by the server, are further capable of: receiving sound feeds from a plurality of microphones positioned at the event area; and mapping the sound feeds to the three-dimensional model.
- In certain embodiments, the output, of the server, further includes sounds based on the three-dimensional model and the virtual camera.
- In certain embodiments, the software module is further capable of providing sounds, within the free-viewpoint experience, based on the three-dimensional model and the virtual camera.
- In certain embodiments, the software module is further capable of providing haptic feedback, within the free-viewpoint experience, based on the virtual camera and the occurrence of interest.
- FIG. 1 is a schematic diagram illustrating one example system for creating a viewing experience, according to an embodiment.
- FIG. 2 illustrates one example viewing experience created by the system of FIG. 1, according to an embodiment.
- FIG. 3 shows the system of FIG. 1 in further example detail, for creating a viewing experience from a 3D model based upon a spectator-controlled viewpoint, according to an embodiment.
- FIG. 4 shows the system of FIG. 3 further including a spectator tracking apparatus, according to an embodiment.
- FIG. 5 shows the viewing device of FIGS. 3 and 4 in further example detail, according to an embodiment.
- FIG. 6 shows the system of FIG. 5 further including at least one microphone and illustrating a virtual camera, according to an embodiment.
- FIG. 7 shows the system of FIG. 6 further illustrating generation of special effects to enhance the viewing experience, according to an embodiment.
- FIG. 8 shows the system of FIG. 7 further illustrating generation of haptic feedback with the viewing experience, according to an embodiment.
- FIG. 9 shows the system of FIGS. 1-8 with a plurality of virtual cameras illustratively shown within the event arena.
- FIG. 10 shows one example participant configured with one of the microphones and one of the cameras of the system of FIGS. 1-9, and further configured with a plurality of participant location units, in an embodiment.
- FIGS. 11A-11C depict a virtual grid around a participant, in an embodiment.
- FIGS. 12A-12C show a portion of the event arena of FIG. 1 having a surrounding border forming a secondary grid, in an embodiment.
- FIGS. 13A and 13B show an obstruction positioned between a spectator viewpoint or virtual camera and a participant.
- FIG. 14A shows a portion of a viewing experience where an obstruction blocks part of the participant, and FIG. 14B shows the virtual experience where the participant is displayed through the obstruction, in embodiments.
- FIGS. 15A-19B are flowcharts illustrating a method for creating a viewing experience, according to certain embodiments.
- FIG. 20 is a schematic overview of the systems of FIGS. 1 and 3-14, in embodiments.
- FIG. 21 is a playout workflow of the systems of FIGS. 1 and 3-14, in embodiments.
- Conventionally, a spectator of an event has a view that is limited in perspective either because of a location of the spectator relative to the action in the event, or by the location of cameras capturing images of the event. Systems and associated methods disclosed herein create an enhanced viewing experience for a spectator that includes one or more of augmented reality, mixed reality, extended reality, and virtual reality. These viewing experiences may be uniquely created by the spectator and shared socially.
- FIG. 1 is a schematic diagram illustrating one example system 100 for creating a viewing experience. FIG. 2 illustrates an example viewing experience 200 generated by system 100 of FIG. 1. FIGS. 1 and 2 are best viewed together with the following description. System 100 includes a plurality of cameras 106, an event tracking apparatus 108, and a server 110. Event tracking apparatus 108 tracks the position (location, orientation, movements, etc.) of participants 102 and objects 104 (e.g., a ball, player equipment, and so on) within an event area 103. Event area 103 is any area that may be tracked by system 100, such as a soccer field where the event is a soccer game, an American football field where the event is American football, an ice rink where the event is an ice hockey game, a stage where the event is a concert, an office where the event is a conference, and so on.
- Cameras 106(1)-(4) are positioned around, above, and within event area 103 to capture live images of an event within event area 103. Captured images may be streamed to server 110 as image feeds (see, e.g., image feeds F1-F4, FIG. 3) and stored in a database 113. Although shown with four cameras 106, system 100 may include more or fewer cameras without departing from the scope hereof. One or more of cameras 106 may be configured to capture infrared images, or images using other wavelengths, without departing from the scope hereof.
- Tracking information, which may include occurrences of interest, sensor data, and other information, is also sent from the event tracking apparatus 108 to server 110 (e.g., see feed F5, FIG. 3), where it may be stored together with information of image feeds F1-F4 in database 113. Although shown separate from server 110, in certain embodiments, database 113 may be part of server 110. Tracked events, or portions thereof, may be given a unique identifier (also referred to as a "Tag") that is tracked within database 113 and/or provided via an external blockchain ledger, for example, to allow the event (or portion thereof) to be referenced by internal and external systems. For example, spectators 101 may trade access records (tags) identifying the specific events, or portions thereof, that they have watched. Such tags may allow other spectators to replay these identified events, or portions thereof, based upon the tag.
- Server 110 uses information stored within database 113 to replay content of recorded events, or portions thereof; server 110 generates a three-dimensional model 111 (FIG. 3) of corresponding data in database 113 for this replay. Advantageously, this replay of events allows spectator 101 to review actions and events from different viewpoints at a later time, as compared to the viewpoint he or she had when watching the event live, for example.
- When replaying an event from database 113, spectator 101 may adjust timing of replayed action. For example, when watching a scene with several participants 102, spectator 101 may adjust the replay speed of one of the participants such that scene dynamics are changed. For example, a trainer may use replay of a captured scenario and change the speed of different participants to illustrate possible results that might have occurred had one of the participants moved 20% slower or faster. Such adjustment of replay timing to see alternative results may, for example, be implemented through telestration with a vision cone.
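- One hedged way to realize the per-participant replay-speed adjustment described above is to resample each participant's tracked trajectory on its own time base. The sketch below assumes trajectories are stored as timestamped positions and uses linear interpolation; these are illustrative choices, not the patented method.

```python
import numpy as np

def resample_trajectory(times_s, positions_xyz, speed_factor, replay_times_s):
    """Return positions for replay timestamps when the participant is played
    back `speed_factor` times faster (>1) or slower (<1) than recorded.

    times_s:        (N,) recorded timestamps in seconds
    positions_xyz:  (N, 3) recorded positions
    replay_times_s: (M,) timestamps of the replay clock
    """
    times = np.asarray(times_s, dtype=float)
    pos = np.asarray(positions_xyz, dtype=float)
    # Map replay time onto the participant's own (scaled) recorded time base.
    source_times = np.asarray(replay_times_s, dtype=float) * speed_factor
    return np.column_stack(
        [np.interp(source_times, times, pos[:, k]) for k in range(pos.shape[1])]
    )

# Example: one participant replayed 20% slower than recorded.
t = np.linspace(0.0, 10.0, 11)
xyz = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])  # moving along x
print(resample_trajectory(t, xyz, speed_factor=0.8, replay_times_s=t)[:3])
```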
- In one embodiment, event tracking apparatus 108 determines location data and movement data of each participant 102 and/or each object 104 within the event area 103 using triangulation. For example, event tracking apparatus 108 may include three or more receivers positioned around the event area 103 to receive signals from one or more location units (see location unit 1002 of FIG. 10) positioned on each participant 102 and/or object 104. Accordingly, event tracking apparatus 108 may determine a location of each participant 102 and/or object 104 based upon signals received from the location units. The signals used for triangulation may, for example, be sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, or any combinations thereof. In one example, the location units on object 104 and/or participant 102 each include transponders emitting radio wave signals that are triangulated to determine location by event tracking apparatus 108.
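- The triangulation itself is not spelled out above, so the following is a minimal least-squares trilateration sketch. It assumes the receivers can convert signal measurements into range estimates to a location unit, which is an assumption made purely for illustration.

```python
import numpy as np

def trilaterate(receiver_positions, ranges):
    """Estimate a location unit's position from ranges to fixed receivers.

    Linearizes the sphere equations against the first receiver and solves the
    resulting least-squares system; at least four non-coplanar receivers are
    needed for a unique 3D solution.
    """
    p = np.asarray(receiver_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    a = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2)
    position, *_ = np.linalg.lstsq(a, b, rcond=None)
    return position

# Example: four receivers around an event area and ranges to a tracked object.
receivers = [(0.0, 0.0, 3.0), (50.0, 0.0, 8.0), (50.0, 30.0, 3.0), (0.0, 30.0, 8.0)]
true_pos = np.array([12.0, 7.0, 1.0])
measured = [np.linalg.norm(true_pos - np.array(rx)) for rx in receivers]
print(trilaterate(receivers, measured))  # ~[12, 7, 1]
```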
- Alternatively, in an embodiment, each of the location units may periodically and/or aperiodically determine and report its location to the event tracking apparatus 108. In this embodiment, the location units separately include the capability (for example, triangulation determined on board based on fixed transponders around event area 103) to determine and repetitively report a unique position. Depending on the event, when slower update rates are requested, such location units may even employ GPS.
- Event tracking apparatus 108 may also receive sensor data from location devices (e.g., location devices 1002) attached to each of the participants 102 and/or objects 104. For example, the location devices may include one or more sensors (e.g., accelerometers) that detect movement of participants 102 and/or objects 104. Location devices may be positioned on the participants 102 and/or objects 104 to detect particular movement. For example, a head-mounted location device may detect head movement of participant 102, a hand-mounted location device may detect hand movement of participant 102, and so on. These physical sensors may also be configured to detect specific posture moves of participant 102, such as reaching, squatting, laying, bending, and so on. Advantageously, server 110 may thus determine the location, orientation, and posture of participants 102, based on the location devices, such that three-dimensional model 111 accurately portrays the event within event area 103. In particular, event tracking apparatus 108 may send this information to server 110 to generate, in real time, a three-dimensional model 111 of the event area 103, along with the participants 102 and the objects 104.
- Server 110 may also use images from the cameras 106(1)-(4) to enhance three-dimensional model 111, as described in detail below. For example, cameras 106 may be positioned around, within, and above event area 103 to capture primary images of the event occurring within event area 103.
- In embodiments, and such as shown in FIGS. 3-8, system 100 generates the viewing experience 200 on a viewing device 112 for each spectator 101 of an event in real time (i.e., live). The viewing experience 200 may be based upon one or more of augmented, mixed, virtual, and extended reality, and may include visual and/or audible special effects 202 (FIG. 2) that are generated by system 100 to enhance the viewing experience 200 for spectators 101. For example, for a spectator of a soccer match, when the participant 102 (e.g., a soccer player) scores a goal, system 100 may generate viewing experience 200 to include real-time visual and/or audible special effects 202 of a dazzling fireworks display with corresponding sounds.
- In certain embodiments, such as shown in FIGS. 3-5, system 100 generates viewing experience 200 for spectator 101(1) based upon a viewpoint 320 that is freely selectable by spectator 101(1) and may resemble a viewpoint captured by a virtual camera (see FIGS. 6 and 9) that is virtually positioned anywhere by the spectator 101(1). Continuing with the soccer example, spectator 101(1) may, for example, position the virtual camera 606 near the soccer player or view towards the goal as the soccer player kicks the ball, thereby having a previously unobtainable viewpoint 320 of live action.
- FIG. 3 shows system 100 of FIG. 1 in further example detail. In this embodiment, system 100 creates a viewing experience from a 3D model based upon a spectator viewpoint 320 that may be controlled by the spectator 101. Server 110 includes at least one processor 302 and memory 304 storing machine-readable instructions 306 that, when executed by the processor 302, control the at least one processor 302 to generate three-dimensional model 111 of the event area 103, participants 102, and objects 104 based upon the location data and movement data captured by the event tracking apparatus 108. Instructions 306 may also control the at least one processor 302 to determine spectator viewpoint 320 based on the spectator location data and spectator viewing direction data. For example, the spectator viewpoint 320 may define a location of spectator 101, relative to the three-dimensional model 111, and a direction of view of the spectator 101, such that the server 110 then generates viewing experience 200 from the three-dimensional model 111 based upon the spectator viewpoint. Location units 1002 may be placed with spectator 101 to determine the location of spectator 101; or cameras 106 may be used to determine the location of spectator 101; or viewing device 112 may have its own location capability to determine spectator location, for example. In certain embodiments, the viewing device 112 includes user controls 310 that allow spectator 101 to control the spectator viewpoint 320, and thereby the spectator viewing experience 200 displayed on a display 312 of viewing device 112. For example, the spectator viewpoint 320 may include spectator coordinate information based upon a grid used by the three-dimensional model 111, wherein the user controls 310 allow spectator 101 to reposition the spectator viewpoint 320 within three-dimensional model 111 such that spectator 101 watches the event from other desired perspectives.
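- As a hedged sketch of how a spectator viewpoint such as viewpoint 320 might be represented and repositioned by user controls, the data structure below stores an origin and a viewing direction expressed in the model's coordinate grid. The class and method names are illustrative assumptions, not elements of the disclosed system.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SpectatorViewpoint:
    """Origin and viewing direction of a spectator, in model coordinates."""
    origin: np.ndarray = field(default_factory=lambda: np.zeros(3))
    direction: np.ndarray = field(default_factory=lambda: np.array([1.0, 0.0, 0.0]))

    def look_at(self, target) -> None:
        """Point the viewpoint at a location in the three-dimensional model."""
        d = np.asarray(target, dtype=float) - self.origin
        self.direction = d / np.linalg.norm(d)

    def move(self, delta) -> None:
        """Reposition the viewpoint, e.g., in response to user controls."""
        self.origin = self.origin + np.asarray(delta, dtype=float)

# Example: a spectator seated at (0, -20, 2) looks toward midfield at (25, 15, 0).
vp = SpectatorViewpoint(origin=np.array([0.0, -20.0, 2.0]))
vp.look_at((25.0, 15.0, 0.0))
vp.move((0.0, 2.0, 0.0))  # a user control nudges the viewpoint forward
print(vp.origin, vp.direction)
```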
- In certain embodiments, instructions 306, when executed by processor 302, control processor 302 to implement artificial intelligence to estimate images needed to complete viewing experience 200, by learning how to provide data that might be missing from feeds F1-F7 (see, e.g., FIGS. 6 and 7). Accordingly, system 100 may learn to store portions of images and information that may be used to correct and/or complete three-dimensional model 111 under certain conditions when such information may be missing from feeds F1-F7. For example, based upon positioning of cameras 106 and/or obstruction of one participant 102 by another participant or by a building structure, if image feeds F1-F5 (FIGS. 3-5) do not include certain portions of participant 102 or object 104, system 100 may use images and/or data from database 113 to complete three-dimensional model 111 so that the spectator can replay the event without obstruction.
- FIG. 4 shows the system of FIG. 3 further including a spectator tracking apparatus 402 that may be configured to determine spectator location data and spectator viewing direction data for each spectator 101, illustratively shown as a spectator location and viewing direction data feed F6 to server 110. As noted above, the location of the spectator 101 may be derived in various ways for inclusion in feed F6.
- FIG. 5 shows viewing device 112 of FIGS. 3 and 4 in further example detail. In certain embodiments, machine-readable instructions 306, when processed by the server 110, are capable of determining an occurrence of interest 130 (e.g., an action, situation, etc., within the event area, as shown in FIGS. 1 and 2) based on the three-dimensional model 111 and the spectator viewpoint 320. The occurrence of interest 130 may have, at least, an identity and coordinates relative to the three-dimensional model 111.
- In certain embodiments, viewing device 112 includes a processor 502 and memory 504 storing a software module 506 that, when executed by processor 502, controls processor 502 to augment viewing experience 200 for spectator 101 when instructed by server 110. Viewing device 112 may, for example, be a screen held by, or positioned in front of, spectator 101, or a device worn by spectator 101, such as a helmet, goggles, glasses, or contact lenses. Viewing device 112 thereby positions viewing experience 200 in front of the spectator's eye(s), projects viewing experience 200 into the spectator's field of vision, or projects viewing experience 200 into the spectator's eye(s). In certain embodiments, viewing device 112 may be a tablet, a computer, or a mobile device (e.g., a smartphone). For example, viewing device 112 may be an Oculus Go™ device, an iPad™, an augmented reality display, and so on. Viewing device 112 may include one or more sensors that sense input, such as movement, noise, location, selection, and so on, by spectator 101. This input may be used to direct spectator viewpoint 320 and/or the virtual camera, for example.
- FIG. 7 shows the system of FIG. 6 further illustrating generation of special effects to enhance the viewing experience 200, according to an embodiment. In embodiments, server 110 generates visual and/or audible special effects 202 that are added to 3D model 111. Visual and/or audible special effects 202 may be added to three-dimensional model 111 as if they are part of live action, wherein the viewing experience 200 generated from three-dimensional model 111 includes the special effects 202. Visual and/or audible special effects 202 may be included within three-dimensional model 111 as codes and/or instructions that may be sent to viewing device 112 when the corresponding viewpoint 320 includes the visual and/or audible special effects 202.
- In embodiments, software module 506 is configured to control processor 502 to augment viewing experience 200 by providing visual and/or audible special effects 202. Visual and/or audible special effects 202 may include, for example, one or more of fireworks, an explosion, and a comet tail, and may be associated with images of participants 102, objects 104, or other computer-generated images. Visual and/or audible special effects 202 may also include outlining one or more participants 102 and/or objects 104 in viewing experience 200. Visual and/or audible special effects 202 may also include visual manipulation of images of participants 102 and/or objects 104. Visual and/or audible special effects 202 may further include annotating information that provides spectator 101 with additional information on the event within event area 103. In one example, the annotation information may be selected based at least in part upon one or more of the event, the occurrence of interest, participant 102, and/or object 104. For example, viewing device 112 may display annotation data that includes scoring statistics of a basketball player during a live match.
- Any audible portion of visual and/or audible special effects 202 may correspond to the visual portion of the visual and/or audible special effects 202. For example, the audible portion may include the sound of an explosion corresponding to the visual portion that shows an explosion. However, the visual and audible portions of the visual and/or audible special effects 202 may be independent of each other. As shown in FIG. 7, microphones 602 placed at the event area 103 may provide direct audio data as sound feeds 604, collected at feed F7.
- Software module 506 may receive visualization data from server 110 such that the software module 506 augments the viewing experience 200 by applying the visualizations to the body of participant 102 and/or object 104. The server 110 may apply the visualizations to three-dimensional model 111 such that viewing experience 200 is generated by server 110 with the applied visualizations. These visualizations may, for example, indicate a status of participant 102 and/or object 104 within the game play, such as one or more of health, power, weapon status, and so on.
- FIG. 10 shows one example participant 102 configured with one of the microphones 602 and one of the cameras 106 of the system of FIGS. 1-9, and further configured with a plurality of participant location units 1002. More than one camera 106 and more than one microphone 602 may be affixed to the participant 102 without departing from the scope hereof. Similarly, one (or a plurality of) cameras 106 may be affixed to one or more objects 104. When attached to participants 102 and/or objects 104 that move, event tracking apparatus 108 may also track the location and movement of the attached camera 106 and/or microphone 602. In certain embodiments, participant 102 wears a suit that is configured with a combination of location units 1002, cameras 106, microphones 602, and other sensors (e.g., biometric sensors) that provide data to server 110. One or more of these sensors may be inside the body or attached to the body of participant 102.
- In certain embodiments, the event tracking apparatus 108 and/or the spectator tracking apparatus 402 may be integrated, at least in part, with server 110. Event tracking apparatus 108 and/or spectator tracking apparatus 402 may instead be a computer-based server (like server 110) that includes a processor and memory storing instructions that control the server to use sensor data to track the location of the participants 102, objects 104, and/or spectators 101. These servers, and server 110, may be a video processing server, for example.
- As noted earlier, event area 103 may be a sporting arena, a stage, an outdoor field, a street, or a room, for example. The event occurring within event area 103 may thus be a sporting event, a concert, a play, an opera, a march, or another event (such as a conference in a conference room) that may have spectators 101. Regardless, system 100 provides multiple viewing experiences 200 to the spectators 101.
- Instructions 306, when executed by processor 302, may control processor 302 to generate three-dimensional model 111 based upon event area 103, wherein three-dimensional model 111 may represent the physical construction of event area 103. However, three-dimensional model 111 may alternatively have a representation that differs from event area 103. For example, three-dimensional model 111 may be generated to represent certain structure that is not present within the actual event area 103, and is therefore unrelated to physical structure at event area 103. Accordingly, three-dimensional model 111 may in part be generated from images and data stored within a database (e.g., database 113) that define structure unconnected with event area 103. Three-dimensional model 111 may, for example, represent multiple adjoining event areas whereas the actual event area 103 does not physically adjoin these other event areas represented within three-dimensional model 111. In certain embodiments, where viewing experience 200 is generated as virtual reality, the representation of event area 103 by three-dimensional model 111 may be selected by one or more of spectator 101, participant 102, and/or crowd-sourced selection (e.g., multiple spectators 101). For example, spectator 101 may control three-dimensional model 111 to represent event area 103 as a mountain top, even though the actual event area 103 is a room. In another example, when event area 103 is a stage and spectator 101 is watching a concert, spectator 101 may change the representation of the stage to be on a mountain top, wherein the participants 102 and objects 104 (e.g., performers and instruments) are shown within viewing experience 200 as being on the mountain top.
- Server 110 may provide multiple functions. For example, in FIGS. 3-9, event tracking apparatus 108 may provide location and movement data (shown as data stream F5) to server 110, while the plurality of cameras 106 may also provide images (shown as image streams F1-F4) to server 110. In addition, consider FIG. 10, which shows a plurality of location units 1002 positioned on one participant 102; these location units 1002 may also be positioned on, or configured with, objects 104. Tracking of objects 104 and/or participants 102 may further include multiple-input, multiple-output (MIMO) protocols understood by server 110. Event tracking apparatus 108 may, for example, use image analysis to identify a location of, and a position of, participant 102 and/or object 104. Event tracking apparatus 108 may use images captured by at least two cameras 106 at event area 103 to triangulate the location of participants 102 and/or objects 104. The location units 1002 may include reflective and/or emissive visual markers that may be detected within the images captured by cameras 106.
- Event tracking apparatus 108 may alternatively determine location data and movement data of participants 102 and/or objects 104 using light field data captured by one or more of (i) event tracking apparatus 108 (e.g., using cameras 106 and/or other cameras) and (ii) location units 1002 at the object 104 and/or the participant 102. In these embodiments, event tracking apparatus 108 may include or be connected to one or more light-field cameras positioned to capture light-field data of the event area 103. The light-field data may include one or more of light intensity, light direction, and light color.
- In certain embodiments, spectator tracking apparatus 402 may include components, features, and functionality similar to event tracking apparatus 108 to track the location and/or viewing direction of each spectator 101. Spectator tracking apparatus 402 may determine spectator location data and spectator viewing direction data using triangulation of signals that are one or the combination of (i) emitted from and (ii) received by a spectator location unit (similar to the location unit 1002). The signals may, for example, be sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and combinations thereof. Alternatively, the spectator tracking apparatus 402 may determine spectator location data and spectator viewing direction data using light field data captured by one or the combination of (i) the spectator tracking apparatus and (ii) the spectator location unit 1002. Spectator location and viewing direction may also be determined by image analysis of images captured by a camera on the viewing device 112 used by the spectator 101. Spectator location and viewing direction may be determined through image analysis of images captured by multiple viewing devices 112, each from a different spectator 101. For example, FIGS. 4-8 illustrate spectator tracking apparatus 402 providing information of spectator viewpoint 320 to server 110 as feed F6. In certain embodiments, event tracking apparatus 108 includes functionality to track spectators 101 and generate spectator location and viewing direction via data feed F6; in this case, spectator tracking apparatus 402 is not used, and yet system 100 retains all functionality.
- Sharing of viewing experiences may be accomplished in several ways. For example, as shown in FIGS. 1 and 9, spectator 101(1) may share viewing experience 200 with another spectator 101(6). In another example, a celebrity (e.g., a famous player, movie star, etc.) may create and share viewing experience 200 with other spectators 101 (followers). In yet another example, as a participant 102 at a conference, spectator 101(1) may share viewing experience 200 with many other spectators 101 not at the conference.
- In certain embodiments, and with reference to FIGS. 11A-11C, machine-readable instructions 306, when executed by processor 302, control processor 302 to: (a) receive the primary images (e.g., image feeds F1-F4 in the example of FIGS. 3-8) from the plurality of cameras 106; (b) generate, within three-dimensional model 111, a virtual grid formed of a plurality of cells around each participant 102 and object 104; (c) map at least a portion of one of the primary images (e.g., from feeds F1-F4), identified by at least one cell 1104 of the virtual grid 1102 corresponding to the participant 102 or object 104, to the three-dimensional model 111; and (d) generate a virtual image having a portion of the three-dimensional model 111 corresponding to participant 102 and/or object 104 based at least in part upon spectator viewpoint 320. Virtual grid 1102 may be used to enhance three-dimensional model 111 and/or viewing experience 200 that is based on the three-dimensional model 111 by more accurately, and with higher resolution, rendering images of participant 102 and/or object 104. In certain embodiments, virtual grid 1102 has a longitudinal direction and appears multi-sided when viewed in the longitudinal direction. For example, virtual grid 1102 may be hexagonal in shape when viewed in the longitudinal direction, as illustrated in FIGS. 11A-11C. FIGS. 11A-11C further illustrate mapping of portions of primary images (e.g., from image feeds F1-F4) to cells 1104 of virtual grid 1102. In particular, FIG. 11A illustrates virtual grid 1102 around participant 102 with no mapping, and FIGS. 11B and 11C show different amounts of virtual grid cells 1104 mapped with portions of primary images from image feeds F1-F4. As shown in FIG. 11C, virtual grid cells that do not correspond to a portion of participant 102 may be left unmapped, for example to save processing time. In certain embodiments, cell 1104 may correspond to a real-world dimension of between one and ten centimeters.
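- A minimal sketch of this cell-mapping idea follows: each cell center of a virtual grid around a participant is projected into a camera's image with a simple pinhole model, and a small patch around the projection is associated with that cell. The camera model, patch size, and function names are assumptions for illustration only, not the disclosed mapping procedure.

```python
import numpy as np

def project_point(point_xyz, cam_pos, cam_rot, focal_px, principal_px):
    """Pinhole projection of a model-space point into camera pixel coordinates."""
    p_cam = cam_rot @ (np.asarray(point_xyz, dtype=float) - cam_pos)
    if p_cam[2] <= 0:            # behind the camera: no mapping possible
        return None
    u = focal_px * p_cam[0] / p_cam[2] + principal_px[0]
    v = focal_px * p_cam[1] / p_cam[2] + principal_px[1]
    return u, v

def map_cells_to_image(cell_centers, image, cam_pos, cam_rot, focal_px, principal_px, patch=8):
    """Return {cell_index: image patch} for cells whose centers project into the image."""
    h, w = image.shape[:2]
    mapping = {}
    for i, center in enumerate(cell_centers):
        uv = project_point(center, cam_pos, cam_rot, focal_px, principal_px)
        if uv is None:
            continue
        u, v = int(round(uv[0])), int(round(uv[1]))
        if patch <= u < w - patch and patch <= v < h - patch:
            mapping[i] = image[v - patch : v + patch, u - patch : u + patch].copy()
    return mapping

# Example: a toy image and three cell centers in front of the camera (camera looks along +z).
image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
cells = [(0.2 * k, 0.0, 10.0) for k in range(3)]
patches = map_cells_to_image(cells, image, cam_pos=np.zeros(3), cam_rot=np.eye(3),
                             focal_px=500.0, principal_px=(320.0, 240.0))
print(sorted(patches.keys()))
```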
- Continuing with FIGS. 12A-12C, instructions 306, when executed by processor 302, may control processor 302 to generate a virtual image based at least in part upon the primary images (e.g., from feeds F1-F4) identified by at least one cell 1104 (FIGS. 11A-11C) or portion of a secondary grid 1202 corresponding to participant 102 or object 104. As shown in FIGS. 12A-12C, secondary grid 1202 may be positioned between spectator 101 and participant 102 and/or object 104.
- When system 100 generates viewing experience 200 in real time based upon image feeds F1-F4, location and movement data feed F5, spectator location and viewing direction data feed F6, and sound feeds F7, latency of system 100 is kept low to maintain the integrity of viewing experience 200, particularly where viewing experience 200 shows augmented reality or extended reality. Accordingly, the amount of processing required to generate viewing experience 200 may be reduced by determining spectator viewpoint 320 based upon a location of spectator 101 relative to event area 103 and by providing only the information needed to viewing device 112 to generate viewing experience 200. For example, although three-dimensional model 111 models event area 103, spectator viewpoint 320 may not include all of event area 103, and thus only part of three-dimensional model 111 may actually be used to generate viewing experience 200. As described below, the use of secondary grid 1202 may further reduce the processing necessary to generate viewing experience 200 by identifying the cells 1104 of virtual grid 1102 that are needed to generate viewing experience 200; cells that are not needed do not require intensive image processing, thereby reducing latency in system 100.
- Latency may be further reduced by implementing bokeh within one or both of instructions 306 of server 110 and software module 506 of viewing device 112. Bokeh causes blurring of less important portions of an image (e.g., background and/or foreground), which reduces the required resolution for those portions of viewing experience 200. Accordingly, fewer pixels need be rendered to generate viewing experience 200 based upon three-dimensional model 111, thereby reducing the latency of system 100. Bokeh may also highlight the portion of interest (e.g., occurrence of interest 130) to the user within viewing experience 200, since this portion appears in more detail and attracts the attention of the eye of spectator 101, whereas the blurred foreground/background has reduced detail that does not attract the eye's attention.
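- Below is a hedged sketch of the bokeh idea: the rendered frame is kept sharp only inside a region of interest, while everything else is reproduced from a coarsely sampled (lower-resolution) copy. The rectangular region of interest and the crude downsample-and-stretch blur are illustrative assumptions, not the rendering pipeline of the system.

```python
import numpy as np

def apply_bokeh(frame, roi, downscale=8):
    """Blur everything outside `roi` by rendering it at reduced resolution.

    frame: (H, W, 3) uint8 image
    roi:   (top, left, bottom, right) region kept sharp (e.g., occurrence of interest)
    """
    h, w = frame.shape[:2]
    # Cheap blur: sample the frame on a coarse grid, then stretch it back up.
    coarse = frame[::downscale, ::downscale]
    blurred = coarse.repeat(downscale, axis=0).repeat(downscale, axis=1)[:h, :w]
    out = blurred.copy()
    top, left, bottom, right = roi
    out[top:bottom, left:right] = frame[top:bottom, left:right]  # keep ROI sharp
    return out

# Example: keep a 200x200 window around an occurrence of interest sharp.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
print(apply_bokeh(frame, roi=(140, 220, 340, 420)).shape)
```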
- Secondary grid 1202 may be a physical grid such as a net, a windshield, or a border (collectively referred to as border 1206) positioned around event area 103, as shown in FIG. 12A. For example, event area 103 may have an upright border that contains a grid, which may be visible to the human eye. In another example, the grid may be undetectable to the human eye but may be detected by features (e.g., sensors) of viewing device 112, such as a camera of viewing device 112, wherein the border 1206 may allow viewing device 112 to determine its orientation and/or location relative to event area 103. In certain embodiments, the border grid may comprise features and/or components capable of emitting, reflecting, or detecting visible light, infrared light, ultraviolet light, microwaves, and/or radio waves. In another example, secondary grid 1202 may be worn by the spectator 101 over the spectator's eye(s) and/or integrated with viewing device 112 such that the secondary grid 1202 appears within the spectator's viewpoint (e.g., in front of the spectator's eyes) and thus over participant 102 and/or object 104.
- In certain embodiments, secondary grid 1202 may be virtual and determined by server 110 or viewing device 112. For example, secondary grid 1202 may be generated based upon virtual camera 606. As shown in FIGS. 12B and 12C, the secondary grid 1202 may be positioned perpendicular to the viewing direction of spectator 101. In certain embodiments, secondary grid 1202 may move and/or rotate as the location and/or viewing direction of spectator 101 changes. As shown in FIGS. 12B and 12C, cells of secondary grid 1202 may provide references, in combination with the virtual grid 1102, to enhance three-dimensional model 111 and/or viewing experience 200 based on three-dimensional model 111, and to render participant 102 and/or object 104 in more detail.
- In certain embodiments, instructions 306, when executed by processor 302, may control processor 302 to interpolate between portions of the primary images (e.g., feeds F1-F4) to generate viewing experience 200.
- In certain embodiments, instructions 306, when executed by processor 302, may control processor 302 to augment viewing experience 200 provided to the spectator 101 by providing the virtual image received from server 110. For example, where viewing experience 200 is augmented reality (or extended reality), server 110 may send one or more virtual images, generated from three-dimensional model 111, to viewing device 112 such that viewing device 112 may selectively enhance viewing experience 200.
- In certain embodiments, instructions 306, when executed by processor 302, may control processor 302 to: (a) determine, based on three-dimensional model 111 and spectator viewpoint 320, when an obstruction is located between (i) the spectator viewpoint 320 and (ii) object 104 or participant 102; and (b) send directives to software module 506 to display at least a portion of the virtual image corresponding to the desired view without the obstruction. As shown in the example of FIG. 13A, an obstruction 1302 is located in event area 103 and positioned between spectator viewpoint 320 and participant 102. As illustrated in the example of FIG. 14A, obstruction 1302 may be a partial wall that obstructs a conventional view of participant 102 by spectator 101. FIG. 14B illustrates viewing experience 200, generated by server 110, of participant 102 displayed through obstruction 1302 such that spectator 101 may still fully view participant 102.
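- A minimal sketch of detecting such an obstruction follows: the segment from the spectator viewpoint to the participant is tested against an axis-aligned bounding box of the obstructing structure using the slab method. Representing obstructions as boxes, and the function names, are assumptions made for illustration.

```python
import numpy as np

def segment_hits_box(start, end, box_min, box_max, eps=1e-9):
    """True if the segment start->end passes through the axis-aligned box."""
    start = np.asarray(start, dtype=float)
    direction = np.asarray(end, dtype=float) - start
    t_min, t_max = 0.0, 1.0
    for axis in range(3):
        if abs(direction[axis]) < eps:
            # Segment parallel to this slab: must already lie between the faces.
            if start[axis] < box_min[axis] or start[axis] > box_max[axis]:
                return False
        else:
            t1 = (box_min[axis] - start[axis]) / direction[axis]
            t2 = (box_max[axis] - start[axis]) / direction[axis]
            t_lo, t_hi = min(t1, t2), max(t1, t2)
            t_min, t_max = max(t_min, t_lo), min(t_max, t_hi)
            if t_min > t_max:
                return False
    return True

# Example: a partial wall between a viewpoint and a participant.
viewpoint = (0.0, 0.0, 1.7)
participant = (10.0, 0.0, 1.0)
wall_min, wall_max = (4.0, -3.0, 0.0), (4.3, 3.0, 2.5)
print(segment_hits_box(viewpoint, participant, wall_min, wall_max))  # True -> render through it
```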
- FIG. 6 shows system 100 of FIG. 5 further including at least one microphone 602 positioned around and/or within event area 103. In certain embodiments, instructions 306, when executed by processor 302, may control processor 302 to: (a) receive sound feeds F7 from at least one of the microphones 602 positioned at and/or within event area 103; (b) map sound feeds F7 to three-dimensional model 111; and (c) generate viewing experience 200 to include sound based on one or more of (i) the spectator viewpoint 320 and (ii) occurrence of interest 130. In certain embodiments, any number of microphones 602 may be positioned within, around, and/or above event area 103. In other embodiments, one or more microphones 602 may be positioned on participant 102, such as shown in FIG. 10, and/or on object 104. In FIGS. 6-8, sound feeds F7 from microphones 602 are input to server 110.
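- A hedged sketch of mapping microphone feeds to a viewpoint follows: each feed is attenuated by the inverse of its distance from the viewpoint (or virtual camera) position and summed. The microphone layout, the 1/(1+d) attenuation law, and the mono mix are illustrative assumptions rather than the disclosed sound mapping.

```python
import numpy as np

def mix_for_viewpoint(mic_positions, mic_feeds, listener_pos):
    """Mix mono microphone feeds into one output track for a listening position.

    mic_positions: (M, 3) microphone locations in model coordinates
    mic_feeds:     (M, N) audio samples, one row per microphone
    listener_pos:  (3,) spectator viewpoint or virtual camera origin
    """
    mic_positions = np.asarray(mic_positions, dtype=float)
    feeds = np.asarray(mic_feeds, dtype=float)
    distances = np.linalg.norm(mic_positions - np.asarray(listener_pos, dtype=float), axis=1)
    gains = 1.0 / (1.0 + distances)          # nearer microphones dominate the mix
    mixed = gains @ feeds
    peak = np.max(np.abs(mixed))
    return mixed / peak if peak > 0 else mixed

# Example: two microphones at opposite ends of the event area, listener near the first.
mics = [(0.0, 0.0, 1.0), (50.0, 0.0, 1.0)]
feeds = np.vstack([np.sin(np.linspace(0, 20, 1000)), np.sin(np.linspace(0, 35, 1000))])
print(mix_for_viewpoint(mics, feeds, listener_pos=(5.0, 0.0, 1.7))[:5])
```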
- In certain embodiments, software module 506, when executed by processor 502, controls processor 502 to augment viewing experience 200 by providing at least part of sound feed F7 as provided by server 110. For example, viewing device 112 may generate viewing experience 200 to include sounds associated with the event, such as when participant 102 scores a goal in a sporting event. In another example, spectator 101 may hear words as they are spoken by participant 102.
- FIG. 8 shows the system of FIG. 7 further illustrating generation of haptic feedback with viewing experience 200. Software module 506, when executed by processor 502, controls processor 502 to further augment viewing experience 200 by providing haptic feedback based at least in part upon occurrence of interest 130. Viewing device 112 may include a haptic feedback actuator 802 that includes vibration-generating components. In one example of operation, occurrence of interest 130 may occur when two participants 102 hit each other, wherein haptic feedback actuator 802 is controlled such that spectator 101 feels a vibration. In another example, where viewing experience 200 is shared with another spectator, spectator 101 may receive haptic feedback, via haptic feedback actuator 802, from the other spectator. For example, where the other spectator likes the shared viewing experience as controlled by spectator 101, the other spectator may applaud or cheer, causing the feedback to be received and output by the viewing device 112 of spectator 101.
- FIG. 9 illustrates event area 103 of FIGS. 1 and 3-8 with spectators 101 located around event area 103 with free-viewpoint experiences. In certain embodiments, system 100 may create a free-viewpoint experience for spectator 101 by generating viewing experience 200 as an entirely virtual reality (as opposed to augmented reality based upon adding virtual images to an image of reality). In these embodiments, server 110 may generate viewing experience 200 based upon at least one virtual image generated from three-dimensional model 111 and send viewing experience 200 to viewing device 112 to provide the free-viewpoint experience to spectator 101. For example, instructions 306, when executed by processor 302, may control processor 302 to generate viewing experience 200 as a virtual image based at least in part upon at least a portion of three-dimensional model 111. Accordingly, spectator 101 receives viewing experience 200 as a real-time virtual experience generated from three-dimensional model 111 of the event occurring within the event area 103. However, since the viewing experience 200 is generated from three-dimensional model 111, spectator 101 may control a virtual camera 606 to create a free viewpoint that is similar to spectator viewpoint 320 but need not be based upon a location of spectator 101 relative to event area 103.
- Within server 110, instructions 306, when executed by processor 302, control processor 302 to: (a) determine, based on viewing directives received from viewing device 112 through interaction of spectator 101 with user controls 310, a virtual camera 606 defining an origin within three-dimensional model 111 and a corresponding viewing direction; and (b) generate viewing experience 200 as a virtual image based at least in part upon three-dimensional model 111 and the corresponding virtual camera 606.
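- For illustration only, the sketch below builds a virtual camera from an origin inside the three-dimensional model and a viewing direction, then projects model points into a virtual image plane with a pinhole model. The matrix conventions, field-of-view value, and function names are assumptions, not the disclosed rendering pipeline.

```python
import numpy as np

def camera_basis(direction, up=(0.0, 0.0, 1.0)):
    """Build an orthonormal camera basis (right, down, forward) from a view direction."""
    forward = np.asarray(direction, dtype=float)
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    down = np.cross(forward, right)
    return np.vstack([right, down, forward])   # rows form a rotation matrix

def render_points(points, origin, direction, image_size=(640, 480), focal_px=400.0):
    """Project model-space points through a virtual camera into pixel coordinates."""
    rot = camera_basis(direction)
    pixels = []
    for p in np.asarray(points, dtype=float):
        cam = rot @ (p - np.asarray(origin, dtype=float))
        if cam[2] <= 0:                        # point is behind the virtual camera
            continue
        u = focal_px * cam[0] / cam[2] + image_size[0] / 2
        v = focal_px * cam[1] / cam[2] + image_size[1] / 2
        pixels.append((u, v))
    return pixels

# Example: a virtual camera placed near a performer, looking along +x.
model_points = [(12.0, 0.0, 1.5), (12.0, 0.5, 1.0), (5.0, 0.0, 1.0)]
print(render_points(model_points, origin=(10.0, 0.0, 1.6), direction=(1.0, 0.0, 0.0)))
```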
- In the example of FIG. 9, spectator 101(3) controls virtual camera 606(1) via a virtual link 904(1), spectator 101(4) controls virtual camera 606(2) via a virtual link 904(2), and spectator 101(5) controls virtual camera 606(3) via a virtual link 904(3). Virtual camera 606 and virtual link 904 are terms used to define the free viewpoint as controlled by spectator 101 to create the desired viewing experience 200.
- In one example of operation, spectator 101(3) may have a seat that is distant from event area 103 but may interact with server 110 (using user controls 310 of viewing device 112) to position virtual camera 606(1) in a desired location to generate viewing experience 200 with a more favorable view of, or within, event area 103. For example, spectator 101(3) may position virtual camera 606(1) in front of the drum player on the stage. Spectator 101(3) thus receives and views viewing experience 200 based upon the defined free viewpoint that is different from his or her physical location. In this example, the drum player is participant 102, the drums are object 104, the stage is event area 103, and the concert is the event being performed within the event area 103. Server 110 may simultaneously provide a different viewing experience 200 to each spectator 101, where certain ones of the viewing experiences may be based upon spectator viewpoints 320 derived from the location of the spectator as determined by spectator tracking apparatus 402, and certain others of the viewing experiences are based upon virtual cameras 606 controlled by the corresponding spectator. Spectators 101 may switch between these different types of viewing experiences. In one example, spectator 101 watching a performer on a stage uses a mobile device (e.g., an iPad or similar device) to position virtual camera 606 near the singer such that the mobile device displays viewing experience 200 with a close-up view of the singer. Thus, spectator 101, in their normal reality with a normal view of the stage, uses virtual reality to bring the singer closer to their sitting position using system 100 and the mobile device.
- In certain embodiments, system 100 may allow a first spectator 101 to view a viewing experience 200 controlled by a second spectator 101, wherein the first spectator does not control, manipulate, or influence the viewing experience 200, since this viewing experience 200 is controlled by the second spectator.
- In certain embodiments, software module 506 within viewing device 112 may include instructions that, when executed by processor 502, control processor 502 to: (a) determine, based on viewing directives received from the spectator via user controls 310 and in interaction with server 110, a virtual camera 606 that defines an origin within the three-dimensional model and a viewing direction of the free-viewpoint experience; and (b) generate viewing experience 200 as virtual images showing at least a portion of three-dimensional model 111, based at least in part upon the corresponding virtual camera 606. In these embodiments, server 110 may send at least a portion of three-dimensional model 111 to viewing device 112, wherein virtual camera 606 may be implemented within viewing device 112 and software module 506 generates viewing experience 200 using the three-dimensional model and the free viewpoint defined by the virtual camera.
- In certain embodiments, instructions 306 and/or software module 506 may correct generation of viewing experience 200 (e.g., the virtual image) using primary images of video feeds F1-F4 corresponding to at least one cell of secondary grid 1202 corresponding to participant 102 and/or object 104 within the viewing experience 200. In these embodiments, server 110 may send at least part of three-dimensional model 111, and/or virtual images thereof, to viewing device 112, which may enhance and/or correct the virtual image and/or three-dimensional model 111 based on secondary grid 1202. In certain embodiments, software module 506 may also interpolate between any two of the primary images, for example when correcting the virtual image.
- In certain embodiments, instructions 306, when executed by processor 302, control processor 302 to: (a) determine, based on three-dimensional model 111 and virtual camera 606, when an obstruction is located between virtual camera 606 and object 104 and/or participant 102; and (b) send directives to software module 506 to display at least a portion of the virtual image corresponding to the obstruction. In certain embodiments, instructions 306, when executed by processor 302, control processor 302 to remove an obstruction from the virtual image, when the obstruction is located between virtual camera 606 and participant 102 and/or object 104 within the virtual image. In the example of FIG. 13B, an obstruction 1302 is between participant 102 and virtual camera 606. FIG. 14A shows a portion of participant 102 hidden by obstruction 1302, whereas in FIG. 14B, viewing experience 200, as received by spectator 101, shows obstruction 1302 removed, at least in part, from the corresponding virtual image. Alternatively, participant 102 may be overlaid, using the corresponding virtual image, over obstruction 1302, to generate viewing experience 200 that shows the participant to the spectator 101.
- In certain embodiments, instructions 306, when executed by processor 302, control processor 302 to determine occurrence of interest 130 (FIG. 1) based at least in part upon three-dimensional model 111 and virtual camera 606, determining at least an identity and coordinates, relative to three-dimensional model 111, for occurrence of interest 130. For example, when detected, occurrence of interest 130 may be tagged such that it may be selected and viewed by spectators 101. As described above, server 110 may send directives to software module 506 to provide visual and/or audible special effects 202, within viewing experience 200, based at least in part upon three-dimensional model 111, virtual camera 606, and occurrence of interest 130. In certain embodiments, instructions 306, when executed by processor 302, may control processor 302 of server 110 to add, to the virtual image, visual and/or audible special effects 202 based at least in part upon three-dimensional model 111, virtual camera 606, and occurrence of interest 130.
- In certain embodiments, software module 506 may control processor 502 to determine occurrence of interest 130 based at least in part upon three-dimensional model 111 and virtual camera 606, determining at least an identity and coordinates, relative to three-dimensional model 111, for occurrence of interest 130. Software module 506 may also control processor 502 to generate visual and/or audible special effects 202 within viewing experience 200, based at least in part upon three-dimensional model 111, virtual camera 606, and occurrence of interest 130.
- In certain embodiments, server 110 may be configured to generate viewing experience 200 with sounds based at least in part upon three-dimensional model 111 and virtual camera 606. These sounds may be determined by processing and mapping sound feeds F7 received from microphones 602 at event area 103. For example, sound feeds F7 may be processed and mapped based upon the location of virtual camera 606 within three-dimensional model 111, such that viewing experience 200 has sounds according to that location.
- In certain embodiments, software module 506, when executed by processor 502, may control processor 502 to process sounds stored within three-dimensional model 111, and/or sounds of sound feeds F7, to generate sounds within viewing experience 200 based at least in part upon three-dimensional model 111 and virtual camera 606.
- In certain embodiments, software module 506 may also control processor 502 to generate haptic feedback, using haptic feedback actuator 802, to further enhance viewing experience 200, based at least in part upon virtual camera 606 and occurrence of interest 130. In particular, the haptic feedback may be generated based at least in part upon a location of virtual camera 606 within three-dimensional model 111 relative to participant 102, object 104, a border of event area 103, and/or one or more permanent objects within event area 103. For example, when spectator 101 controls virtual camera 606 to a location coinciding with a permanent object within event area 103, software module 506 may control haptic feedback actuator 802 to generate the haptic feedback (e.g., a vibration) to indicate that the location of virtual camera 606 is not valid. In another example, software module 506 may control haptic feedback actuator 802 to generate the haptic feedback (e.g., a vibration) when spectator 101 maneuvers virtual camera 606 to virtually "bump" into participant 102 and/or object 104.
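- The collision-triggered haptic feedback described above can be sketched, under assumptions, as a simple proximity test between the virtual camera origin and tracked entities. The radii, the callback, and the names below are hypothetical and stand in for the actuator interface of a real viewing device.

```python
import numpy as np

def haptic_events(camera_pos, entities, bump_radius=0.5):
    """Return names of tracked entities the virtual camera is 'bumping' into.

    entities: mapping of name -> (x, y, z) position in the three-dimensional model
    """
    cam = np.asarray(camera_pos, dtype=float)
    return [
        name
        for name, pos in entities.items()
        if np.linalg.norm(cam - np.asarray(pos, dtype=float)) < bump_radius
    ]

def drive_actuator(names, vibrate):
    """Fire one vibration pulse per collision; `vibrate` stands in for the actuator API."""
    for name in names:
        vibrate(f"bump:{name}")

# Example: the spectator steers the virtual camera into a participant.
tracked = {"participant_102": (10.0, 5.0, 1.0), "object_104": (20.0, 5.0, 0.2)}
drive_actuator(haptic_events((10.2, 5.1, 1.0), tracked), vibrate=print)
```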
participant 102 is a quarterback and object 104 is an American football. The quarterback throws the football to a point in space.System 100 generatesviewing experience 200 based uponvirtual camera 606 positioned at the point in space and facing the quarterback.Spectator 101 appears to receive the football from the quarterback usingviewing experience 200 viewed by on viewing device 112 (e.g., an iPad or similar device). Accelerometer, gyroscopes, and/or other sensors withinviewing device 112 may sense movement ofviewing device 112 byspectator 101; and this sensed movement may manipulatevirtual camera 606, such thatspectator 101 may attempt to manipulatevirtual camera 606 into the path of the ball. Whenobject 104 hitsvirtual camera 606,system 100 may generate haptic feedback onviewing device 112 so simulate the ball being caught byspectator 101. Viewing experience 200 (of the attempted catch) may be shared with followers ofspectator 101, wherein the followers may also cause haptic feedback onviewing device 112 in an attempt to distractspectator 101 from making the catch. For example,viewing experience 200 may be shared through social media networks, wherein messaging of the social media networks may be used for the feedback from the followers. - In certain embodiments, rendering of three-
- In certain embodiments, rendering of three-dimensional model 111 may be enhanced by mapping light-field data onto at least a portion of three-dimensional model 111, in addition to mapping of portions of the image feeds F1-F4 onto three-dimensional model 111. Capture and mapping of light-field data may also include capturing and mapping of light data corresponding to reflections, as noted previously.
- FIG. 20 shows a high-level operational overview 2000 of system 100 of FIGS. 1 and 3-14. Overview 2000 shows five stages of operation of system 100. In a first stage 2002, system 100 tracks and captures data from event area 103. For example, cameras 106, event tracking apparatus 108, spectator tracking apparatus 402, and microphones 602 generate data feeds F1-F7 of movement and activity of participants 102 and objects 104 within event area 103. In a second stage 2004, system 100 catalogs the data feeds F1-F7 and stores them within collective database 113. In a third stage 2006, system 100 generates three-dimensional model 111 as at least part of the computer-generated image graphic engine. In a fourth stage 2008, system 100 tags the event, portions thereof, and occurrences of interest 130 within database 113 and/or a BlockChain ledger. In a fifth stage 2010, system 100 uses viewing devices 112 to display viewing experiences 200 generated from three-dimensional model 111.
- FIGS. 15-19 are flowcharts that collectively show one example method 1500 for creating a viewing experience. Method 1500 includes steps 1502-1506, as shown in FIG. 15A, and may further include any combination of steps 1508-1560 shown in FIGS. 15B, 16A, 16B, 17A, 17B, 18A, 18B, 19A, and 19B.
- In step 1502, method 1500 determines location and movement data. In one example of step 1502, event tracking apparatus 108 determines location data and movement data of participants 102 and objects 104. In step 1504, method 1500 determines a three-dimensional model. In one example of step 1504, server 110 generates three-dimensional model 111 based upon location and event data feed F5 and image feeds F1-F4. In step 1506, method 1500 determines a spectator viewpoint. In one example of step 1506, instructions 306, when executed by processor 302, control processor 302 to determine spectator viewpoint 320 defining an origin, relative to three-dimensional model 111, and a direction for viewing experience 200.
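- A minimal sketch of data structures tying steps 1502-1506 together is shown below; the class and field names are illustrative assumptions rather than the structures actually used by server 110.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class TrackedEntity:
    """Location and movement data for one participant or object (step 1502)."""
    entity_id: str
    position: Vec3
    velocity: Vec3

@dataclass
class SpectatorViewpoint:
    """Origin and direction, relative to the three-dimensional model (step 1506)."""
    origin: Vec3
    direction: Vec3

@dataclass
class ThreeDimensionalModel:
    """Minimal stand-in for three-dimensional model 111 (step 1504): tracked
    entities keyed by identifier, plus any number of registered viewpoints."""
    entities: Dict[str, TrackedEntity] = field(default_factory=dict)
    viewpoints: Dict[str, SpectatorViewpoint] = field(default_factory=dict)

    def update_entity(self, entity: TrackedEntity) -> None:
        self.entities[entity.entity_id] = entity

model = ThreeDimensionalModel()
model.update_entity(TrackedEntity("participant_102", (12.0, 30.0, 0.0), (1.5, 0.0, 0.0)))
model.viewpoints["spectator_101"] = SpectatorViewpoint((0.0, -10.0, 2.0), (0.0, 1.0, 0.0))
```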
- In step 1508, FIG. 15B, method 1500 captures light-field data relative to the object and the participant. In one example of step 1508, server 110 processes image feeds F1-F4 and other sensed data (e.g., feeds F5, F6, F7) to determine light-field data for one or more of object 104 and participant 102 (and even the spectator 101). In step 1510, method 1500 determines relative location data of the object and the participant. In one example of step 1510, server 110 determines relative location data for each of objects 104 and participants 102, with respect to one or more of (i) a permanent object location at the arena, (ii) other objects 104 within event area 103, (iii) other participants 102 within event area 103, and (iv) a secondary grid at the arena. In step 1512, method 1500 triangulates signals from and/or at location units. In one example of step 1512, event tracking apparatus 108 triangulates signals from location units 1002, where the signals are selected from the group consisting of sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
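- As an illustration of the triangulation referred to in steps 1512 and 1518, the sketch below estimates a location unit's planar position from range measurements taken at known receiver positions by linearizing the range equations and solving a least-squares system; the use of explicit range values and planar coordinates is an assumption made for the example.

```python
import numpy as np

def trilaterate_2d(anchors, distances):
    """Estimate a location unit's (x, y) position from ranges measured at known
    anchor positions by linearizing the range equations and solving least squares."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    # For each remaining anchor i: 2*(p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
    A = 2.0 * (anchors[1:] - p0)
    b = d0**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2)
    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    return estimate

# Receivers of event tracking apparatus 108 at assumed corners of the event area.
anchors = [(0.0, 0.0), (100.0, 0.0), (0.0, 60.0)]
true_position = np.array([40.0, 25.0])
ranges = [np.linalg.norm(true_position - np.asarray(a)) for a in anchors]
print(trilaterate_2d(anchors, ranges))   # approximately [40., 25.]
```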
- FIG. 16A shows steps 1514-1518. In step 1514, method 1500 captures light-field data relative to the spectator viewpoint. In one example of step 1514, server 110 determines light-field data from image feeds F1-F4 with respect to spectator viewpoint 320. In step 1516, method 1500 determines relative location data of the viewpoint with respect to one or more of (i) a permanent object location at the arena, (ii) the at least one object, (iii) the at least one participant, and (iv) a secondary grid at the arena. In one example of step 1516, server 110 determines relative locations of spectator viewpoint 320 with respect to one or more of three-dimensional model 111, object 104, participant 102, and/or secondary grid 1202. In step 1518, method 1500 triangulates signals that are emitted from and/or received by a location unit configured with the spectator. In one example of step 1518, spectator tracking apparatus 402 triangulates signals received from location unit 1002 attached to spectator 101, where the signal is selected from the group comprising sound, radio waves, microwaves, ultraviolet light, visible light, infrared light, and any combinations thereof.
- FIG. 16B shows steps 1520-1524. In step 1520, method 1500 receives primary images from a plurality of cameras positioned at an event area. In one example of step 1520, server 110 receives image feeds F1-F4 from cameras 106. In step 1522, method 1500 maps at least one of the images, from cameras 106, to the three-dimensional model. In one example of step 1522, server 110 maps at least part of images from image feeds F1-F4 to three-dimensional model 111. In step 1524, method 1500 maps light-field data to the three-dimensional model. In one example of step 1524, server 110 maps light-field data to three-dimensional model 111.
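- One simple way to decide which primary image feed to sample when mapping onto the three-dimensional model is sketched below: each camera is scored by how directly it faces the model point in question. This is an illustrative heuristic only; the mapping of steps 1520-1524 also covers light-field data and is not limited to a single best camera.

```python
import numpy as np

def best_camera_for_point(point, camera_positions, camera_directions):
    """Pick the primary image feed whose camera most directly faces a model point,
    scoring each camera by the cosine between its viewing direction and the
    direction from the camera to the point."""
    point = np.asarray(point, dtype=float)
    best_index, best_score = -1, -np.inf
    for i, (pos, direction) in enumerate(zip(camera_positions, camera_directions)):
        to_point = point - np.asarray(pos, dtype=float)
        to_point /= np.linalg.norm(to_point)
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        score = float(np.dot(direction, to_point))   # 1.0 when the camera looks straight at the point
        if score > best_score:
            best_index, best_score = i, score
    return best_index

# Three cameras around the event area, aimed roughly toward its center.
camera_positions = [(0.0, 0.0, 5.0), (100.0, 0.0, 5.0), (50.0, 60.0, 5.0)]
camera_directions = [(1.0, 0.3, -0.05), (-1.0, 0.3, -0.05), (0.0, -1.0, -0.05)]
print(best_camera_for_point((45.0, 20.0, 1.0), camera_positions, camera_directions))
```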
- FIG. 17A shows step 1526, where method 1500 determines, based on viewing directives received from the spectator, a virtual camera defining a virtual origin, relative to the three-dimensional model, and a virtual direction of the viewing experience. In one example of step 1526, instructions 306, when executed by processor 302, control processor 302 to receive input from viewing device 112, to manipulate a virtual camera 606 within three-dimensional model 111 to have a particular location and viewing direction such that server 110 and/or viewing device 112 generates a desired viewing experience 200.
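- A minimal sketch of a virtual camera updated from viewing directives is given below; the directive format (a dictionary carrying “move” and “pan” entries) is an assumption made for the example.

```python
import numpy as np

class VirtualCamera:
    """Virtual origin and direction within the three-dimensional model, updated
    from viewing directives received from the spectator's viewing device."""
    def __init__(self, origin, yaw_degrees=0.0):
        self.origin = np.asarray(origin, dtype=float)
        self.yaw = float(yaw_degrees)            # heading in the model's X/Y plane

    @property
    def direction(self):
        yaw = np.radians(self.yaw)
        return np.array([np.cos(yaw), np.sin(yaw), 0.0])

    def apply_directive(self, directive):
        """Directive is a dict such as {"move": (dx, dy, dz)} or {"pan": degrees}."""
        if "move" in directive:
            self.origin = self.origin + np.asarray(directive["move"], dtype=float)
        if "pan" in directive:
            self.yaw = (self.yaw + float(directive["pan"])) % 360.0

camera = VirtualCamera(origin=(0.0, -20.0, 2.0), yaw_degrees=90.0)
camera.apply_directive({"move": (5.0, 0.0, 0.0)})
camera.apply_directive({"pan": -15.0})
```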
- FIG. 17B shows steps 1528-1540. In step 1528, method 1500 determines, within the three-dimensional model, around each of the participant and the object, a virtual grid having a plurality of cells. In one example of step 1528, instructions 306, when executed by processor 302, control processor 302 to determine virtual grid 1102 around participant 102 within three-dimensional model 111. In step 1530, method 1500 maps at least a portion of one of the primary images identified by at least one cell of the virtual grid corresponding to the participant or object. In one example of step 1530, instructions 306, when executed by processor 302, control processor 302 to map corresponding portions of images from image feeds F1-F4 to virtual grid cells 1104 within three-dimensional model 111. In step 1532, method 1500 maps at least a portion of one of the primary images identified by a section of the secondary grid corresponding to the participant or object. In one example of step 1532, instructions 306, when executed by processor 302, control processor 302 to map, based upon secondary grid 1202 corresponding to participant 102 and/or object 104, at least a portion of primary images from primary image feeds F1-F4 to participant 102 and/or object 104. In step 1534, method 1500 interpolates between any two of the primary images. In one example of step 1534, instructions 306, when executed by processor 302, control processor 302 to interpolate between at least two images of image feeds F1-F4 when mapping. In step 1536, method 1500 generates a virtual image, having the object and/or the participant, based upon (i) the three-dimensional model and (ii) the viewpoint or the virtual camera. In one example of step 1536, instructions 306, when executed by processor 302, control processor 302 to generate a virtual image from three-dimensional model 111 based upon spectator viewpoint 320 and/or virtual camera 606. In step 1538, method 1500 sends the location data of the object and the participant to a viewing device configured to provide the viewing experience. In one example of step 1538, instructions 306, when executed by processor 302, control processor 302 to send the location of object 104 and/or participant 102 to viewing device 112. In step 1540, method 1500 sends one or both of (i) at least a portion of the three-dimensional model and (ii) at least a portion of the virtual image, to a viewing device configured to provide the viewing experience. In one example of step 1540, instructions 306, when executed by processor 302, control processor 302 to send at least part of three-dimensional model 111 and/or at least part of viewing experience 200 to viewing device 112.
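- The interpolation of step 1534 may be illustrated, under simplifying assumptions, by blending two primary images of the same grid cell with weights based on how close each source camera's viewing angle is to that of the virtual camera, as sketched below; the weighting scheme is an assumption made for the example.

```python
import numpy as np

def interpolate_primary_images(image_a, image_b, angle_a, angle_b):
    """Blend two primary images of the same subject, weighting each by how close
    its camera's viewing angle is to the virtual camera's viewing angle. Angles
    are relative to the virtual camera (degrees); the images are arrays of
    identical shape, as if already warped into a common cell of the grid."""
    weight_a = 1.0 / (abs(angle_a) + 1e-6)
    weight_b = 1.0 / (abs(angle_b) + 1e-6)
    total = weight_a + weight_b
    return (weight_a * image_a + weight_b * image_b) / total

patch_a = np.full((4, 4, 3), 200.0)   # stand-in patch from image feed F1
patch_b = np.full((4, 4, 3), 100.0)   # stand-in patch from image feed F2
blended = interpolate_primary_images(patch_a, patch_b, angle_a=10.0, angle_b=30.0)
```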
- FIG. 18A shows steps 1542 and 1544. In step 1542, method 1500 determines an occurrence of interest. In one example of step 1542, instructions 306, when executed by processor 302, control processor 302 to determine occurrence of interest 130 within three-dimensional model 111 based upon one or more of participant 102, object 104, spectator viewpoint 320, and virtual camera 606. In step 1544, method 1500 adds visual special effects and audible special effects to the viewing experience. In one example of step 1544, instructions 306, when executed by processor 302, control processor 302 to generate visual and/or audible special effects 202 for viewing experience 200 based at least in part upon (i) the location data and movement data of object 104 and/or participant 102 and/or (ii) occurrence of interest 130.
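- By way of example only, the sketch below flags an occurrence of interest when an object's speed between consecutive samples exceeds a threshold, reporting an identity and coordinates relative to the model; the threshold value and the identity label are assumptions for the example.

```python
import numpy as np

SPEED_THRESHOLD = 15.0   # assumed model units per second; not specified by the disclosure

def detect_occurrence_of_interest(track):
    """Scan a time-stamped object track and report an occurrence of interest
    (identity plus model coordinates) the first time the speed threshold is
    exceeded, for example when a ball is struck or thrown."""
    for (t0, p0), (t1, p1) in zip(track, track[1:]):
        speed = np.linalg.norm(np.asarray(p1, dtype=float) - np.asarray(p0, dtype=float)) / (t1 - t0)
        if speed > SPEED_THRESHOLD:
            return {"identity": "object_104_launched", "time": t1, "coordinates": tuple(p1)}
    return None

track = [(0.0, (0.0, 0.0, 1.0)), (0.1, (0.2, 0.0, 1.0)), (0.2, (3.5, 0.5, 2.0))]
print(detect_occurrence_of_interest(track))
```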
- FIG. 18B shows steps 1546-1552. In step 1546, method 1500 determines when an obstruction is located between (i) the viewpoint and (ii) the object or the participant. In one example of step 1546, instructions 306, when executed by processor 302, control processor 302 to process three-dimensional model 111 to determine when obstruction 1302 is between spectator viewpoint 320 and participant 102 and/or object 104. In step 1548, method 1500 determines when an obstruction is located between (i) the virtual camera and (ii) the object or the participant. In one example of step 1548, instructions 306, when executed by processor 302, control processor 302 to process three-dimensional model 111 to determine when obstruction 1302 is between virtual camera 606 and object 104 and/or participant 102. In step 1550, method 1500 adds at least a portion of the virtual image, corresponding to the location of the obstruction, to the viewing experience. In one example of step 1550, instructions 306, when executed by processor 302, control processor 302 to generate viewing experience 200 from at least one virtual image created from three-dimensional model 111 based upon the location of the obstruction. In step 1552, method 1500 removes the obstruction from the viewing experience. In one example of step 1552, instructions 306, when executed by processor 302, control processor 302 to remove at least part of obstruction 1302 from viewing experience 200.
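- A minimal sketch of the obstruction test of steps 1546 and 1548 is given below, marching sample points along the line of sight and testing them against an axis-aligned box standing in for obstruction 1302; the box representation and sample count are assumptions made for the example.

```python
import numpy as np

def obstruction_blocks_view(viewpoint, target, box_min, box_max, samples=64):
    """Return True when the straight line of sight from the viewpoint to the
    target passes through an obstruction modeled as an axis-aligned box."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    target = np.asarray(target, dtype=float)
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    for t in np.linspace(0.0, 1.0, samples):
        point = viewpoint + t * (target - viewpoint)   # march along the sight line
        if np.all(point >= box_min) and np.all(point <= box_max):
            return True
    return False

# A support pillar standing between the spectator viewpoint and the participant.
print(obstruction_blocks_view((0, 0, 2), (20, 0, 1), (9, -1, 0), (11, 1, 5)))   # True
```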
- FIG. 19A shows steps 1554-1558. In step 1554, method 1500 receives sound feeds from a plurality of microphones positioned at the event area. In one example of step 1554, server 110 receives sound feeds F7 from microphones 602 positioned around and within event area 103. In step 1556, method 1500 maps the sound feeds to the three-dimensional model. In one example of step 1556, instructions 306, when executed by processor 302, control processor 302 to map sounds from sound feeds F7 to three-dimensional model 111 based upon the location of the microphones 602 relative to the event area 103. In step 1558, method 1500 generates the viewing experience to include sounds based upon the three-dimensional model. In one example of step 1558, instructions 306, when executed by processor 302, control processor 302 to generate viewing experience 200 to include sounds based upon three-dimensional model 111.
- FIG. 19B shows step 1560. In step 1560, method 1500 provides haptic feedback to the spectator based on the virtual camera and the location data of the object and the participant. In one example of step 1560, server 110 and viewing device 112 cooperate to control haptic feedback actuator 802 to provide haptic feedback to spectator 101 based at least in part upon one or more of a location of a corresponding virtual camera 606 within three-dimensional model 111, a location of participant 102 within the three-dimensional model 111, and a location of object 104 within the three-dimensional model 111. In another example of step 1560, server 110 and viewing device 112 cooperate to control haptic feedback actuator 802 to provide haptic feedback to spectator 101 based at least in part upon occurrence of interest 130 and/or visual and/or audible special effects 202.
- Example 1 provides a description of systems and methods for creating a viewpoint including a model of a designated geometric shape where data is derived from multiple known and estimated points resulting in multiple data registries to be used in perceived and actual reality. The result of this method is a set of data points capable of augmenting and re-creating particular moments in time in a defined multi-dimensional space.
- The present disclosure relates to systems and methods configured to facilitate live and recorded mixed, augmented reality, virtual reality, and extended reality environments.
- In the present example, a viewpoint is created by solving for the human condition of defining when and where a spectator is viewing an event within an area by accounting for ocular device(s) and spatially separated equilibrium/sound input device(s) inside a determined area (devices can be, but are not limited to, cameras, microphones, and pressure sensors). A virtual logarithmic netting is determined around each key individual area (see, e.g.,
FIGS. 11-12). This creates multiple data sets defined as Mixed Augmented Virtual Reality objects, or MAVR objects for short. The MAVR objects are applied into a multidimensional landscape by using spatial X+Y+Z coordinates plus time for each MAVR object to create the MAVR core. This core netting provides multiple specific data points to see what is happening in relation to the experiencer (A), the object(s) of focus (C), and the logarithmic net (B). These three points create a very specific range.
- When a spectator is in the stands, the spectator knows his/her location and where the pitcher is, but more accuracy is gained from having an intermediate reference point. If the spectator is behind home plate, the spectator may be looking through a net. The net acts as a logarithmic medium with which to segment the viewing experience into small micro-chambers. The net is, for example, used as an X/Y graph. The X/Y graph is applied to that of the spectator's right eye and the spectator's left eye, and because of the offset, the spectator's brain determines the spatial relationship and occludes the net from the spectator's sight.
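- A minimal sketch of how MAVR objects carrying X, Y, Z coordinates plus time might be represented is given below; the class and field names are illustrative assumptions, not structures defined by the present disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class MAVRObject:
    """One Mixed Augmented Virtual Reality object: an identified entity placed in
    the multidimensional landscape by spatial X, Y, Z coordinates plus time."""
    identity: str
    x: float
    y: float
    z: float
    t: float

@dataclass
class MAVRCore:
    """Collection of MAVR objects making up the core."""
    objects: List[MAVRObject]

    def at_time(self, t: float, tolerance: float = 1.0 / 15.0) -> List[MAVRObject]:
        """Return the objects sampled within one assumed capture interval of time t."""
        return [o for o in self.objects if abs(o.t - t) <= tolerance]

core = MAVRCore(objects=[
    MAVRObject("experiencer_A", 0.0, -18.0, 1.7, t=10.00),
    MAVRObject("focus_object_C", 18.4, 0.0, 1.2, t=10.00),
    MAVRObject("logarithmic_net_B", 6.0, -6.0, 1.5, t=10.00),
])
print(core.at_time(10.0))
```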
- A game may be played wherein the entire arena is enclosed in a large plexiglass cage. Where the cage panels are joined, there is a sensor capable of being a grid marker for MAVR devices. Each player in the game wears an array of sensors and cameras. Each physical structure in the game has an array of sensors and cameras, and each has known, fixed values. Each flying ball has a tracking device in it. All of these features have live data sets captured at fifteen times a second or more. In an embodiment, at least a portion of the live data sets are captured periodically at a certain speed (e.g., one hundred and twenty times per second, although other speeds may be used). The location and movement data is exported into a server to model the data and determine unknowns based on the model. Missing data points are determined using known data points. When the model of the game is fixed into a known space, the existing cameras are used to create visual mesh models of the objects that are moving and not moving based on lighting conditions (e.g., live texture mapping of real and estimated real objects).
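- The preceding paragraph notes that missing data points are determined using known data points. A minimal sketch of one such gap-filling step, interpolating each coordinate linearly between the nearest known samples of a track, is given below; the sample times are chosen only to match the fifteen-captures-per-second figure above, and the actual estimation is not limited to linear interpolation.

```python
import numpy as np

def fill_missing_samples(times, positions):
    """Fill gaps (None entries) in a captured track by interpolating each
    coordinate linearly between the nearest known samples."""
    times = np.asarray(times, dtype=float)
    known = [i for i, p in enumerate(positions) if p is not None]
    known_times = times[known]
    known_positions = np.asarray([positions[i] for i in known], dtype=float)
    filled = []
    for i, p in enumerate(positions):
        if p is not None:
            filled.append(tuple(float(v) for v in p))
        else:
            filled.append(tuple(float(np.interp(times[i], known_times, known_positions[:, k]))
                                for k in range(known_positions.shape[1])))
    return filled

# Two dropped samples between known captures taken fifteen times per second.
sample_times = [0.0, 1.0 / 15.0, 2.0 / 15.0, 3.0 / 15.0]
track = [(0.0, 0.0, 1.0), None, None, (3.0, 0.6, 1.3)]
print(fill_missing_samples(sample_times, track))
```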
- Within the MAVR Core space, animate or inanimate objects have a gridagonal grid determined around them. The grid space (or virtual grid) is cross-cut horizontally to create the start of a grid system for the object. The space is layered vertically to create virtual/assumed sections of volumetric space where the object may be sliced into smaller data sets. Nothing changes physically about the object inside the grid. The grid space is used to measure and render the object in relation to the data intersections at all dimensions within the grid. In using this model, only a small portion of what is actually visible may be visible to the optical (viewing) device using the MAVR core. However, a combination of multiple ocular devices using the core captures different cross sections of the grid and sends the location data back to the server to create a virtualized and accurate model of the subject inside the grid space while the primary camera has only a limited view. As the data is filled in from the other core devices, the estimated and the real images are layered into a depth map that matches true reality. Once the full model of the object within the space is known, the empty space, not corresponding to the object or the participant, is not relevant and is dismissed.
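- A minimal sketch of determining a virtual grid around an object is given below: the object's bounding box is divided into horizontal cross-cuts and vertical layers, and any measured surface point can then be assigned to a cell. The cell counts and coordinates are assumptions made for the example.

```python
import numpy as np

def build_virtual_grid(box_min, box_max, cells_per_axis):
    """Return the cell edge coordinates of a virtual grid determined around an
    object's bounding box: cross-cut horizontally and layered vertically so the
    volume is divided into smaller sections."""
    box_min = np.asarray(box_min, dtype=float)
    box_max = np.asarray(box_max, dtype=float)
    return [np.linspace(box_min[k], box_max[k], cells_per_axis[k] + 1) for k in range(3)]

def cell_of_point(point, grid_edges):
    """Index (i, j, k) of the grid cell containing a measured surface point."""
    index = []
    for coordinate, edges in zip(point, grid_edges):
        i = int(np.searchsorted(edges, coordinate, side="right")) - 1
        index.append(min(max(i, 0), len(edges) - 2))   # clamp points on the boundary
    return tuple(index)

edges = build_virtual_grid((0.0, 0.0, 0.0), (1.0, 1.0, 2.0), cells_per_axis=(4, 4, 8))
print(cell_of_point((0.3, 0.9, 1.1), edges))   # (1, 3, 4)
```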
- A further aspect of the systems and methods of this example is establishing an intermediary (secondary) grid between the spectator and the object or participant. This increases accuracy at distance by increasing the points of data through adding a secondary grid system. This system is three dimensional yet flat in its presentation to the viewing camera, as opposed to the gridagonal approach. Adding the secondary grid to the already created model gives a second layer of accuracy and can be at any angle to the original grid model. This is relevant to accuracy at a distance. A spectator behind home plate, for example, looking through the net has a different viewing angle than a spectator sitting a few feet away. Yet with the known and estimated models created via the MAVR core system, the secondary grid is used to increase model accuracy. The secondary grid is flat to the eyes and stays fixed to the head's rotation. Having two layers of grids (virtual grid and secondary grid) allows more points of data to increase the accuracy of tracking movement in the pixel grid. The distance between the intermediary grid and the virtual grid helps delineate movement at a greater accuracy inside the virtual grid. Layering the two grid systems on top of each other increases the accuracy and the ability to create a free-viewpoint camera system.
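- As an illustration of the intermediary grid, the sketch below computes which cell of a flat grid, held a fixed distance in front of the spectator and fixed to the spectator's heading, a sight line to a target passes through; the grid distance and cell size are assumptions made for the example, and a roughly horizontal heading is assumed.

```python
import numpy as np

def secondary_grid_cell(viewpoint, view_forward, target, grid_distance, cell_size):
    """Return the (column, row) cell of the flat intermediary grid, held a fixed
    distance in front of the spectator and fixed to the spectator's heading,
    that the sight line from the viewpoint to the target passes through."""
    viewpoint = np.asarray(viewpoint, dtype=float)
    normal = np.asarray(view_forward, dtype=float)
    normal /= np.linalg.norm(normal)                    # grid plane normal = heading
    ray = np.asarray(target, dtype=float) - viewpoint   # sight line direction
    t = grid_distance / float(ray @ normal)             # ray/plane intersection parameter
    hit = viewpoint + t * ray
    plane_origin = viewpoint + grid_distance * normal
    right = np.cross(normal, [0.0, 0.0, 1.0])           # assumes a roughly horizontal heading
    right /= np.linalg.norm(right)
    up = np.cross(right, normal)
    offset = hit - plane_origin
    return (int(np.floor(offset @ right / cell_size)),
            int(np.floor(offset @ up / cell_size)))

# Spectator behind home plate looking out at the field, target slightly up and to the right.
print(secondary_grid_cell((0.0, 0.0, 1.7), (0.0, 1.0, 0.0), (3.0, 20.0, 2.5),
                          grid_distance=2.0, cell_size=0.1))   # (3, 0)
```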
- A further embodiment of this example is triangulation of different angles to create the grid model of objects. A spectator off to a side of the event area views the event through the virtual and secondary grids. Using other known angles, such as that of another spectator and his/her viewpoint, the model can be filled in.
- As participants move and their positions change, the mathematics to track the grid around objects is adjusted in real time. Distance calculations are used to make sure virtual objects show up in the proper place in the model and virtual image. These calculations are used to ensure special effects are properly determined spatially and in time.
-
FIG. 21 is a playout workflow 2100 of the systems of FIGS. 1 and 3-14. Workflow 2100 illustrates a live CGI producer application 2102 generating a playout that may be viewed by a live CGI application 2104. Live CGI producer application 2102 may be implemented within server 110 of FIG. 1, and live CGI application 2104 may be implemented within viewing device 112.
- Changes may be made in the above methods and systems without departing from the scope hereof. It should thus be noted that the matter contained in the above description or shown in the accompanying drawings should be interpreted as illustrative and not in a limiting sense. The following claims are intended to cover all generic and specific features described herein, as well as all statements of the scope of the present system and method, which, as a matter of language, might be said to fall there between.