US20090256903A1 - System and method for processing video images - Google Patents
System and method for processing video images
- Publication number
- US20090256903A1 (application US12/467,626, US46762609A)
- Authority
- US
- United States
- Prior art keywords
- objects
- images
- dimensional
- frames
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
Abstract
Some representative embodiments are directed to creating a “virtual world” by processing a series of two dimensional images to generate a representation of the physical world depicted in the series of images. The virtual world representation includes models of objects that specify the locations of the objects within the virtual world, the geometries of the objects, the dimensions of the objects, the surface representation of the objects, and/or other relevant information. By developing the virtual world representation, a number of image processing effects may be applied such as generation of stereoscopic images, object insertion, object removal, object translation, and/or other object manipulation operations.
Description
- This Application is a Continuation of U.S. patent application Ser. No. 10/946,955 filed on Sep. 22, 2004.
- The present invention is generally directed to processing graphical images.
- A number of technologies have been proposed and, in some cases, implemented to perform a conversion of one or several two dimensional images into one or several stereoscopic three dimensional images. The conversion of two dimensional images into three dimensional images involves creating a pair of stereoscopic images for each three dimensional frame. The stereoscopic images can then be presented to a viewer's left and right eyes using a suitable display device. The image information between respective stereoscopic images differs according to the calculated spatial relationships between the objects in the scene and the viewer of the scene. The difference in the image information enables the viewer to perceive the three dimensional effect.
- An example of a conversion technology is described in U.S. Pat. No. 6,477,267 (the '267 patent). In the '267 patent, only selected objects within a given two dimensional image are processed to receive a three dimensional effect in a resulting three dimensional image. In the '267 patent, an object is initially selected for such processing by outlining the object. The selected object is assigned a “depth” value that is representative of the relative distance of the object from the viewer. A lateral displacement of the selected object is performed for each image of a stereoscopic pair of images that depends upon the assigned depth value. Essentially, a “cut-and-paste” operation occurs to create the three dimensional effect. The simple displacement of the object creates a gap or blank region in the object's background. The system disclosed in the '267 patent compensates for the gap by “stretching” the object's background to fill the blank region.
- The '267 patent is associated with a number of limitations. Specifically, the stretching operations cause distortion of the object being stretched. The distortion needs to be minimized to reduce visual anomalies. The amount of stretching also corresponds to the disparity or parallax between an object and its background and is a function of their relative distances from the observer. Thus, the relative distances of interacting objects must be kept small.
- Another example of a conversion technology is described in U.S. Pat. No. 6,466,205 (the '205 patent). In the '205 patent, a sequence of video frames is processed to select objects and to create “cells” or “mattes” of selected objects that substantially only include information pertaining to their respective objects. A partial occlusion of a selected object by another object in a given frame is addressed by temporally searching through the sequence of video frames to identify other frames in which the same portion of the first object is not occluded. Accordingly, a cell may be created for the full object even though the full object does not appear in any single frame. The advantage of such processing is that gaps or blank regions do not appear when objects are displaced in order to provide a three dimensional effect. Specifically, a portion of the background or other object that would be blank may be filled with graphical information obtained from other frames in the temporal sequence. Accordingly, the rendering of the three dimensional images may occur in an advantageous manner.
- Some representative embodiments are directed to creating a “virtual world” by processing a series of two dimensional images to generate a representation of the physical world depicted in the series of images. The virtual world representation includes models of objects that specify the locations of the objects within the virtual world, the geometries of the objects, the dimensions of the objects, the surface representation of the objects, and/or other relevant information. By developing the virtual world representation, a number of image processing effects may be applied.
- In one embodiment, stereoscopic images may be created. To create a pair of stereoscopic images, two separate views of the virtual world are rendered that correspond to the left and right eyes of the viewer using two different camera positions. Rendering stereoscopic images in this manner produces three dimensional effects of greater perceived quality than possible using known conversion techniques. Specifically, the use of a three dimensional geometry to perform surface reconstruction enables a more accurate representation of objects than possible when two dimensional correlation is employed.
- In one embodiment, the algorithm analysis and manual input are applied to a series of two dimensional images using an editing application. A graphical user interface of the editing application enables an “editor” to control the operations of the image processing algorithms and camera reconstruction algorithms to begin the creation of the object models. Concurrently with the application of the algorithms, the editor may supply the user input to refine the object models via the graphical user interface. By coordinating manual and autonomous image operations, a two dimensional sequence may be converted into the virtual world representation in an efficient manner. Accordingly, further image processing such as two to three dimension conversion may occur in a more efficient and more accurate manner than possible using known processing techniques.
- The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter which form the subject of the claims of the invention. It should be appreciated that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present invention. It should also be realized that such equivalent constructions do not depart from the invention as set forth in the appended claims. The novel features which are believed to be characteristic of the invention, both as to its organization and method of operation, together with further objects and advantages will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present invention.
- For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
- FIG. 1 depicts key frames of a video sequence.
- FIG. 2 depicts representations of an object from the video sequence shown in FIG. 1 generated according to one representative embodiment.
- FIG. 3 depicts an “overhead” view of a three dimensional scene generated according to one representative embodiment.
- FIGS. 4 and 5 depict stereoscopic images generated according to one representative embodiment.
- FIG. 6 depicts a set of interrelated processes for developing a model of a three dimensional scene from a video sequence according to one representative embodiment.
- FIG. 7 depicts a flowchart for generating texture data according to one representative embodiment.
- FIG. 8 depicts a system implemented according to one representative embodiment.
- FIG. 9 depicts a set of frames in which objects may be represented using three dimensional models according to one representative embodiment.
- Referring now to the drawings, FIG. 1 depicts sequence 100 of video images that may be processed according to some representative embodiments. Sequence 100 of video images includes key frames 101-104. Multiple other frames may exist between these key frames.
- As shown in FIG. 1, sphere 150 possesses multiple tones and/or chromatic content. One half of sphere 150 is rendered using first tone 151 and the other half of sphere 150 is rendered using second tone 152. Sphere 150 undergoes rotational transforms through video sequence 100. Accordingly, in key frame 102, a greater amount of tone 151 is seen relative to key frame 101. In key frame 103, sufficient rotation has occurred to cause only tone 151 of sphere 150 to be visible. In key frame 104, tone 152 becomes visible again on the opposite side of sphere 150 as compared to the position of tone 152 in key frame 101.
- Box 160 is subjected to scaling transformations in video sequence 100. Specifically, box 160 becomes smaller throughout video sequence 100. Moreover, box 160 is translated during video sequence 100. Eventually, the motion of box 160 causes box 160 to be occluded by sphere 150. In key frame 104, box 160 is no longer visible.
- According to known image processing techniques, the generation of stereoscopic images for key frame 103 would occur by segmenting or matting sphere 150 from key frame 103. The segmented or matted image data for sphere 150 would consist of a single tone (i.e., tone 151). The segmented or matted image data may be displaced in the stereoscopic views. Additionally, image filling or object stretching may occur to address empty regions caused by the displacement. The limitations associated with some known image processing techniques are seen in the inability to accurately render the multi-tone surface characteristics of sphere 150. Specifically, because the generation of stereoscopic views according to known image processing techniques uses only the matted or segmented image data, known techniques would render sphere 150 as a single-tone object in both the right and left images of a stereoscopic pair of images. However, such rendering deviates from the views that would actually be produced in a three dimensional scene. In an actual three dimensional scene, the right view may cause a portion of tone 152 to be visible on the right side of sphere 150. Likewise, the left view may cause a portion of tone 152 to be visible on the left side of sphere 150.
- Representative embodiments enable a greater degree of accuracy to be achieved when rendering stereoscopic images by creating three dimensional models of objects within the images being processed. A single three dimensional model may be created for box 160. Additionally, the scaling transformations experienced by box 160 may be encoded with the model created for box 160. Representations 201-204 of box 160 as shown in FIG. 2 correspond to the key frames 101-104. Additionally, it is noted that box 160 is not explicitly present in key frame 104. However, because the scaling transformations and translations can be identified and encoded, representation 204 of box 160 may be created for key frame 104. The creation of a representation for an object that is not visible in a key frame may be useful to enable a number of effects. For example, an object removal operation may be selected to remove sphere 150, thereby causing box 160 to be visible in the resulting processed image(s).
- In a similar manner, a three dimensional model may be selected or created for sphere 150. The rotational transform information associated with sphere 150 may be encoded in association with the three dimensional model.
- Using the three dimensional models and camera reconstruction information, a three dimensional scene including the locations of the objects within the scene may be defined. FIG. 3 depicts an “overhead” view of scene 300 including three dimensional model 301 of sphere 150 and three dimensional model 302 of box 160 that correspond to key frame 103. As shown in FIG. 3, tone 152 is generally facing away from the viewing perspectives and tone 151 is generally facing toward the viewing perspectives. However, because the right view is slightly offset, a portion of tone 152 is visible. Also, a smaller amount of three dimensional model 302 of box 160 is occluded by three dimensional model 301 of sphere 150.
- Using three dimensional scene 300, left image 400 and right image 500 may be generated as shown in FIGS. 4 and 5. Specifically, three dimensional scene 300 defines which objects are visible, the position of the objects, and the sizes of the objects for the left and right views. The rendering of the objects in the views may occur by mapping image data onto the three dimensional objects using texture mapping techniques. The encoded transform information may be used to perform the texture mapping in an accurate manner. For example, the rotation transform information encoded for sphere 150 enables the left portion of sphere 150 to include tone 152 in left image 400. The transform information enables the right portion of sphere 150 to include tone 152 in right image 500. Specifically, image data associated with tone 152 in key frames 102 and 104 may be mapped onto the appropriate portions of sphere 150 in images 400 and 500 using the transform information. Also, the surface characteristics of the portion of box 160 that has become visible in image 500 may be appropriately rendered using information from key frame 102 and the transform information.
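- As a rough sketch of this idea, consider rendering a single reconstructed surface point through two camera positions separated by an inter-ocular baseline; the `project` helper, the baseline, the focal length, and the point coordinates below are illustrative assumptions rather than values from this disclosure.

```python
import numpy as np

def project(point_world, cam_pos, focal_len):
    """Pinhole-project a 3D point (camera looking down +Z) to 2D image coordinates."""
    p = point_world - cam_pos          # express the point in camera coordinates
    return focal_len * np.array([p[0] / p[2], p[1] / p[2]])

# Assumed scene/camera values for illustration only.
focal_len = 35.0                        # arbitrary focal length
baseline = 0.065                        # assumed inter-ocular distance (scene units)
center_cam = np.array([0.0, 0.0, 0.0])  # reconstructed camera position
left_cam = center_cam - np.array([baseline / 2, 0.0, 0.0])
right_cam = center_cam + np.array([baseline / 2, 0.0, 0.0])

surface_point = np.array([0.2, 0.1, 4.0])   # a point on a modeled object, e.g. sphere 150

uv_left = project(surface_point, left_cam, focal_len)
uv_right = project(surface_point, right_cam, focal_len)
print("left view:", uv_left, "right view:", uv_right)
print("horizontal disparity:", uv_left[0] - uv_right[0])
```

The horizontal disparity between the two projections is what lets the viewer's eyes fuse the pair into a point at a particular depth.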
- To further illustrate the operation of some embodiments, reference is made to FIG. 9. FIG. 9 depicts a set of video frames in which a box is rotating about two axes. Using conventional matte modeling techniques, an object matte would be created for each of frames 901-904, because the two dimensional representation of the box is different in each of the frames. The creation of respective object mattes for each of frames 901-904 may then be a time consuming and cumbersome process. However, according to one representative embodiment, an object model is created for frame 901. Because the three dimensional characteristics of the box do not change, only the rotation information may be defined for frames 902-904. The surface characteristics of the box can then be autonomously extracted from frames 902-904 using the object model and the transform information. Thus, some representative embodiments provide a more efficient process for processing video frames than conventional techniques.
- FIG. 6 depicts an interrelated set of processes for defining three dimensional objects from video images according to one representative embodiment. In process 601, outlines of objects of interest are defined in selected frames. The outlining of the objects may occur in a semi-autonomous manner. The user may manually select a relatively small number of points on the edge of a respective object. An edge tracking algorithm may then be used to identify the outline of the object between the user-selected points. In general, edge tracking algorithms operate by determining the least-cost path between two points, where the path cost is a function of image gradient characteristics. Domain-specific information concerning the selected object may also be employed during edge tracking. A series of Bezier curves or other parametric curves may be used to encode the outlines of the objects. Further user input may be used to refine the curves if desired.
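- One common way to realize such least-cost edge tracking is a shortest-path search over the pixel grid in which steps across strong image gradients are cheap. The sketch below uses Dijkstra's algorithm with an inverse-gradient-magnitude cost on a synthetic image; the particular cost function and the toy image are assumptions for illustration, not a prescribed implementation.

```python
import heapq
import numpy as np

def edge_track(cost, start, goal):
    """Dijkstra search for the least-cost pixel path between two user-selected points."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the traced outline segment.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy luminance image with a vertical edge; the cost is low where the gradient is strong,
# so the recovered path hugs the object boundary between the two clicked points.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
gy, gx = np.gradient(img)
cost = 1.0 / (np.hypot(gx, gy) + 1e-3)

outline = edge_track(cost, start=(0, 10), goal=(19, 10))
print(len(outline), "pixels traced along the edge")
```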
- In process 602, camera reconstruction may be performed. Camera reconstruction refers to the process in which the relationship between the camera and the three dimensional scene(s) in the video sequence is analyzed. During this process, the camera's focal length, the camera's relative angular perspective, the camera's position and orientation relative to objects in the scene, and/or other suitable information may be estimated.
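- Camera reconstruction can be pictured as finding the projection parameters that best explain where known scene points appear in the frames. The snippet below shows only the forward pinhole model and a reprojection error, which is the quantity such an estimation would typically minimize; the rotation, translation, and focal-length values are illustrative assumptions, not part of this disclosure.

```python
import numpy as np

def project(point_3d, focal_len, rotation, translation):
    """Project a world-space point into image coordinates with a simple pinhole model."""
    cam_pt = rotation @ point_3d + translation      # world -> camera coordinates
    return focal_len * cam_pt[:2] / cam_pt[2]       # perspective divide

def reprojection_error(points_3d, points_2d, focal_len, rotation, translation):
    """Mean distance between observed 2D features and the reprojected 3D points."""
    errs = [np.linalg.norm(project(p, focal_len, rotation, translation) - q)
            for p, q in zip(points_3d, points_2d)]
    return float(np.mean(errs))

# Illustrative values: identity orientation, camera 5 units back from the scene.
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
f = 800.0   # assumed focal length in pixels

pts_3d = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
pts_2d = np.array([project(p, f, R, t) for p in pts_3d])   # synthetic observations

print("error at true parameters:", reprojection_error(pts_3d, pts_2d, f, R, t))
print("error with a wrong focal length:", reprojection_error(pts_3d, pts_2d, 700.0, R, t))
```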
- In process 603, three dimensional models are created or selected from a library of predefined three dimensional models for the objects. Any number of suitable model formats could be used. For example, Constructive Solid Geometry models could be employed in which each object is represented as a combination of object primitives (e.g., blocks, cylinders, cones, spheres, etc.) and logical operations on the primitives (e.g., union, difference, intersection, etc.). Additionally or alternatively, nonuniform rational B-splines (NURBS) models could be employed in which objects are defined in terms of sets of weighted control points, curve orders, and knot vectors. Additionally, “skeleton” model elements could be defined to facilitate image processing associated with complex motion of an object through a video sequence according to kinematic animation techniques.
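- A minimal way to picture the Constructive Solid Geometry option is a small tree of primitives joined by boolean operations, with point membership evaluated recursively. The classes and the example "notched block" below are an illustrative sketch under those assumptions, not the model library described here.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Sphere:
    center: np.ndarray
    radius: float
    def contains(self, p):
        return bool(np.linalg.norm(p - self.center) <= self.radius)

@dataclass
class Block:
    lo: np.ndarray
    hi: np.ndarray
    def contains(self, p):
        return bool(np.all(p >= self.lo) and np.all(p <= self.hi))

@dataclass
class CSG:
    op: str           # "union", "difference", or "intersection"
    left: object
    right: object
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {"union": a or b,
                "difference": a and not b,
                "intersection": a and b}[self.op]

# A block with a spherical notch cut out of one corner.
notched_block = CSG("difference",
                    Block(np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0, 1.0])),
                    Sphere(np.array([2.0, 1.0, 1.0]), 0.5))

print(notched_block.contains(np.array([1.0, 0.5, 0.5])))   # True: inside the block
print(notched_block.contains(np.array([1.9, 0.9, 0.9])))   # False: inside the notch
```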
- In process 604, the transformations and translations experienced by the objects of interest between key frames are defined. Specifically, the translation or displacement of objects, the scaling of objects, the rotation of objects, the morphing of objects, and/or the like may be defined. For example, an object may increase in size between key frames. The increase in size may result from the object approaching the camera or from the object actually becoming larger (“ballooning”). By accurately encoding whether the object has increased in size as opposed to merely moving in the three dimensional scene, subsequent processing may occur more accurately. This step may be performed using a combination of autonomous algorithms and user input. For example, motion compensation algorithms may be used to estimate the translation of objects. If an object has experienced scaling, the user may identify that scaling has occurred and an autonomous algorithm may calculate a scaling factor by comparing image outlines between the key frames.
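- The per-object transform bookkeeping can be sketched as a list of key-frame records, each holding a translation, a rotation, and a scale factor, so that a true size change ("ballooning") is recorded separately from motion in depth. The structure and the numeric values for box 160 below are assumptions used only to illustrate the encoding.

```python
from dataclasses import dataclass, field

@dataclass
class KeyFrameTransform:
    frame: int
    translation: tuple = (0.0, 0.0, 0.0)   # object displacement in scene units
    rotation_deg: tuple = (0.0, 0.0, 0.0)  # Euler angles about x, y, z
    scale: float = 1.0                      # >1 means the object actually grew ("ballooning")

@dataclass
class ObjectModel:
    name: str
    transforms: list = field(default_factory=list)

    def add_key_frame(self, xform: KeyFrameTransform):
        self.transforms.append(xform)

# Illustrative encoding for box 160: it shrinks and translates across the key frames,
# so the size change is recorded as a scale factor rather than as motion in depth.
box = ObjectModel("box_160")
box.add_key_frame(KeyFrameTransform(frame=101, translation=(0.0, 0.0, 6.0), scale=1.0))
box.add_key_frame(KeyFrameTransform(frame=102, translation=(0.5, 0.0, 6.0), scale=0.8))
box.add_key_frame(KeyFrameTransform(frame=103, translation=(1.0, 0.0, 6.0), scale=0.6))
box.add_key_frame(KeyFrameTransform(frame=104, translation=(1.5, 0.0, 6.0), scale=0.5))

for kf in box.transforms:
    print(kf.frame, kf.translation, kf.scale)
```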
- In process 605, using the information developed in the prior steps, the positions of objects in the three dimensional scene(s) of the video sequence are defined. The definition of the positions may occur in an autonomous manner. User input may be received to alter the positions of objects for editing or other purposes. Additionally, one or several objects may be removed if desired.
- In process 606, surface property data structures, such as texture maps, are created.
- FIG. 7 depicts a flowchart for creating texture map data for a three dimensional object for a particular temporal position according to one representative embodiment. The flowchart for creating texture map data begins in step 701 where a video frame is selected. The selected video frame identifies the temporal position for which the texture map generation will occur. In step 702, an object from the selected video frame is selected.
- In step 703, surface positions of the three dimensional model that correspond to visible portions of the selected object in the selected frame are identified. The identification of the visible surface positions may be performed, as an example, by employing ray tracing from the original camera position to positions on the three dimensional model using the camera reconstruction data. In step 704, texture map data is created from image data in the selected frame for the identified portions of the three dimensional model.
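- Identifying visible surface positions by ray tracing can be illustrated with a single analytic primitive: cast a ray from the reconstructed camera position toward a candidate surface point and accept the point only if the first intersection along the ray is that point itself. The sphere geometry and camera placement below are simplifying assumptions standing in for a general ray tracer.

```python
import numpy as np

def first_sphere_hit(origin, direction, center, radius):
    """Return the smallest positive ray parameter t where the unit-direction ray hits the sphere, or None."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def visible(camera, surface_point, center, radius):
    """A surface point is visible if the first hit along the camera ray is that point itself."""
    d = surface_point - camera
    dist = np.linalg.norm(d)
    t = first_sphere_hit(camera, d / dist, center, radius)
    return t is not None and abs(t - dist) < 1e-6

# Illustrative stand-in for the FIG. 1 setup: camera on the -Z axis, unit sphere at the origin.
camera = np.array([0.0, 0.0, -5.0])
center = np.array([0.0, 0.0, 0.0])

near_point = np.array([0.0, 0.0, -1.0])   # faces the camera -> texture comes from this frame
far_point = np.array([0.0, 0.0, 1.0])     # back side -> must be filled from other frames
print(visible(camera, near_point, center, 1.0))   # True
print(visible(camera, far_point, center, 1.0))    # False
```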
- In step 705, surface positions of the three dimensional model that correspond to portions of the object that were not originally visible in the selected frame are identified. In one embodiment, all of the remaining surface positions are identified in step 705, thereby causing as much texture map data as possible to be created for the selected frame. In certain situations, it may be desirable to limit construction of the texture data. For example, if texture data is generated on demand, it may be desirable to identify only surface positions in this step (i) that correspond to portions of the object not originally visible in the selected frame and (ii) that have become visible due to rendering the object according to a modification in the viewpoint. In this case, the amount of the object surface exposed due to the perspective change can be calculated from the object's camera distance and a maximum inter-ocular constant.
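- The disclosure does not spell out that calculation, but for a convex object one plausible bound follows from geometry: shifting the viewpoint laterally by at most the inter-ocular distance b at object distance d rotates the viewing direction by roughly arctan(b/d), which limits the band of newly exposed surface. The snippet below merely evaluates that assumed bound for example numbers.

```python
import math

def newly_exposed_angle(camera_distance, inter_ocular):
    """Rough upper bound (radians) on the band of extra surface a convex object exposes
    when the viewpoint shifts laterally by the inter-ocular distance (an assumed model)."""
    return math.atan2(inter_ocular, camera_distance)

# Assumed numbers: an object 4 m from the camera, 65 mm maximum inter-ocular constant.
angle = newly_exposed_angle(camera_distance=4.0, inter_ocular=0.065)
print(f"about {math.degrees(angle):.2f} degrees of extra surface per side")
```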
- In step 706, the surface positions identified in step 705 are correlated to image data in frames prior to and/or subsequent to the selected frame using the defined model of the object, object transformations and translations, and camera reconstruction data. In step 707, the image data from the other frames is subjected to processing according to the transformations, translations, and camera reconstruction data. For example, if a scaling transformation occurred between frames, the image data in the prior or subsequent frame may be either enlarged or reduced depending upon the scaling factor. Other suitable processing may occur. In one representative embodiment, weighted average processing may be used depending upon how close in the temporal domain the correlated image data is to the selected frame. For example, lighting characteristics may change between frames. The weighted averaging may cause darker pixels to be lightened to match the lighting levels in the selected frame. In one representative embodiment, light sources are also modeled as objects. When models are created for light sources, lighting effects associated with the modeled objects may be removed from the generated textures. The lighting effects would then be reintroduced during rendering.
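- The weighted-average idea can be sketched by giving each contributing frame a weight that decays with its temporal distance from the selected frame, so nearby frames dominate and darker samples from distant frames are pulled toward the selected frame's lighting. The inverse-distance weighting below is one simple choice, not necessarily the weighting used here.

```python
import numpy as np

def blend_texture_samples(samples):
    """Weighted average of pixel samples gathered from other frames.

    `samples` is a list of (frame_offset, pixel_value) pairs, where frame_offset is the
    temporal distance from the selected frame; closer frames get larger weights.
    """
    offsets = np.array([abs(o) for o, _ in samples], dtype=float)
    values = np.array([v for _, v in samples], dtype=float)
    weights = 1.0 / (1.0 + offsets)          # assumed fall-off; other kernels would also work
    return float(np.sum(weights * values) / np.sum(weights))

# A patch that is bright in a nearby frame and dark in a far-away frame: the blend
# lands closer to the nearby frame, lightening the darker contribution.
print(blend_texture_samples([(1, 0.80), (6, 0.35)]))
```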
- In step 708, texture map data is created for the surface positions identified in step 705 from the data processed in step 707. Because the translations, transformations, and other suitable information are used in the image data processing, the texture mapping of image data from other frames onto the three dimensional models occurs in a relatively accurate manner. Specifically, significant discontinuities and other imaging artifacts generally will not be observable.
- In one representative embodiment, steps 704-707 are implemented in association with generating texture data structures that represent the surface characteristics of an object of interest. A given set of texture data structures defines all of the surface characteristics of an object that may be recovered from a video sequence. Also, because the surface characteristics may vary over time, a texture data structure may be assigned for each relevant frame. Accordingly, the texture data structures may be considered to capture video information related to a particular object.
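- Because a texture data structure may be kept for each relevant frame, the per-object texture store is essentially a small collection of images keyed by frame number. The sketch below shows only that bookkeeping; the class name, the fallback-to-nearest-frame lookup, the key-frame numbers, and the texture resolution are placeholders.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectTextures:
    """Texture data structures for one object, keyed by frame number (one per relevant frame)."""
    object_name: str
    maps: dict = field(default_factory=dict)

    def set_map(self, frame: int, texture: np.ndarray):
        self.maps[frame] = texture

    def get_map(self, frame: int) -> np.ndarray:
        # Fall back to the nearest stored frame if this frame has no texture of its own.
        nearest = min(self.maps, key=lambda f: abs(f - frame))
        return self.maps.get(frame, self.maps[nearest])

sphere_textures = ObjectTextures("sphere_150")
for key_frame in (101, 102, 103, 104):                 # placeholder key-frame numbers from FIG. 1
    sphere_textures.set_map(key_frame, np.zeros((64, 64, 3)))

print(sorted(sphere_textures.maps))        # frames for which surface data was recovered
print(sphere_textures.get_map(102).shape)  # texture resolution is an arbitrary placeholder
```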
- The combined sets of data (object model, transform information, camera reconstruction information, and texture data structures) enable construction of a three dimensional world from the video sequence. The three dimensional world may be used to support any number of image processing effects. As previously mentioned, stereoscopic images may be created. The stereoscopic images may approximately correspond to the original two dimensional viewpoint. Alternatively, stereoscopic images may be decoupled from the viewpoint(s) of the original video if image data is available from a sufficient number of perspectives. Additionally, object removal may be performed to remove objects from frames of a video sequence. Likewise, object insertion may be performed.
- FIG. 8 depicts system 800 for processing a sequence of video images according to one representative embodiment. System 800 may be implemented on a suitable computer platform. System 800 includes conventional computing resources such as central processing unit 801, random access memory (RAM) 802, read only memory (ROM) 803, user peripherals (e.g., keyboard, mouse, etc.) 804, and display 805. System 800 further includes non-volatile storage 806.
- Non-volatile storage 806 comprises data structures and software code or instructions that enable conventional processing resources to implement some representative embodiments. The data structures and code may implement the flowcharts of FIGS. 6 and 7 as examples.
- As shown in FIG. 8, non-volatile storage 806 comprises video sequence 807. Video sequence 807 may be obtained in digital form from another suitable medium (not shown). Alternatively, video sequence 807 may be obtained after analog-to-digital conversion of an analog video signal from an imaging device (e.g., a video cassette player or video camera). Object matting module 814 defines outlines of selected objects using a suitable image processing algorithm or algorithms and user input. Camera reconstruction algorithm 817 processes video sequence 807 to determine the relationship between objects in video sequence 807 and the camera used to capture the images. Camera reconstruction algorithm 817 stores the data in camera reconstruction data 811.
- Model selection module 815 enables model templates from model library 810 to be associated with objects in video sequence 807. The selection of models for objects is stored in object models 808. Object refinement module 816 generates and encodes transformation data within object models 808 in video sequence 807 using user input and autonomous algorithms. Object models 808 may represent an animated geometry encoding shape, transformation, and position data over time. Object models 808 may be hierarchical and may have an associated template type (e.g., a chair).
- Texture map generation module 821 generates textures that represent the surface characteristics of objects in video sequence 807. Texture map generation module 821 uses object models 808 and camera data 811 to generate texture map data structures 809. Preferably, each object comprises a texture map for each key frame that depicts as much of the surface characteristics as possible given the number of perspectives in video sequence 807 of the objects and the occlusions of the objects. In particular, texture map generation module 821 performs searches in prior frames and/or subsequent frames to obtain surface characteristic data that is not present in a current frame. The translation and transform data is used to place the surface characteristics from the other frames in the appropriate portions of texture map data structures 809. Also, the transform data may be used to scale, morph, or otherwise process the data from the other frames so that the processed data matches the characteristics of the texture data obtained from the current frame. Texture refinement module 822 may be used to perform user editing of the generated textures if desired.
- Scene editing module 818 enables the user to define how processed image data 820 is to be created. For example, the user may define how the left and right perspectives are to be defined for stereoscopic images if a three dimensional effect is desired. Alternatively, the user may provide suitable input to create a two dimensional video sequence having other image processing effects if desired. Object insertion and removal may occur through the receipt of user input to identify objects to be inserted and/or removed and the frames for these effects. Additionally, the user may change object positions.
- When the user finishes inputting data via scene editing module 818, the user may employ rendering algorithm 819 to generate processed image data 820. Processed image data 820 is constructed using object models 808, texture map data structures 809, and other suitable information to provide the desired image processing effects.
- Although the present invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the invention as defined by the appended claims. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
Claims (8)
1. (canceled)
2. A method of processing images comprising:
obtaining a plurality of images having at least one object, said images representing at least two perspective views of said object;
generating a model of said object using a first subset of said images, said model comprising a texture map;
modifying said model using a second subset of said images, wherein said second subset includes at least one image not in said first subset; and
creating a three dimensional scene using said model.
3. The method of claim 2 wherein user input is used to obtain said plurality of images.
4. The method of claim 2 wherein user input is used to generate said model of said object.
5. The method of claim 2 further comprising:
generating a sequence of stereoscopic images using said three dimensional scene.
6. The method of claim 1 wherein said object is a light source.
7. The method of claim 1 further comprising:
editing said three dimensional scene according to user-generated instructions.
8. The method of claim 7 wherein said user-generated instructions are entered into a scene editing module.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/467,626 US20090256903A1 (en) | 2004-09-23 | 2009-05-18 | System and method for processing video images |
US13/071,670 US20110169827A1 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/946,955 US7542034B2 (en) | 2004-09-23 | 2004-09-23 | System and method for processing video images |
US12/467,626 US20090256903A1 (en) | 2004-09-23 | 2009-05-18 | System and method for processing video images |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/946,955 Continuation US7542034B2 (en) | 2004-09-22 | 2004-09-23 | System and method for processing video images |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/071,670 Continuation US20110169827A1 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090256903A1 true US20090256903A1 (en) | 2009-10-15 |
Family
ID=35427855
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/946,955 Active 2027-01-28 US7542034B2 (en) | 2004-09-22 | 2004-09-23 | System and method for processing video images |
US12/467,626 Abandoned US20090256903A1 (en) | 2004-09-23 | 2009-05-18 | System and method for processing video images |
US13/071,670 Abandoned US20110169827A1 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
US13/072,467 Expired - Lifetime US8217931B2 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
US13/544,876 Expired - Lifetime US8860712B2 (en) | 2004-09-23 | 2012-07-09 | System and method for processing video images |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/946,955 Active 2027-01-28 US7542034B2 (en) | 2004-09-22 | 2004-09-23 | System and method for processing video images |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/071,670 Abandoned US20110169827A1 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
US13/072,467 Expired - Lifetime US8217931B2 (en) | 2004-09-23 | 2011-03-25 | System and method for processing video images |
US13/544,876 Expired - Lifetime US8860712B2 (en) | 2004-09-23 | 2012-07-09 | System and method for processing video images |
Country Status (9)
Country | Link |
---|---|
US (5) | US7542034B2 (en) |
EP (1) | EP1800267B1 (en) |
JP (1) | JP2008513882A (en) |
KR (1) | KR20070073803A (en) |
CN (1) | CN101053000B (en) |
AU (1) | AU2005290064A1 (en) |
CA (1) | CA2581273C (en) |
NZ (1) | NZ554661A (en) |
WO (1) | WO2006036469A2 (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8385684B2 (en) | 2001-05-04 | 2013-02-26 | Legend3D, Inc. | System and method for minimal iteration workflow for image sequence depth enhancement |
US8396328B2 (en) | 2001-05-04 | 2013-03-12 | Legend3D, Inc. | Minimal artifact image sequence depth enhancement system and method |
US8730232B2 (en) | 2011-02-01 | 2014-05-20 | Legend3D, Inc. | Director-style based 2D to 3D movie conversion system and method |
US8897596B1 (en) | 2001-05-04 | 2014-11-25 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with translucent elements |
US9007365B2 (en) | 2012-11-27 | 2015-04-14 | Legend3D, Inc. | Line depth augmentation system and method for conversion of 2D images to 3D images |
US9007404B2 (en) | 2013-03-15 | 2015-04-14 | Legend3D, Inc. | Tilt-based look around effect image enhancement method |
US9031383B2 (en) | 2001-05-04 | 2015-05-12 | Legend3D, Inc. | Motion picture project management system |
US9113130B2 (en) | 2012-02-06 | 2015-08-18 | Legend3D, Inc. | Multi-stage production pipeline system |
US9241147B2 (en) | 2013-05-01 | 2016-01-19 | Legend3D, Inc. | External depth map transformation method for conversion of two-dimensional images to stereoscopic images |
US9282321B2 (en) | 2011-02-17 | 2016-03-08 | Legend3D, Inc. | 3D model multi-reviewer system |
US9286941B2 (en) | 2001-05-04 | 2016-03-15 | Legend3D, Inc. | Image sequence enhancement and motion picture project management system |
US9288476B2 (en) | 2011-02-17 | 2016-03-15 | Legend3D, Inc. | System and method for real-time depth modification of stereo images of a virtual reality environment |
US9407904B2 (en) | 2013-05-01 | 2016-08-02 | Legend3D, Inc. | Method for creating 3D virtual reality from 2D images |
US9438878B2 (en) | 2013-05-01 | 2016-09-06 | Legend3D, Inc. | Method of converting 2D video to 3D video using 3D object models |
US9547937B2 (en) | 2012-11-30 | 2017-01-17 | Legend3D, Inc. | Three-dimensional annotation system and method |
US9609307B1 (en) | 2015-09-17 | 2017-03-28 | Legend3D, Inc. | Method of converting 2D video to 3D video using machine learning |
WO2020072972A1 (en) * | 2018-10-05 | 2020-04-09 | Magic Leap, Inc. | A cross reality system |
US10957112B2 (en) | 2018-08-13 | 2021-03-23 | Magic Leap, Inc. | Cross reality system |
US11227435B2 (en) | 2018-08-13 | 2022-01-18 | Magic Leap, Inc. | Cross reality system |
US11232635B2 (en) | 2018-10-05 | 2022-01-25 | Magic Leap, Inc. | Rendering location specific virtual content in any location |
US11257294B2 (en) | 2019-10-15 | 2022-02-22 | Magic Leap, Inc. | Cross reality system supporting multiple device types |
US11386627B2 (en) | 2019-11-12 | 2022-07-12 | Magic Leap, Inc. | Cross reality system with localization service and shared location-based content |
US11410395B2 (en) | 2020-02-13 | 2022-08-09 | Magic Leap, Inc. | Cross reality system with accurate shared maps |
US11551430B2 (en) | 2020-02-26 | 2023-01-10 | Magic Leap, Inc. | Cross reality system with fast localization |
US11562542B2 (en) | 2019-12-09 | 2023-01-24 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
US11562525B2 (en) | 2020-02-13 | 2023-01-24 | Magic Leap, Inc. | Cross reality system with map processing using multi-resolution frame descriptors |
US11568605B2 (en) | 2019-10-15 | 2023-01-31 | Magic Leap, Inc. | Cross reality system with localization service |
US11632679B2 (en) | 2019-10-15 | 2023-04-18 | Magic Leap, Inc. | Cross reality system with wireless fingerprints |
US11830149B2 (en) | 2020-02-13 | 2023-11-28 | Magic Leap, Inc. | Cross reality system with prioritization of geolocation information for localization |
US11900547B2 (en) | 2020-04-29 | 2024-02-13 | Magic Leap, Inc. | Cross reality system for large scale environments |
US12100108B2 (en) | 2019-10-31 | 2024-09-24 | Magic Leap, Inc. | Cross reality system with quality information about persistent coordinate frames |
Families Citing this family (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102004037464A1 (en) * | 2004-07-30 | 2006-03-23 | Heraeus Kulzer Gmbh | Arrangement for imaging surface structures of three-dimensional objects |
US7542034B2 (en) | 2004-09-23 | 2009-06-02 | Conversion Works, Inc. | System and method for processing video images |
KR100603603B1 (en) * | 2004-12-07 | 2006-07-24 | 한국전자통신연구원 | Apparatus and method of two-pass dynamic programming with disparity candidates for stereo matching |
KR20060063265A (en) * | 2004-12-07 | 2006-06-12 | 삼성전자주식회사 | Method and apparatus for processing image |
JP4896230B2 (en) * | 2006-11-17 | 2012-03-14 | トムソン ライセンシング | System and method of object model fitting and registration for transforming from 2D to 3D |
EP2089852A1 (en) * | 2006-11-21 | 2009-08-19 | Thomson Licensing | Methods and systems for color correction of 3d images |
US8655052B2 (en) * | 2007-01-26 | 2014-02-18 | Intellectual Discovery Co., Ltd. | Methodology for 3D scene reconstruction from 2D image sequences |
US8274530B2 (en) | 2007-03-12 | 2012-09-25 | Conversion Works, Inc. | Systems and methods for filling occluded information for 2-D to 3-D conversion |
CN101657839B (en) * | 2007-03-23 | 2013-02-06 | 汤姆森许可贸易公司 | System and method for region classification of 2D images for 2D-to-3D conversion |
DE102007048857A1 (en) * | 2007-10-11 | 2009-04-16 | Robert Bosch Gmbh | Method for generating and / or updating textures of background object models, video surveillance system for carrying out the method and computer program |
US8384718B2 (en) * | 2008-01-10 | 2013-02-26 | Sony Corporation | System and method for navigating a 3D graphical user interface |
US20090186694A1 (en) * | 2008-01-17 | 2009-07-23 | Microsoft Corporation | Virtual world platform games constructed from digital imagery |
US8379058B2 (en) * | 2008-06-06 | 2013-02-19 | Apple Inc. | Methods and apparatuses to arbitrarily transform windows |
CN101726963B (en) * | 2008-10-28 | 2012-07-25 | 华硕电脑股份有限公司 | Method for identifying dimension form of shot subject |
US8326088B1 (en) * | 2009-05-26 | 2012-12-04 | The United States Of America As Represented By The Secretary Of The Air Force | Dynamic image registration |
US9424583B2 (en) * | 2009-10-15 | 2016-08-23 | Empire Technology Development Llc | Differential trials in augmented reality |
US9132352B1 (en) * | 2010-06-24 | 2015-09-15 | Gregory S. Rabin | Interactive system and method for rendering an object |
US9053562B1 (en) * | 2010-06-24 | 2015-06-09 | Gregory S. Rabin | Two dimensional to three dimensional moving image converter |
US9699438B2 (en) | 2010-07-02 | 2017-07-04 | Disney Enterprises, Inc. | 3D graphic insertion for live action stereoscopic video |
CN101951511B (en) * | 2010-08-19 | 2012-11-28 | 深圳市亮信科技有限公司 | Method for layering video scenes by analyzing depth |
US9971551B2 (en) * | 2010-11-01 | 2018-05-15 | Electronics For Imaging, Inc. | Previsualization for large format print jobs |
WO2012169174A1 (en) * | 2011-06-08 | 2012-12-13 | パナソニック株式会社 | Image processing device and image processing method |
KR101870764B1 (en) * | 2011-06-14 | 2018-06-25 | 삼성전자주식회사 | Display apparatus using image conversion mechanism and method of operation thereof |
US20130063556A1 (en) * | 2011-09-08 | 2013-03-14 | Prism Skylabs, Inc. | Extracting depth information from video from a single camera |
EP2786303A4 (en) * | 2011-12-01 | 2015-08-26 | Lightcraft Technology Llc | Automatic tracking matte system |
CN104145479B (en) * | 2012-02-07 | 2017-10-27 | 诺基亚技术有限公司 | Object is removed from image |
WO2013158784A1 (en) * | 2012-04-17 | 2013-10-24 | 3Dmedia Corporation | Systems and methods for improving overall quality of three-dimensional content by altering parallax budget or compensating for moving objects |
US9014543B1 (en) | 2012-10-23 | 2015-04-21 | Google Inc. | Methods and systems configured for processing video frames into animation |
US9071756B2 (en) | 2012-12-11 | 2015-06-30 | Facebook, Inc. | Systems and methods for digital video stabilization via constraint-based rotation smoothing |
US20140168204A1 (en) * | 2012-12-13 | 2014-06-19 | Microsoft Corporation | Model based video projection |
US9998684B2 (en) * | 2013-08-16 | 2018-06-12 | Indiana University Research And Technology Corporation | Method and apparatus for virtual 3D model generation and navigation using opportunistically captured images |
CN104573144A (en) * | 2013-10-14 | 2015-04-29 | 鸿富锦精密工业(深圳)有限公司 | System and method for simulating offline point cloud of measuring equipment |
US9344733B2 (en) | 2013-12-27 | 2016-05-17 | Samsung Electronics Co., Ltd. | Feature-based cloud computing architecture for physics engine |
CN104794756A (en) * | 2014-01-20 | 2015-07-22 | 鸿富锦精密工业(深圳)有限公司 | Mapping system and method of point clouds model |
CN104091366B (en) * | 2014-07-17 | 2017-02-15 | 北京毛豆科技有限公司 | Three-dimensional intelligent digitalization generation method and system based on two-dimensional shadow information |
WO2016081722A1 (en) * | 2014-11-20 | 2016-05-26 | Cappasity Inc. | Systems and methods for 3d capture of objects using multiple range cameras and multiple rgb cameras |
US20170323433A1 (en) * | 2014-11-27 | 2017-11-09 | Nokia Technologies Oy | Method, apparatus and computer program product for generating super-resolved images |
US9665989B1 (en) * | 2015-02-17 | 2017-05-30 | Google Inc. | Feature agnostic geometric alignment |
US9311632B1 (en) | 2015-03-03 | 2016-04-12 | Bank Of America Corporation | Proximity-based notification of a previously abandoned and pre-queued ATM transaction |
RU2586566C1 (en) * | 2015-03-25 | 2016-06-10 | Общество с ограниченной ответственностью "Лаборатория 24" | Method of displaying object |
CN104881260B (en) * | 2015-06-03 | 2017-11-24 | 武汉映未三维科技有限公司 | A kind of projection print implementation method and its realization device |
WO2017031718A1 (en) * | 2015-08-26 | 2017-03-02 | 中国科学院深圳先进技术研究院 | Modeling method of deformation motions of elastic object |
CN106993152B (en) * | 2016-01-21 | 2019-11-08 | 杭州海康威视数字技术股份有限公司 | Three-dimension monitoring system and its quick deployment method |
US10074205B2 (en) * | 2016-08-30 | 2018-09-11 | Intel Corporation | Machine creation of program with frame analysis method and apparatus |
US10839203B1 (en) | 2016-12-27 | 2020-11-17 | Amazon Technologies, Inc. | Recognizing and tracking poses using digital imagery captured from multiple fields of view |
US10699421B1 (en) | 2017-03-29 | 2020-06-30 | Amazon Technologies, Inc. | Tracking objects in three-dimensional space using calibrated visual cameras and depth cameras |
US10848741B2 (en) * | 2017-06-12 | 2020-11-24 | Adobe Inc. | Re-cinematography for spherical video |
US11284041B1 (en) * | 2017-12-13 | 2022-03-22 | Amazon Technologies, Inc. | Associating items with actors based on digital imagery |
US11482045B1 (en) | 2018-06-28 | 2022-10-25 | Amazon Technologies, Inc. | Associating events with actors using digital imagery and machine learning |
US11468681B1 (en) | 2018-06-28 | 2022-10-11 | Amazon Technologies, Inc. | Associating events with actors using digital imagery and machine learning |
US11468698B1 (en) | 2018-06-28 | 2022-10-11 | Amazon Technologies, Inc. | Associating events with actors using digital imagery and machine learning |
US10984587B2 (en) * | 2018-07-13 | 2021-04-20 | Nvidia Corporation | Virtual photogrammetry |
WO2020023582A1 (en) * | 2018-07-24 | 2020-01-30 | Magic Leap, Inc. | Methods and apparatuses for determining and/or evaluating localizing maps of image display devices |
KR102526700B1 (en) | 2018-12-12 | 2023-04-28 | 삼성전자주식회사 | Electronic device and method for displaying three dimensions image |
US10991160B1 (en) * | 2019-06-25 | 2021-04-27 | A9.Com, Inc. | Depth hull for rendering three-dimensional models |
US11138789B1 (en) * | 2019-06-25 | 2021-10-05 | A9.Com, Inc. | Enhanced point cloud for three-dimensional models |
US11423630B1 (en) | 2019-06-27 | 2022-08-23 | Amazon Technologies, Inc. | Three-dimensional body composition from two-dimensional images |
US11903730B1 (en) | 2019-09-25 | 2024-02-20 | Amazon Technologies, Inc. | Body fat measurements from a two-dimensional image |
US11443516B1 (en) | 2020-04-06 | 2022-09-13 | Amazon Technologies, Inc. | Locally and globally locating actors by digital cameras and machine learning |
US11398094B1 (en) | 2020-04-06 | 2022-07-26 | Amazon Technologies, Inc. | Locally and globally locating actors by digital cameras and machine learning |
CN112135091A (en) * | 2020-08-27 | 2020-12-25 | 杭州张量科技有限公司 | Monitoring scene marking method and device, computer equipment and storage medium |
US11854146B1 (en) | 2021-06-25 | 2023-12-26 | Amazon Technologies, Inc. | Three-dimensional body composition from two-dimensional images of a portion of a body |
US11887252B1 (en) | 2021-08-25 | 2024-01-30 | Amazon Technologies, Inc. | Body model composition update from two-dimensional face images |
US11861860B2 (en) | 2021-09-29 | 2024-01-02 | Amazon Technologies, Inc. | Body dimensions from two-dimensional body images |
US12131539B1 (en) | 2022-06-29 | 2024-10-29 | Amazon Technologies, Inc. | Detecting interactions from features determined from sequences of images captured using one or more cameras |
Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4689616A (en) * | 1984-08-10 | 1987-08-25 | U.S. Philips Corporation | Method of producing and modifying a synthetic picture |
US4925294A (en) * | 1986-12-17 | 1990-05-15 | Geshwind David M | Method to convert two dimensional motion pictures for three-dimensional systems |
US5914941A (en) * | 1995-05-25 | 1999-06-22 | Information Highway Media Corporation | Portable information storage/playback apparatus having a data interface |
US5977978A (en) * | 1996-11-13 | 1999-11-02 | Platinum Technology Ip, Inc. | Interactive authoring of 3D scenes and movies |
US6016150A (en) * | 1995-08-04 | 2000-01-18 | Microsoft Corporation | Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers |
US6151404A (en) * | 1995-06-01 | 2000-11-21 | Medical Media Systems | Anatomical visualization system |
US6226004B1 (en) * | 1997-09-12 | 2001-05-01 | Autodesk, Inc. | Modeling system using surface patterns and geometric relationships |
US6278460B1 (en) * | 1998-12-15 | 2001-08-21 | Point Cloud, Inc. | Creating a three-dimensional model from two-dimensional images |
US20010040570A1 (en) * | 1997-12-24 | 2001-11-15 | John J. Light | Method and apparatus for automated dynamics of three-dimensional graphics scenes for enhanced 3d visualization |
US6342887B1 (en) * | 1998-11-18 | 2002-01-29 | Earl Robert Munroe | Method and apparatus for reproducing lighting effects in computer animated objects |
US20020030675A1 (en) * | 2000-09-12 | 2002-03-14 | Tomoaki Kawai | Image display control apparatus |
US6434278B1 (en) * | 1997-09-23 | 2002-08-13 | Enroute, Inc. | Generating three-dimensional models of objects defined by two-dimensional image data |
US20020122113A1 (en) * | 1999-08-09 | 2002-09-05 | Foote Jonathan T. | Method and system for compensating for parallax in multiple camera systems |
US20020122585A1 (en) * | 2000-06-12 | 2002-09-05 | Swift David C. | Electronic stereoscopic media delivery system |
US6456745B1 (en) * | 1998-09-16 | 2002-09-24 | Push Entertainment Inc. | Method and apparatus for re-sizing and zooming images by operating directly on their digital transforms |
US6477267B1 (en) * | 1995-12-22 | 2002-11-05 | Dynamic Digital Depth Research Pty Ltd. | Image conversion and encoding techniques |
US6486205B2 (en) * | 1997-04-02 | 2002-11-26 | Laboratorios Dalmer Sa | Mixture of primary fatty acids obtained from sugar cane wax |
US20020186348A1 (en) * | 2001-05-14 | 2002-12-12 | Eastman Kodak Company | Adaptive autostereoscopic display system |
US6549200B1 (en) * | 1997-06-17 | 2003-04-15 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
US20030090482A1 (en) * | 2001-09-25 | 2003-05-15 | Rousso Armand M. | 2D to 3D stereo plug-ins |
US20030164893A1 (en) * | 1997-11-13 | 2003-09-04 | Christopher A. Mayhew | Real time camera and lens control system for image depth of field manipulation |
US6714196B2 (en) * | 2000-08-18 | 2004-03-30 | Hewlett-Packard Development Company L.P | Method and apparatus for tiled polygon traversal |
US20040247174A1 (en) * | 2000-01-20 | 2004-12-09 | Canon Kabushiki Kaisha | Image processing apparatus |
US20050094879A1 (en) * | 2003-10-31 | 2005-05-05 | Michael Harville | Method for visual-based recognition of an object |
US7181081B2 (en) * | 2001-05-04 | 2007-02-20 | Legend Films Inc. | Image sequence enhancement system and method |
US7289662B2 (en) * | 2002-12-07 | 2007-10-30 | Hrl Laboratories, Llc | Method and apparatus for generating three-dimensional models from uncalibrated views |
Family Cites Families (145)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7907793B1 (en) | 2001-05-04 | 2011-03-15 | Legend Films Inc. | Image sequence depth enhancement system and method |
ATE224557T1 (en) | 1990-11-30 | 2002-10-15 | Sun Microsystems Inc | IMPROVED METHOD AND APPARATUS FOR GENERATING VIRTUAL WORLDS |
US5323007A (en) | 1992-02-07 | 1994-06-21 | Univ. Of Chicago Development Corp. Argonne National Laboratories | Method of recovering tomographic signal elements in a projection profile or image by solving linear equations |
US5614941A (en) * | 1993-11-24 | 1997-03-25 | Hines; Stephen P. | Multi-image autostereoscopic imaging system |
JPH07230556A (en) | 1994-02-17 | 1995-08-29 | Hazama Gumi Ltd | Method for generating cg stereoscopic animation |
US5805117A (en) | 1994-05-12 | 1998-09-08 | Samsung Electronics Co., Ltd. | Large area tiled modular display system |
US5621815A (en) | 1994-09-23 | 1997-04-15 | The Research Foundation Of State University Of New York | Global threshold method and apparatus |
US5729471A (en) * | 1995-03-31 | 1998-03-17 | The Regents Of The University Of California | Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene |
US5742291A (en) | 1995-05-09 | 1998-04-21 | Synthonics Incorporated | Method and apparatus for creation of three-dimensional wire frames |
US6049628A (en) | 1995-09-01 | 2000-04-11 | Cerulean Colorization Llc | Polygon reshaping in picture colorization |
JPH0991436A (en) | 1995-09-21 | 1997-04-04 | Toyota Central Res & Dev Lab Inc | Image processing method and device therefor |
US5748199A (en) | 1995-12-20 | 1998-05-05 | Synthonics Incorporated | Method and apparatus for converting a two dimensional motion picture into a three dimensional motion picture |
JPH09237346A (en) | 1995-12-26 | 1997-09-09 | Ainesu:Kk | Method for composing partial stereoscopic model and method for preparing perfect stereoscopic model |
JPH09186957A (en) | 1995-12-28 | 1997-07-15 | Canon Inc | Image recording and reproducing device |
JPH09289655A (en) | 1996-04-22 | 1997-11-04 | Fujitsu Ltd | Stereoscopic image display method, multi-view image input method, multi-view image processing method, stereoscopic image display device, multi-view image input device and multi-view image processor |
KR100468234B1 (en) | 1996-05-08 | 2005-06-22 | 가부시키가이샤 니콘 | Exposure method, exposure apparatus and disc |
JP3679512B2 (en) | 1996-07-05 | 2005-08-03 | キヤノン株式会社 | Image extraction apparatus and method |
US6009189A (en) | 1996-08-16 | 1999-12-28 | Schaack; David F. | Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects |
US6310733B1 (en) | 1996-08-16 | 2001-10-30 | Eugene Dolgoff | Optical elements and methods for their manufacture |
JPH10111934A (en) * | 1996-10-03 | 1998-04-28 | Oojisu Soken:Kk | Method and medium for three-dimensional shape model generation |
JP3535339B2 (en) * | 1997-03-05 | 2004-06-07 | 沖電気工業株式会社 | Interpolated image generation device and contour data generation method |
US6208360B1 (en) | 1997-03-10 | 2001-03-27 | Kabushiki Kaisha Toshiba | Method and apparatus for graffiti animation |
JPH10293852A (en) | 1997-04-21 | 1998-11-04 | Fuji Photo Film Co Ltd | Outline extracting method |
US6208347B1 (en) | 1997-06-23 | 2001-03-27 | Real-Time Geometry Corporation | System and method for computer modeling of 3D objects and 2D images by mesh constructions that incorporate non-spatial data such as color or texture |
US6031564A (en) | 1997-07-07 | 2000-02-29 | Reveo, Inc. | Method and apparatus for monoscopic to stereoscopic image conversion |
US5990900A (en) | 1997-12-24 | 1999-11-23 | Be There Now, Inc. | Two-dimensional to three-dimensional image converting system |
EP0930585B1 (en) * | 1998-01-14 | 2004-03-31 | Canon Kabushiki Kaisha | Image processing apparatus |
US6134346A (en) | 1998-01-16 | 2000-10-17 | Ultimatte Corp | Method for removing from an image the background surrounding a selected object |
GB9807097D0 (en) | 1998-04-02 | 1998-06-03 | Discreet Logic Inc | Processing image data |
US6333749B1 (en) | 1998-04-17 | 2001-12-25 | Adobe Systems, Inc. | Method and apparatus for image assisted modeling of three-dimensional scenes |
US6504569B1 (en) | 1998-04-22 | 2003-01-07 | Grass Valley (U.S.), Inc. | 2-D extended image generation from 3-D data extracted from a video sequence |
KR100304784B1 (en) | 1998-05-25 | 2001-09-24 | 박호군 | Multi-user 3d image display system using polarization and light strip |
US7116323B2 (en) | 1998-05-27 | 2006-10-03 | In-Three, Inc. | Method of hidden surface reconstruction for creating accurate three-dimensional images converted from two-dimensional images |
US20050231505A1 (en) | 1998-05-27 | 2005-10-20 | Kaye Michael C | Method for creating artifact free three-dimensional images converted from two-dimensional images |
JP3420504B2 (en) | 1998-06-30 | 2003-06-23 | キヤノン株式会社 | Information processing method |
US6134345A (en) | 1998-08-28 | 2000-10-17 | Ultimatte Corporation | Comprehensive method for removing from an image the background surrounding a selected subject |
US6434265B1 (en) | 1998-09-25 | 2002-08-13 | Apple Computers, Inc. | Aligning rectilinear images in 3D through projective registration and calibration |
US6466205B2 (en) * | 1998-11-19 | 2002-10-15 | Push Entertainment, Inc. | System and method for creating 3D models from 2D sequential image data |
JP2000194863A (en) | 1998-12-28 | 2000-07-14 | Nippon Telegr & Teleph Corp <Ntt> | Three-dimensional structure acquisition/restoration method and device and storage medium recording three- dimensional structure acquisition/restoration program |
JP4203779B2 (en) | 1999-03-15 | 2009-01-07 | ソニー株式会社 | Display element three-dimensional apparatus and method |
JP3476710B2 (en) | 1999-06-10 | 2003-12-10 | 株式会社国際電気通信基礎技術研究所 | Euclidean 3D information restoration method and 3D information restoration apparatus |
US6359630B1 (en) | 1999-06-14 | 2002-03-19 | Sun Microsystems, Inc. | Graphics system using clip bits to decide acceptance, rejection, clipping |
US6128132A (en) | 1999-07-13 | 2000-10-03 | Disney Enterprises, Inc. | Method and apparatus for generating an autostereo image |
US6870545B1 (en) | 1999-07-26 | 2005-03-22 | Microsoft Corporation | Mixed but indistinguishable raster and vector image data types |
GB2365243B (en) | 2000-07-27 | 2004-03-24 | Canon Kk | Image processing apparatus |
US6678406B1 (en) | 2000-01-26 | 2004-01-13 | Lucent Technologies Inc. | Method of color quantization in color images |
US6674925B1 (en) | 2000-02-08 | 2004-01-06 | University Of Washington | Morphological postprocessing for object tracking and segmentation |
US7065242B2 (en) | 2000-03-28 | 2006-06-20 | Viewpoint Corporation | System and method of three-dimensional image capture and modeling |
US6580821B1 (en) | 2000-03-30 | 2003-06-17 | Nec Corporation | Method for computing the location and orientation of an object in three dimensional space |
JP3575679B2 (en) | 2000-03-31 | 2004-10-13 | 日本電気株式会社 | Face matching method, recording medium storing the matching method, and face matching device |
US7471821B2 (en) * | 2000-04-28 | 2008-12-30 | Orametrix, Inc. | Method and apparatus for registering a known digital object to scanned 3-D model |
US6956576B1 (en) | 2000-05-16 | 2005-10-18 | Sun Microsystems, Inc. | Graphics system using sample masks for motion blur, depth of field, and transparency |
JP2002092657A (en) | 2000-09-12 | 2002-03-29 | Canon Inc | Stereoscopic display controller, its method, and storage medium |
US6924822B2 (en) | 2000-12-21 | 2005-08-02 | Xerox Corporation | Magnification methods, systems, and computer program products for virtual three-dimensional books |
US6677957B2 (en) | 2001-01-09 | 2004-01-13 | Intel Corporation | Hardware-accelerated visualization of surface light fields |
US20020164067A1 (en) | 2001-05-02 | 2002-11-07 | Synapix | Nearest neighbor edge selection from feature tracking |
US8401336B2 (en) | 2001-05-04 | 2013-03-19 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with augmented computer-generated elements |
JP2002350775A (en) | 2001-05-30 | 2002-12-04 | Fuji Photo Optical Co Ltd | Projector |
US7233337B2 (en) | 2001-06-21 | 2007-06-19 | Microsoft Corporation | Method and apparatus for modeling and real-time rendering of surface detail |
US6989840B1 (en) | 2001-08-31 | 2006-01-24 | Nvidia Corporation | Order-independent transparency rendering system and method |
US6816629B2 (en) | 2001-09-07 | 2004-11-09 | Realty Mapping Llc | Method and system for 3-D content creation |
US6809745B1 (en) | 2001-10-01 | 2004-10-26 | Adobe Systems Incorporated | Compositing two-dimensional and 3-dimensional images |
US6724386B2 (en) * | 2001-10-23 | 2004-04-20 | Sony Corporation | System and process for geometry replacement |
GB0126526D0 (en) | 2001-11-05 | 2002-01-02 | Canon Europa Nv | Three-dimensional computer modelling |
US20030210329A1 (en) | 2001-11-08 | 2003-11-13 | Aagaard Kenneth Joseph | Video system and methods for operating a video system |
US7756305B2 (en) | 2002-01-23 | 2010-07-13 | The Regents Of The University Of California | Fast 3D cytometry for information in tissue engineering |
US7412022B2 (en) | 2002-02-28 | 2008-08-12 | Jupiter Clyde P | Non-invasive stationary system for three-dimensional imaging of density fields using periodic flux modulation of compton-scattered gammas |
US20030202120A1 (en) * | 2002-04-05 | 2003-10-30 | Mack Newton Eliot | Virtual lighting system |
US7117135B2 (en) | 2002-05-14 | 2006-10-03 | Cae Inc. | System for providing a high-fidelity visual display coordinated with a full-scope simulation of a complex system and method of using same for training and practice |
US6978167B2 (en) * | 2002-07-01 | 2005-12-20 | Claron Technology Inc. | Video pose tracking system and method |
US7051040B2 (en) | 2002-07-23 | 2006-05-23 | Lightsurf Technologies, Inc. | Imaging system providing dynamic viewport layering |
EP1551189A4 (en) | 2002-09-27 | 2009-01-07 | Sharp Kk | 3-d image display unit, 3-d image recording device and 3-d image recording method |
US7113185B2 (en) | 2002-11-14 | 2006-09-26 | Microsoft Corporation | System and method for automatically learning flexible sprites in video layers |
US7065232B2 (en) | 2003-01-31 | 2006-06-20 | Genex Technologies, Inc. | Three-dimensional ear biometrics system and method |
WO2005115017A1 (en) | 2003-02-14 | 2005-12-01 | Lee Charles C | 3d camera system and method |
EP1599829A1 (en) | 2003-03-06 | 2005-11-30 | Animetrics, Inc. | Viewpoint-invariant detection and identification of a three-dimensional object from two-dimensional imagery |
JP4677175B2 (en) | 2003-03-24 | 2011-04-27 | シャープ株式会社 | Image processing apparatus, image pickup system, image display system, image pickup display system, image processing program, and computer-readable recording medium recording image processing program |
US7636088B2 (en) | 2003-04-17 | 2009-12-22 | Sharp Kabushiki Kaisha | 3-Dimensional image creation device, 3-dimensional image reproduction device, 3-dimensional image processing device, 3-dimensional image processing program, and recording medium containing the program |
US7362918B2 (en) | 2003-06-24 | 2008-04-22 | Microsoft Corporation | System and method for de-noising multiple copies of a signal |
GB2405775B (en) | 2003-09-05 | 2008-04-02 | Canon Europa Nv | 3D computer surface model generation |
GB2406252B (en) | 2003-09-18 | 2008-04-02 | Canon Europa Nv | Generation of texture maps for use in 3d computer graphics |
US7643025B2 (en) | 2003-09-30 | 2010-01-05 | Eric Belk Lange | Method and apparatus for applying stereoscopic imagery to three-dimensionally defined substrates |
US20050140670A1 (en) | 2003-11-20 | 2005-06-30 | Hong Wu | Photogrammetric reconstruction of free-form objects with curvilinear structures |
US7053904B1 (en) | 2003-12-15 | 2006-05-30 | Nvidia Corporation | Position conflict detection and avoidance in a programmable graphics processor |
US7755608B2 (en) | 2004-01-23 | 2010-07-13 | Hewlett-Packard Development Company, L.P. | Systems and methods of interfacing with a machine |
WO2005084405A2 (en) | 2004-03-03 | 2005-09-15 | Virtual Iris Studios, Inc. | System for delivering and enabling interactivity with images |
US7643966B2 (en) | 2004-03-10 | 2010-01-05 | Leica Geosystems Ag | Identification of 3D surface points using context-based hypothesis testing |
US8042056B2 (en) * | 2004-03-16 | 2011-10-18 | Leica Geosystems Ag | Browsers for large geometric data visualization |
JP4423076B2 (en) | 2004-03-22 | 2010-03-03 | キヤノン株式会社 | Recognition object cutting apparatus and method |
GB0410551D0 (en) | 2004-05-12 | 2004-06-16 | Ller Christian M | 3d autostereoscopic display |
US7015926B2 (en) | 2004-06-28 | 2006-03-21 | Microsoft Corporation | System and process for generating a two-layer, 3D representation of a scene |
EP1766580A2 (en) | 2004-07-14 | 2007-03-28 | Braintech Canada, Inc. | Method and apparatus for machine-vision |
US20060023197A1 (en) | 2004-07-27 | 2006-02-02 | Joel Andrew H | Method and system for automated production of autostereoscopic and animated prints and transparencies from digital and non-digital media |
JP4610262B2 (en) | 2004-08-30 | 2011-01-12 | 富士フイルム株式会社 | Projection-type image display device |
US8194093B2 (en) * | 2004-09-15 | 2012-06-05 | Onlive, Inc. | Apparatus and method for capturing the expression of a performer |
CA2579903C (en) | 2004-09-17 | 2012-03-13 | Cyberextruder.Com, Inc. | System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images |
US7542034B2 (en) | 2004-09-23 | 2009-06-02 | Conversion Works, Inc. | System and method for processing video images |
US20080259073A1 (en) | 2004-09-23 | 2008-10-23 | Conversion Works, Inc. | System and method for processing video images |
US20080246836A1 (en) | 2004-09-23 | 2008-10-09 | Conversion Works, Inc. | System and method for processing video images for camera recreation |
CA2582094C (en) | 2004-09-29 | 2014-12-23 | Warner Bros. Entertainment Inc. | Correction of blotches in component images |
KR100603601B1 (en) | 2004-11-08 | 2006-07-24 | 한국전자통신연구원 | Apparatus and Method for Production Multi-view Contents |
US8396329B2 (en) | 2004-12-23 | 2013-03-12 | General Electric Company | System and method for object measurement |
DE102005001325B4 (en) | 2005-01-11 | 2009-04-09 | Siemens Ag | Method for aligning a graphic object on an overview image of an object |
JP4646797B2 (en) | 2005-02-01 | 2011-03-09 | キヤノン株式会社 | Image processing apparatus, control method therefor, and program |
US7599555B2 (en) | 2005-03-29 | 2009-10-06 | Mitsubishi Electric Research Laboratories, Inc. | System and method for image matting |
US7706603B2 (en) | 2005-04-19 | 2010-04-27 | Siemens Corporation | Fast object detection for augmented reality systems |
US7636128B2 (en) | 2005-07-15 | 2009-12-22 | Microsoft Corporation | Poisson matting for images |
US7720282B2 (en) | 2005-08-02 | 2010-05-18 | Microsoft Corporation | Stereo image segmentation |
US8111904B2 (en) | 2005-10-07 | 2012-02-07 | Cognex Technology And Investment Corp. | Methods and apparatus for practical 3D vision system |
US7477777B2 (en) | 2005-10-28 | 2009-01-13 | Aepx Animation, Inc. | Automatic compositing of 3D objects in a still frame or series of frames |
US7737973B2 (en) | 2005-10-31 | 2010-06-15 | Leica Geosystems Ag | Determining appearance of points in point cloud based on normal vectors of points |
US7518619B2 (en) | 2005-11-07 | 2009-04-14 | General Electric Company | Method and apparatus for integrating three-dimensional and two-dimensional monitors with medical diagnostic imaging workstations |
US20070153122A1 (en) | 2005-12-30 | 2007-07-05 | Ayite Nii A | Apparatus and method for simultaneous multiple video channel viewing |
JP5063071B2 (en) | 2006-02-14 | 2012-10-31 | 株式会社ニューフレアテクノロジー | Pattern creating method and charged particle beam drawing apparatus |
KR101195942B1 (en) | 2006-03-20 | 2012-10-29 | 삼성전자주식회사 | Camera calibration method and 3D object reconstruction method using the same |
WO2007130122A2 (en) | 2006-05-05 | 2007-11-15 | Thomson Licensing | System and method for three-dimensional object reconstruction from two-dimensional images |
US8471866B2 (en) * | 2006-05-05 | 2013-06-25 | General Electric Company | User interface and method for identifying related information displayed in an ultrasound system |
JP4407670B2 (en) | 2006-05-26 | 2010-02-03 | セイコーエプソン株式会社 | Electro-optical device and electronic apparatus |
WO2007142643A1 (en) | 2006-06-08 | 2007-12-13 | Thomson Licensing | Two pass approach to three dimensional reconstruction |
US7836086B2 (en) | 2006-06-09 | 2010-11-16 | Pixar | Layering and referencing of scene description |
WO2007142649A1 (en) | 2006-06-09 | 2007-12-13 | Thomson Licensing | Method and system for color correction using three-dimensional information |
EP1868157A1 (en) | 2006-06-14 | 2007-12-19 | BrainLAB AG | Shape reconstruction using X-ray images |
CA2884702C (en) | 2006-06-23 | 2018-06-05 | Samuel Zhou | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
US20080056719A1 (en) | 2006-09-01 | 2008-03-06 | Bernard Marc R | Method and apparatus for enabling an optical network terminal in a passive optical network |
US7742060B2 (en) | 2006-09-22 | 2010-06-22 | Autodesk, Inc. | Sampling methods suited for graphics hardware acceleration |
US7715606B2 (en) | 2006-10-18 | 2010-05-11 | Varian Medical Systems, Inc. | Marker system and method of using the same |
JP5108893B2 (en) | 2006-10-27 | 2012-12-26 | トムソン ライセンシング | System and method for restoring a 3D particle system from a 2D image |
US7767967B2 (en) * | 2006-11-01 | 2010-08-03 | Sony Corporation | Capturing motion using quantum nanodot sensors |
US7656402B2 (en) | 2006-11-15 | 2010-02-02 | Tahg, Llc | Method for creating, manufacturing, and distributing three-dimensional models |
JP4896230B2 (en) | 2006-11-17 | 2012-03-14 | トムソン ライセンシング | System and method of object model fitting and registration for transforming from 2D to 3D |
CN101542536A (en) | 2006-11-20 | 2009-09-23 | 汤姆森特许公司 | System and method for compositing 3D images |
EP2089852A1 (en) | 2006-11-21 | 2009-08-19 | Thomson Licensing | Methods and systems for color correction of 3d images |
US7769205B2 (en) | 2006-11-28 | 2010-08-03 | Prefixa International Inc. | Fast three dimensional recovery method and apparatus |
US8655052B2 (en) | 2007-01-26 | 2014-02-18 | Intellectual Discovery Co., Ltd. | Methodology for 3D scene reconstruction from 2D image sequences |
US20080226128A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | System and method for using feature tracking techniques for the generation of masks in the conversion of two-dimensional images to three-dimensional images |
US20080225042A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for allowing a user to dynamically manipulate stereoscopic parameters |
US20080228449A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for 2-d to 3-d conversion using depth access segments to define an object |
US20080226160A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for filling light in frames during 2-d to 3-d image conversion |
US20080225040A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | System and method of treating semi-transparent features in the conversion of two-dimensional images to three-dimensional images |
US20080226194A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for treating occlusions in 2-d to 3-d image conversion |
US8274530B2 (en) | 2007-03-12 | 2012-09-25 | Conversion Works, Inc. | Systems and methods for filling occluded information for 2-D to 3-D conversion |
US20080226181A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for depth peeling using stereoscopic variables during the rendering of 2-d to 3-d images |
US20080225045A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | Systems and methods for 2-d to 3-d image conversion using mask to model, or model to mask, conversion |
US20080225059A1 (en) | 2007-03-12 | 2008-09-18 | Conversion Works, Inc. | System and method for using off-screen mask space to provide enhanced viewing |
US7773087B2 (en) * | 2007-04-19 | 2010-08-10 | International Business Machines Corporation | Dynamically configuring and selecting multiple ray tracing intersection methods |
2004
- 2004-09-23 US US10/946,955 patent/US7542034B2/en active Active
2005
- 2005-09-07 EP EP05794967.9A patent/EP1800267B1/en not_active Not-in-force
- 2005-09-07 WO PCT/US2005/031664 patent/WO2006036469A2/en active Application Filing
- 2005-09-07 JP JP2007532366A patent/JP2008513882A/en active Pending
- 2005-09-07 KR KR1020077009079A patent/KR20070073803A/en not_active Application Discontinuation
- 2005-09-07 NZ NZ554661A patent/NZ554661A/en not_active IP Right Cessation
- 2005-09-07 CA CA2581273A patent/CA2581273C/en not_active Expired - Fee Related
- 2005-09-07 CN CN2005800377632A patent/CN101053000B/en not_active Expired - Fee Related
- 2005-09-07 AU AU2005290064A patent/AU2005290064A1/en not_active Abandoned
2009
- 2009-05-18 US US12/467,626 patent/US20090256903A1/en not_active Abandoned
2011
- 2011-03-25 US US13/071,670 patent/US20110169827A1/en not_active Abandoned
- 2011-03-25 US US13/072,467 patent/US8217931B2/en not_active Expired - Lifetime
2012
- 2012-07-09 US US13/544,876 patent/US8860712B2/en not_active Expired - Lifetime
Patent Citations (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4689616A (en) * | 1984-08-10 | 1987-08-25 | U.S. Philips Corporation | Method of producing and modifying a synthetic picture |
US4925294A (en) * | 1986-12-17 | 1990-05-15 | Geshwind David M | Method to convert two dimensional motion pictures for three-dimensional systems |
US5914941A (en) * | 1995-05-25 | 1999-06-22 | Information Highway Media Corporation | Portable information storage/playback apparatus having a data interface |
US6151404A (en) * | 1995-06-01 | 2000-11-21 | Medical Media Systems | Anatomical visualization system |
US6016150A (en) * | 1995-08-04 | 2000-01-18 | Microsoft Corporation | Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers |
US6477267B1 (en) * | 1995-12-22 | 2002-11-05 | Dynamic Digital Depth Research Pty Ltd. | Image conversion and encoding techniques |
US5977978A (en) * | 1996-11-13 | 1999-11-02 | Platinum Technology Ip, Inc. | Interactive authoring of 3D scenes and movies |
US6486205B2 (en) * | 1997-04-02 | 2002-11-26 | Laboratorios Dalmer Sa | Mixture of primary fatty acids obtained from sugar cane wax |
US6549200B1 (en) * | 1997-06-17 | 2003-04-15 | British Telecommunications Public Limited Company | Generating an image of a three-dimensional object |
US6226004B1 (en) * | 1997-09-12 | 2001-05-01 | Autodesk, Inc. | Modeling system using surface patterns and geometric relationships |
US6434278B1 (en) * | 1997-09-23 | 2002-08-13 | Enroute, Inc. | Generating three-dimensional models of objects defined by two-dimensional image data |
US20030164893A1 (en) * | 1997-11-13 | 2003-09-04 | Christopher A. Mayhew | Real time camera and lens control system for image depth of field manipulation |
US20010040570A1 (en) * | 1997-12-24 | 2001-11-15 | John J. Light | Method and apparatus for automated dynamics of three-dimensional graphics scenes for enhanced 3d visualization |
US6456745B1 (en) * | 1998-09-16 | 2002-09-24 | Push Entertainment Inc. | Method and apparatus for re-sizing and zooming images by operating directly on their digital transforms |
US6342887B1 (en) * | 1998-11-18 | 2002-01-29 | Earl Robert Munroe | Method and apparatus for reproducing lighting effects in computer animated objects |
US6278460B1 (en) * | 1998-12-15 | 2001-08-21 | Point Cloud, Inc. | Creating a three-dimensional model from two-dimensional images |
US20020122113A1 (en) * | 1999-08-09 | 2002-09-05 | Foote Jonathan T. | Method and system for compensating for parallax in multiple camera systems |
US20040247174A1 (en) * | 2000-01-20 | 2004-12-09 | Canon Kabushiki Kaisha | Image processing apparatus |
US20020122585A1 (en) * | 2000-06-12 | 2002-09-05 | Swift David C. | Electronic stereoscopic media delivery system |
US6714196B2 (en) * | 2000-08-18 | 2004-03-30 | Hewlett-Packard Development Company L.P | Method and apparatus for tiled polygon traversal |
US20020030675A1 (en) * | 2000-09-12 | 2002-03-14 | Tomoaki Kawai | Image display control apparatus |
US7181081B2 (en) * | 2001-05-04 | 2007-02-20 | Legend Films Inc. | Image sequence enhancement system and method |
US20020186348A1 (en) * | 2001-05-14 | 2002-12-12 | Eastman Kodak Company | Adaptive autostereoscopic display system |
US20030090482A1 (en) * | 2001-09-25 | 2003-05-15 | Rousso Armand M. | 2D to 3D stereo plug-ins |
US7289662B2 (en) * | 2002-12-07 | 2007-10-30 | Hrl Laboratories, Llc | Method and apparatus for generating three-dimensional models from uncalibrated views |
US20050094879A1 (en) * | 2003-10-31 | 2005-05-05 | Michael Harville | Method for visual-based recognition of an object |
Cited By (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9031383B2 (en) | 2001-05-04 | 2015-05-12 | Legend3D, Inc. | Motion picture project management system |
US8396328B2 (en) | 2001-05-04 | 2013-03-12 | Legend3D, Inc. | Minimal artifact image sequence depth enhancement system and method |
US8401336B2 (en) | 2001-05-04 | 2013-03-19 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with augmented computer-generated elements |
US9286941B2 (en) | 2001-05-04 | 2016-03-15 | Legend3D, Inc. | Image sequence enhancement and motion picture project management system |
US8897596B1 (en) | 2001-05-04 | 2014-11-25 | Legend3D, Inc. | System and method for rapid image sequence depth enhancement with translucent elements |
US8953905B2 (en) | 2001-05-04 | 2015-02-10 | Legend3D, Inc. | Rapid workflow system and method for image sequence depth enhancement |
US8385684B2 (en) | 2001-05-04 | 2013-02-26 | Legend3D, Inc. | System and method for minimal iteration workflow for image sequence depth enhancement |
US9615082B2 (en) | 2001-05-04 | 2017-04-04 | Legend3D, Inc. | Image sequence enhancement and motion picture project management system and method |
US8730232B2 (en) | 2011-02-01 | 2014-05-20 | Legend3D, Inc. | Director-style based 2D to 3D movie conversion system and method |
US9282321B2 (en) | 2011-02-17 | 2016-03-08 | Legend3D, Inc. | 3D model multi-reviewer system |
US9288476B2 (en) | 2011-02-17 | 2016-03-15 | Legend3D, Inc. | System and method for real-time depth modification of stereo images of a virtual reality environment |
US9113130B2 (en) | 2012-02-06 | 2015-08-18 | Legend3D, Inc. | Multi-stage production pipeline system |
US9595296B2 (en) | 2012-02-06 | 2017-03-14 | Legend3D, Inc. | Multi-stage production pipeline system |
US9270965B2 (en) | 2012-02-06 | 2016-02-23 | Legend 3D, Inc. | Multi-stage production pipeline system |
US9443555B2 (en) | 2012-02-06 | 2016-09-13 | Legend3D, Inc. | Multi-stage production pipeline system |
US9007365B2 (en) | 2012-11-27 | 2015-04-14 | Legend3D, Inc. | Line depth augmentation system and method for conversion of 2D images to 3D images |
US9547937B2 (en) | 2012-11-30 | 2017-01-17 | Legend3D, Inc. | Three-dimensional annotation system and method |
US9007404B2 (en) | 2013-03-15 | 2015-04-14 | Legend3D, Inc. | Tilt-based look around effect image enhancement method |
US9438878B2 (en) | 2013-05-01 | 2016-09-06 | Legend3D, Inc. | Method of converting 2D video to 3D video using 3D object models |
US9407904B2 (en) | 2013-05-01 | 2016-08-02 | Legend3D, Inc. | Method for creating 3D virtual reality from 2D images |
US9241147B2 (en) | 2013-05-01 | 2016-01-19 | Legend3D, Inc. | External depth map transformation method for conversion of two-dimensional images to stereoscopic images |
US9609307B1 (en) | 2015-09-17 | 2017-03-28 | Legend3D, Inc. | Method of converting 2D video to 3D video using machine learning |
US11386629B2 (en) | 2018-08-13 | 2022-07-12 | Magic Leap, Inc. | Cross reality system |
US10957112B2 (en) | 2018-08-13 | 2021-03-23 | Magic Leap, Inc. | Cross reality system |
US11227435B2 (en) | 2018-08-13 | 2022-01-18 | Magic Leap, Inc. | Cross reality system |
US11978159B2 (en) | 2018-08-13 | 2024-05-07 | Magic Leap, Inc. | Cross reality system |
WO2020072972A1 (en) * | 2018-10-05 | 2020-04-09 | Magic Leap, Inc. | A cross reality system |
US11789524B2 (en) | 2018-10-05 | 2023-10-17 | Magic Leap, Inc. | Rendering location specific virtual content in any location |
US11232635B2 (en) | 2018-10-05 | 2022-01-25 | Magic Leap, Inc. | Rendering location specific virtual content in any location |
US11995782B2 (en) | 2019-10-15 | 2024-05-28 | Magic Leap, Inc. | Cross reality system with localization service |
US11257294B2 (en) | 2019-10-15 | 2022-02-22 | Magic Leap, Inc. | Cross reality system supporting multiple device types |
US11568605B2 (en) | 2019-10-15 | 2023-01-31 | Magic Leap, Inc. | Cross reality system with localization service |
US11632679B2 (en) | 2019-10-15 | 2023-04-18 | Magic Leap, Inc. | Cross reality system with wireless fingerprints |
US12100108B2 (en) | 2019-10-31 | 2024-09-24 | Magic Leap, Inc. | Cross reality system with quality information about persistent coordinate frames |
US11386627B2 (en) | 2019-11-12 | 2022-07-12 | Magic Leap, Inc. | Cross reality system with localization service and shared location-based content |
US11869158B2 (en) | 2019-11-12 | 2024-01-09 | Magic Leap, Inc. | Cross reality system with localization service and shared location-based content |
US11562542B2 (en) | 2019-12-09 | 2023-01-24 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
US11748963B2 (en) | 2019-12-09 | 2023-09-05 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
US12067687B2 (en) | 2019-12-09 | 2024-08-20 | Magic Leap, Inc. | Cross reality system with simplified programming of virtual content |
US11790619B2 (en) | 2020-02-13 | 2023-10-17 | Magic Leap, Inc. | Cross reality system with accurate shared maps |
US11830149B2 (en) | 2020-02-13 | 2023-11-28 | Magic Leap, Inc. | Cross reality system with prioritization of geolocation information for localization |
US11967020B2 (en) | 2020-02-13 | 2024-04-23 | Magic Leap, Inc. | Cross reality system with map processing using multi-resolution frame descriptors |
US11562525B2 (en) | 2020-02-13 | 2023-01-24 | Magic Leap, Inc. | Cross reality system with map processing using multi-resolution frame descriptors |
US11410395B2 (en) | 2020-02-13 | 2022-08-09 | Magic Leap, Inc. | Cross reality system with accurate shared maps |
US11551430B2 (en) | 2020-02-26 | 2023-01-10 | Magic Leap, Inc. | Cross reality system with fast localization |
US11900547B2 (en) | 2020-04-29 | 2024-02-13 | Magic Leap, Inc. | Cross reality system for large scale environments |
Also Published As
Publication number | Publication date |
---|---|
US20120275687A1 (en) | 2012-11-01 |
US8217931B2 (en) | 2012-07-10 |
CA2581273C (en) | 2013-12-31 |
NZ554661A (en) | 2009-04-30 |
EP1800267B1 (en) | 2019-04-24 |
CN101053000B (en) | 2011-01-05 |
KR20070073803A (en) | 2007-07-10 |
EP1800267A2 (en) | 2007-06-27 |
CN101053000A (en) | 2007-10-10 |
WO2006036469A3 (en) | 2006-06-08 |
WO2006036469A2 (en) | 2006-04-06 |
CA2581273A1 (en) | 2006-04-06 |
AU2005290064A1 (en) | 2006-04-06 |
US8860712B2 (en) | 2014-10-14 |
JP2008513882A (en) | 2008-05-01 |
US20110169827A1 (en) | 2011-07-14 |
US7542034B2 (en) | 2009-06-02 |
WO2006036469A8 (en) | 2006-08-24 |
US20110169914A1 (en) | 2011-07-14 |
US20060061583A1 (en) | 2006-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7542034B2 (en) | System and method for processing video images | |
US20120032948A1 (en) | System and method for processing video images for camera recreation | |
US8791941B2 (en) | Systems and methods for 2-D to 3-D image conversion using mask to model, or model to mask, conversion | |
US20080259073A1 (en) | System and method for processing video images | |
US20080225045A1 (en) | Systems and methods for 2-d to 3-d image conversion using mask to model, or model to mask, conversion | |
US20080228449A1 (en) | Systems and methods for 2-d to 3-d conversion using depth access segments to define an object | |
US20080225042A1 (en) | Systems and methods for allowing a user to dynamically manipulate stereoscopic parameters | |
US20080226160A1 (en) | Systems and methods for filling light in frames during 2-d to 3-d image conversion | |
US20080226181A1 (en) | Systems and methods for depth peeling using stereoscopic variables during the rendering of 2-d to 3-d images | |
US20080226128A1 (en) | System and method for using feature tracking techniques for the generation of masks in the conversion of two-dimensional images to three-dimensional images | |
US6266068B1 (en) | Multi-layer image-based rendering for video synthesis | |
US20080226194A1 (en) | Systems and methods for treating occlusions in 2-d to 3-d image conversion | |
Chang et al. | Facial model adaptation from a monocular image sequence using a textured polygonal model | |
WO2008112786A2 (en) | Systems and method for generating 3-d geometry using points from image sequences |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment | Owner name: CONVERSION WORKS, INC., A DELAWARE CORPORATION, CA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: SPOONER, DAVID A.; SIMPSON, TODD; REEL/FRAME: 022949/0580; Effective date: 20040908 |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |