
CN105894567A - Scaling pixel depth values of user-controlled virtual object in three-dimensional scene - Google Patents


Info

Publication number
CN105894567A
CN105894567A (Application CN201610191451.7A)
Authority
CN
China
Prior art keywords
value
dimensional
user
pixel depth
virtual objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610191451.7A
Other languages
Chinese (zh)
Other versions
CN105894567B (en)
Inventor
B. M. Genova
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment America LLC
Original Assignee
Sony Computer Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 12/986,814 (US9041774B2)
Priority claimed from US 12/986,872 (US9183670B2)
Priority claimed from US 12/986,854 (US8619094B2)
Priority claimed from US 12/986,827 (US8514225B2)
Application filed by Sony Computer Entertainment America LLC
Publication of CN105894567A
Application granted
Publication of CN105894567B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation
    • G06T15/205 Image-based rendering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/172 Processing image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178 Metadata, e.g. disparity information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N2013/40 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene
    • H04N2013/405 Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene, the images being stereoscopic or three dimensional

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Pixel depth values of a user-controlled virtual object in a three-dimensional scene may be re-scaled to avoid artifacts when the scene is displayed. Minimum and maximum threshold values can be determined for the three-dimensional scene. Each pixel depth value of the user-controlled virtual object can be compared to the minimum threshold value and the maximum threshold value. A depth value of each pixel of the user-controlled virtual object that falls below the minimum threshold value can be set to a corresponding low value. Each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value can be set to a corresponding high value.

Description

Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene
This application is a divisional of Application No. 201180064484.0, filed December 2, 2011, entitled "Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene."
Cross-Reference to Related Applications
This application is related to commonly-assigned, co-pending Application No. 12/986,814, entitled "DYNAMIC ADJUSTMENT OF PREDETERMINED THREE-DIMENSIONAL VIDEO SETTINGS BASED ON SCENE CONTENT" (Attorney Docket No. SCEA10052US00), filed January 7, 2011.
This application is related to commonly-assigned, co-pending Application No. 12/986,854, entitled "MORPHOLOGICAL ANTI-ALIASING (MLAA) OF A RE-PROJECTION OF A TWO-DIMENSIONAL IMAGE" (Attorney Docket No. SCEA10054US00), filed January 7, 2011.
This application is related to commonly-assigned, co-pending Application No. 12/986,872, entitled "MULTI-SAMPLE RESOLVING OF RE-PROJECTION OF TWO-DIMENSIONAL IMAGE" (Attorney Docket No. SCEA10055US00), filed January 7, 2011.
Technical field
Embodiments of the present invention relate to scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene.
Background
Over the past few years, a number of different techniques for perceiving two-dimensional images in three dimensions have become very popular. Providing a depth aspect to two-dimensional images can create a greater sense of realism in any depicted scene. This introduction of 3D visual presentation has greatly enhanced the viewer experience, especially in the realm of video games.
There are many techniques for giving images a three-dimensional rendering. Recently, a technique for projecting one or more two-dimensional images into three-dimensional space, referred to as depth-image-based rendering (DIBR), has been proposed. In contrast to earlier proposals, which often relied on the basic concept of "stereoscopic" video, i.e., the capture, transmission and display of two separate video streams (one for the left eye and one for the right eye), this new idea is based on a more flexible joint transmission of monoscopic video (i.e., a single video stream) and associated per-pixel depth information. From this data representation, one or more "virtual" views of the 3-D scene can then be generated in real time at the receiving side by means of so-called DIBR techniques. This new approach to 3-D view rendering brings several advantages over previous approaches.
First, this approach allows 3-D projections or displays to be adjusted to fit a wide range of different stereoscopic displays and projection systems. Because the required left-eye and right-eye views are only generated at the 3D-TV receiver, the views can be adapted in terms of "perceived depth" for particular viewing conditions. This provides viewers with a customized 3-D experience that can be comfortably viewed on any kind of stereoscopic or autostereoscopic 3D-TV display.
DIBR also permits 2D-to-3D conversion based on "structure from motion" approaches, which may be used to generate the required depth information for recorded monoscopic video material. Thus, 3D video can be generated from 2D video for a wide range of programming, which may play a significant role in the success of 3D-TV.
Head-motion parallax (i.e., the apparent displacement or difference in the perceived position of an object caused by a change in viewing angle) can be supported under DIBR to provide an additional extrastereoscopic depth cue. This eliminates the well-known "shear distortion" often experienced with stereoscopic or autostereoscopic 3D-TV systems (i.e., the stereoscopic image appearing to follow the observer when the observer changes viewing position).
Additionally, photometric asymmetries between the left-eye and right-eye views (e.g., in terms of brightness, contrast or color), which can destroy the stereoscopic sensation, are eliminated from the outset, because both views are effectively synthesized from the same original image. Moreover, the approach enables automatic object segmentation based on depth keying and allows easy integration of synthetic 3D objects into "real world" sequences.
Finally, this approach allows viewers to adjust the reproduction of depth to suit their personal preferences, much as every conventional 2D-TV allows viewers to control color reproduction through (de)saturation adjustments. This is a very important feature, because depth appreciation differs across age groups. For example, recent research by Norman et al. confirms that older adults are less sensitive than younger people at perceiving stereoscopic depth.
While each viewer may have a unique set of preferred depth settings, each scene presented to the viewer may also have its own unique set of preferred depth settings. The content of each scene dictates which range of depth settings should be used for optimal viewing of that scene. A single set of re-projection parameters may not be ideal for every scene. For example, depending on how much distant background is in view, different parameters may work better. Because scene content changes whenever the scene changes, existing 3D systems fail to take the content of the scene into account when determining re-projection parameters.
It is within this context that embodiments of the present invention arise.
Brief description of the drawings
FIG. 1A is a flow diagram/schematic illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
FIG. 1B is a schematic diagram illustrating the basic concept of three-dimensional re-projection.
FIG. 1C is a simplified diagram illustrating an example of virtual camera adjustment of 3D video settings according to an embodiment of the present invention.
FIG. 1D is a simplified diagram illustrating an example of mechanical camera adjustment of 3D video settings according to an embodiment of the present invention.
FIGs. 2A-2B are schematic diagrams illustrating the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene.
FIG. 2C is a schematic diagram illustrating pixel-depth-value scaling that solves the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene.
FIG. 3 is a schematic diagram illustrating a method for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 4 is a block diagram illustrating an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 5 is a block diagram illustrating an example of a Cell-processor implementation of an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 6A illustrates an example of a non-transitory computer-readable storage medium with instructions for implementing dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
FIG. 6B illustrates an example of a non-transitory computer-readable storage medium with instructions for implementing scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
FIG. 7 is an isometric view of three-dimensional viewing glasses according to an aspect of the present invention.
FIG. 8 is a system-level block diagram of three-dimensional viewing glasses according to an aspect of the present invention.
Detailed description of the invention
For any viewer of a projected 3-D image, certain characteristics/cues dominate the perception of depth. Each viewer's ability to perceive depth in a three-dimensional projection is unique to his or her own pair of eyes. Certain cues can provide the viewer with depth characteristics associated with a given scene. By way of example, and not by way of limitation, these visual cues may include stereopsis, convergence, and shadow stereopsis.
Stereopsis refers to the viewer's ability to judge depth by processing information derived from the different projections of an object onto each retina. By using two images of the same scene obtained from slightly different angles, it is possible to triangulate the distance to an object with a high degree of accuracy. If an object is far away, the disparity of the image falling on the two retinas will be small. If the object is near or close, the disparity will be large. By adjusting the angular difference between the different projections of the same scene, a viewer may be able to optimize his perception of depth.
Convergence is another binocular cue for depth perception. When two eyeballs fixate on the same object, they converge. This convergence stretches the extraocular muscles, and the kinesthetic sensations from these muscles aid in the perception of depth. The angle of convergence is smaller when the eyes fixate on distant objects and larger when they fixate on nearer objects. By adjusting the convergence of the eyes for a given scene, a viewer may be able to optimize his perception of depth.
Shadow stereopsis refers to the stereoscopic fusion of shadows to impart depth to a given scene. Strengthening or weakening the intensity of a scene's shadows can further optimize the viewer's perception of depth.
By adjusting the scene settings associated with these visual cues, a viewer can optimize his overall perception of depth. While a given user may select a general set of three-dimensional scene settings for viewing all scenes, each scene is unique, and so, depending on the particular scene content, certain visual cues/user settings may need to be dynamically adjusted. For example, in the context of a virtual world, the particular object the viewer is fixating on in a given scene may be important; however, the viewer's predetermined three-dimensional scene settings may not be optimal for viewing that particular object. Here, the viewer's settings would be dynamically adjusted according to the scene, so that the particular object is perceived under a more optimal set of three-dimensional scene settings.
FIG. 1A is a flow chart illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention. Initially, a viewer 115 communicates with a processor 113 configured to direct a stream of three-dimensional video data to a visual display 111. The processor 113 may take the form of a video game console, a computer apparatus, or any other device capable of processing three-dimensional video data. By way of example, and not by way of limitation, the visual display 111 may take the form of a 3-D-ready television set that displays text, numerals, graphical symbols, or other visual objects as stereoscopic images to be perceived through a pair of 3-D viewing glasses 119. Examples of 3-D viewing glasses are depicted in FIGs. 7-8 and described in more detail below. The 3-D viewing glasses 119 may take the form of active liquid-crystal shutter glasses, active "red eye" shutter glasses, passive linearly-polarized glasses, passive circularly-polarized glasses, interference filter glasses, complementary-color anaglyph glasses, or any other pair of 3-D viewing glasses configured for viewing images projected in three dimensions by the visual display 111. The viewer 115 may communicate with the processor 113 through a user interface 117, which may take the form of a joystick, controller, remote control, keyboard, or any other device that can be used in conjunction with a graphical user interface (GUI).
The viewer 115 may initially select a set of general three-dimensional video settings to be applied to each three-dimensional scene presented to the viewer 115. By way of example, and not by way of limitation, the viewer may select the outer depth boundaries within which three-dimensional scenes are to be projected. As a further example, the user may set predetermined values for stereopsis, convergence, or shadow stereopsis. Moreover, if the user does not set predetermined values for these parameters, the predetermined values may be set to system defaults.
Examples of user settings and other 3D video parameter settings that can be dynamically adjusted based on scene content include, but are not limited to, 3D depth effect and 3D range. The depth setting controls how much 3D effect is presented to the user. The outer boundaries of depth essentially represent range and parallax (our depth and effect sliders). In implementations involving re-projection, the re-projection curve may be adjusted as described below. The adjustment to the re-projection curve may be an adjustment to the shape characteristics of the curve; the shape may be linear, or may be, e.g., an S-shape that emphasizes the center. Furthermore, the parameters of the shape may be adjusted. By way of example, and not by way of limitation, for a linear re-projection curve, the endpoints or slope may be adjusted. For an S-shaped re-projection curve, how quickly the S ramps up, etc., may be adjusted.
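By way of illustration only, such curves might be parameterized along the following lines; the exact parameterization is not specified here, and the names are hypothetical:

    #include <math.h>

    /* Sketch of the two re-projection curve shapes named above: a linear
       remap with adjustable endpoints, and an S-shaped (center-emphasizing)
       remap whose ramp rate can be adjusted. d is normalized depth in [0,1]. */
    float curve_linear(float d, float near_end, float far_end)
    {
        return near_end + d * (far_end - near_end); /* slope follows from endpoints */
    }

    float curve_s(float d, float ramp)
    {
        /* 'ramp' controls how quickly the S rises around the center */
        return 1.0f / (1.0f + expf(-ramp * (d - 0.5f)));
    }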
In other implementations involving re-projection, some edge blurring may be provided to patch holes, and the viewer 115 may drive that patching. Furthermore, implementations of the present invention using re-projection or other means may allow color contrast to be driven based on user settings, adjusted scene by scene, to help reduce ghosting. In addition, in cases that do not involve re-projection, the user may adjust a zoom determining how far the view will be from the input camera, or make slight fine adjustments to the camera angle. Other camera settings that could be adjusted on a per-scene basis include depth-of-field settings or camera aperture.
Because different viewers 115 perceive 3D visual displays differently, different viewers may have different combinations of general three-dimensional scene settings according to their preferences. For example, research has confirmed that older adults are not as sensitive as younger people at perceiving stereoscopic depth; as such, older adults may benefit from scene settings that increase the perception of depth. Similarly, younger people may find that settings which reduce the perception of depth can alleviate eyestrain and fatigue while still providing a pleasant three-dimensional experience.
While the viewer 115 is observing a steady stream of three-dimensional scenes 103, one or more scenes not yet presented to the viewer may be stored in an output buffer 101. The scenes 103 may be arranged according to the order in which they are to be presented. A scene 103 refers to one or more three-dimensional video frames characterized by a set of shared characteristics. For example, a group of video frames representing different views of the same landscape may be characterized as one scene. However, a close-up view of an object and a perspective view of the same object may represent different scenes. It is important to note that any number of combinations of frames may be characterized as a scene.
A scene 103 goes through two stages before being presented to the viewer. First, the scene is processed to determine one or more characteristics 105 associated with the given scene. One or more scale factors 107 to be applied to the user's predetermined settings are then determined from those characteristics. The scale factors may then be transmitted as metadata 109 to the processor 113 and applied to dynamically adjust the viewer's settings, as indicated at 110. The scene may then be presented on the display 111 using the adjusted settings, as indicated at 112. This allows each scene to be presented to the viewer in a way that preserves the viewer's basic preferences while still maintaining the visual integrity of the scene by taking the scene's specific content into account. In cases that do not involve re-projection, the metadata may be transmitted to a capture device to make adjustments, to adjust our virtual camera position in a game, or to adjust a physical camera such as one used, e.g., in a 3D chat implementation.
Before describing example embodiments of the inventive method, it is useful to discuss some background concerning three-dimensional video systems. Embodiments of the present invention may be applied to re-projection settings for 3D video generated from 2D video through a re-projection process. In re-projection, left-eye and right-eye virtual views of a scene can be synthesized from a regular two-dimensional image and the associated per-pixel depth information for each pixel in the image. This process may be implemented by the processor 113 as follows.
First, the original image points are re-projected into the 3D world using the per-pixel depth data of the original image. Thereafter, these 3D space points are projected into the image plane of a "virtual" camera located at the required viewing position. The concatenation of re-projection (2D-to-3D) and subsequent projection (3D-to-2D) is sometimes called 3D image warping or re-projection. As shown in FIG. 1B, re-projection can be understood by comparison with the operation of a "real" stereo camera. In "real", high-quality stereo cameras, the so-called zero-parallax setting (ZPS), i.e., the choice of the convergence distance Zc in the 3D scene, is usually established by one of two different methods. In the "toed-in" approach, the ZPS is chosen by a joint inward rotation of the left-eye and right-eye cameras. In the shift-sensor approach, a plane at the convergence distance Zc can be established by a small shift h of the image sensors used for the left-eye and right-eye "virtual" cameras, which are placed in parallel at a separation distance tc, as shown in FIG. 1B. Each virtual camera can be characterized by a focal length f, which represents the distance between the virtual camera lens and the image sensor. In some implementations described herein, this distance corresponds to the near-plane distance Zn to a near plane Pn.
Technically, the "toed-in" approach is easier to realize in "real" stereo cameras. However, the shift-sensor approach is sometimes preferable for re-projection, because it does not introduce unwanted vertical differences between the left-eye and right-eye views, which can be a potential source of eyestrain.
Given the depth information Z for each pixel at horizontal and vertical coordinates (u, v) in the original 2D image, the shift-sensor approach can be used to generate the corresponding pixel coordinates (u′, v′) and (u″, v″) of the left-eye and right-eye views according to the following equations:

For the left-eye view: u′ = u + (αu·tc/2)·(1/Z − 1/Zc) + t_hmp, v′ = v

For the right-eye view: u″ = u − (αu·tc/2)·(1/Z − 1/Zc) + t_hmp, v″ = v

In the foregoing equations, αu is the convergence angle in the horizontal direction, as seen in FIG. 1B. The term t_hmp is an optional translation term reflecting the viewer's actual viewing position (sometimes referred to as a head-motion-parallax term).

The shift h of the left-eye and right-eye views can be related to the convergence angle αu, the separation tc, and the convergence distance Zc by the following equations:

For the left-eye view: h = −αu·tc/(2·Zc)

For the right-eye view: h = +αu·tc/(2·Zc)
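By way of illustration only, the equations above might be implemented along the following lines. This is a simplified sketch that ignores hole filling, occlusion ordering and sub-pixel resampling; all names are hypothetical:

    #include <stddef.h>

    typedef struct {
        float alpha_u; /* horizontal convergence angle (FIG. 1B) */
        float t_c;     /* separation between the virtual cameras */
        float z_c;     /* convergence (zero-parallax) distance   */
        float t_hmp;   /* optional head-motion-parallax term     */
    } StereoParams;

    /* Horizontal shift for one pixel at depth z, per the equations above. */
    static float pixel_shift(const StereoParams *p, float z)
    {
        return 0.5f * p->alpha_u * p->t_c * (1.0f / z - 1.0f / p->z_c);
    }

    /* Warp one row of per-pixel depths into left-eye and right-eye
       horizontal coordinates; vertical coordinates are unchanged. */
    void warp_row(const StereoParams *p, const float *depth, size_t width,
                  float *u_left, float *u_right)
    {
        for (size_t u = 0; u < width; ++u) {
            float s = pixel_shift(p, depth[u]);
            u_left[u]  = (float)u + s + p->t_hmp;
            u_right[u] = (float)u - s + p->t_hmp;
        }
    }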
The processor 113 may receive a scene 103 in terms of an original 2D image and per-pixel depth information, together with per-scene default scaling settings applicable to 3D video parameters such as αu, tc, Zc, f and t_hmp, or combinations thereof (e.g., ratios). For example, a scaling setting may represent a multiplier varying between 0 (for no 3D perception) and some value greater than 1 (for enhanced 3D perception). Varying the 3D video parameter settings of the virtual cameras affects the qualitative perception of the 3D video. By way of example, and not by way of limitation, some qualitative effects of increasing (+) or decreasing (−) selected 3D video parameters are described in Table I below.
Table I
In Table I, the term "screen parallax" refers to the horizontal difference between the left-eye and right-eye views; the term "perceived depth" refers to the apparent depth of the displayed scene as perceived by the viewer; and the term "object size" refers to the apparent size of displayed objects on the screen 111 as perceived by the viewer.
In some implementations, the mathematical equations used above can be described in terms of a near plane Pn and a far plane Pf rather than the convergence angle αu and sensor separation tc. The term "near plane" refers to the closest point in the scene as captured by the camera's image sensor. The term "far plane" refers to the farthest point in the scene captured by the camera. No attempt is made to render anything beyond the far plane Pf, i.e., beyond the far-plane distance Zf (as depicted in FIG. 1B). A system using the mathematical equations described above can indirectly select the near and far planes by selecting the values of certain variables in the equations. Alternatively, the values of the convergence angle αu and the sensor separation tc can be adjusted based on a selected near plane and far plane.
The operation of a three-dimensional re-projection system can be described in terms of the following requirements: 1) selection of the near plane for a given scene; 2) selection of the far plane for the given scene; 3) definition, for the re-projection of the given scene, of a transformation from the near plane to the far plane. This transformation, sometimes referred to as the re-projection curve, essentially relates the amount of horizontal and vertical pixel displacement to pixel depth; 4) a method for filtering and/or weighting unimportant/important pixels; and 5) a system for smoothing any changes to 1-3 that may occur during the transformation process in a scene, in order to prevent jarring cuts in the depth perceived by the viewer 115. A stereoscopic video system typically also includes 6) some mechanism that allows the viewer to scale the 3-D effect.
A typical re-projection system specifies the above six requirements as follows: 1) the scene's camera near plane; 2) the scene's camera far plane; 3) a transformation that displaces pixels only horizontally, shifting each pixel by a fixed displacement (commonly referred to as the convergence) minus an amount inversely proportional to the pixel's depth value, so that the deeper or more distant a pixel is, the less it is displaced from the convergence. This requirement can be described, e.g., by the mathematical equations presented above; 4) because 1 through 3 are constant, no weighting is needed; 5) because 1 through 3 are constant, no smoothing is needed; and 6) a slider may be used to adjust the transformation, e.g., by linearly scaling the amount by which pixels are displaced. This in effect applies a constant scale factor to the second (and possibly third) term in the equations above for u′ or u″. Such a constant scale factor can be implemented via a user-adjustable slider (and the user-adjustable slider therefore tends to move the average effect of the near and far planes toward the screen plane).
This may result in a poor use of the stereoscopic effect. A given scene may be unbalanced and cause unnecessary eye fatigue. 3D video editors or 3D game developers must carefully construct all scenes and films so that all objects in a scene are laid out correctly.
For a given three-dimensional video, there is a viewing comfort zone 121 located in a region close to the visual display. The farther off-screen a perceived image is, the more uncomfortable it is to view (for most people). Therefore, the three-dimensional scene settings associated with a given scene are intended to maximize the use of the comfort zone 121. Although some things may lie outside the comfort zone 121, it is generally desirable that most of the things the viewer fixates on lie within the comfort zone 121. By way of example, and not by way of limitation, the viewer may set the boundaries of the comfort zone 121, while the processor 113 may dynamically adjust the scene settings so that the use of the comfort zone 121 is maximized for each scene.
A straightforward approach to maximizing the use of the comfort zone 121 may involve setting the near plane equal to the minimum pixel depth associated with the given scene, and setting the far plane equal to the maximum pixel depth associated with the given scene, while retaining properties 3 through 6 as defined above for the typical re-projection system. This would maximize the use of the comfort zone 121, but it does not take into account the effect of objects flying into or out of the scene, which may cause huge displacements in three dimensions.
By way of example, and not by way of limitation, some embodiments of the present method may additionally take the average depth of the scene into account. The average depth of a scene may be driven toward a target. The three-dimensional scene data may set the target for a given scene, while allowing the user to scale how far from that target they perceive the scene (e.g., the boundaries of the comfort zone).
The pseudocode for calculating such an average can be envisioned as follows:
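One minimal form, assuming a linear per-pixel depth buffer (names hypothetical):

    #include <stddef.h>

    /* Mean pixel depth over the whole scene. */
    float average_scene_depth(const float *depth, size_t num_pixels)
    {
        double sum = 0.0;
        for (size_t i = 0; i < num_pixels; ++i)
            sum += depth[i];
        return (float)(sum / (double)num_pixels);
    }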
The near plane may be set to the minimum depth value of all pixels in the scene, and the far plane may be set to the maximum depth value of all pixels in the scene. The target perceived depth may be a value specified by the content creator and further scaled by one of the user's preferences. Using the calculated average and transformation property 3 above, it is possible to calculate how far the average scene depth is from the target perceived depth. By way of example, and not by way of limitation, the overall perceived scene depth can then be shifted by simply adjusting the convergence by the target delta (as shown in Table I). The target delta may also be smoothed, as is done for the near and far planes below. Other methods of adjusting target depth may also be used, such as those used in 3D films to ensure a consistent depth across scene changes. It should be noted, however, that 3D films currently do not provide a way for the viewer to adjust the target scene depth.
By way of example, and not by way of limitation, one approach to determining one or more three-dimensional characteristics associated with a given scene is to determine and use the following two important scene characteristics: the average pixel depth of the scene and the standard deviation of the pixel depths of that scene. The pseudocode for calculating the average and standard deviation of the pixel depths can be envisioned as follows:
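One minimal single-pass form (names hypothetical):

    #include <math.h>
    #include <stddef.h>

    /* Mean and standard deviation of the scene's pixel depths. */
    void depth_stats(const float *depth, size_t n,
                     float *mean, float *std_dev)
    {
        double sum = 0.0, sum_sq = 0.0;
        for (size_t i = 0; i < n; ++i) {
            sum    += depth[i];
            sum_sq += (double)depth[i] * (double)depth[i];
        }
        double m = sum / (double)n;
        *mean    = (float)m;
        *std_dev = (float)sqrt(sum_sq / (double)n - m * m);
    }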
The near plane may then be set to the average pixel depth of the scene minus the standard deviation of that scene's pixel depths. Likewise, the far plane may be set to the average pixel depth of the scene plus the standard deviation of that scene's pixel depths. If these results are insufficient, the re-projection system may convert data representing the scene into the frequency domain for the calculation of the average pixel depth and standard deviation for a given scene. As in the example above, driving toward a target depth can be done in the same way.
To provide a method for filtering and weighting unimportant pixels, the scene may be examined in detail and the unimportant pixels marked. Unimportant pixels may include flying particles and other unrelated small geometry. In the context of a video game, this can easily be done during rasterization; otherwise, an algorithm for finding small clusters of depth disparities would likely be used. If a method can discern where the user is looking, then the depths of nearby pixels should be considered relatively important, and the farther a pixel is from our important focal point, the less important it is. Such methods may include, without limitation, determining whether a cursor or reticle is in the image and where it is located within the image, or measuring the rotation of the eyes by using feedback from specialized glasses. Such glasses may include a simple camera pointed at the wearer's eyeballs. The camera can provide images in which the whites of the user's eyes can be distinguished from the darker parts (e.g., the pupils). Eyeball rotation can be determined by analyzing the images to determine the positions of the pupils and relating those positions to eyeball angles. For example, a centered pupil would roughly correspond to an eyeball directed straight ahead.
In some embodiments, it may be desirable to emphasize pixels in the central portion of the display 111, because values at the edges are likely to be less important. If the distance between pixels is defined as a two-dimensional distance that ignores depth, such center-weighted, or simply focus-weighted, statistics can be envisioned with the following pseudocode:
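One minimal form, with an inverse-distance falloff chosen purely for illustration (names hypothetical):

    #include <stddef.h>

    /* Depth average weighted toward a focus point (screen center, cursor,
       or reticle), using 2D pixel distance and ignoring depth. */
    float focus_weighted_depth(const float *depth, int w, int h,
                               float focus_u, float focus_v)
    {
        double weighted_sum = 0.0, weight_total = 0.0;
        for (int v = 0; v < h; ++v) {
            for (int u = 0; u < w; ++u) {
                float du = (float)u - focus_u;
                float dv = (float)v - focus_v;
                /* weight falls off with 2D distance from the focus point */
                double wgt = 1.0 / (1.0 + du * du + dv * dv);
                weighted_sum += wgt * depth[v * w + u];
                weight_total += wgt;
            }
        }
        return (float)(weighted_sum / weight_total);
    }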
To provide a system that keeps the picture mostly within the comfort zone 121, the near plane and far plane (or the other variables in the mathematical equations described above) should be adjusted in addition to, or instead of, the convergence described in the example above. The processor 113 may be configured to implement a process such as the one contemplated by the following pseudocode:
    scale = viewerScale * contentScale
    nearPlane' = nearPlane * scale + (mean - standardDeviation) * (1 - scale)
    farPlane' = farPlane * scale + (mean + standardDeviation) * (1 - scale)
Both viewerScale and contentScale are values between 0 and 1 that control the rate of change. The viewer 115 adjusts the value of viewerScale, and the content creator sets the value of contentScale. The same smoothing can be applied to the convergence adjustment above.
In some implementations (such as video games), because the processor 113 may need to be able to drive objects in a scene farther from or closer to the screen 111, it may be useful to add a target adjustment step as follows:
    nearPlane' = nearPlane * scale + (mean + nearShift - standardDeviation) * (1 - scale)
    farPlane' = farPlane * scale + (mean + farShift + standardDeviation) * (1 - scale)
Positive shifts will tend to move the nearPlane and farPlane back into the scene. Similarly, negative shifts will move things closer.
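Putting the pieces above together, a per-scene plane update with simple frame-to-frame smoothing might look like the following sketch (the smoothing constant and names are hypothetical, not a required implementation):

    typedef struct { float near_plane, far_plane; } Planes;

    /* Per-scene near/far plane update per the pseudocode above, with
       exponential smoothing across frames to avoid jarring depth cuts. */
    Planes update_planes(Planes prev, float min_depth, float max_depth,
                         float mean, float std_dev,
                         float viewer_scale, float content_scale,
                         float near_shift, float far_shift)
    {
        float scale = viewer_scale * content_scale; /* both in [0, 1] */
        float near_target = min_depth * scale
            + (mean + near_shift - std_dev) * (1.0f - scale);
        float far_target  = max_depth * scale
            + (mean + far_shift + std_dev) * (1.0f - scale);

        const float k = 0.1f; /* assumed smoothing rate per frame */
        Planes out;
        out.near_plane = prev.near_plane + k * (near_target - prev.near_plane);
        out.far_plane  = prev.far_plane  + k * (far_target  - prev.far_plane);
        return out;
    }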
After one or more characteristics of a given scene (e.g., near plane, far plane, average pixel depth, standard deviation of pixel depths, etc.) are determined at 105, a set of scale factors 107 may be determined. These scale factors may indicate how the scene can be maximized within the boundaries of the user-determined comfort zone 121. Moreover, one of these scale factors may be used to control the rate at which the three-dimensional settings are modified during scene transitions.
Once the scale factors corresponding to the characteristics of a given scene are determined, they may be stored as metadata 109 in the scene data. The scene 103 (with its accompanying three-dimensional data) may then be transmitted to the processor 113 together with the metadata 109 associated with that scene. The processor 113 may then adjust the three-dimensional scene settings according to the metadata.
It is important to note that a scene may be processed to determine scale factors and metadata at different stages of the three-dimensional data stream, and processing of a scene is not limited to after it has been placed in the output buffer 101. Moreover, a user-determined set of three-dimensional scene settings is not limited to setting the boundaries of the three-dimensional projection. By way of example, and not by way of limitation, user-determined scene settings may also include control over the sharpness of objects in the three-dimensional scene or the intensity of shadows in the three-dimensional scene.
Although the previous examples have been described in the context of re-projection, embodiments of the present invention are not limited to such implementations. The concepts of scaling re-projection depth and range may apply equally well to adjusting input parameters, such as the positions of virtual or real stereo cameras for real-time 3D video. If the camera feeds are dynamic, adjustment of the input parameters may be implemented for real-time stereoscopic content. FIG. 1C and FIG. 1D illustrate examples of dynamic adjustment of camera feeds according to alternative embodiments of the present invention.
As seen in FIG. 1C, the processor 113 may generate left-eye and right-eye views of a scene 103 from three-dimensional data representing objects and the positions, within a simulated environment 102 such as a video game or a virtual world, of a virtual stereo camera 114 that includes a left-eye camera 114A and a right-eye camera 114B. For purposes of example, the virtual stereo camera may be thought of as part of a single unit having two individual cameras. However, embodiments of the present invention include implementations in which the virtual cameras are separate and not part of a single unit. It is noted that the positions and orientations of the virtual cameras 114A, 114B determine what is shown in the scene. For example, suppose the simulated environment is a level of a first-person shooter (FPS) game, in which an avatar 115A represents the user 115. The user controls the movements and actions of the avatar 115A using the processor 113 and a suitable controller 117. In response to user commands, the processor 113 may select the positions and orientations of the virtual cameras 114A, 114B. If the virtual cameras point at a distant object (such as non-player character 116), the scene may have greater depth than if the cameras point at a nearby object (such as non-player character 118). These objects may be determined by the processor from three-dimensional information generated by the game's physics simulator component with respect to the positions of all objects relative to the virtual cameras. The depths of objects within the field of view of the cameras can be calculated for the scene. Average depth, maximum depth, depth range, and the like can then be calculated for the scene, and these per-scene values may be used to select default values and/or scale factors for the 3D parameters (e.g., αu, tc, Zc, f and t_hmp). By way of example, and not by way of limitation, the processor 113 may implement a lookup table or function that relates specific 3D parameters to specific combinations of per-scene values. The tabular or functional relationships between the 3D parameters and the default per-scene values and/or scale factors can be determined empirically. The processor 113 may then modify other default values and/or scale factors according to the user's preference settings.
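By way of illustration, such an empirically-determined lookup might be sketched as follows; the bucket boundaries and scale factors are invented for the example:

    #include <stddef.h>

    /* Maps a scene's mean depth to default scale factors for selected
       3D parameters; in practice the rows would be tuned empirically. */
    typedef struct {
        float max_mean_depth;    /* upper bound of this depth bucket */
        float convergence_scale; /* applied to Zc                    */
        float separation_scale;  /* applied to tc                    */
    } ParamRow;

    static const ParamRow k_param_table[] = {
        {   5.0f, 0.6f, 0.5f }, /* close-up scene: pull the effect back */
        {  50.0f, 1.0f, 1.0f }, /* mid-range scene: defaults            */
        { 1e30f,  1.2f, 1.3f }, /* distant scene: boost depth effect    */
    };

    ParamRow lookup_params(float mean_depth)
    {
        size_t n = sizeof(k_param_table) / sizeof(k_param_table[0]);
        for (size_t i = 0; i + 1 < n; ++i)
            if (mean_depth <= k_param_table[i].max_mean_depth)
                return k_param_table[i];
        return k_param_table[n - 1];
    }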
In a variation on the embodiments described with respect to FIGs. 1A-1C, a similar adjustment of 3D parameter settings may be implemented using motorized physical stereo cameras. For example, consider a video chat implementation, e.g., as depicted in FIG. 1D. In this case, a first user 115 and a second user 115' interact via a first processor 113 and a second processor 113', a first 3D camera 114 and a second 3D camera 114', and a first controller 117 and a second controller 117', respectively. The processors 113, 113' are coupled to each other, e.g., by a network 120, which may be a wired or wireless network, a local area network (LAN), a wide area network, or another communication network. The first user's 3D camera 114 includes a left-eye camera 114A and a right-eye camera 114B. Left-eye and right-eye images of the first user's environment are shown on a video display 111' attached to the second user's processor 113'. In the same way, the second user's 3D camera 114' includes a left-eye camera 114A' and a right-eye camera 114B'. For purposes of example, the left-eye and right-eye stereo cameras may be physical parts of a single unit having two integrated cameras (e.g., separate lens units and separate sensors for the left and right views). However, embodiments of the present invention include implementations in which the left-eye and right-eye cameras are independent of each other and not part of a single unit.
Left-eye and right-eye images of the second user's environment are shown on a video display 111 attached to the first user's processor 113. The first user's processor 113 may determine per-scene 3D values from the left-eye and right-eye images. For example, color buffers are generally acquired by the two cameras. With a suitable depth-recovery algorithm, depth information can be recovered from the color buffer information of the left-eye and right-eye cameras. The processor 113 may transmit the depth information together with the images to the second user's processor 113'. It is noted that the depth information may vary depending on the scene content. For example, the scene captured by the cameras 114A', 114B' may contain objects at different depths, such as the user 115' and a distant object 118'. The different depths of these objects within the scene can affect the average pixel depth of the scene and the standard deviation of the pixel depths.
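By way of illustration, once a per-pixel disparity has been estimated (e.g., by block matching between the two color buffers), depth recovery for a parallel camera pair reduces to the standard triangulation relation; a sketch under these assumptions (names hypothetical):

    /* Depth from disparity for a rectified parallel stereo pair:
       Z = f * tc / d, with focal length f and disparity d in pixels
       and baseline tc in scene units. */
    float depth_from_disparity(float f, float t_c, float disparity)
    {
        if (disparity <= 0.0f)
            return 1e30f; /* zero disparity: treat as effectively infinite */
        return f * t_c / disparity;
    }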
The left-eye and right-eye cameras of both the first user's camera 114 and the second user's camera 114' may be motorized, so that the values of parameters for the left-eye and right-eye cameras (such as f, tc, and the "toe-in" angle) can be adjusted on the fly. The first user may select initial settings for the 3D video parameters of the camera 114, such as the inter-camera spacing tc and/or the relative horizontal rotation angle between the left-eye camera 114A and the right-eye camera 114B (for "toe-in"). For example, as described above, the second user 115' may use the second controller 117' and the second processor 113' to adjust the settings of the 3D video parameters (e.g., f, tc, or toe-in angle) of the first user's camera 114 in order to adjust the scale factors. Data representing the adjustments to the scale factors may then be transmitted to the first processor 113 via the network 120. The first processor may use the adjustments to adjust the 3D video parameter settings of the first user's camera 114. In a similar manner, the first user 115 may adjust the settings of the second user's 3D video camera 114'. In this way, each user 115, 115' can view 3D video images of the other party's environment under comfortable 3D settings.
Scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene
Improvements in 3-D image rendering have had a significant impact in areas that use interactive virtual environments implementing 3-D technology. Many video games implement 3-D view rendering to create interactive virtual environments for users. However, simulating real-world physics to facilitate user interaction with the virtual world is very expensive and quite difficult to implement. As such, certain unwanted visual disturbances may occur during the execution of a game.
One problem occurs when artifacts of the 3-D video cause a user-controlled virtual object (e.g., a character or a gun) to penetrate other elements of the virtual world (e.g., background scenery). When the user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly reduced. In the context of a first-person shooter, the first-person line of sight may be obstructed, or certain critical elements may be occluded. Thus, any program featuring interaction with a user-controlled virtual object in a three-dimensional virtual environment needs to eliminate the occurrence of these visual disturbances.
Embodiments of the present invention may be configured to scale the pixel depths of a user-controlled virtual object in order to solve the problem of the user-controlled virtual object penetrating elements of the three-dimensional scene of the virtual world. In the context of a first-person shooter (FPS) video game, one possible example would be the end of a gun barrel as seen from the shooter's point of view.
FIGs. 2A-2B illustrate the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene generated using re-projection. When the user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly reduced. As shown in FIG. 2A, in a virtual environment (e.g., a scene) where no scaling of the user-controlled virtual object's pixel depth values has been performed, the user-controlled virtual object 201 (e.g., a gun barrel) may penetrate another element 203 of the virtual world (e.g., a wall), causing a potential viewing obstruction and a weakened sense of realism, as discussed above. In the case of a first-person shooter, the first-person line of sight may be obstructed, or certain critical elements (e.g., the end of the gun barrel) may be occluded. The hidden element is shown in phantom in FIG. 2A.
A common solution in two-dimensional first-person video games is to scale the depths of objects in the virtual world so as to eliminate the visual artifacts in the two-dimensional image (or to change the artifacts into different, less objectionable ones). The scaling is typically applied during rasterization of the two-dimensional video image. In the first-person-shooter example, this means that whether or not the tip of the gun barrel 201 passes through the wall 203, the viewer will see the tip of the barrel. This solution works well for two-dimensional video; however, a problem arises when it is applied to stereoscopic video. The problem is that, relative to the rest of the two-dimensional image, the scaled depth values no longer represent true points in three dimensions. Thus, when re-projection is applied to generate the left-eye and right-eye views, the depth scaling causes the object to appear compressed in the depth dimension and in the wrong position. For example, as shown in FIG. 2B, the gun barrel 201 is now perceived as "crushed" in the depth direction, and the barrel is positioned extremely close to the viewer when it should appear closer to the physical screen. Another problem with re-projection is that the depth scaling can also leave large holes at the edges of the image that are difficult to fill.
Moreover, restoring the scaled depths to their original values, or overwriting the depth values with true depth values from the three-dimensional scene information, means that the viewer will still see the gun barrel, but the barrel will be perceived as being behind the wall. Despite the fact that the virtual object 201 should be blocked by the wall 203, the viewer will see a phantom part of the virtual object. This depth-piercing effect is disturbing, because the viewer expects to still see the wall.
To solve this problem, embodiments of the present invention apply a second set of scaling to objects in the scene so as to place them at suitable perceived positions within the scene. The second scaling may be applied after rasterization of the two-dimensional image but before or during re-projection of the image to generate the left-eye and right-eye views. FIG. 2C illustrates a virtual environment (e.g., a scene) in which scaling of the user-controlled virtual object's pixel depth values has been performed. Here, through the scaling of pixel depths discussed above, the user-controlled virtual object 201 can come close to another element 203 of the virtual world, but is constrained so that it cannot pierce the element 203. The second scaling limits the depth values to lie between a near value N and a far value F. In essence, the object may still be rendered as crushed in the depth dimension, but full control can be applied over its thickness. This is a trade-off; of course, control over this second scaling may be provided to the viewer, e.g., as discussed above.
Thus, visual disturbances caused by user-controlled virtual objects penetrating elements of the virtual world can be eliminated or significantly reduced.
FIG. 3 is a schematic diagram illustrating a method for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
To solve the problem described above, a program may apply a second scaling to the pixel depth values of the user-controlled virtual object according to the three-dimensional scene content to be presented to the user.
Scenes 103 may be located in an output buffer 101 prior to presentation to a user. The scenes 103 may be arranged according to the order in which they are to be presented. A scene 103 refers to one or more three-dimensional video frames characterized by a set of shared characteristics. For example, a group of video frames representing different views of the same landscape may be characterized as one scene. However, a close-up view and a perspective view of the same object may represent different scenes. It is important to note that any number of combinations of frames may be characterized as a scene.
A first depth scaling is applied to the two-dimensional image of the three-dimensional scene 103, as indicated at 133. This first depth scaling is typically performed during rasterization of the two-dimensional image using a modified view-projection matrix, which writes the scaled depth information to the scene's depth buffer.
Before the scene 103 is presented to the user in three dimensions (e.g., as left-eye and right-eye views), the scene may be examined in detail to determine characteristics that are key to solving the problem discussed above. For a given scene 103, a minimum threshold value is first determined, as indicated at 135. This minimum threshold value represents a minimum pixel depth value below which no fragment of the user-controlled virtual object may fall. Second, a maximum threshold value is determined, as indicated at 137. This maximum threshold value represents a maximum pixel depth value that no fragment of the user-controlled virtual object may exceed. These threshold values place a restriction on how the user-controlled virtual object may travel within the virtual environment, so that the user-controlled virtual object is constrained and cannot penetrate other elements of the virtual environment.
As the user-controlled virtual object moves through the virtual world, its pixel depth values are tracked and compared to the threshold pixel depth values determined above, as indicated at 139. Whenever the pixel depth value of any fragment of the user-controlled virtual object falls below the minimum threshold value, those pixel depth values are set to a low value, as indicated at 141. By way of example, and not by way of limitation, the low value may be the minimum threshold value. Alternatively, the low value may be a scaled value of the user-controlled virtual object's pixel depth value. For example, the low value may be determined by multiplying the pixel depth value that falls below the minimum threshold value by an inverse proportion and then adding a minimum offset to the product.
Whenever the pixel depth value of any fragment of the user-controlled virtual object exceeds the maximum threshold value, those pixel depth values are set to a high value, as indicated at 143. By way of example, and not by way of limitation, the high value may be the maximum threshold value. Alternatively, the high value may be a scaled value of the user-controlled virtual object's pixel depth value. For example, the high value may be determined by multiplying the pixel depth value that exceeds the maximum threshold value by an inverse proportion and then subtracting the product from a maximum offset.
For very thin virtual objects that do not require an enhanced perception of depth, setting the low/high values to the minimum/maximum threshold values works best. These low/high values effectively displace the virtual object away from the virtual camera. However, for virtual objects that do require an enhanced perception of depth (e.g., a gun sight), the scaled low/high values described above may work more effectively.
The minimum threshold value and the maximum threshold value may be determined by the program before the program is executed by the processor 113. These values may also be determined by the processor 113 while the content of the program is being executed. The comparison of the pixel depth values of the user-controlled virtual object with the threshold values is performed by the processor 113 during execution of the program. Likewise, the setting of low and high values for user-controlled virtual object pixel depths that exceed the threshold values or fall below them is performed by the processor during execution of the program.
After the second scaling has been performed on the pixel depth values, the processor 113 may perform a re-projection using the two-dimensional image and the resulting set of pixel depth values of the user-controlled virtual object in order to generate two or more views (e.g., a left-eye view and a right-eye view) of the three-dimensional scene, as indicated at 145. The two or more views may then be displayed on a three-dimensional display, as indicated at 147.
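By way of illustration and not limitation, the re-projection at 145 might be sketched as follows, assuming depth values normalized to [0, 1] and a simple horizontal-disparity model in which nearer pixels shift farther; the Image structure, the reproject function, and the maxDisparity parameter are assumptions introduced for this sketch only:

    #include <cstdint>
    #include <vector>

    struct Image {
        int width = 0, height = 0;
        std::vector<uint32_t> pixels;   // row-major RGBA
    };

    Image reproject(const Image& src, const std::vector<float>& depth,
                    float eyeSign /* -1 left, +1 right */, float maxDisparity) {
        Image out = src;  // copying keeps the source color where holes would appear
        for (int y = 0; y < src.height; ++y) {
            for (int x = 0; x < src.width; ++x) {
                int i = y * src.width + x;
                // Nearer pixels (smaller depth) receive larger disparity.
                // Occlusion ordering is ignored for brevity.
                int shift = static_cast<int>(eyeSign * maxDisparity * (1.0f - depth[i]));
                int nx = x + shift;
                if (nx >= 0 && nx < src.width)
                    out.pixels[y * src.width + nx] = src.pixels[i];
            }
        }
        return out;
    }

    // Usage: Image left  = reproject(img, z, -1.0f, 16.0f);
    //        Image right = reproject(img, z, +1.0f, 16.0f);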
Setting any pixel depth values of a user-controlled virtual object that exceed the threshold values to low and high values solves the problem of penetrating other virtual-world elements. Although simulating the physics of the virtual object's interaction with its virtual world would effectively solve this problem, in practice this is quite difficult to implement. Thus, the ability to scale the pixel depth values of a user-controlled virtual object according to the method described above provides a simple, cost-effective solution to the problem.
Apparatus
Fig. 4 illustrates a block diagram of a computer apparatus that may be used to implement dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values according to embodiments of the present invention. The apparatus 200 may generally include a processor module 201 and a memory 205. The processor module 201 may include one or more processor cores. An example of a processing system that uses multiple processor modules is a cell processor, examples of which are described in detail, e.g., in Cell Broadband Engine Architecture, which is available online at http://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/1AEEE1270EA2776387257060006E61BA/$file/CBEA_01_pub.pdf and is incorporated herein by reference.
The memory 205 may be in the form of an integrated circuit, e.g., RAM, DRAM, ROM, and the like. The memory 205 may also be a main memory that is accessible by all of the processor modules. In some embodiments, the processor module 201 may have local memories associated with each core. A program 203 may be stored in the main memory 205 in the form of processor-readable instructions that can be executed on the processor modules. The program 203 may be configured to perform dynamic adjustment of a set of user-determined three-dimensional scene settings. The program 203 may also be configured to perform scaling of pixel depth values of user-controlled virtual objects in a three-dimensional scene, e.g., as described above with respect to Fig. 3. The program 203 may be written in any suitable processor-readable language, e.g., C, C++, JAVA, Assembly, MATLAB, FORTRAN, and a number of other languages. Input data 207 may also be stored in the memory. Such input data 207 may include a user-determined set of three-dimensional settings, three-dimensional characteristics associated with a given scene, or scale factors associated with certain three-dimensional characteristics. The input data 207 may also include threshold values associated with a three-dimensional scene and pixel depth values associated with a user-controlled object. During execution of the program 203, portions of program code and/or data may be loaded into the memory or the local stores of the processor cores for parallel processing by multiple processor cores.
The apparatus 200 may also include well-known support functions 209, such as input/output (I/O) elements 211, a power supply (P/S) 213, a clock (CLK) 215, and a cache 217. The apparatus 200 may optionally include a mass storage device 219, such as a disk drive, CD-ROM drive, tape drive, or the like, to store programs and/or data. The apparatus 200 may optionally include a display unit 221 and a user interface unit 225 to facilitate interaction between the apparatus and a user. By way of example and not by way of limitation, the display unit 221 may be in the form of a 3-D ready television set that displays text, numerals, graphical symbols, or other visual objects as stereoscopic images to be perceived through a pair of 3-D viewing glasses 227, which may be coupled to the I/O elements 211. Stereoscopy refers to the enhancement of the illusion of depth in a two-dimensional image by presenting a slightly different image to each eye. The user interface 225 may include a keyboard, mouse, joystick, light pen, or other device that may be used in conjunction with a graphical user interface (GUI). The apparatus 200 may also include a network interface 223 to enable the device to communicate with other devices over a network, such as the Internet.
The components of the system 200, including the processor 201, memory 205, support functions 209, mass storage device 219, user interface 225, network interface 223, and display 221, may be operably connected to one another via one or more data buses 227. These components may be implemented in hardware, software, or firmware, or in some combination of two or more of them.
There are a number of additional ways to streamline parallel processing with the multiple processors in the apparatus. For example, it is possible to "unroll" processing loops, e.g., by replicating code on two or more processor cores and having each processor core implement the code to process a different piece of data. Such an implementation may avoid a latency associated with setting up the loop. As applied to embodiments of the present invention, multiple processors could determine scale factors for different scenes in parallel. The ability to process data in parallel saves valuable processing time, leading to a more efficient and streamlined system for scaling pixel depth values corresponding to one or more user-controlled virtual objects in a three-dimensional scene. The ability to process data in parallel likewise saves valuable processing time, leading to a more efficient and streamlined system for the dynamic adjustment of a user-determined set of scene settings.
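By way of illustration and not limitation, determining scale factors for different scenes in parallel might be sketched as follows, with one worker thread per scene; the Scene structure and the placeholder computeScaleFactor analysis are hypothetical and introduced for this sketch only:

    #include <cstddef>
    #include <thread>
    #include <vector>

    struct Scene {
        float minDepth = 0.0f, maxDepth = 1.0f;  // stand-ins for scene characteristics
    };

    // Placeholder analysis; a real implementation would examine the scene's
    // three-dimensional characteristics as described above.
    float computeScaleFactor(const Scene& s) {
        return 1.0f / (s.maxDepth - s.minDepth + 1e-6f);
    }

    // One worker thread per scene; each core runs the same code on different data.
    std::vector<float> scaleFactorsParallel(const std::vector<Scene>& scenes) {
        std::vector<float> factors(scenes.size());
        std::vector<std::thread> workers;
        for (std::size_t i = 0; i < scenes.size(); ++i)
            workers.emplace_back([&scenes, &factors, i] {
                factors[i] = computeScaleFactor(scenes[i]);
            });
        for (auto& w : workers) w.join();
        return factors;
    }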
One example, among others, of a processing system capable of implementing parallel processing on three or more processors is a cell processor. There are a number of different processor architectures that may be categorized as cell processors. By way of example, and without limitation, Fig. 5 illustrates one type of cell processor. The cell processor 300 includes a main memory 301, a single power processor element (PPE) 307, and eight synergistic processor elements (SPEs) 311. Alternatively, the cell processor may be configured with any number of SPEs. As depicted in Fig. 5, the memory 301, the PPE 307, and the SPEs 311 can communicate with each other and with an I/O device 315 over a ring-type element interconnect bus 317. The memory 301 contains input data 303 having features in common with the input data described above and a program 305 having features in common with the program described above. At least one of the SPEs 311 may include in its local store (LS) 313 program instructions and portions of the input data 303 that are to be processed in parallel, e.g., as described above. The PPE 307 may include program instructions 309 in its L1 cache. The program instructions 309, 313 may be configured to implement embodiments of the present invention, e.g., as described above with respect to Fig. 1 or Fig. 3. By way of example and not by way of limitation, the instructions 309, 313 may have features in common with the program 203 described above. The instructions 309, 313 and the data 303 may also be stored in the memory 301 for access by the SPEs 311 and the PPE 307 when needed.
By way of example and not by way of limitation, the instructions 309, 313 may include instructions for implementing the dynamic adjustment of user-determined three-dimensional scene settings as described above with respect to Fig. 1. Alternatively, the instructions 309, 313 may be configured to implement the scaling of pixel depth values of user-controlled virtual objects, e.g., as described above with respect to Fig. 3.
By way of example, the PPE 307 may be a 64-bit PowerPC Processor Unit (PPU) with associated caches. The PPE 307 may include an optional vector multimedia extension unit. Each SPE 311 includes a synergistic processor unit (SPU) and a local store (LS). In some implementations, the local store may have a capacity of, e.g., about 256 kilobytes of memory for programs and data. The SPUs are less complex computational units than the PPU, in that they typically do not perform system management functions. The SPUs may have single instruction multiple data (SIMD) capability and typically process data and initiate any required data transfers (subject to access properties set up by the PPE) in order to perform their allocated tasks. The SPUs allow the system to implement applications that require a higher computational unit density and can make effective use of the provided instruction set. A significant number of SPUs in a system, managed by the PPE, allows for cost-effective processing over a wide range of applications. By way of example, the cell processor may be characterized by an architecture known as the Cell Broadband Engine Architecture (CBEA). In a CBEA-compliant architecture, multiple PPEs may be combined into a PPE group and multiple SPEs may be combined into an SPE group. For purposes of example, the cell processor is depicted as having a single SPE group with a single SPE and a single PPE group with a single PPE. Alternatively, a cell processor may include multiple groups of power processor elements (PPE groups) and multiple groups of synergistic processor elements (SPE groups). CBEA-compliant processors are described in detail, e.g., in Cell Broadband Engine Architecture, which is available online at https://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/1AEEE1270EA277638725706000E61BA/$file/CBEA_01_pub.pdf and is incorporated herein by reference.
According to another embodiment, instructions for the dynamic adjustment of user-determined three-dimensional scene settings may be stored in a computer-readable storage medium. By way of example and not by way of limitation, Fig. 6A illustrates an example of a non-transitory computer-readable storage medium 400 in accordance with an embodiment of the present invention. The storage medium 400 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 400 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or another optical storage medium.
The storage medium 400 contains instructions 401 for the dynamic adjustment of user-determined three-dimensional scene settings. The instructions 401 for the dynamic adjustment of user-determined three-dimensional scene settings may be configured to implement the dynamic adjustment in accordance with the method described above with respect to Fig. 1. In particular, the dynamic adjustment instructions 401 may include scene characteristic determination instructions 403 for determining certain characteristics of a given scene that are relevant to optimizing the settings for three-dimensional viewing of the scene. The dynamic adjustment instructions 401 may further include scale factor determination instructions 405 configured to determine one or more scale factors, based on the characteristics of the given scene, representing certain optimizing adjustments to be made.
The dynamic adjustment instructions 401 may also include instructions 407 for adjusting the user-determined three-dimensional settings, configured to apply the one or more scale factors to the user-determined three-dimensional scene settings, such that the result is a 3-D projection of the scene that takes into account both user preferences and inherent scene characteristics. The result is a visual representation of the scene according to the user's predetermined settings, modified in accordance with certain characteristics associated with the scene, so that each user's perception of a given scene can be optimized in real time.
The dynamic adjustment instructions 401 may additionally include scene display instructions 409 configured to display the scene on a visual display in accordance with the dynamically adjusted three-dimensional scene settings obtained above.
According to another embodiment, instructions for scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene may be stored in a computer-readable storage medium. By way of example and not by way of limitation, Fig. 6B illustrates an example of a non-transitory computer-readable storage medium 410 in accordance with an embodiment of the present invention. The storage medium 410 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 410 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or another optical storage medium.
The storage medium 410 contains instructions 411 for scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene. The instructions 411 for scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene may be configured to implement pixel depth scaling in accordance with the method described above with respect to Fig. 3. In particular, the pixel depth scaling instructions 411 may include initial scaling instructions 412 that, when executed, perform an initial scaling of a two-dimensional image of the three-dimensional scene. The instructions 411 may further include minimum threshold determination instructions 413 for determining a minimum threshold value of the three-dimensional scene below which, for that particular scene, the pixel depth values of a user-controlled virtual object may not fall. Similarly, the pixel depth scaling instructions 411 may also include maximum threshold determination instructions 415 for determining a maximum threshold value of the three-dimensional scene which, for that particular scene, the pixel depth values of a user-controlled virtual object may not exceed.
The pixel depth scaling instructions 411 may also include virtual object pixel depth comparison instructions 417 for comparing the pixel depths associated with the user-controlled virtual object with the threshold values determined above. By comparing the pixel depth values of the user-controlled virtual object with the pixel depth values of the thresholds, the position of the user-controlled virtual object can be continuously tracked to ensure that it does not penetrate other virtual elements in the three-dimensional scene.
The pixel depth scaling instructions 411 may further include instructions 419 for setting virtual object pixel depths to a low value, which restrict any portion of the virtual object's depth from falling below the minimum threshold value. The low value assigned to pixel depth values of the virtual object that are too low may be the minimum threshold value itself, or a scaled value of the low pixel depth value, as discussed above.
The pixel depth scaling instructions 411 may additionally include instructions 421 for setting virtual object pixel depths to a high value, which restrict any portion of the virtual object's depth from exceeding the maximum threshold value. The high value assigned to pixel depth values of the virtual object that are too high may be the maximum threshold value itself, or a scaled value of the high pixel depth value, as discussed above.
The pixel depth scaling instructions may further include re-projection instructions 423 that use the resulting set of pixel depth values of the user-controlled virtual object to re-project the two-dimensional image so as to produce two or more views of the three-dimensional scene. The pixel depth scaling instructions 411 may additionally include scene display instructions 425 configured to display the scene on a visual display using the resulting set of virtual object pixel depth values.
As mentioned above, embodiments of the present invention may utilize three-dimensional viewing glasses. Fig. 7 shows an example of three-dimensional viewing glasses 501 according to an aspect of the present invention. The glasses may include a frame 505 holding a left LCD lens 510 and a right LCD lens 512 for stereoscopic viewing. As mentioned above, each of the lenses 510 and 512 can be rapidly and selectively darkened in order to prevent the wearer from seeing through the lens. Left and right earphones 530 and 532 are also preferably connected to the frame 505. An antenna 520 for sending and receiving wireless information may also be included in or on the frame 505. The glasses may be tracked by any means to determine whether they are presently looking at the screen. For example, the front of the glasses may also include one or more photodetectors 540 for detecting whether the glasses are oriented toward the monitor.
Various known techniques may be used to provide alternate displays of images from the video feeds. The visual display 111 of Fig. 1 may be configured to operate in a progressive scan mode for each video feed sharing the screen. However, embodiments of the present invention may also be configured to work with interlaced video, as described herein. For standard television monitors, such as those using the interlaced NTSC or PAL formats, the images from the two screen feeds may be interlaced, and rows of an image from one video feed may be interlaced with rows of an image from the other video feed. For example, odd rows taken from an image from the first video feed are displayed, and then even rows taken from an image from the second video feed are displayed.
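By way of illustration and not limitation, the row interlacing described above might be carried out in software as follows, assuming two equally sized row-major frames; interlaceFeeds is a hypothetical name introduced for this sketch only:

    #include <cstdint>
    #include <vector>

    // Interlace two equally sized row-major frames: even rows come from the
    // first feed, odd rows from the second.
    std::vector<uint32_t> interlaceFeeds(const std::vector<uint32_t>& feedA,
                                         const std::vector<uint32_t>& feedB,
                                         int width, int height) {
        std::vector<uint32_t> out(static_cast<size_t>(width) * height);
        for (int y = 0; y < height; ++y) {
            const auto& src = (y % 2 == 0) ? feedA : feedB;
            for (int x = 0; x < width; ++x)
                out[static_cast<size_t>(y) * width + x] =
                    src[static_cast<size_t>(y) * width + x];
        }
        return out;
    }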
Fig. 8 shows a system-level diagram of glasses that may be used in conjunction with embodiments of the present invention. The glasses may include a processor 602 that executes instructions from a program 608 stored in a memory 604. The memory 604 may also store data to be provided to, or output from, the processor 602 and any other storage/retrieval elements of the glasses. The processor 602, the memory 604, and the other elements of the glasses may communicate with one another over a bus 606. Such other elements may include an LCD driver 610 that provides driving signals to selectively shutter the left and right LCD lenses 612 and 614. The LCD driver may shutter the left and right LCD lenses individually or together, at different times and for different durations, or at the same time and for the same duration.
The frequency at which the LCD lenses are shuttered may be stored in advance in the memory of the glasses (e.g., a given frequency based on NTSC). Alternatively, the frequency may be selected by means of a user input 616 (e.g., a knob or buttons for tuning or keying in the desired frequency). In addition, the desired frequency and an initial shuttering start time, or other information indicating time periods during which the LCD lenses should or should not be shuttered (whether or not such time periods are set in terms of a frequency and duration), may be transmitted to the glasses via a wireless transmitter/receiver 601 or any other input element. The wireless transmitter/receiver 601 may include any wireless transmitter, including a Bluetooth transmitter/receiver.
An audio amplifier 620 may also receive information from the wireless transmitter/receiver 601, namely the left and right channels of audio to be provided to a left speaker 622 or a right speaker 624. The glasses may also include a microphone 630. The microphone 630 may be used in conjunction with games that provide voice communication; voice signals may be transmitted to a game console or another device via the wireless transmitter/receiver 601.
The glasses may also include one or more photodetectors 634. The photodetectors may be used to determine whether the glasses are oriented toward the monitor. For example, the photodetectors may detect the intensity of light incident upon them and transmit that information to the processor 602. If the processor detects a substantial drop in light intensity that may be associated with the user looking away from the monitor, the processor may terminate the shuttering of the lenses. Other methods of determining whether the glasses (and therefore the user) are oriented toward the monitor may also be used. For example, one or more cameras may be used in place of the photodetectors, with the processor 602 examining the acquired images to determine whether the glasses are oriented toward the monitor. A few possible embodiments using such a camera include checking contrast levels to detect whether the camera is pointed at the monitor, or attempting to detect a brightness test pattern on the monitor. Devices providing the feeds to the monitor may indicate the presence of such a test pattern by transmitting information to the processor 602 via the wireless transmitter/receiver 601.
It should be noted that certain aspects of embodiments of the present invention may be implemented by the glasses, e.g., by software or firmware implemented on the processor 602. For example, content-driven, user-scaled/adjusted color contrast or correction settings could be implemented in the glasses, with an additional metadata stream sent to the glasses. Moreover, as wireless and LCD technology improves, the processor 113 could broadcast left-eye and right-eye image data directly to the glasses 119, eliminating the need for a separate display 111. Alternatively, images and associated pixel depth values could be fed to the glasses from the display 111 or the processor 113. Either approach means that the re-projection process would actually take place on the glasses.
Although implementations in which stereoscopic 3D images are viewed using passive or active 3D viewing glasses have been described above, embodiments of the present invention are not limited to such implementations. In particular, embodiments of the present invention can be applied to stereoscopic 3D video technologies that do not rely on head tracking or on passive or active 3D viewing glasses. Examples of such "glasses-free" stereoscopic 3D video technologies are sometimes referred to as autostereoscopic technologies or autostereoscopy. Examples of such technologies include, but are not limited to, technologies based on the use of lenticular lenses. A lenticular lens is an array of magnifying lenses designed so that, when viewed from slightly different angles, different images are magnified. The different images can be selected so as to provide a three-dimensional viewing effect when the lenticular screen is viewed at different angles. The number of images generated increases in proportion to the number of viewpoints of the screen.
More particularly, in a lenticular lens video system, re-projection images of a scene from slightly different viewing angles can be generated from an original 2D image and depth information for each pixel in the image. Using re-projection techniques, progressively different views of the scene from different viewing angles can be generated from the original 2D image and the depth information. Images representing the different views can be divided into strips and displayed in an interlaced pattern on an autostereoscopic display having a display screen located between a lenticular lens array and a viewing location. The lenses that make up the lenticular lens can be cylindrical magnifying lenses that are aligned with the strips and are generally twice as wide as the strips. Depending on the angle at which the screen is viewed, a viewer perceives different views of the scene. The different views can be selected to provide the illusion of depth in the displayed scene.
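By way of illustration and not limitation, the strip interleaving described above might be sketched as follows, assuming strips one whole pixel column wide (real lenticular panels typically map individual subpixels rather than whole pixels); interleaveViews is a hypothetical name introduced for this sketch only:

    #include <cstdint>
    #include <vector>

    // Interleave N same-sized row-major views into vertical strips: output
    // column x is taken from view (x mod N), one strip per lenticule viewpoint.
    std::vector<uint32_t> interleaveViews(const std::vector<std::vector<uint32_t>>& views,
                                          int width, int height) {
        const int n = static_cast<int>(views.size());
        std::vector<uint32_t> out(static_cast<size_t>(width) * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                out[static_cast<size_t>(y) * width + x] =
                    views[x % n][static_cast<size_t>(y) * width + x];
        return out;
    }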
While the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein. Instead, the scope of the invention should be determined with reference to the appended claims, along with their full scope of equivalents.
All features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Any feature, whether preferred or not, may be combined with any other feature, whether preferred or not. In the claims that follow, the indefinite article "a" or "an" refers to a quantity of one or more of the items following the article, except where expressly stated otherwise. Any element in a claim that does not explicitly state "means for" performing a specified function is not to be interpreted as a "means" or "step" clause as specified in 35 USC §112, sixth paragraph. In particular, the use of "step of" in the claims herein is not intended to invoke the provisions of 35 USC §112, sixth paragraph.
The reader's attention is directed to all papers and documents which are filed concurrently with this specification and which are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

Claims (22)

1. A method for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the method comprising:
a) performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) determining a minimum threshold value for the three-dimensional scene;
c) determining a maximum threshold value for the three-dimensional scene;
wherein the maximum threshold value or the minimum threshold value for the three-dimensional scene is determined from a target derived from pixel depth data;
d) comparing each pixel depth value of the user-controlled virtual object with the minimum threshold value and the maximum threshold value;
e) setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value, wherein for a virtual object that does not require enhanced depth perception the low and high pixel depth values are set to the minimum threshold value and the maximum threshold value, respectively;
g) performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object so as to generate two or more views of the three-dimensional scene; and
h) displaying the two or more views on a three-dimensional display.
2. The method of claim 1, wherein the low value in e) corresponding to a pixel depth below the minimum threshold value is the minimum threshold value.
3. The method of claim 1, wherein the high value in f) corresponding to a pixel depth exceeding the maximum threshold value is the maximum threshold value.
4. The method of claim 1, wherein the low value in e) corresponding to a pixel depth below the minimum threshold value is determined by multiplying the pixel depth by an inverse proportion and adding a minimum offset to the product.
5. The method of claim 1, wherein the high value in f) corresponding to a pixel depth exceeding the maximum threshold value is determined by multiplying the pixel depth by an inverse proportion and subtracting the product from a maximum offset.
6. The method of claim 1, wherein the three-dimensional display is a stereoscopic display and the two or more views include left-eye and right-eye views of the three-dimensional scene.
7. The method of claim 1, wherein the three-dimensional display is an autostereoscopic display and the two or more views include two or more interlaced views of the three-dimensional scene from slightly different viewing angles.
8. The method of claim 1, wherein the initial depth scaling is performed during rasterization of the two-dimensional image.
9. The method of claim 8, wherein one or more of b), c), d), e), and f) are performed before or during g).
10. An apparatus for scaling one or more pixel depth values, the apparatus comprising:
a processor;
a memory; and
computer-coded instructions embodied in the memory and executable by the processor, wherein the computer-coded instructions are configured to implement a method for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the method comprising:
a) performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) determining a minimum threshold value for the three-dimensional scene;
c) determining a maximum threshold value for the three-dimensional scene;
wherein the maximum threshold value or the minimum threshold value for the three-dimensional scene is determined from a target derived from pixel depth data;
d) comparing each pixel depth value of the user-controlled virtual object with the minimum threshold value and the maximum threshold value;
e) setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value, wherein for a virtual object that does not require enhanced depth perception the low and high pixel depth values are set to the minimum threshold value and the maximum threshold value, respectively;
g) performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object so as to generate two or more views of the three-dimensional scene; and
h) displaying the two or more views on a three-dimensional display.
11. The apparatus of claim 10, further comprising a three-dimensional visual display configured to display a given scene in accordance with the scaled pixel depth values corresponding to the one or more virtual objects.
12. The apparatus of claim 11, wherein the three-dimensional display is a stereoscopic display and the two or more views include left-eye and right-eye views of the three-dimensional scene.
13. The apparatus of claim 11, wherein the three-dimensional display is an autostereoscopic display and the two or more views include two or more interlaced views of the three-dimensional scene from slightly different viewing angles.
14. The apparatus of claim 10, wherein the initial depth scaling is performed during rasterization of the two-dimensional image.
15. The apparatus of claim 14, wherein one or more of b), c), d), e), and f) are performed before or during g).
16. A computer program product, comprising:
a non-transitory computer-readable storage medium having computer-readable program code embodied in the medium for scaling one or more pixel depth values of a user-controlled virtual object in a three-dimensional scene, the computer program product having computer-executable instructions embodied therein that, when executed, implement:
a) performing an initial depth scaling of a two-dimensional image of the three-dimensional scene;
b) determining a minimum threshold value for the three-dimensional scene;
c) determining a maximum threshold value for the three-dimensional scene;
wherein the maximum threshold value or the minimum threshold value for the three-dimensional scene is determined from a target derived from pixel depth data;
d) comparing each pixel depth value of the user-controlled virtual object with the minimum threshold value and the maximum threshold value;
e) setting each pixel depth value of the user-controlled virtual object that falls below the minimum threshold value to a corresponding low value;
f) setting each pixel depth value of the user-controlled virtual object that exceeds the maximum threshold value to a corresponding high value, wherein for a virtual object that does not require enhanced depth perception the low and high pixel depth values are set to the minimum threshold value and the maximum threshold value, respectively;
g) performing a re-projection of the two-dimensional image using the resulting set of pixel depth values of the user-controlled virtual object so as to generate two or more views of the three-dimensional scene; and
h) displaying the two or more views on a three-dimensional display.
17. The computer program product of claim 16, wherein the three-dimensional display is a stereoscopic display and the two or more views include left-eye and right-eye views of the three-dimensional scene.
18. The computer program product of claim 16, wherein the three-dimensional display is an autostereoscopic display and the two or more views include two or more interlaced views of the three-dimensional scene from slightly different viewing angles.
19. The computer program product of claim 16, wherein the initial depth scaling is performed during rasterization of the two-dimensional image.
20. The computer program product of claim 19, wherein one or more of b), c), d), e), and f) are performed before or during g).
21. The method of claim 1, wherein the user-controlled virtual object in the three-dimensional scene is located within a simulated environment of a video game.
22. The method of claim 1, wherein the low pixel depth values that are set are less than an average pixel depth value, and the high pixel depth values that are set are greater than the average pixel depth value.
CN201610191451.7A 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene Active CN105894567B (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US12/986,814 US9041774B2 (en) 2011-01-07 2011-01-07 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
US12/986,872 US9183670B2 (en) 2011-01-07 2011-01-07 Multi-sample resolving of re-projection of two-dimensional image
US12/986,827 2011-01-07
US12/986,854 US8619094B2 (en) 2011-01-07 2011-01-07 Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image
US12/986,827 US8514225B2 (en) 2011-01-07 2011-01-07 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US12/986,814 2011-01-07
US12/986,872 2011-01-07
US12/986,854 2011-01-07
CN201180064484.0A CN103329165B (en) 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201180064484.0A Division CN103329165B (en) 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls

Publications (2)

Publication Number Publication Date
CN105894567A true CN105894567A (en) 2016-08-24
CN105894567B CN105894567B (en) 2020-06-30

Family

ID=46457655

Family Applications (7)

Application Number Title Priority Date Filing Date
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 The morphology anti aliasing (MLAA) of the reprojection of two dimensional image
CN201610191451.7A Active CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 The multisample of the reprojection of two dimensional image is resolved
CN201180064484.0A Active CN103329165B (en) 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 The multisample parsing of the reprojection of two dimensional image
CN201610191875.3A Active CN105959664B (en) 2011-01-07 2011-12-02 The dynamic adjustment of predetermined three-dimensional video setting based on scene content

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 The morphology anti aliasing (MLAA) of the reprojection of two dimensional image

Family Applications After (5)

Application Number Title Priority Date Filing Date
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 The multisample of the reprojection of two dimensional image is resolved
CN201180064484.0A Active CN103329165B (en) 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 The multisample parsing of the reprojection of two dimensional image
CN201610191875.3A Active CN105959664B (en) 2011-01-07 2011-12-02 The dynamic adjustment of predetermined three-dimensional video setting based on scene content

Country Status (5)

Country Link
KR (2) KR101741468B1 (en)
CN (7) CN103348360B (en)
BR (2) BR112013017321A2 (en)
RU (2) RU2562759C2 (en)
WO (4) WO2012094076A1 (en)

Cited By (4)

Publication number Priority date Publication date Assignee Title
WO2019033777A1 (en) * 2017-08-18 2019-02-21 深圳市道通智能航空技术有限公司 Method and device for improving depth information of 3d image, and unmanned aerial vehicle
CN110719532A (en) * 2018-02-23 2020-01-21 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment
CN111275611A (en) * 2020-01-13 2020-06-12 深圳市华橙数字科技有限公司 Method, device, terminal and storage medium for determining depth of object in three-dimensional scene
CN112684883A (en) * 2020-12-18 2021-04-20 上海影创信息科技有限公司 Method and system for multi-user object distinguishing processing

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
EP3301645B1 (en) * 2013-10-02 2019-05-15 Given Imaging Ltd. System and method for size estimation of in-vivo objects
CN105323573B (en) 2014-07-16 2019-02-05 北京三星通信技术研究有限公司 3-D image display device and method
WO2016010246A1 (en) * 2014-07-16 2016-01-21 삼성전자주식회사 3d image display device and method
EP3232406B1 (en) * 2016-04-15 2020-03-11 Ecole Nationale de l'Aviation Civile Selective display in a computer generated environment
CN107329690B (en) * 2017-06-29 2020-04-17 网易(杭州)网络有限公司 Virtual object control method and device, storage medium and electronic equipment
CN109992175B (en) * 2019-04-03 2021-10-26 腾讯科技(深圳)有限公司 Object display method, device and storage medium for simulating blind feeling
RU2749749C1 (en) * 2020-04-15 2021-06-16 Самсунг Электроникс Ко., Лтд. Method of synthesis of a two-dimensional image of a scene viewed from a required view point and electronic computing apparatus for implementation thereof
US11882295B2 (en) 2022-04-15 2024-01-23 Meta Platforms Technologies, Llc Low-power high throughput hardware decoder with random block access
US20230334736A1 (en) * 2022-04-15 2023-10-19 Meta Platforms Technologies, Llc Rasterization Optimization for Analytic Anti-Aliasing

Citations (5)

Publication number Priority date Publication date Assignee Title
US20060088206A1 (en) * 2004-10-21 2006-04-27 Kazunari Era Image processing apparatus, image pickup device and program therefor
CN101383046A (en) * 2008-10-17 2009-03-11 北京大学 Three-dimensional reconstruction method on basis of image
WO2010049868A1 (en) * 2008-10-28 2010-05-06 Koninklijke Philips Electronics N.V. A three dimensional display system
US20100215251A1 (en) * 2007-10-11 2010-08-26 Koninklijke Philips Electronics N.V. Method and device for processing a depth-map
CN101937079A (en) * 2010-06-29 2011-01-05 中国农业大学 Remote sensing image variation detection method based on region similarity

Family Cites Families (32)

Publication number Priority date Publication date Assignee Title
FR2724033B1 (en) * 1994-08-30 1997-01-03 Thomson Broadband Systems SYNTHESIS IMAGE GENERATION METHOD
US5790086A (en) * 1995-01-04 1998-08-04 Visualabs Inc. 3-D imaging system
GB9511519D0 (en) * 1995-06-07 1995-08-02 Richmond Holographic Res Autostereoscopic display with enlargeable image volume
US8369607B2 (en) * 2002-03-27 2013-02-05 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
EP2357838B1 (en) * 2002-03-27 2016-03-16 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
KR20050010846A (en) * 2002-06-03 2005-01-28 코닌클리케 필립스 일렉트로닉스 엔.브이. Adaptive scaling of video signals
EP1437898A1 (en) * 2002-12-30 2004-07-14 Koninklijke Philips Electronics N.V. Video filtering for stereo images
US7663689B2 (en) * 2004-01-16 2010-02-16 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US8094927B2 (en) * 2004-02-27 2012-01-10 Eastman Kodak Company Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
US20050248560A1 (en) * 2004-05-10 2005-11-10 Microsoft Corporation Interactive exploded views from 2D images
CA2599483A1 (en) * 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
JP4555722B2 (en) * 2005-04-13 2010-10-06 株式会社 日立ディスプレイズ 3D image generator
US20070146360A1 (en) * 2005-12-18 2007-06-28 Powerproduction Software System And Method For Generating 3D Scenes
GB0601287D0 (en) * 2006-01-23 2006-03-01 Ocuity Ltd Printed image display apparatus
US8044994B2 (en) * 2006-04-04 2011-10-25 Mitsubishi Electric Research Laboratories, Inc. Method and system for decoding and displaying 3D light fields
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
CN100510773C (en) * 2006-04-14 2009-07-08 武汉大学 Single satellite remote sensing image small target super resolution ratio reconstruction method
US20080085040A1 (en) * 2006-10-05 2008-04-10 General Electric Company System and method for iterative reconstruction using mask images
US20080174659A1 (en) * 2007-01-18 2008-07-24 Mcdowall Ian Wide field of view display device and method
GB0716776D0 (en) * 2007-08-29 2007-10-10 Setred As Rendering improvement for 3D display
US8493437B2 (en) * 2007-12-11 2013-07-23 Raytheon Bbn Technologies Corp. Methods and systems for marking stereo pairs of images
EP2235955A1 (en) * 2008-01-29 2010-10-06 Thomson Licensing Method and system for converting 2d image data to stereoscopic image data
JP4695664B2 (en) * 2008-03-26 2011-06-08 富士フイルム株式会社 3D image processing apparatus, method, and program
US9019381B2 (en) * 2008-05-09 2015-04-28 Intuvision Inc. Video tracking systems and methods employing cognitive vision
US8106924B2 (en) 2008-07-31 2012-01-31 Stmicroelectronics S.R.L. Method and system for video rendering, computer program product therefor
US8743114B2 (en) * 2008-09-22 2014-06-03 Intel Corporation Methods and systems to determine conservative view cell occlusion
US8335425B2 (en) * 2008-11-18 2012-12-18 Panasonic Corporation Playback apparatus, playback method, and program for performing stereoscopic playback
CN101783966A (en) * 2009-01-21 2010-07-21 中国科学院自动化研究所 Real three-dimensional display system and display method
RU2421933C2 (en) * 2009-03-24 2011-06-20 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." System and method to generate and reproduce 3d video image
US8289346B2 (en) 2009-05-06 2012-10-16 Christie Digital Systems Usa, Inc. DLP edge blending artefact reduction
US9269184B2 (en) * 2009-05-21 2016-02-23 Sony Computer Entertainment America Llc Method and apparatus for rendering image based projected shadows with multiple depth aware blurs
US8933925B2 (en) * 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
US20060088206A1 (en) * 2004-10-21 2006-04-27 Kazunari Era Image processing apparatus, image pickup device and program therefor
US20100215251A1 (en) * 2007-10-11 2010-08-26 Koninklijke Philips Electronics N.V. Method and device for processing a depth-map
CN101383046A (en) * 2008-10-17 2009-03-11 北京大学 Three-dimensional reconstruction method on basis of image
WO2010049868A1 (en) * 2008-10-28 2010-05-06 Koninklijke Philips Electronics N.V. A three dimensional display system
CN101937079A (en) * 2010-06-29 2011-01-05 中国农业大学 Remote sensing image variation detection method based on region similarity

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2019033777A1 (en) * 2017-08-18 2019-02-21 深圳市道通智能航空技术有限公司 Method and device for improving depth information of 3d image, and unmanned aerial vehicle
US11030762B2 (en) 2017-08-18 2021-06-08 Autel Robotics Co., Ltd. Method and apparatus for improving 3D image depth information and unmanned aerial vehicle
CN110719532A (en) * 2018-02-23 2020-01-21 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment
CN110719532B (en) * 2018-02-23 2023-10-31 索尼互动娱乐欧洲有限公司 Apparatus and method for mapping virtual environment
CN111275611A (en) * 2020-01-13 2020-06-12 深圳市华橙数字科技有限公司 Method, device, terminal and storage medium for determining depth of object in three-dimensional scene
CN111275611B (en) * 2020-01-13 2024-02-06 深圳市华橙数字科技有限公司 Method, device, terminal and storage medium for determining object depth in three-dimensional scene
CN112684883A (en) * 2020-12-18 2021-04-20 上海影创信息科技有限公司 Method and system for multi-user object distinguishing processing

Also Published As

Publication number Publication date
KR20140004115A (en) 2014-01-10
CN103329165B (en) 2016-08-24
CN103348360B (en) 2017-06-20
CN103283241A (en) 2013-09-04
RU2013129687A (en) 2015-02-20
CN105898273A (en) 2016-08-24
WO2012094074A2 (en) 2012-07-12
WO2012094077A1 (en) 2012-07-12
RU2013136687A (en) 2015-02-20
CN103947198B (en) 2017-02-15
CN103329165A (en) 2013-09-25
WO2012094076A9 (en) 2013-07-25
RU2562759C2 (en) 2015-09-10
CN103283241B (en) 2016-03-16
KR101741468B1 (en) 2017-05-30
CN105894567B (en) 2020-06-30
CN105959664A (en) 2016-09-21
BR112013017321A2 (en) 2019-09-24
KR20130132922A (en) 2013-12-05
CN105959664B (en) 2018-10-30
WO2012094074A3 (en) 2014-04-10
BR112013016887B1 (en) 2021-12-14
KR101851180B1 (en) 2018-04-24
CN103947198A (en) 2014-07-23
RU2573737C2 (en) 2016-01-27
CN103348360A (en) 2013-10-09
WO2012094076A1 (en) 2012-07-12
WO2012094075A1 (en) 2012-07-12
CN105898273B (en) 2018-04-10
BR112013016887A2 (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN103329165B (en) The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls
US9338427B2 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US8514225B2 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
CN106464854B (en) Image encodes and display
KR100812905B1 (en) 3-dimensional image processing method and device
KR101095392B1 (en) System and method for rendering 3-D images on a 3-D image display screen
JP2005295004A (en) Stereoscopic image processing method and apparatus thereof
JP2004221700A (en) Stereoscopic image processing method and apparatus
CN105812768B (en) Playback method and system of a kind of 3D videos in VR equipment
JP2004007395A (en) Stereoscopic image processing method and device
TW201903565A (en) Method, device and non-volatile computer readable storage medium for displaying a bullet
JP2004007396A (en) Stereoscopic image processing method and device
US8947512B1 (en) User wearable viewing devices
WO2018010677A1 (en) Information processing method, wearable electric device, processing apparatus, and system
US20170104982A1 (en) Presentation of a virtual reality scene from a series of images
CN107948631A (en) It is a kind of based on cluster and the bore hole 3D systems that render
JP2004221699A (en) Stereoscopic image processing method and apparatus
Moreau Visual immersion issues in Virtual Reality: a survey
JP2004220127A (en) Stereoscopic image processing method and device
Bickerstaff Case study: the introduction of stereoscopic games on the Sony PlayStation 3
Miyashita et al. Display-size dependent effects of 3D viewing on subjective impressions
Miyashita et al. Perceptual Assessment of Image and Depth Quality of Dynamically Depth-compressed Scene for Automultiscopic 3D Display
US9609313B2 (en) Enhanced 3D display method and system
Kim et al. Adaptive interpupillary distance adjustment for stereoscopic 3d visualization
JP2024148528A (en) Image processing device, image processing method, and program

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant