
CN105959664B - Dynamic adjustment of predetermined three-dimensional video settings based on scene content - Google Patents


Info

Publication number
CN105959664B
CN105959664B (Application CN201610191875.3A)
Authority
CN
China
Prior art keywords
dimensional
scene
depth
video
predetermined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610191875.3A
Other languages
Chinese (zh)
Other versions
CN105959664A (en)
Inventor
B.M. Genova
M. Gutmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment America LLC
Original Assignee
Sony Computer Entertainment America LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/986,814 external-priority patent/US9041774B2/en
Priority claimed from US12/986,872 external-priority patent/US9183670B2/en
Priority claimed from US12/986,854 external-priority patent/US8619094B2/en
Priority claimed from US12/986,827 external-priority patent/US8514225B2/en
Application filed by Sony Computer Entertainment America LLC
Publication of CN105959664A
Application granted
Publication of CN105959664B
Legal status: Active
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • G06T15/205Image-based rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/128Adjusting depth or disparity
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/172Processing image signals image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178Metadata, e.g. disparity information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N2013/40Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene
    • H04N2013/405Privacy aspects, i.e. devices showing different images to different viewers, the images not being viewpoints of the same scene the images being stereoscopic or three dimensional

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Computing Systems (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Predetermined three-dimensional video parameter settings can be dynamically adjusted based on scene content. One or more three-dimensional characteristics associated with a given scene can be determined, and one or more scale factors can be determined from those characteristics. The predetermined three-dimensional video parameter settings can be adjusted by applying the scale factors to them, and the scene can be presented on a three-dimensional display using the resulting adjusted set of predetermined three-dimensional video parameters.

Description

Dynamic adjustment of predetermined three-dimensional video settings based on scene content
This is a divisional application of Chinese application No. 201180063720.7, filed December 2, 2011, entitled "Dynamic adjustment of predetermined three-dimensional video settings based on scene content".
Cross reference to related applications
This application is related to commonly assigned, co-pending application No. 12/986,827 (attorney docket SCEA10053US00), entitled "SCALING PIXEL DEPTH VALUES OF USER-CONTROLLED VIRTUAL OBJECT IN THREE-DIMENSIONAL SCENE", filed January 7, 2011.
This application is related to commonly assigned, co-pending application No. 12/986,854 (attorney docket SCEA10054US00), entitled "MORPHOLOGICAL ANTI-ALIASING (MLAA) OF A RE-PROJECTION OF A TWO-DIMENSIONAL IMAGE", filed January 7, 2011.
This application is related to commonly assigned, co-pending application No. 12/986,872 (attorney docket SCEA10055US00), entitled "MULTI-SAMPLE RESOLVING OF RE-PROJECTION OF TWO-DIMENSIONAL IMAGE", filed January 7, 2011.
Technical field
Embodiments of the present invention relate to dynamic adjustment of user-determined three-dimensional scene settings.
Background
Over the past few years, the ability to perceive two-dimensional images in three dimensions, by way of a number of different technologies, has become quite popular. Providing an aspect of depth to two-dimensional images potentially creates a greater sense of realism for any depicted scene. This introduction of three-dimensional visual presentation has greatly enhanced viewer experiences, especially in the realm of video games.
Many techniques exist for the three-dimensional rendering of a given image. Recently, a technique for projecting one or more two-dimensional images into three dimensions has been proposed, referred to as depth-image-based rendering (DIBR). In contrast to earlier proposals, which often relied on the basic concept of "stereoscopic" video—that is, the capture, transmission, and display of two separate video streams, one for the left eye and one for the right—this new idea is based on a more flexible joint transmission of monoscopic video (i.e., a single video stream) and associated per-pixel depth information. From this data representation, one or more "virtual" views of the 3-D scene can then be generated in real time at the receiving side by means of so-called DIBR techniques. This new approach to 3-D image rendering brings several advantages over previous approaches.
First, this approach allows 3-D projections or displays to be adjusted to suit a wide range of different stereoscopic display and projection systems. Because the required left-eye and right-eye views are generated only at the 3D-TV receiver, their presentation in terms of "perceived depth" can be adapted to the particular viewing conditions. This provides viewers with a customized 3-D experience—one that can be comfortably viewed on any kind of stereoscopic or autostereoscopic 3D-TV display.
DIBR also allows 2D-to-3D conversion based on "structure from motion" approaches, which can be used to generate the required depth information for previously recorded monoscopic video material. Thus, for a wide range of programming, 3D video can be generated from 2D video, which may play an important role in the success of 3D-TV.
Head-motion parallax (i.e., the apparent displacement or difference in the perceived position of an object caused by a change in viewing angle) can be supported under DIBR, in order to provide an additional stereoscopic depth cue. This eliminates the well-known "shear distortion" (i.e., a stereoscopic image appearing to follow the observer as the observer changes viewing position) that is usually experienced with stereoscopic or autostereoscopic 3D-TV systems.
In addition, photometric asymmetries between the left-eye and right-eye views (e.g., in brightness, contrast, or color), which can destroy the stereoscopic sensation, are eliminated from the start, because both views are effectively synthesized from the same original image. Furthermore, the approach enables automatic object segmentation based on depth keying and allows easy integration of synthetic 3D objects into "real-world" sequences.
Finally, this approach allows viewers to adjust the reproduction of depth to suit their personal preferences—much as every conventional 2D-TV allows viewers to control color reproduction by adjusting (de)saturation. This is a very important feature, because depth appreciation differs across age groups. For example, recent research by Norman et al. has confirmed that the elderly are less sensitive to perceiving stereoscopic depth than the young.
While each viewer may have a unique set of preferred depth settings, each scene presented to that viewer may also have a unique preferred set of depth settings. The content of each scene dictates which range of depth settings should be used for the best viewing of that scene. A single re-projection parameter set may not be ideal for every scene. For example, depending on how much distant background is in the field of view, different parameters may serve better. Because the content of a scene changes whenever the scene changes, existing 3D systems—which do not take the content of the scene into account when determining re-projection parameters—fall short.
It is within this context that embodiments of the present invention arise.
Description of the drawings
Fig. 1A is a flow diagram/schematic illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
Fig. 1B is a schematic diagram illustrating the basic concept of three-dimensional re-projection.
Fig. 1C is a simplified diagram illustrating an example of virtual camera adjustment of 3D video settings according to an embodiment of the present invention.
Fig. 1D is a simplified diagram illustrating an example of mechanical camera adjustment of 3D video settings according to an embodiment of the present invention.
Figs. 2A-2B are schematic diagrams illustrating the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene.
Fig. 2C is a schematic diagram illustrating scaling of pixel depth values to solve the problem of a user-controlled virtual object penetrating elements of the virtual world in a three-dimensional scene.
Fig. 3 is a schematic diagram illustrating a method for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
Fig. 4 is a block diagram of an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
Fig. 5 is a block diagram of an example of a Cell-processor implementation of an apparatus for implementing dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
Fig. 6A illustrates an example of a non-transitory computer-readable storage medium containing instructions for implementing dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention.
Fig. 6B illustrates an example of a non-transitory computer-readable storage medium containing instructions for implementing scaling of pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the present invention.
Fig. 7 is the isometric view of three-dimensional viewing glasses according to an aspect of the present invention.
Fig. 8 is the system level block diagram of three-dimensional viewing glasses according to an aspect of the present invention.
Detailed description
For any viewer of a projected 3-D image, several cues govern the perception of depth. Each viewer's ability to perceive depth in a three-dimensional projection is unique to his or her own eyes. Certain cues can provide the viewer with particular depth characteristics associated with a given scene. By way of example, and not by way of limitation, these ocular cues may include stereopsis, convergence, and shadow stereopsis.
Stereopsis refers to the viewer's ability to judge depth by processing information obtained from the difference between the projections of an object onto each retina. By using two images of the same scene obtained from slightly different angles, it is possible to triangulate the distance to an object with a high degree of accuracy. If an object is far away, the disparity between the images falling on the two retinas will be small. If the object is close or nearby, the disparity will be large. By adjusting the angular difference between the different projections of the same scene, the viewer can optimize his perception of depth.
Convergence is another ocular depth cue. When two eyes fixate on the same object, they converge. This convergence stretches the extraocular muscles, and it is the kinesthetic sense of these extraocular muscles that aids the perception of depth. When the eyes fixate on a distant object, the angle of convergence is small; when they fixate on a nearer object, the angle of convergence is larger. By adjusting the convergence of the eyes for a given scene, the viewer can optimize his perception of depth.
Shadow stereopsis refers to the stereoscopic fusion of shadows to impart depth to a given scene. Enhancing or reducing the intensity of a scene's shadows can further optimize the viewer's perception of depth.
By adjusting scene settings associated with these ocular cues, a viewer can optimize his overall three-dimensional perception of depth. Although a given user may select a general set of three-dimensional scene settings for viewing all scenes, each scene is unique, and so, depending on the content of that particular scene, certain visual cues/user settings may need to be adjusted dynamically. For example, in the context of a virtual world, the specific object the viewer fixates on in a given scene may be important. However, the viewer's predetermined three-dimensional scene settings may not be optimal for viewing that specific object. Here, the viewer's settings would be dynamically adjusted according to the scene, so that the specific object is perceived under a better set of three-dimensional scene settings.
Fig. 1A is a flow chart illustrating a method for dynamic adjustment of user-determined three-dimensional scene settings according to an embodiment of the present invention. Initially, a viewer 115 communicates with a processor 113 configured to stream three-dimensional video data to a visual display 111. The processor 113 may take the form of a video game console, a computer apparatus, or any other device capable of processing three-dimensional video data. By way of example, and not by way of limitation, the visual display 111 may take the form of a 3-D-ready television set, on which text, numerals, graphical symbols, or other visual objects are displayed as stereoscopic images to be perceived through a pair of 3-D viewing glasses 119. Examples of 3-D viewing glasses are depicted in Figs. 7-8 and described in more detail below. The 3-D viewing glasses 119 may take the form of active liquid-crystal shutter glasses, active "red-eye" shutter glasses, passive linearly polarized glasses, passive circularly polarized glasses, interference-filter glasses, complementary-color anaglyph glasses, or any other pair of 3-D viewing glasses configured for viewing the images projected in three dimensions by the visual display 111. The viewer 115 may communicate with the processor 113 through a user interface 117, which may take the form of a joystick, controller, remote control, keyboard, or any other device that can be used in conjunction with a graphical user interface (GUI).
The viewer 115 may initially select a general set of three-dimensional video settings to be applied to every three-dimensional scene presented to the viewer 115. By way of example, and not by way of limitation, the viewer may select an outer boundary of depth within which the three-dimensional scene is to be projected. As additional examples, the user may set predetermined values for stereopsis, convergence, or shadow stereopsis. Moreover, if the user does not set predetermined values for these parameters, the predetermined values may fall back to default settings.
Examples of other 3D video parameter settings that can be set by the user and dynamically adjusted based on scene content include, but are not limited to, 3D depth effect and 3D range. Depth controls how much 3D effect is presented to the user. The outer boundary of depth roughly corresponds to range and parallax (our depth and effect sliders). In implementations involving re-projection, the re-projection curve can be adjusted as described below. The adjustment to the re-projection curve may be an adjustment to a characteristic of the curve's shape, which may be linear or may be, e.g., a sigmoid emphasizing the center. The parameters of the shape can also be adjusted. By way of example, and not by way of limitation, for a linear re-projection curve the endpoints or the slope can be adjusted; for a sigmoid re-projection curve, how quickly the S ramps up, etc., can be adjusted.
In other embodiments involving re-projection, some edge blurring may be provided to patch holes, and the viewer 115 can drive that patching. Furthermore, embodiments of the invention that use re-projection or other means can drive color contrast to help reduce ghosting—allowing scene-by-scene adjustment based on user scaling. In addition, in cases not involving re-projection, the user can adjust how far to zoom from the input camera, or make slight fine adjustments to the camera angle. Other camera settings that can be adjusted on a per-scene basis include depth-of-field settings or camera aperture.
Because different viewers 115 perceive 3-D visual presentations differently, different viewers may settle on different general three-dimensional scene settings according to their preferences. For example, research has confirmed that the elderly are less sensitive to perceiving stereoscopic depth than the young; the elderly may therefore benefit from scene settings that increase the perception of depth. Likewise, the young may find that settings that reduce the perception of depth can alleviate eye strain and fatigue while still providing the viewer with a pleasing three-dimensional experience.
While the viewer 115 is observing a steady stream of three-dimensional scenes 103, one or more scenes not yet shown to the viewer may be stored in an output buffer 101, arranged according to the order in which the scenes 103 are to be presented. A scene 103 refers to one or more three-dimensional video frames characterized by a shared set of characteristics. For example, a set of video frames representing different views of the same landscape can be characterized as one scene. However, a close-up view of an object and a distant view of the same object may represent different scenes. It is important to note that any number of frames in combination can be characterized as a scene.
A scene 103 passes through two stages before being presented to the viewer. The scene is first processed to determine one or more characteristics associated with the given scene, as indicated at 105. One or more scale factors to be applied to the user's predetermined settings are then determined from those characteristics, as indicated at 107. The scale factors can then be transmitted to the processor 113 as metadata 109 and applied to dynamically adjust the viewer's settings, as indicated at 110. The scene can then be presented on the display 111 using the adjusted settings, as indicated at 112. This allows each scene to be presented to the viewer in a way that preserves the viewer's basic preferences while still maintaining the visual integrity of the scene by taking the scene's specific content into account. In cases not involving re-projection, the metadata can be transmitted to a capture device to adjust it, to adjust our virtual camera position in a game, or to adjust a physical camera, e.g., as used in a 3D chat embodiment.
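The two-stage, per-scene flow just described lends itself to a compact sketch. The following Python is purely illustrative—the text prescribes no concrete API, and the specific choice of characteristic (mean and spread of pixel depth) and ratio-style scale factor are assumptions drawn from the examples given later in this description:

```python
# Hypothetical sketch of the per-scene pipeline: (1) determine scene
# characteristics (105), (2) derive scale factors (107), then (3) apply
# them to the viewer's predetermined settings (110). All names are
# illustrative, not part of the described system.

def scene_characteristics(depths):
    """Characteristics 105: mean and spread of the scene's pixel depths."""
    n = len(depths)
    mean = sum(depths) / n
    variance = sum((d - mean) ** 2 for d in depths) / n
    return mean, variance ** 0.5

def scale_factor(mean_depth, target_depth):
    """Scale factor 107: ratio driving the scene toward a target depth."""
    return target_depth / mean_depth

def adjust_settings(user_settings, factor):
    """Step 110: apply the scale factor to the predetermined settings."""
    return {name: value * factor for name, value in user_settings.items()}

depths = [2.0, 4.0, 6.0, 8.0]          # stand-in for a per-pixel depth buffer
mean, sigma = scene_characteristics(depths)
factor = scale_factor(mean, target_depth=4.0)
adjusted = adjust_settings({"convergence": 1.0, "separation": 0.06}, factor)
```

The adjusted settings would then accompany the scene as metadata 109 so the processor 113 can present the scene at 112.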
Before illustrating examples of the method of the invention, it is useful to discuss some background on stereoscopic video systems. Embodiments of the present invention can be applied to re-projection settings for 3D video generated from 2D video by a re-projection process. In re-projection, the left-eye and right-eye virtual views of a scene can be synthesized from a normal two-dimensional image and the per-pixel depth information associated with each pixel of that image. This process can be implemented by the processor 113, e.g., as follows.
First, the original image points are re-projected into the 3D world using the depth data of each pixel in the original image. Thereafter, these 3D space points are projected onto the image plane of a "virtual" camera located at the required viewing position. The chaining of the re-projection (2D to 3D) with the subsequent projection (3D to 2D) is sometimes called 3D image warping, or re-projection. As shown in Fig. 1B, the re-projection can be understood by comparison with the operation of a "real" stereo camera. In "real", high-quality stereo cameras, the so-called zero-parallax setting (ZPS) is usually established using one of two distinct methods, i.e., by choosing the convergence distance Zc in the 3D scene. In the "toed-in" approach, the ZPS is chosen by a joint inward rotation of the left-eye and right-eye cameras. In the shift-sensor approach, a plane at the convergence distance Zc can be established by a small shift h of the image sensors of the left-eye and right-eye "virtual" cameras, which are placed in parallel at a separation tc, as shown in Fig. 1B. Each virtual camera can be characterized by a focal length f, which represents the distance between the virtual camera lens and the image sensor. This distance corresponds to the near-plane distance Zn of the near plane Pn used in some of the implementations described herein.
Technically, the "toed-in" approach is easier to realize in "real" stereo cameras. However, the shift-sensor approach is sometimes preferred for re-projection, because it does not introduce the unwanted vertical disparities between the left-eye and right-eye views that can be a potential source of eye strain.
Given the depth information Z of each pixel at horizontal and vertical coordinates (u, v) in the original 2D image, the shift-sensor approach can generate the corresponding pixel coordinates (u', v') and (u'', v'') of the left-eye and right-eye views according to the following equations:
For the left-eye view: u' = u + (α_u·t_c/2)·(1/Z − 1/Z_c) + t_hmp; v' = v;
For the right-eye view: u'' = u − (α_u·t_c/2)·(1/Z − 1/Z_c) + t_hmp; v'' = v.
In the foregoing equations, α_u is the convergence angle in the horizontal direction, as seen in Fig. 1B. The t_hmp term is an optional translation term (sometimes referred to as the head-motion-parallax term) that accounts for the viewer's actual viewing position.
The shifts h of the left-eye and right-eye views are related to the horizontal convergence angle α_u, the camera separation t_c, and the convergence distance Z_c by the following equations:
For the left-eye view: h = −(α_u·t_c)/(2·Z_c);
For the right-eye view: h = +(α_u·t_c)/(2·Z_c).
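Assuming the shift-sensor equations above (a Fehn-style DIBR formulation; the sign conventions and the additive t_hmp term are reconstructions, since the original typeset equations do not survive in this copy), the per-pixel view synthesis can be sketched as:

```python
# Sketch of shift-sensor view synthesis for one pixel. Given a pixel at
# horizontal coordinate u with depth Z, return the horizontal coordinates
# of the left-eye and right-eye views. Parameter names follow the text:
# alpha_u (horizontal convergence angle), t_c (camera separation),
# Z_c (convergence distance), t_hmp (optional head-motion-parallax term).

def shift_sensor_views(u, Z, alpha_u, t_c, Z_c, t_hmp=0.0):
    half_disparity = (alpha_u * t_c / 2.0) * (1.0 / Z - 1.0 / Z_c)
    u_left = u + half_disparity + t_hmp
    u_right = u - half_disparity + t_hmp
    return u_left, u_right

# At the convergence distance Z = Z_c the disparity vanishes (the
# zero-parallax setting): both views see the pixel at the same position.
l, r = shift_sensor_views(u=100.0, Z=2.0, alpha_u=500.0, t_c=0.06, Z_c=2.0)
assert l == r == 100.0
```

A pixel nearer than Z_c is displaced outward in opposite directions in the two views, which is what produces the perceived "pop-out" effect.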
The processor 113 may receive the scene 103 as an original 2D image plus per-pixel depth information, together with default per-scene scaling settings that can be applied to the 3D video parameters, such as α_u, t_c, Z_c, f, and t_hmp, or combinations (e.g., ratios) thereof. For example, a scaling setting may represent a multiplier varying between 0 (for no 3D perception) and some value greater than 1 (for enhanced 3D perception). Changing the 3D video parameter settings of the virtual cameras affects the qualitative perception of the 3D video. By way of example, and not by way of limitation, Table I below describes some qualitative effects of increasing (+) or decreasing (−) selected 3D video parameters.
Table I
In Table I, the term "screen parallax" refers to the horizontal disparity between the left-eye and right-eye views; the term "perceived depth" refers to the apparent depth of the displayed scene as perceived by the viewer; and the term "object size" refers to the apparent size of an object displayed on the screen 111 as perceived by the viewer.
In some implementations, the mathematical equations used above can be described in terms of a near plane Pn and a far plane Pf rather than the convergence angle α_u and the camera separation t_c. The term "near plane" refers to the closest point in the scene captured by the camera—that is, by the image sensor. The term "far plane" refers to the farthest point in the scene captured by the camera. No attempt is made to render anything beyond the far plane Pf, i.e., beyond the far-plane distance Zf (as depicted in Fig. 1B). A system using the mathematical equations described above can select the near and far planes indirectly by selecting the values of certain variables in the equations. Alternatively, the values of the convergence angle α_u and the camera separation t_c can be adjusted based on a selected near plane and far plane.
The operation of a stereoscopic re-projection system can be described in terms of the following requirements: 1) selection of the near plane for a given scene; 2) selection of the far plane for the given scene; 3) a transformation, defined for the re-projection of the given scene, from the near plane to the far plane—the transformation, sometimes referred to as the re-projection curve, essentially relates the amount of horizontal and vertical pixel displacement to pixel depth; 4) a method for filtering and/or weighting unimportant/important pixels; and 5) a system for smoothing any changes to 1)-3) that may occur during scene transitions, to prevent jarring cuts in the depth perceived by the viewer 115. A stereoscopic video system also typically includes 6) some mechanism that allows the viewer to scale the 3-D effect.
A typical re-projection system specifies the six requirements above as follows: 1) the near plane of the scene's camera; 2) the far plane of the scene's camera; 3) a horizontal-displacement-only transformation of the pixels, in which a fixed displacement (commonly referred to as convergence) is reduced by an amount inversely proportional to the depth value of each pixel—the deeper or farther the pixel, the less that pixel is displaced by the convergence; this requirement can be described, e.g., by the mathematical equations presented above; 4) because 1)-3) are constant, no weighting is needed; 5) because 1)-3) are constant, no smoothing is needed; and 6) a slider can be used to adjust the transformation, e.g., by linearly scaling the amount by which pixels are to be displaced. This is equivalent to adding a constant scale factor to the second (and possibly third) term of the equations above for u' or u''. Such a constant scale factor can be implemented via a user-adjustable slider; the user can adjust the slider with the intent of moving the near and far planes (and therefore the average effect) toward the screen plane.
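The slider of requirement 6) amounts to a constant multiplier on the depth-dependent displacement term. A minimal sketch under that assumption (the linear form and parameter names are illustrative, not prescribed by the text):

```python
# Hypothetical user-slider scaling of the per-pixel displacement.
# s = 0 collapses the views (no 3D effect); s > 1 exaggerates the effect.

def displaced_u(u, Z, alpha_u, t_c, Z_c, s):
    # s is the constant scale factor applied to the disparity term
    return u + s * (alpha_u * t_c / 2.0) * (1.0 / Z - 1.0 / Z_c)

base = displaced_u(100.0, 1.0, 500.0, 0.06, 2.0, s=1.0)     # 107.5
flat = displaced_u(100.0, 1.0, 500.0, 0.06, 2.0, s=0.0)     # 100.0 (no 3D)
boosted = displaced_u(100.0, 1.0, 500.0, 0.06, 2.0, s=2.0)  # 115.0
```

Because s multiplies every pixel's displacement equally, moving the slider pulls the whole depth range toward (s → 0) or away from (s > 1) the screen plane.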
Such a fixed specification may lead to poor use of the three-dimensional space. A given scene may be unbalanced and cause unnecessary eye fatigue. 3D video editors or 3D game developers must carefully construct all scenes and films so that all objects in a scene are laid out correctly.
For a given three-dimensional video, there is a viewing comfort zone 121 located in the region of the visual display. The farther the perceived image is off the screen, the less comfortable it is to watch (for most people). Therefore, the three-dimensional scene settings associated with a given scene should aim to maximize the use of the comfort zone 121. Although some things can lie outside the comfort zone 121, most of the things the viewer is expected to fixate on should be within the comfort zone 121. By way of example, and not by way of limitation, the viewer can set the boundaries of the comfort zone 121, while the processor 113 dynamically adjusts the scene settings so that the use of the comfort zone 121 is maximized for each scene.
A straightforward approach to maximizing the use of the comfort zone 121 may involve setting the near plane equal to the minimum pixel depth associated with the given scene and setting the far plane equal to the maximum pixel depth associated with the given scene, while retaining properties 3)-6) as defined above for the typical re-projection system. This maximizes the use of the comfort zone 121, but it does not take into account the effect of objects flying into or out of the scene, which may cause large displacements in the three-dimensional space.
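The straightforward near/far selection described above can be sketched in a few lines; `depth_buffer` is a stand-in for the scene's per-pixel depth information:

```python
# Sketch of the simple comfort-zone approach: the near plane is the scene's
# minimum pixel depth and the far plane is its maximum pixel depth.

def planes_from_extremes(depth_buffer):
    return min(depth_buffer), max(depth_buffer)

near, far = planes_from_extremes([3.5, 1.0, 7.25, 4.0])
assert (near, far) == (1.0, 7.25)
```

A single very near or very far outlier pixel (e.g., an object flying through the scene) shifts a plane dramatically, which is exactly the weakness the text notes.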
By way of example, and not by way of limitation, certain embodiments of the method of the invention may additionally take the average depth of the scene into account. The average depth of a scene can be driven toward a target. The scene data can set a target for the scene, while allowing the user to scale how far from that target they perceive the scene to be (e.g., the boundaries of the comfort zone).
Pseudocode for calculating such an average value can be envisioned as follows:
The near plane can be set to the minimum depth value over all pixels in the scene, and the far plane can be set to the maximum depth value over all pixels in the scene. The target perceived depth can be a value specified by the content creator and scaled according to the user's preference. Using the computed average together with transformation property 3 above, it is possible to calculate how far the average scene depth is from the target perceived depth. By way of example, and not by way of limitation, the overall perceived scene depth can then be shifted simply by adjusting the convergence and the target delta (as shown in Table 1). The target delta can also be smoothed, as is done for the near and far planes below. Other methods of adjusting the target depth can also be used, such as the methods used in 3D films to ensure consistent depth across scene changes. It should be noted, however, that 3D films currently cannot provide the viewer with a way to adjust the target scene depth.
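By way of illustration only, the elided average-value pseudocode and the plane assignment described above might be sketched as follows; the function and variable names, and the assumption that the depth buffer is available as a flat list of per-pixel depth values, are illustrative rather than taken from the patent:

```python
def scene_depth_stats(depth_buffer):
    """Near plane, far plane, and mean depth for one scene.

    depth_buffer: a flat list of per-pixel depth values (illustrative).
    """
    near_plane = min(depth_buffer)                      # minimum pixel depth
    far_plane = max(depth_buffer)                       # maximum pixel depth
    mean_depth = sum(depth_buffer) / len(depth_buffer)  # average pixel depth
    return near_plane, far_plane, mean_depth


def target_delta(mean_depth, target_perceived_depth, user_scale=1.0):
    """Distance of the average scene depth from the user-scaled target depth."""
    return target_perceived_depth * user_scale - mean_depth
```

Driving the scene toward the target then amounts to folding the delta into the convergence adjustment, and the delta itself can be smoothed frame-to-frame in the same way as the planes.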
By way of example, and not by way of limitation, one approach to determining one or more stereoscopic characteristics associated with a given scene is to determine and use the following two important scene characteristics: the mean pixel depth of the scene and the standard deviation of that scene's pixel depths. Pseudocode for calculating the mean and standard deviation of pixel depth can be envisioned as follows:
The near plane can then be set to the scene's mean pixel depth minus the standard deviation of that scene's pixel depths. Likewise, the far plane can be set to the scene's mean pixel depth plus the standard deviation of that scene's pixel depths. If these results are insufficient, the reprojection system can transform the data representing the scene into the frequency domain for the calculation of the mean pixel depth and standard deviation of the given scene. As in the embodiment above, driving toward a target depth can be done in the same way.
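A minimal sketch of this mean/standard-deviation variant, again assuming the depth buffer is available as a flat list of floats (names are illustrative):

```python
import statistics


def planes_from_stats(depth_buffer):
    """Near/far planes set to mean -/+ one standard deviation of pixel depth."""
    mean = statistics.fmean(depth_buffer)
    sigma = statistics.pstdev(depth_buffer)  # population standard deviation
    return mean - sigma, mean + sigma
```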
To provide a way of filtering out and weighting unimportant pixels, the scene can be analyzed in detail and unimportant pixels flagged. Unimportant pixels are likely to include particles flying past and other small, incoherent geometry. In the case of a video game, this can easily be done during rasterization; otherwise, an algorithm for finding small clusters of depth discontinuities would likely be used. If a method can determine where the user is looking, the depths of nearby pixels should be considered more important — the farther a pixel is from the focal point, the less important it is. Such a method may include, but is not limited to: determining whether a cursor or reticle is in the image and, if so, its position in the image, or measuring eye rotation using feedback from active glasses. Such glasses may include a simple camera pointed at the wearer's eyeballs. The camera can provide images in which the whites of the user's eyes can be distinguished from the dark parts (e.g., the pupils). By analyzing the images to determine the pupil positions and correlating those positions with eyeball angles, the eye rotation can be determined. For example, a centered pupil would roughly correspond to an eyeball pointed straight ahead.
In some embodiments, it may be desirable to emphasize pixels in the central portion of the display 111, since values at the edges are likely to be less important. If the distance between pixels is fixed as a two-dimensional distance that ignores depth, a biased weighted statistical model that emphasizes such center pixels or a focal point can be envisioned with the following pseudocode:
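One way to realize the elided center-weighted pseudocode is an inverse-distance weighting toward a focal point, measuring distance in two dimensions and ignoring depth. The specific weighting function and names below are assumptions for illustration only:

```python
import math


def weighted_depth_mean(pixels, focus, falloff=1.0):
    """Weighted mean pixel depth, emphasizing pixels near a focal point.

    pixels: iterable of (x, y, depth) tuples; focus: (x, y) of the focal
    point (e.g., screen center, cursor, or gaze position).
    Weight falls off with 2-D distance from the focus, ignoring depth.
    """
    fx, fy = focus
    total_weight = 0.0
    weighted_sum = 0.0
    for x, y, depth in pixels:
        weight = 1.0 / (1.0 + falloff * math.hypot(x - fx, y - fy))
        total_weight += weight
        weighted_sum += weight * depth
    return weighted_sum / total_weight
```

The same weights could equally be used for a weighted standard deviation when computing the near/far planes described above.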
To provide a system that keeps the picture mostly within the comfort zone 121, the near and far planes (or other variables in the mathematical equations described above) should be adjusted, in addition to or instead of the convergence described in the example above. The processor 113 can be configured to implement a process such as the one contemplated by the following pseudocode:
1 - scale = viewerScale * contentScale
2 - nearPlane' = nearPlane * scale + (mean - standardDeviation) * (1 - scale)
3 - farPlane' = farPlane * scale + (mean + standardDeviation) * (1 - scale)
Both viewerScale and contentScale are values between 0 and 1 that control the rate of change. The viewer 115 adjusts the value of viewerScale, and the content creator sets the value of contentScale. The same smoothing can be applied as with the convergence adjustment above.
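The three pseudocode lines above can be read as a linear blend between the raw planes and the mean -/+ deviation planes; a direct transcription, assuming all per-scene quantities are already computed, might look like:

```python
def adjust_planes(near_plane, far_plane, mean, std_dev,
                  viewer_scale, content_scale):
    """Blend raw planes toward mean -/+ std-dev; scale in [0, 1] sets the rate."""
    scale = viewer_scale * content_scale
    near = near_plane * scale + (mean - std_dev) * (1.0 - scale)
    far = far_plane * scale + (mean + std_dev) * (1.0 - scale)
    return near, far
```

With scale = 1 the raw planes pass through unchanged; with scale = 0 the planes collapse to mean -/+ one standard deviation.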
In certain implementations (such as video games), because the processor 113 may need to be able to drive objects in the scene farther off the screen 111 or closer, it may be useful to add a target adjustment step as follows:
1 - nearPlane' = nearPlane * scale + (mean + nearShift - standardDeviation) * (1 - scale)
2 - farPlane' = farPlane * scale + (mean + farShift + standardDeviation) * (1 - scale)
A positive shift will tend to push nearPlane and farPlane back into the scene. Similarly, a negative shift will bring things closer.
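The shifted variant only adds the two offsets inside the blend; a transcription under the same illustrative assumptions as before:

```python
def adjust_planes_with_shift(near_plane, far_plane, mean, std_dev,
                             viewer_scale, content_scale,
                             near_shift=0.0, far_shift=0.0):
    """Target-adjusted plane blend: positive shifts push the planes deeper
    into the scene; negative shifts pull them closer to the viewer."""
    scale = viewer_scale * content_scale
    near = near_plane * scale + (mean + near_shift - std_dev) * (1.0 - scale)
    far = far_plane * scale + (mean + far_shift + std_dev) * (1.0 - scale)
    return near, far
```

With both shifts at zero, this reduces to the unshifted blend of the previous pseudocode.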
After determining one or more characteristics of the given scene (e.g., near plane, far plane, mean pixel depth, standard deviation of pixel depth, etc.) at 105, a set of scale factors 107 can be determined. These scale factors can indicate how to maximize the scene within the boundaries of the user-determined comfort zone 121. In addition, one of these scale factors can be used to control the rate at which the stereoscopic settings change during scene transitions.
Once the scale factors corresponding to the characteristics of the given scene have been determined, the scale factors can be stored in the scene data as metadata 109. The scene 103 (and its accompanying three-dimensional data) can be transmitted to the processor 113 together with the metadata 109 associated with that scene. The processor 113 can then adjust the stereoscopic scene settings according to the metadata.
It is important to note that a scene can be processed at different stages of the three-dimensional data streaming pipeline to determine the scale factors and metadata; the scene is not limited to being processed after it has been placed in the output buffer 101. Furthermore, the user-determined set of stereoscopic scene settings is not limited to setting the boundaries of the stereoscopic projection. By way of example, and not by way of limitation, the user-determined scene settings may also include controlling the sharpness of objects in the stereoscopic scene or the intensity of shadows in the stereoscopic scene.
Although the foregoing embodiments have been described in the context of reprojection, embodiments of the invention are not limited to such implementations. The concepts of scaling reprojection depth and range can be applied equally well to adjusting input parameters, such as the position of a virtual or real stereo camera, for real-time 3D video. If the camera feed is dynamic, adjustment of the input parameters for real-time stereoscopic content can be implemented. FIG. 1C and FIG. 1D illustrate dynamic adjustment of camera feeds according to alternative embodiments of the invention.
As seen in FIG. 1C, the processor 113 can generate a left-eye view and a right-eye view of the scene 103 from three-dimensional data that represents objects and the position, within a simulated environment 102 (such as a video game or virtual world), of a virtual stereo camera 114 comprising a left-eye camera 114A and a right-eye camera 114B. For purposes of example, the virtual stereo camera can be thought of as a single unit having two separate component cameras. However, embodiments of the invention include implementations in which the virtual stereo cameras are separate and not part of a single unit. It should be noted that the positions and orientations of the virtual cameras 114A, 114B determine what is shown in the scene. For example, suppose the simulated environment is a level of a first-person shooter (FPS) game in which an avatar 115A represents the user 115. The user controls the movement and actions of the avatar 115A using the processor 113 and a suitable controller 117. In response to user commands, the processor 113 can select the positions and orientations of the virtual cameras 114A, 114B. If the virtual cameras are pointed at a distant object (such as non-player character 116), the scene can have greater depth than if the cameras are pointed at a nearby object (such as non-player character 118). All of the positions of these objects relative to the virtual cameras can be determined by the processor from three-dimensional information generated by the game's physics modeler component. The depths of the objects within the cameras' field of view can be calculated for the scene. The mean depth, maximum depth, depth range, and the like can then be calculated for the scene, and these per-scene values can be used to select default values and/or scale factors for the 3D parameters (e.g., αu, tc, Zc, f, and thmp). By way of example, and not by way of limitation, the processor 113 can implement a lookup table or function that relates specific combinations of 3D parameters to the per-scene values. The tabular or functional relationship between the 3D parameters and default values and/or scale factors and the per-scene values can be determined empirically. The processor 113 can then modify the default values and/or scale factors according to user preferences.
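By way of a sketch only, such a table-driven selection of defaults might look like the following; the breakpoints and parameter values are invented for illustration and are not from the patent:

```python
# Hypothetical table mapping a scene's depth range to default 3D parameters.
# Columns: (upper bound of depth range, default camera separation tc,
#           default convergence distance Zc). All values are illustrative.
PARAM_TABLE = [
    (10.0, 0.065, 2.0),
    (50.0, 0.050, 5.0),
    (float("inf"), 0.035, 10.0),
]


def default_params(depth_range):
    """Look up default tc and Zc for a scene from its computed depth range."""
    for upper_bound, tc, zc in PARAM_TABLE:
        if depth_range <= upper_bound:
            return tc, zc
```

An empirically determined table of this shape could then be scaled by the user-preference scale factors described in the text.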
In variations on the embodiments depicted in FIG. 1A through FIG. 1C, similar adjustments to the 3D parameter settings may also be implemented with motorized physical stereo cameras. For example, consider a video chat embodiment, e.g., as depicted in FIG. 1D. In this case, a first user 115 and a second user 115′ interact via a first processor 113 and a second processor 113′, a first 3D camera 114 and a second 3D camera 114′, and a first controller 117 and a second controller 117′, respectively. The processors 113, 113′ are coupled to each other, e.g., by a network 120, which may be a wired or wireless network, a local area network (LAN), a wide area network, or another communication network. The first user's 3D camera 114 includes a left-eye camera 114A and a right-eye camera 114B. Left-eye and right-eye images of the first user's environment are displayed on a video display 111′ attached to the second user's processor 113′. In the same way, the second user's 3D camera 114′ includes a left-eye camera 114A′ and a right-eye camera 114B′. For purposes of example, the left-eye and right-eye stereo cameras can be physical parts of a single unit having two integrated cameras (e.g., separate lens units and separate sensors for the left and right views). However, embodiments of the invention include implementations in which the left-eye and right-eye cameras are physically independent of each other and not part of a single unit.
Left-eye and right-eye images of the second user's environment are displayed on a video display 111 attached to the first user's processor 113. The first user's processor 113 can determine per-scene 3D values from the left-eye and right-eye images. For example, the two cameras typically capture color buffers. With a suitable depth recovery algorithm, depth information can be recovered from the color buffer information of the left-eye and right-eye cameras. The processor 113 can transmit the depth information together with the images to the second user's processor 113′. It should be noted that the depth information can vary depending on the scene content. For example, the scene captured by the cameras 114A′, 114B′ can contain objects at different depths, such as the user 115′ and a distant object 118′. The different depths of these objects in the scene can affect the scene's mean pixel depth and the standard deviation of its pixel depths.
The left-eye and right-eye cameras in both the first user's camera 114 and the second user's camera 114′ can be motorized so that the values of the parameters for the left-eye and right-eye cameras (e.g., f, tc, and the "toe-in" angle) can be adjusted during operation. The first user can select the initial settings of the 3D video parameters of the camera 114, such as the spacing tc between the cameras and/or the relative horizontal ("toe-in") rotation angle of the left-eye camera 114A and right-eye camera 114B. For example, as described above, the second user 115′ can use the second controller 117′ and the second processor 113′ to adjust the settings of the 3D video parameters of the first user's camera 114 (e.g., f, tc, or the toe-in angle) so as to adjust the scale factors. Data representing the adjustment to the scale factors can then be transmitted to the first processor 113 via the network 120. The first processor can use the adjustment to adjust the 3D video parameter settings of the first user's camera 114. In a similar manner, the first user 115 can adjust the settings of the second user's 3D camera 114′. In this way, each user 115, 115′ can view 3D video images of the other's environment under comfortable 3D settings.
Scaling the Pixel Depth Values of User-Controlled Virtual Objects in a Three-Dimensional Scene
Improvements in three-dimensional image rendering have had a great impact in areas that use interactive virtual environments employing 3D technology. Many video games implement 3D image rendering to create virtual environments for users to interact with. However, simulating real-world physics to facilitate user interaction with a virtual world is very expensive and quite difficult to implement. Consequently, certain unwanted visual artifacts are likely to appear during gameplay.
A problem arises when artifacts of 3D video cause a user-controlled virtual object (e.g., a character or a gun) to penetrate other elements of the virtual world (e.g., background scenery). When a user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly diminished. In the case of a first-person shooter, the first-person line of sight may be obstructed, or certain critical elements may be occluded. Therefore, any program featuring interaction with user-controlled virtual objects in a three-dimensional virtual environment needs to eliminate the appearance of these visual obstructions.
Embodiments of the invention can be configured to scale the pixel depths of user-controlled virtual objects in order to solve the problem of a user-controlled virtual object penetrating elements of the virtual world's three-dimensional scene. In the context of a first-person shooter (FPS) video game, one possible example would be the end of a gun barrel as seen from the shooter's point of view.
FIG. 2A to FIG. 2B illustrate the problem of a user-controlled virtual object penetrating an element of the virtual world in a three-dimensional scene generated using reprojection. When a user-controlled virtual object penetrates other elements of the virtual world, the realism of the game is greatly diminished. As shown in FIG. 2A, in a virtual environment (e.g., a scene) in which no scaling of the pixel depth values of the user-controlled virtual object is performed, the user-controlled virtual object 201 (e.g., a gun barrel) can penetrate another element 203 of the virtual world (e.g., a wall), resulting in potential viewing obstruction and diminished realism, as discussed above. In the first-person shooter case, the first-person line of sight may be obstructed, or certain critical elements (e.g., the end of the gun barrel) may be occluded. The hidden elements are shown in phantom in FIG. 2A.
A common solution in two-dimensional first-person video games is to scale the depths of objects in the virtual world in order to eliminate the visual artifact in the two-dimensional image (or to change the artifact into a different, less noticeable one). This scaling is usually applied during rasterization of the two-dimensional image. In the first-person shooter example, this means that whether or not the tip of the gun barrel 201 passes through the wall 203, the viewer will see the tip of the barrel. This solution works well for two-dimensional video; however, a problem arises when it is applied to stereoscopic video. The problem is that the scaled depth values no longer represent real points in three dimensions relative to the rest of the two-dimensional image. Consequently, when reprojection is applied to generate the left-eye and right-eye views, the depth scaling causes objects to appear compressed in the depth dimension and to appear at the wrong positions. For example, as shown in FIG. 2B, the gun barrel 201 is now perceived as "crushed" in the depth direction, and the barrel is positioned extremely close to the viewer when it should appear closer to the physical screen. Another problem with reprojection is that the depth scaling can also leave large holes in the image that are difficult to fill.
Furthermore, scaling the depths back to their original values, or overwriting the depth values with the true depth values from the three-dimensional scene information, means that the viewer will still see the gun barrel, but the barrel will be perceived as being behind the wall. Despite the fact that the virtual object 201 should be blocked by the wall 203, the viewer will see a phantom portion of the virtual object. This depth-piercing effect is disturbing, because the viewer expects to still see the wall.
To solve this problem, embodiments of the invention apply a second set of scaling to objects in the scene, in order to place them at appropriate perceived positions within the scene. This second scaling can be applied after rasterization of the two-dimensional image, but before or during the reprojection of that image that generates the left-eye and right-eye views. FIG. 2C shows a virtual environment (e.g., a scene) in which scaling of the pixel depth values of the user-controlled virtual object has been performed. Here, through the scaling of pixel depths discussed above, the user-controlled virtual object 201 can approach another element 203 of the virtual world, but it is constrained and cannot pierce the element 203. The second scaling limits the depth values to between a near value N and a far value F. Essentially, objects may still be rendered as crushed in the depth dimension, but full control can be applied over their thickness. This is a trade-off; of course, control over this second scaling could be provided to the viewer, e.g., as discussed above.
In this way, the visual obstructions caused by user-controlled virtual objects penetrating elements of the virtual world can be eliminated or significantly reduced.
FIG. 3 is a schematic diagram illustrating a method of scaling the pixel depth values of a user-controlled virtual object in a three-dimensional scene according to an embodiment of the invention.
To solve this problem, a program can apply the second scaling of the pixel depth values of the user-controlled virtual object according to the content of the three-dimensional scene to be presented to the user.
Scenes 103 can be located in an output buffer 101 before being presented to the user. They can be ordered according to the sequence in which the scenes 103 are to be presented. A scene 103 refers to one or more frames of three-dimensional video characterized by a shared set of characteristics. For example, a group of video frames representing different views of the same landscape can be characterized as one scene. However, a close-up view and a wide view of the same object can also represent different scenes. It is important to note that any number of frames in combination can be characterized as a scene.
As indicated at 133, an initial depth scaling is applied to the two-dimensional image of the three-dimensional scene 103. This initial depth scaling is usually performed during rasterization of the two-dimensional image using a modified view projection matrix. This writes the scaled depth information into the depth buffer for the scene.
Before the scene 103 is presented to the user in three dimensions (e.g., as left-eye and right-eye views), the scene can be analyzed to determine characteristics that are key to solving the problem discussed above. For a given scene 103, a minimum threshold value is first determined, as indicated at 135. This minimum threshold value represents a minimum pixel depth value below which no segment of the user-controlled virtual object may fall. Next, a maximum threshold value is determined, as indicated at 137. This maximum threshold value represents a maximum pixel depth value that no segment of the user-controlled virtual object may exceed. These threshold values place a limit on the user-controlled virtual object as it travels within the virtual environment, so that the user-controlled virtual object is constrained and cannot penetrate other elements of the virtual environment.
As the user-controlled virtual object moves through the virtual world, the virtual object and its pixel depth values are tracked and compared against the threshold pixel depth values determined above, as indicated at 139. Whenever the pixel depth values of any segment of the user-controlled virtual object fall below the minimum threshold value, those pixel depth values are set to a low value, as indicated at 141. By way of example, and not by way of limitation, this low value can be the minimum threshold value itself. Alternatively, this low value can be a scaled version of the user-controlled virtual object's pixel depth values. For example, the low value can be determined by multiplying the pixel depth values that fall below the minimum threshold by an inverse proportion and then adding a minimum offset to the product.
Whenever the pixel depth values of any segment of the user-controlled virtual object exceed the maximum threshold value, those pixel depth values are set to a high value, as indicated at 143. By way of example, and not by way of limitation, this high value can be the maximum threshold value itself. Alternatively, this high value can be a scaled version of the user-controlled virtual object's pixel depth values. For example, the high value can be determined by multiplying the pixel depth values that exceed the maximum threshold by an inverse proportion and then subtracting the product from a maximum offset.
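The comparisons and assignments indicated at 139, 141, and 143 amount to clamping the object's depths to the threshold interval. The simplest variant, in which the low and high values are the thresholds themselves, can be sketched as follows (names are illustrative):

```python
def clamp_object_depths(depths, min_threshold, max_threshold):
    """Constrain a user-controlled object's pixel depths to the threshold range.

    Out-of-range depths snap to the nearest threshold (the 'low value' /
    'high value' cases of steps 141 and 143).
    """
    return [min(max(d, min_threshold), max_threshold) for d in depths]
```

The scaled alternatives described in the text would replace the snap with an inverse-proportion remapping plus an offset, preserving some depth variation near the limits for objects such as sighting devices.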
For virtual objects that are so small that enhanced depth perception is unnecessary, setting the low/high values to the minimum/maximum threshold values works particularly well. These low/high values effectively shift the virtual object away from the virtual camera. However, for virtual objects that do require enhanced depth perception (such as a sighting device), the scaled low/high values described above can work more effectively.
The minimum and maximum threshold values can be determined by the program before the program is executed by the processor 113. These values can also be determined by the processor 113 while the program's content is being executed. During execution of the program, the comparison of the user-controlled virtual object's pixel depth values against the threshold values is performed by the processor 113. Likewise, during execution of the program, the assignment of low and high values to those of the user-controlled virtual object's pixel depths that exceed or fall below the threshold values is performed by the processor.
After the second scaling has been performed on the pixel depth values, the processor 113 can perform reprojection using the two-dimensional image and the resulting set of pixel depth values of the user-controlled virtual object, thereby generating two or more views of the three-dimensional scene (e.g., left-eye and right-eye views), as indicated at 145. The two or more views can be displayed on a three-dimensional display, as indicated at 147.
By setting any pixel depth values of the user-controlled virtual object that exceed the thresholds to the low and high values, the problem of penetrating other virtual world elements is solved. Although simulating the physics of the virtual object's interaction with its virtual world would effectively solve this problem, doing so is in practice quite difficult. Therefore, the ability to scale the pixel depth values of user-controlled virtual objects according to the methods described above provides a simple, cost-effective solution to the problem.
Apparatus
FIG. 4 is a block diagram of a computer apparatus according to an embodiment of the invention that can be used to implement dynamic adjustment of user-determined three-dimensional scene settings and/or scaling of pixel depth values. The apparatus 200 may generally include a processor module 201 and a memory 205. The processor module 201 may include one or more processor cores. An example of a processing system that uses multiple processor modules is a Cell processor, examples of which are described in detail in, e.g., Cell Broadband Engine Architecture, which is available online at http://www-306.ibm.com/chip/techlib/techlib.nsf/techdocs/1AEEE1270EA2776387257060006E61BA/$file/CBEA_01_pub.pdf and which is incorporated herein by reference.
The memory 205 may be in the form of an integrated circuit, e.g., RAM, DRAM, ROM, and the like. The memory 205 may also be a main memory that is accessible by all of the processor modules. In some embodiments, the processor module 201 may have local memories associated with each core. A program 203 may be stored in the main memory 205 in the form of processor-readable instructions that can be executed on the processor module. The program 203 may be configured to perform dynamic adjustment of a user-determined set of three-dimensional scene settings. The program 203 may also be configured to perform scaling of the pixel depth values of user-controlled virtual objects in a three-dimensional scene, e.g., as described above with respect to FIG. 3. The program 203 may be written in any suitable processor-readable language, e.g., C, C++, JAVA, Assembly, MATLAB, FORTRAN, and a number of other languages. Input data 207 may also be stored in the memory. Such input data 207 may include user-determined sets of 3D settings, three-dimensional characteristics associated with a given scene, or scale factors associated with certain three-dimensional characteristics. The input data 207 may also include threshold values associated with a three-dimensional scene and pixel depth values associated with user-controlled objects. During execution of the program 203, portions of program code and/or data may be loaded into the memory or into the local stores of the processor cores for parallel processing by multiple processor cores.
The apparatus 200 may also include well-known support functions 209, such as input/output (I/O) elements 211, power supplies (P/S) 213, a clock (CLK) 215, and a cache 217. The apparatus 200 may optionally include a mass storage device 219 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The device 200 may optionally include a display unit 221 and a user interface unit 225 to facilitate interaction between the apparatus and a user. By way of example, and not by way of limitation, the display unit 221 may be in the form of a 3-D-ready television set that displays text, numerals, graphical symbols, or other visual objects as stereoscopic images to be perceived with a pair of 3-D viewing glasses 227, which may be coupled to the I/O elements 211. Stereoscopy refers to enhancing the illusion of depth in a two-dimensional image by presenting a slightly different image to each eye. The user interface 225 may include a keyboard, mouse, joystick, light pen, or other device that may be used in conjunction with a graphical user interface (GUI). The apparatus 200 may also include a network interface 223 to allow the device to communicate with other devices over a network, such as the Internet.
The components of the system 200, including the processor 201, memory 205, support functions 209, mass storage device 219, user interface 225, network interface 223, and display 221, may be operably connected to each other via one or more data buses 227. These components may be implemented in hardware, software, firmware, or some combination of two or more of these.
There are a number of additional ways to streamline parallel processing with multiple processors in the apparatus. For example, it is possible to "unroll" processing loops, e.g., by replicating code on two or more processor cores and having each processor core implement the code to process a different piece of data. Such an implementation may avoid a latency associated with setting up the loop. As applied to embodiments of the present invention, multiple processors could determine the scale factors for different scenes in parallel. The ability to process data in parallel can also save valuable processing time, resulting in a more efficient and streamlined system for scaling the pixel depth values of one or more user-controlled virtual objects in a three-dimensional scene. The ability to process data in parallel likewise saves valuable processing time, resulting in a more efficient and streamlined system for dynamic adjustment of user-determined sets of three-dimensional scene settings.
One example, among others, of a processing system capable of implementing parallel processing on three or more processors is a Cell processor. There are a number of different processor architectures that may be categorized as Cell processors. By way of example, and without limitation, FIG. 5 illustrates one type of Cell processor. The Cell processor 300 includes a main memory 301, a single power processor element (PPE) 307, and eight synergistic processor elements (SPE) 311. Alternatively, the Cell processor may be configured with any number of SPEs. With reference to FIG. 5, the memory 301, PPE 307, and SPEs 311 can communicate with each other and with an I/O device 315 over a ring-type element interconnect bus 317. The memory 301 contains input data 303 having features in common with the input data described above and a program 305 having features in common with the program described above. At least one of the SPEs 311 may include in its local store (LS) program instructions 313 and/or a portion of the input data 303 that is to be processed in parallel, e.g., as described above. The PPE 307 may include program instructions 309 in its L1 cache. The program instructions 309, 313 may be configured to implement embodiments of the invention, e.g., as described above with respect to FIG. 1 or FIG. 3. By way of example, and not by way of limitation, the instructions 309, 313 may have features in common with the program 203 described above. The instructions 309, 313 and data 303 may also be stored in the memory 301 for access by the SPEs 311 and PPE 307 when needed.
By way of example, and not by way of limitation, the instructions 309, 313 may include instructions for implementing dynamic adjustment of user-determined three-dimensional scene settings, as described above with respect to FIG. 1. Alternatively, the instructions 309, 313 may be configured to implement scaling of pixel depth values of user-controlled virtual objects, e.g., as described above with respect to FIG. 3.
By way of example, the PPE 307 may be a 64-bit PowerPC Processor Unit (PPU) with associated caches. The PPE 307 may include an optional vector multimedia extension unit. Each SPE 311 includes a synergistic processor unit (SPU) and a local store (LS). In some implementations, the local store may have a capacity of, e.g., about 256 kilobytes of memory for programs and data. The SPUs are less complex computational units than the PPU, in that they typically do not perform system management functions. The SPUs may have single instruction, multiple data (SIMD) capability and typically process data and initiate any required data transfers (subject to access properties set up by the PPE) in order to perform their allocated tasks. The SPUs allow the system to implement applications that require a higher computational unit density and can effectively use the provided instruction set. A significant number of SPUs in a system, managed by the PPE, allows for cost-effective processing over a wide range of applications. By way of example, the Cell processor may be characterized by an architecture known as the Cell Broadband Engine Architecture (CBEA). In CBEA-compliant architectures, multiple PPEs may be combined into a PPE group and multiple SPEs may be combined into an SPE group. For purposes of example, the Cell processor is depicted as having a single SPE group with a single SPE and a single PPE group with a single PPE. Alternatively, a Cell processor may include multiple groups of power processor elements (PPE groups) and multiple groups of synergistic processor elements (SPE groups). CBEA-compliant processors are described in detail, e.g., in Cell Broadband Engine Architecture, which is available online at https://www-306.ibm.com/chips/techlib/techlib.nsf/techdocs/1AEEE1270EA277638725706000E61BA/$file/CBEA_01_pub.pdf and is incorporated herein by reference.
According to another embodiment, instructions for dynamic adjustment of user-determined three-dimensional scene settings may be stored in a computer-readable storage medium. By way of example, and not by way of limitation, FIG. 6A illustrates an example of a non-transitory computer-readable storage medium 400 in accordance with an embodiment of the present invention. The storage medium 400 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example, and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 400 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or other optical storage medium.
The storage medium 400 contains instructions 401 for dynamic adjustment of user-determined three-dimensional scene settings. The instructions 401 for dynamic adjustment of user-determined three-dimensional scene settings may be configured to implement dynamic adjustment in accordance with the method described above with respect to FIG. 1. Specifically, the dynamic adjustment instructions 401 may include instructions 403 for determining three-dimensional characteristics of a scene, which determine certain characteristics of a given scene that are relevant to optimizing the three-dimensional viewing settings for that scene. The dynamic adjustment instructions 401 may further include instructions 405 for determining scale factors, which are configured to determine one or more scale factors based on the characteristics of the given scene to indicate certain optimizing adjustments to be made.
The dynamic adjustment instructions 401 may also include instructions 407 for adjusting the user-determined three-dimensional settings, which are configured to apply the one or more scale factors to the user-determined three-dimensional scene settings, such that the result is a 3-D projection of the scene that takes into account both the user's preferences and the inherent characteristics of the scene. The result is a visual presentation of the scene according to the user's predetermined settings, modified according to certain characteristics associated with the scene, so that each user's perception of a given scene may be uniquely optimized.
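A minimal sketch of the flow of instructions 403, 405, and 407 described above, under stated assumptions: the specific characteristic (mean depth and its standard deviation), the reference depth, and the multiplicative application of each factor to the corresponding user setting are illustrative choices, not fixed by the specification.

```python
def determine_characteristics(depths):
    # Instructions 403 (sketch): derive scene characteristics relevant
    # to 3-D viewing, here the mean pixel depth and its standard deviation.
    mean = sum(depths) / len(depths)
    var = sum((d - mean) ** 2 for d in depths) / len(depths)
    return {"mean_depth": mean, "std_depth": var ** 0.5}

def determine_scale_factors(chars, reference_depth=10.0):
    # Instructions 405 (sketch): one factor per adjustable setting; the
    # reference depth and this mapping are illustrative assumptions.
    return {"convergence": chars["mean_depth"] / reference_depth}

def adjust_user_settings(user_settings, factors):
    # Instructions 407 (sketch): apply each factor multiplicatively to
    # the user's predetermined setting, blending user preference with
    # the inherent characteristics of the scene.
    return {k: v * factors.get(k, 1.0) for k, v in user_settings.items()}
```

Settings with no corresponding factor pass through unchanged, so a user's preferences are only modified where the scene content calls for it.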
The dynamic adjustment instructions 401 may additionally include instructions 409 for displaying the scene, which are configured to display the scene on a visual display using the dynamically adjusted three-dimensional scene settings obtained above.
According to another embodiment, instructions for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene may be stored in a computer-readable storage medium. By way of example, and not by way of limitation, FIG. 6B illustrates an example of a non-transitory computer-readable storage medium 410 in accordance with an embodiment of the present invention. The storage medium 410 contains computer-readable instructions stored in a format that can be retrieved, interpreted, and executed by a computer processing device. By way of example, and not by way of limitation, the computer-readable storage medium may be a computer-readable memory, such as random access memory (RAM) or read-only memory (ROM), a computer-readable storage disk for a fixed disk drive (e.g., a hard disk drive), or a removable disk drive. In addition, the computer-readable storage medium 410 may be a flash memory device, a computer-readable tape, a CD-ROM, a DVD-ROM, a Blu-Ray disc, an HD-DVD, a UMD, or other optical storage medium.
The storage medium 410 contains instructions 411 for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene. The instructions 411 for scaling pixel depth values of a user-controlled virtual object in a three-dimensional scene may be configured to implement pixel depth scaling in accordance with the method described above with respect to FIG. 3. Specifically, the pixel depth scaling instructions 411 may include initial scaling instructions 412 that, when executed, perform an initial scaling of a two-dimensional image of the three-dimensional scene. The instructions 411 may further include instructions 413 for determining a minimum threshold value for the three-dimensional scene, below which the pixel depth values of the user-controlled virtual object may not fall for a particular scene. Similarly, the pixel depth scaling instructions 411 may also include instructions 415 for determining a maximum threshold value for the three-dimensional scene, which the pixel depth values of the user-controlled virtual object may not exceed for a particular scene.
The pixel depth scaling instructions 411 may also include instructions 417 for comparing virtual object pixel depths, which compare the pixel depths associated with the user-controlled virtual object to the threshold values determined above. By comparing the pixel depth values of the user-controlled virtual object to the threshold pixel depth values, the position of the user-controlled virtual object may be continuously tracked to ensure that it does not penetrate other virtual components of the three-dimensional scene.
The pixel depth scaling instructions 411 may further include instructions 419 for setting the virtual object pixel depth to a low value, which restrict any portion of the virtual object's depth from falling below the minimum threshold value. The low value assigned to a virtual object whose pixel depth value is too low may be either the minimum threshold value itself or a scaled value of the low pixel depth value, as discussed above.
The pixel depth scaling instructions 411 may additionally include instructions 421 for setting the virtual object pixel depth to a high value, which restrict any portion of the virtual object's depth from exceeding the maximum threshold value. The high value assigned to a virtual object whose pixel depth value is too high may be either the maximum threshold value itself or a scaled value of the high pixel depth value, as discussed above.
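The threshold logic of instructions 413 through 421 amounts to a clamp. A sketch in Python, using the threshold values themselves as the assigned low and high values, which is one of the two options described above (the scaled-value variant is omitted):

```python
def clamp_object_depths(object_depths, min_threshold, max_threshold):
    # Compare each pixel depth of the user-controlled object against the
    # scene's thresholds (instructions 417) and pin out-of-range values
    # to the threshold itself (instructions 419 and 421).
    return [min(max(d, min_threshold), max_threshold) for d in object_depths]
```

Running this check every frame keeps the user-controlled object from visually penetrating scene geometry that lies outside the permitted depth range.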
The pixel depth scaling instructions may further include re-projection instructions 423, which re-project the two-dimensional image using the resulting pixel depth value settings of the user-controlled virtual object in order to generate two or more views of the three-dimensional scene. The pixel depth scaling instructions 411 may additionally include instructions 425 for displaying the scene, which are configured to display the scene on a visual display using the resulting set of virtual object pixel depth settings.
As mentioned above, embodiments of the present invention may utilize three-dimensional viewing glasses. FIG. 7 illustrates an example of three-dimensional viewing glasses 501 according to an aspect of the present invention. The glasses may include a frame 505 for holding a left LCD lens 510 and a right LCD lens 512. As noted above, each lens 510 and 512 can be rapidly and selectively darkened so as to prevent the wearer from seeing through it. A left earphone 530 and a right earphone 532 are also preferably attached to the frame 505. An antenna 520 for sending and receiving wireless information may also be included in or on the frame 505. The glasses may be tracked by any means to determine whether they are looking at the screen. For example, the front of the glasses may also include one or more photodetectors 540 for detecting the orientation of the glasses toward the monitor.
Alternating display of images from the video feeds may be provided using a variety of known techniques. The visual display 111 of FIG. 1 may be configured to operate in a progressive scan mode for each video feed sharing the screen. However, embodiments of the present invention may also be configured to work with interlaced video, e.g., as described. For standard television monitors, such as those using interlaced NTSC or PAL format video, the images of the two screen feeds may be interlaced, such that rows of one image from one video feed are interleaved with rows of an image from the other video feed. For example, the odd-numbered rows obtained from the image from the first video feed may be displayed, followed by the even-numbered rows obtained from the image from the second video feed.
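The row interleaving described above can be sketched as follows, treating each frame as a list of rows. This is a simplified model for illustration; real NTSC/PAL interlacing operates on fields transmitted at the field rate.

```python
def interlace_feeds(frame_a, frame_b):
    # Build one interlaced frame: odd-numbered rows (1st, 3rd, ...) come
    # from the first feed, even-numbered rows from the second feed.
    # Frames are lists of rows and must have the same height.
    assert len(frame_a) == len(frame_b)
    return [frame_a[i] if i % 2 == 0 else frame_b[i]
            for i in range(len(frame_a))]
```

Shuttering each viewer's glasses in step with which feed's rows are being shown is what lets two viewers share one screen.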
FIG. 8 illustrates a system-level diagram of glasses that may be used in conjunction with embodiments of the present invention. The glasses may include a processor 602 that executes instructions of a program 608 stored in a memory 604. The memory 604 may also store data to be provided to, or output from, the processor 602 and any other storage/retrieval elements of the glasses. The processor 602, the memory 604, and the other elements of the glasses may communicate with one another over a bus 606. Such other elements may include an LCD driver 610 that provides drive signals for selectively shuttering a left LCD lens 612 and a right LCD lens 614. The LCD driver may shutter the left and right LCD lenses individually, at different times and for various durations, or together, at the same time and for the same duration.
The frequency at which the LCD lenses are shuttered may be stored in the glasses in advance (e.g., a given frequency based on NTSC). Alternatively, the frequency may be selected by means of a user input 616 (e.g., a knob or buttons for adjusting or keying in the desired frequency). In addition, the desired frequency, as well as the initial shutter start time or other information indicating periods during which the LCD lenses should or should not be shuttered (regardless of whether such periods fall within and persist at the set frequency), may be transmitted to the glasses via a wireless transmitter/receiver 601 or any other input element. The wireless transmitter/receiver 601 may comprise any wireless transmitter, including a Bluetooth transmitter/receiver.
An audio amplifier 620 may also receive information from the wireless transmitter/receiver 601, namely the left and right channels of the audio to be provided to a left speaker 622 or a right speaker 624. The glasses may also include a microphone 630. The microphone 630 may be used in conjunction with a game to provide voice communications; voice signals may be transmitted to the game console or another device via the wireless transmitter/receiver 601.
The glasses may also include one or more photodetectors 634. The photodetectors may be used to determine whether the glasses are oriented toward the monitor. For example, the photodetectors may detect the intensity of light incident upon them and transmit that information to the processor 602. If the processor detects a substantial drop in light intensity, consistent with the user's gaze having turned away from the monitor, the processor may cease shuttering the lenses. Other methods of determining whether the glasses (and thus the user) are oriented toward the monitor may also be used. For example, one or more cameras may be used in place of the photodetectors, and the captured images may be examined by the processor 602 to determine whether the glasses are oriented toward the monitor. Several possible embodiments using such a camera may include examining contrast levels to detect whether the camera is pointed at the monitor, or attempting to detect a brightness test pattern on the monitor. The device providing the multiple feeds to the monitor may indicate the presence of such a test pattern by transmitting information to the processor 602 via the wireless transmitter/receiver 601.
It should be noted that some aspects of embodiments of the present invention may be implemented by the glasses, e.g., by software or firmware running on the processor 602. For example, content-driven, user-scaled/adjusted color contrast or correction settings may be implemented in the glasses, with an additional metadata stream sent to the glasses. Moreover, with improvements in wireless and LCD technology, the processor 113 could broadcast left-eye and right-eye image data directly to the glasses 119, eliminating the need for a separate display 111. Alternatively, a single image and associated pixel depth values could be fed to the glasses from the display 111 or the processor 113. Both would mean that the re-projection process actually takes place on the glasses.
Although embodiments have been described that include implementations in which three-dimensional images are viewed using passive or active 3D viewing glasses, embodiments of the present invention are not limited to such implementations. In particular, embodiments of the present invention may be applied to stereoscopic 3D video technologies that do not rely on head tracking or on passive or active 3D viewing glasses. Examples of such "glasses-free" stereoscopic 3D video technologies are sometimes referred to as autostereoscopic technologies or autostereoscopy. Examples of such technologies include, but are not limited to, technologies based on the use of lenticular lenses. A lenticular lens is an array of magnifying lenses designed so that, when viewed from slightly different angles, different images are magnified. The different images may be selected to provide a three-dimensional viewing effect as the lenticular screen is viewed at different angles. The number of images generated increases in proportion to the number of views for the screen.
More particularly, in a lenticular lens video system, re-projection images of a scene from slightly different viewing angles can be generated from an original 2D image and depth information for each pixel in the image. Using re-projection techniques, different views of the scene from progressively different viewing angles may be generated from the original 2D image and the depth information. Images representing the different views may be split into strips and displayed in an interleaved pattern on an autostereoscopic display having a display screen located between a lenticular lens array and a viewing location. The lenses that make up the lenticular lens array may be cylindrical magnifying lenses that are aligned with the strips and are typically twice as wide as the strips. Depending on the angle at which the screen is viewed, a viewer perceives different views of the scene. The different views may be selected to provide the illusion of depth in the displayed scene.
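A toy sketch of the strip interleaving just described, treating each view as a list of pixel columns and dealing columns out in round-robin order. Real systems additionally account for lenticule pitch, slant, and sub-pixel layout; none of that is modeled here.

```python
def interleave_views(views):
    # views: one image per viewing angle, each a list of pixel columns of
    # equal width. Output column i is taken from view (i mod n), so
    # adjacent strips come from progressively different viewing angles.
    n = len(views)
    width = len(views[0])
    return [views[i % n][i] for i in range(width)]
```

With two views this degenerates to simple alternation; with more views, each lenticule covers one strip from every view, and the lens directs each strip toward a different viewing angle.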
While the present invention has been described in considerable detail with reference to certain preferred versions thereof, other versions are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the preferred versions contained herein. Instead, the scope of the invention should be determined with reference to the appended claims, along with their full scope of equivalents.
All the features disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. Any feature, whether preferred or not, may be combined with any other feature, whether preferred or not. In the claims that follow, the indefinite article "a" or "an" refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. Any element in a claim that does not explicitly state "means for" performing a specified function is not to be interpreted as a "means" or "step" clause as specified in 35 USC § 112, ¶ 6. In particular, the use of "step of" in the claims herein is not intended to invoke the provisions of 35 USC § 112, ¶ 6.
The reader's attention is directed to all papers and documents that are filed concurrently with this specification and that are open to public inspection with this specification, and the contents of all such papers and documents are incorporated herein by reference.

Claims (23)

1. A method for dynamic adjustment of a set of predetermined three-dimensional stereoscopic video parameter settings, the method comprising:
a) determining one or more scale factors from one or more three-dimensional depth characteristics of particular scene content depicted in a given scene, wherein the given scene is selected from among a plurality of scenes in which the three-dimensional depth characteristics of the plurality of different scenes vary according to scene content, and wherein the one or more three-dimensional depth characteristics include a rate of change of the one or more three-dimensional depth characteristics of the particular scene content;
b) adjusting the set of predetermined three-dimensional stereoscopic video parameter settings for the given scene by applying the one or more scale factors to the predetermined three-dimensional stereoscopic video parameter settings, thereby producing an adjusted set of predetermined three-dimensional stereoscopic video parameters, wherein applying the one or more scale factors includes applying a default scale factor if the rate of change of the one or more three-dimensional depth characteristics of the particular scene content exceeds a threshold; and
c) displaying the given scene on a three-dimensional stereoscopic display using the adjusted set of predetermined three-dimensional stereoscopic video parameters.
2. The method of claim 1, wherein the one or more three-dimensional depth characteristics include a near plane and a far plane of the given scene.
3. The method of claim 1, wherein the one or more three-dimensional depth characteristics include an average depth value of the given scene.
4. The method of claim 2, wherein the near plane is determined by subtracting a scene pixel depth standard deviation from a scene pixel depth average.
5. The method of claim 2, wherein the far plane is determined by adding a scene pixel depth standard deviation to a scene pixel depth average.
6. The method of claim 2, wherein b) includes adjusting a shift of each pixel in an image of the scene according to a corresponding factor of the one or more scale factors.
7. The method of claim 1, further comprising filtering out one or more unimportant pixels associated with the given scene.
8. The method of claim 1, further comprising weighting each pixel associated with the given scene.
9. The method of claim 1, further comprising determining one or more targets associated with the given scene.
10. The method of claim 1, wherein c) includes determining a rate factor to control a rate at which the predetermined three-dimensional stereoscopic video parameters are adjusted during one or more scene changes across different scenes.
11. The method of claim 1, wherein the one or more scale factors are transmitted as metadata associated with the given scene in a stream of one or more scenes.
12. The method of claim 1, further comprising determining a rate of change of the three-dimensional depth characteristics.
13. The method of claim 1, wherein the set of predetermined three-dimensional stereoscopic video settings includes boundaries of a three-dimensional projection.
14. The method of claim 1, further comprising determining a rate of change of the three-dimensional depth characteristics, and wherein b) includes applying the default scale factor if the rate exceeds a threshold.
15. The method of claim 1, wherein the predetermined three-dimensional stereoscopic video parameter settings include stereopsis, convergence or shadow stereopsis, inter-camera spacing, focus distance, angle of inclination, focal length, or a combination thereof.
16. The method of claim 1, wherein b) includes providing a signal for adjusting one or more parameters of a virtual camera used to generate one or more views of the scene.
17. The method of claim 1, wherein b) includes providing a signal for adjusting one or more parameters of a physical camera used to generate the one or more views of the scene.
18. The method of claim 1, wherein the one or more three-dimensional depth characteristics include an average pixel depth of the given scene, a standard deviation of the pixel depths of the given scene, a near plane of the given scene, a far plane of the given scene, or a combination thereof.
19. An apparatus for dynamic adjustment of three-dimensional settings, the apparatus comprising:
a processor;
a memory; and
computer-coded instructions embodied in the memory and executable by the processor, wherein the computer-coded instructions are configured to implement a method for dynamic adjustment of a set of predetermined three-dimensional stereoscopic video parameter settings, the method comprising:
a) determining one or more scale factors from one or more three-dimensional depth characteristics of particular scene content depicted in a given scene, wherein the given scene is selected from among a plurality of scenes in which the three-dimensional depth characteristics of the plurality of different scenes vary according to scene content, and wherein the one or more three-dimensional depth characteristics include a rate of change of the one or more three-dimensional depth characteristics of the particular scene content;
b) adjusting the set of predetermined three-dimensional stereoscopic video parameter settings for the given scene by applying the one or more scale factors to the predetermined three-dimensional stereoscopic video parameter settings, thereby producing an adjusted set of predetermined three-dimensional stereoscopic video parameters, wherein applying the one or more scale factors includes applying a default scale factor if the rate of change of the one or more three-dimensional depth characteristics of the particular scene content exceeds a threshold; and
c) displaying the given scene on a three-dimensional stereoscopic display using the adjusted set of predetermined three-dimensional stereoscopic video parameters.
20. The apparatus of claim 19, further comprising a three-dimensional stereoscopic visual display configured to display the given scene in accordance with the adjusted set of predetermined three-dimensional stereoscopic video parameters.
21. The apparatus of claim 19, wherein b) includes providing a signal configured to adjust one or more parameters of a virtual camera used to generate one or more views of the scene.
22. The apparatus of claim 19, wherein b) includes providing a signal configured to adjust one or more parameters of a physical camera used to generate the one or more views of the scene.
23. A non-transitory computer-readable storage medium having computer-readable program code embodied therein for dynamic adjustment of a set of predetermined three-dimensional stereoscopic video parameter settings, the computer program being configured to:
a) when executed, determine one or more scale factors from one or more three-dimensional depth characteristics of particular scene content depicted in a given scene, wherein the given scene is selected from among a plurality of scenes in which the three-dimensional depth characteristics of the plurality of different scenes vary according to scene content, and wherein the one or more three-dimensional depth characteristics include a rate of change of the one or more three-dimensional depth characteristics of the particular scene content;
b) when executed, adjust the set of predetermined three-dimensional stereoscopic video parameter settings for the given scene by applying the one or more scale factors to the predetermined three-dimensional stereoscopic video parameter settings, thereby producing an adjusted set of predetermined three-dimensional stereoscopic video parameters, wherein applying the one or more scale factors includes applying a default scale factor if the rate of change of the one or more three-dimensional depth characteristics of the particular scene content exceeds a threshold; and
c) when executed, display the given scene on a three-dimensional stereoscopic display using the adjusted set of predetermined three-dimensional stereoscopic video parameters.
CN201610191875.3A 2011-01-07 2011-12-02 The dynamic adjustment of predetermined three-dimensional video setting based on scene content Active CN105959664B (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US12/986,814 US9041774B2 (en) 2011-01-07 2011-01-07 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
US12/986,872 US9183670B2 (en) 2011-01-07 2011-01-07 Multi-sample resolving of re-projection of two-dimensional image
US12/986,827 2011-01-07
US12/986,854 US8619094B2 (en) 2011-01-07 2011-01-07 Morphological anti-aliasing (MLAA) of a re-projection of a two-dimensional image
US12/986,827 US8514225B2 (en) 2011-01-07 2011-01-07 Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
US12/986,814 2011-01-07
US12/986,872 2011-01-07
US12/986,854 2011-01-07
CN201180063720.7A CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201180063720.7A Division CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content

Publications (2)

Publication Number Publication Date
CN105959664A CN105959664A (en) 2016-09-21
CN105959664B true CN105959664B (en) 2018-10-30

Family

ID=46457655

Family Applications (7)

Application Number Title Priority Date Filing Date
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 The morphology anti aliasing (MLAA) of the reprojection of two dimensional image
CN201610191451.7A Active CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 The multisample of the reprojection of two dimensional image is resolved
CN201180064484.0A Active CN103329165B (en) 2011-01-07 2011-12-02 The pixel depth value of the virtual objects that the user in scaling three-dimensional scenic controls
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 The multisample parsing of the reprojection of two dimensional image
CN201610191875.3A Active CN105959664B (en) 2011-01-07 2011-12-02 The dynamic adjustment of predetermined three-dimensional video setting based on scene content

Family Applications Before (6)

Application Number Title Priority Date Filing Date
CN201180063813.XA Active CN103348360B (en) 2011-01-07 2011-12-02 Morphological anti-aliasing (MLAA) of a reprojection of a two-dimensional image
CN201610191451.7A Active CN105894567B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063836.0A Active CN103283241B (en) 2011-01-07 2011-12-02 Multi-sample resolving of a reprojection of a two-dimensional image
CN201180064484.0A Active CN103329165B (en) 2011-01-07 2011-12-02 Scaling pixel depth values of user-controlled virtual objects in a three-dimensional scene
CN201180063720.7A Active CN103947198B (en) 2011-01-07 2011-12-02 Dynamic adjustment of predetermined three-dimensional video settings based on scene content
CN201610095198.5A Active CN105898273B (en) 2011-01-07 2011-12-02 Multi-sample resolving of a reprojection of a two-dimensional image

Country Status (5)

Country Link
KR (2) KR101741468B1 (en)
CN (7) CN103348360B (en)
BR (2) BR112013017321A2 (en)
RU (2) RU2562759C2 (en)
WO (4) WO2012094076A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3301645B1 (en) * 2013-10-02 2019-05-15 Given Imaging Ltd. System and method for size estimation of in-vivo objects
CN105323573B (en) 2014-07-16 2019-02-05 Beijing Samsung Telecommunication Technology Research Co., Ltd. 3D image display device and method
WO2016010246A1 (en) * 2014-07-16 2016-01-21 Samsung Electronics Co., Ltd. 3D image display device and method
EP3232406B1 (en) * 2016-04-15 2020-03-11 Ecole Nationale de l'Aviation Civile Selective display in a computer generated environment
CN107329690B (en) * 2017-06-29 2020-04-17 NetEase (Hangzhou) Network Co., Ltd. Virtual object control method and device, storage medium and electronic equipment
CN109398731B (en) * 2017-08-18 2020-09-08 Shenzhen Daotong Intelligent Aviation Technology Co., Ltd. Method and device for improving depth information of 3D image, and unmanned aerial vehicle
GB2571306A (en) * 2018-02-23 2019-08-28 Sony Interactive Entertainment Europe Ltd Video recording and playback systems and methods
CN109992175B (en) * 2019-04-03 2021-10-26 Tencent Technology (Shenzhen) Co., Ltd. Object display method, device and storage medium for simulating the sensation of blindness
RU2749749C1 (en) * 2020-04-15 2021-06-16 Samsung Electronics Co., Ltd. Method of synthesizing a two-dimensional image of a scene viewed from a required viewpoint, and electronic computing apparatus for implementing it
CN111275611B (en) * 2020-01-13 2024-02-06 Shenzhen Huacheng Digital Technology Co., Ltd. Method, device, terminal and storage medium for determining object depth in a three-dimensional scene
CN112684883A (en) * 2020-12-18 2021-04-20 Shanghai Yingchuang Information Technology Co., Ltd. Method and system for multi-user object distinguishing processing
US11882295B2 (en) 2022-04-15 2024-01-23 Meta Platforms Technologies, Llc Low-power high throughput hardware decoder with random block access
US20230334736A1 (en) * 2022-04-15 2023-10-19 Meta Platforms Technologies, Llc Rasterization Optimization for Analytic Anti-Aliasing

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2724033B1 (en) * 1994-08-30 1997-01-03 Thomson Broadband Systems SYNTHESIS IMAGE GENERATION METHOD
US5790086A (en) * 1995-01-04 1998-08-04 Visualabs Inc. 3-D imaging system
GB9511519D0 (en) * 1995-06-07 1995-08-02 Richmond Holographic Res Autostereoscopic display with enlargeable image volume
US8369607B2 (en) * 2002-03-27 2013-02-05 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
EP2357838B1 (en) * 2002-03-27 2016-03-16 Sanyo Electric Co., Ltd. Method and apparatus for processing three-dimensional images
KR20050010846A (en) * 2002-06-03 2005-01-28 Koninklijke Philips Electronics N.V. Adaptive scaling of video signals
EP1437898A1 (en) * 2002-12-30 2004-07-14 Koninklijke Philips Electronics N.V. Video filtering for stereo images
US7663689B2 (en) * 2004-01-16 2010-02-16 Sony Computer Entertainment Inc. Method and apparatus for optimizing capture device settings through depth information
US8094927B2 (en) * 2004-02-27 2012-01-10 Eastman Kodak Company Stereoscopic display system with flexible rendering of disparity map according to the stereoscopic fusing capability of the observer
US20050248560A1 (en) * 2004-05-10 2005-11-10 Microsoft Corporation Interactive exploded views from 2D images
US7643672B2 (en) * 2004-10-21 2010-01-05 Kazunari Era Image processing apparatus, image pickup device and program therefor
CA2599483A1 (en) * 2005-02-23 2006-08-31 Craig Summers Automatic scene modeling for the 3d camera and 3d video
JP4555722B2 (en) * 2005-04-13 2010-10-06 Hitachi Displays, Ltd. 3D image generator
US20070146360A1 (en) * 2005-12-18 2007-06-28 Powerproduction Software System And Method For Generating 3D Scenes
GB0601287D0 (en) * 2006-01-23 2006-03-01 Ocuity Ltd Printed image display apparatus
US8044994B2 (en) * 2006-04-04 2011-10-25 Mitsubishi Electric Research Laboratories, Inc. Method and system for decoding and displaying 3D light fields
US7778491B2 (en) 2006-04-10 2010-08-17 Microsoft Corporation Oblique image stitching
CN100510773C (en) * 2006-04-14 2009-07-08 Wuhan University Super-resolution reconstruction method for small targets in single-satellite remote-sensing images
US20080085040A1 (en) * 2006-10-05 2008-04-10 General Electric Company System and method for iterative reconstruction using mask images
US20080174659A1 (en) * 2007-01-18 2008-07-24 Mcdowall Ian Wide field of view display device and method
GB0716776D0 (en) * 2007-08-29 2007-10-10 Setred As Rendering improvement for 3D display
KR101484487B1 (en) * 2007-10-11 2015-01-28 Koninklijke Philips N.V. Method and device for processing a depth-map
US8493437B2 (en) * 2007-12-11 2013-07-23 Raytheon Bbn Technologies Corp. Methods and systems for marking stereo pairs of images
EP2235955A1 (en) * 2008-01-29 2010-10-06 Thomson Licensing Method and system for converting 2d image data to stereoscopic image data
JP4695664B2 (en) * 2008-03-26 2011-06-08 FUJIFILM Corporation 3D image processing apparatus, method, and program
US9019381B2 (en) * 2008-05-09 2015-04-28 Intuvision Inc. Video tracking systems and methods employing cognitive vision
US8106924B2 (en) 2008-07-31 2012-01-31 Stmicroelectronics S.R.L. Method and system for video rendering, computer program product therefor
US8743114B2 (en) * 2008-09-22 2014-06-03 Intel Corporation Methods and systems to determine conservative view cell occlusion
CN101383046B (en) * 2008-10-17 2011-03-16 Peking University Image-based three-dimensional reconstruction method
BRPI0914482A2 (en) * 2008-10-28 2015-10-27 Koninkl Philips Electronics Nv three-dimensional display system, method of operation for a three-dimensional display system and computer program product
US8335425B2 (en) * 2008-11-18 2012-12-18 Panasonic Corporation Playback apparatus, playback method, and program for performing stereoscopic playback
CN101783966A (en) * 2009-01-21 2010-07-21 Institute of Automation, Chinese Academy of Sciences Real three-dimensional display system and display method
RU2421933C2 (en) * 2009-03-24 2011-06-20 Samsung Electronics Co., Ltd. System and method for generating and reproducing a 3D video image
US8289346B2 (en) 2009-05-06 2012-10-16 Christie Digital Systems Usa, Inc. DLP edge blending artefact reduction
US9269184B2 (en) * 2009-05-21 2016-02-23 Sony Computer Entertainment America Llc Method and apparatus for rendering image based projected shadows with multiple depth aware blurs
US8933925B2 (en) * 2009-06-15 2015-01-13 Microsoft Corporation Piecewise planar reconstruction of three-dimensional scenes
CN101937079B (en) * 2010-06-29 2012-07-25 China Agricultural University Remote-sensing image change detection method based on region similarity

Also Published As

Publication number Publication date
KR20140004115A (en) 2014-01-10
CN103329165B (en) 2016-08-24
CN103348360B (en) 2017-06-20
CN103283241A (en) 2013-09-04
RU2013129687A (en) 2015-02-20
CN105898273A (en) 2016-08-24
WO2012094074A2 (en) 2012-07-12
WO2012094077A1 (en) 2012-07-12
RU2013136687A (en) 2015-02-20
CN103947198B (en) 2017-02-15
CN103329165A (en) 2013-09-25
WO2012094076A9 (en) 2013-07-25
RU2562759C2 (en) 2015-09-10
CN103283241B (en) 2016-03-16
KR101741468B1 (en) 2017-05-30
CN105894567A (en) 2016-08-24
CN105894567B (en) 2020-06-30
CN105959664A (en) 2016-09-21
BR112013017321A2 (en) 2019-09-24
KR20130132922A (en) 2013-12-05
WO2012094074A3 (en) 2014-04-10
BR112013016887B1 (en) 2021-12-14
KR101851180B1 (en) 2018-04-24
CN103947198A (en) 2014-07-23
RU2573737C2 (en) 2016-01-27
CN103348360A (en) 2013-10-09
WO2012094076A1 (en) 2012-07-12
WO2012094075A1 (en) 2012-07-12
CN105898273B (en) 2018-04-10
BR112013016887A2 (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN105959664B (en) Dynamic adjustment of predetermined three-dimensional video settings based on scene content
US9723289B2 (en) Dynamic adjustment of predetermined three-dimensional video settings based on scene content
US8514225B2 (en) Scaling pixel depth values of user-controlled virtual object in three-dimensional scene
KR101095392B1 (en) System and method for rendering 3-D images on a 3-D image display screen
JP7072633B2 (en) Video generation method and equipment
EP3712856B1 (en) Method and system for generating an image
KR100812905B1 (en) 3-dimensional image processing method and device
US20110316853A1 (en) Telepresence systems with viewer perspective adjustment
JP2004221700A (en) Stereoscopic image processing method and apparatus
EP2323416A2 (en) Stereoscopic editing for video production, post-production and display adaptation
JP2004007395A (en) Stereoscopic image processing method and device
JP2004007396A (en) Stereoscopic image processing method and device
CN108141578A Rendering camera
WO2011099896A1 (en) Method for representing an initial three-dimensional scene on the basis of results of an image recording in a two-dimensional projection (variants)
JP2003284095A (en) Stereoscopic image processing method and apparatus therefor
Selmanović et al. Generating stereoscopic HDR images using HDR-LDR image pairs
US12081722B2 (en) Stereo image generation method and electronic apparatus using the same
Bickerstaff Case study: the introduction of stereoscopic games on the Sony PlayStation 3
Miyashita et al. Perceptual Assessment of Image and Depth Quality of Dynamically Depth-compressed Scene for Automultiscopic 3D Display
US9609313B2 (en) Enhanced 3D display method and system
US20220148253A1 (en) Image rendering system and method
JP2024148528A (en) Image processing device, image processing method, and program
Shen et al. 3-D perception enhancement in autostereoscopic TV by depth cue for 3-D model interaction
Johansson Stereoscopy: Fooling the Brain into Believing There is Depth in a Flat Image
CA2982015A1 (en) Method and apparatus for depth enhanced imaging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant