
CN101455084A - A method and apparatus for decoding/encoding a video signal - Google Patents

A method and apparatus for decoding/encoding a video signal

Info

Publication number
CN101455084A
CN101455084A · Application CN200780019504A
Authority
CN
China
Prior art keywords
view
information
prediction
picture
reference picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200780019504
Other languages
Chinese (zh)
Inventor
全炳文
朴胜煜
具汉书
全勇俊
朴志皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LG Electronics Inc
Original Assignee
LG Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LG Electronics Inc filed Critical LG Electronics Inc
Publication of CN101455084A
Legal status: Pending

Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention provides a method of decoding a video signal. The method includes the steps of checking an encoding scheme of the video signal, obtaining configuration information for the video signal according to the encoding scheme, recognizing a total number of views using the configuration information, recognizing inter-view reference information based on the total number of the views, and decoding the video signal based on the inter-view reference information, wherein the configuration information includes at least view information for identifying a view of the video signal.

Description

Method and apparatus for decoding/encoding a video signal
Technical Field
The present invention relates to a method and apparatus for decoding/encoding a video signal.
Background Art
Compression coding means a series of signal processing techniques for transmitting digitized information via a telecommunication circuit or storing the digitized information in a form suitable for a storage medium. Targets of compression coding include audio, video, text, and the like. In particular, a technique for performing compression coding on a sequence is called video sequence compression. A video sequence is generally characterized by having spatial redundancy and temporal redundancy.
Summary of the Invention
Technical Objects
An object of the present invention is to enhance the coding efficiency of a video signal.
Technical Solution
One object of the present invention is to code a video signal efficiently by defining view information for identifying a view of a picture.
Another object of the present invention is to raise coding efficiency by coding a video signal based on inter-view reference information.
Another object of the present invention is to provide view scalability of a video signal by defining view level information.
A further object of the present invention is to code a video signal efficiently by defining an inter-view synthesis prediction identifier indicating whether a picture of a virtual view is obtained.
Advantageous Effects
In coding a video signal, the present invention performs inter-view prediction using view information for identifying a view of a picture, which enables more efficient coding. By newly defining level information indicating a hierarchical structure for providing view scalability, the present invention can provide a view sequence suitable for a user. Moreover, by defining a view corresponding to the lowest level as a reference view, the present invention provides compatibility with conventional decoders. In addition, the present invention raises coding efficiency by deciding whether to predict a picture of a virtual view during inter-view prediction. In case of predicting a picture of a virtual view, the invention enables more accurate prediction, thereby reducing the number of bits to be transmitted.
Brief Description of the Drawings
Fig. 1 is a schematic block diagram of an apparatus for decoding a video signal according to the present invention.
Fig. 2 is a diagram of configuration information on a multi-view video that can be added to a multi-view video coded bitstream according to one embodiment of the present invention.
Fig. 3 is an internal block diagram of a reference picture list constructing unit 620 according to one embodiment of the present invention.
Fig. 4 is a diagram of a hierarchical structure of level information for providing view scalability of a video signal according to one embodiment of the present invention.
Fig. 5 is a diagram of a NAL unit configuration including level information within an extension area of a NAL header according to one embodiment of the present invention.
Fig. 6 is a diagram of an overall prediction structure of a multi-view video signal according to one embodiment of the present invention, for explaining the concept of an inter-view picture group.
Fig. 7 is a diagram of a prediction structure according to one embodiment of the present invention, for explaining the concept of a newly defined inter-view picture group.
Fig. 8 is a schematic block diagram of an apparatus for decoding a multi-view video using inter-view picture group identification information according to one embodiment of the present invention.
Fig. 9 is a flowchart of a process for constructing a reference picture list according to one embodiment of the present invention.
Fig. 10 is a diagram for explaining a method of initializing a reference picture list when a current slice is a P-slice according to one embodiment of the present invention.
Fig. 11 is a diagram for explaining a method of initializing a reference picture list when a current slice is a B-slice according to one embodiment of the present invention.
Fig. 12 is an internal block diagram of a reference picture list reordering unit 640 according to one embodiment of the present invention.
Fig. 13 is an internal block diagram of a reference index assignment changing unit 643B or 645B according to one embodiment of the present invention.
Fig. 14 is a diagram for explaining a process of reordering a reference picture list using view information according to one embodiment of the present invention.
Fig. 15 is an internal block diagram of a reference picture list reordering unit 640 according to another embodiment of the present invention.
Fig. 16 is an internal block diagram of a reference picture list reordering unit 970 for inter-view prediction according to one embodiment of the present invention.
Fig. 17 and Fig. 18 are diagrams of syntax for reference picture list reordering according to one embodiment of the present invention.
Fig. 19 is a diagram of syntax for reference picture list reordering according to another embodiment of the present invention.
Fig. 20 is a diagram of a process for obtaining an illumination difference value of a current block according to one embodiment of the present invention.
Fig. 21 is a flowchart of a process for performing illumination compensation of a current block according to one embodiment of the present invention.
Fig. 22 is a diagram of a process for obtaining an illumination difference prediction value of a current block using information on a neighboring block according to one embodiment of the present invention.
Fig. 23 is a flowchart of a process for performing illumination compensation using information on a neighboring block according to one embodiment of the present invention.
Fig. 24 is a flowchart of a process for performing illumination compensation using information on a neighboring block according to another embodiment of the present invention.
Fig. 25 is a diagram of a process for predicting a current picture using a picture in a virtual view according to one embodiment of the present invention.
Fig. 26 is a flowchart of a process for synthesizing a picture of a virtual view in performing inter-view prediction in MVC according to one embodiment of the present invention.
Fig. 27 is a flowchart of a method of performing weighted prediction according to a slice type in video signal coding according to the present invention.
Fig. 28 is a diagram of macroblock types allowable for a slice type in video signal coding according to the present invention.
Fig. 29 and Fig. 30 are diagrams of syntax for performing weighted prediction according to a newly defined slice type according to one embodiment of the present invention.
Fig. 31 is a flowchart of a method of performing weighted prediction using flag information indicating whether to perform inter-view weighted prediction in video signal coding according to the present invention.
Fig. 32 is a diagram for explaining a weighted prediction method according to flag information indicating whether to perform weighted prediction using information on a picture in a view different from that of a current picture according to one embodiment of the present invention.
Fig. 33 is a diagram of syntax for performing weighted prediction according to newly defined flag information according to one embodiment of the present invention.
Fig. 34 is a flowchart of a method of performing weighted prediction according to a NAL (network abstraction layer) unit type according to one embodiment of the present invention.
Fig. 35 and Fig. 36 are diagrams of syntax for performing weighted prediction in case a NAL unit type is for multi-view video coding according to one embodiment of the present invention.
Fig. 37 is a partial block diagram of a video signal decoding apparatus according to a newly defined slice type according to one embodiment of the present invention.
Fig. 38 is a flowchart for explaining a method of decoding a video signal in the apparatus shown in Fig. 37 according to the present invention.
Fig. 39 is a diagram of macroblock prediction modes according to one embodiment of the present invention.
Fig. 40 and Fig. 41 are diagrams of syntax to which a slice type and a macroblock mode are applied according to the present invention.
Fig. 42 is a diagram of embodiments to which the slice types in Fig. 41 are applied.
Fig. 43 is a diagram of various embodiments of the slice types included in the slice types shown in Fig. 41.
Fig. 44 is a diagram of a macroblock allowable for a mixed slice type resulting from mixing two predictions according to one embodiment of the present invention.
Figs. 45 to 47 are diagrams of macroblock types of a macroblock existing in a mixed slice resulting from mixing two predictions according to one embodiment of the present invention.
Fig. 48 is a partial block diagram of a video signal encoding apparatus according to a newly defined slice type according to one embodiment of the present invention.
Fig. 49 is a flowchart of a method of encoding a video signal in the apparatus shown in Fig. 48 according to the present invention.
Embodiments
To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described herein, a method of decoding a video signal includes the steps of checking a coding scheme of the video signal, obtaining configuration information for the video signal according to the coding scheme, recognizing a total number of views using the configuration information, recognizing inter-view reference information based on the total number of views, and decoding the video signal based on the inter-view reference information, wherein the configuration information includes at least view information for identifying a view of the video signal.
To further achieve these and other advantages and in accordance with the purpose of the present invention, a method of decoding a video signal includes the steps of checking a coding scheme of the video signal, obtaining configuration information for the video signal according to the coding scheme, checking a level for view scalability of the video signal from the configuration information, recognizing inter-view reference information using the configuration information, and decoding the video signal based on the level and the inter-view reference information, wherein the configuration information includes view information for identifying a view of a picture.
Mode for the Invention
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings.
Techniques for compressing and encoding video signal data consider spatial redundancy, temporal redundancy, scalable redundancy, and inter-view redundancy. In the compression coding process, compression coding can also be performed by taking mutual redundancy between views into account. A compression coding technique that considers inter-view redundancy is only one embodiment of the present invention, and the technical idea of the present invention is also applicable to temporal redundancy, scalable redundancy, and the like.
Looking into the configuration of a bitstream in H.264/AVC, there exists a separate layer structure called a NAL (network abstraction layer) between a VCL (video coding layer) handling the moving picture coding process itself and a lower system that transmits and stores the coded information. The output of the encoding process is VCL data and is mapped by NAL units prior to being transmitted or stored. Each NAL unit includes compressed video data or an RBSP (raw byte sequence payload: result data of moving picture compression), which is data corresponding to header information.
A NAL unit basically includes a NAL header and an RBSP. The NAL header includes flag information (nal_ref_idc) indicating whether a slice serving as a reference picture of the NAL unit is included, and an identifier (nal_unit_type) indicating the type of the NAL unit. Compressed original data is stored in the RBSP, and an RBSP trailing bit is added to the last portion of the RBSP so that the length of the RBSP is represented as a multiple of 8 bits. Among the types of NAL units are the IDR (instantaneous decoding refresh) picture, SPS (sequence parameter set), PPS (picture parameter set), SEI (supplemental enhancement information), and the like.
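The one-byte NAL header layout described above can be illustrated with a small parser. This is a sketch of the standard H.264/AVC field layout (1-bit forbidden_zero_bit, 2-bit nal_ref_idc, 5-bit nal_unit_type); the function and dictionary names are illustrative, not from the patent.

```python
# Split the first byte of an H.264/AVC NAL unit into its three fields.
NAL_UNIT_NAMES = {
    5: "IDR slice",
    6: "SEI",
    7: "SPS",
    8: "PPS",
}

def parse_nal_header(first_byte):
    """Return (forbidden_zero_bit, nal_ref_idc, nal_unit_type)."""
    forbidden_zero_bit = (first_byte >> 7) & 0x01
    nal_ref_idc = (first_byte >> 5) & 0x03   # non-zero -> content used as reference
    nal_unit_type = first_byte & 0x1F
    return forbidden_zero_bit, nal_ref_idc, nal_unit_type

# 0x67 = 0b0_11_00111: a reference-worthy SPS NAL unit
f, ref, typ = parse_nal_header(0x67)
print(f, ref, typ, NAL_UNIT_NAMES.get(typ))  # 0 3 7 SPS
```

For instance, feeding the first byte of an SPS NAL unit (0x67) yields nal_ref_idc 3 and nal_unit_type 7.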
In the standardization, restrictions for various profiles and levels are set so that a target product can be implemented with an appropriate cost. In this case, a decoder should meet the restrictions decided according to the corresponding profile and level. Thus, the two concepts "profile" and "level" are defined to indicate a function or parameter representing how far the decoder can cope with the range of a compressed sequence. A profile identifier (profile_idc) can identify that a bitstream is based on a prescribed profile; that is, the profile identifier means a flag indicating the profile on which a bitstream is based. For instance, in H.264/AVC, a profile identifier of 66 means that a bitstream is based on the baseline profile, a profile identifier of 77 means that a bitstream is based on the main profile, and a profile identifier of 88 means that a bitstream is based on the extended profile. The profile identifier can be included in a sequence parameter set.
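The profile_idc values cited above can be captured in a small lookup; this helper is purely illustrative (the names are assumptions, and other profile_idc values exist in the standard).

```python
# Illustrative mapping of the H.264/AVC profile_idc values cited above.
PROFILES = {66: "baseline", 77: "main", 88: "extended"}

def profile_name(profile_idc):
    # Values outside the cited set are reported as unknown here.
    return PROFILES.get(profile_idc, "unknown/other")
```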
So, in order to handle a multi-view video, it needs to be identified whether the profile of an input bitstream is a multi-view profile. If the profile of the input bitstream is the multi-view profile, it is necessary to add syntax so that at least one piece of additional information on multi-view can be transmitted. In this case, the multi-view profile indicates a profile mode for handling multi-view video as an amendment technique of H.264/AVC. In MVC, it may be more efficient to add syntax as additional information for an MVC mode rather than as unconditional syntax. For instance, when a profile identifier of AVC indicates a multi-view profile, adding information on multi-view video can raise coding efficiency.
A sequence parameter set means header information containing information covering the coding of an overall sequence, such as the profile, the level, and the like. A whole compressed moving picture, i.e., a sequence, should begin at a sequence header. So, a sequence parameter set corresponding to the header information should arrive at a decoder before the data referring to the parameter set arrives. Namely, the sequence parameter set RBSP plays the role of header information for the result data of moving picture compression. Once a bitstream is input, the profile identifier preferably identifies which one of a plurality of profiles the input bitstream is based on. Hence, by adding a part for deciding whether the input bitstream relates to a multi-view profile (e.g., "If (profile_idc == MULTI_VIEW_PROFILE)") to the syntax, it is decided whether the input bitstream relates to the multi-view profile. Various kinds of configuration information can be added only if the input bitstream is determined to relate to the multi-view profile. For instance, the number of total views, the number of inter-view reference pictures (List0/1) in case of an inter-view picture group, the number of inter-view reference pictures (List0/1) in case of a non-inter-view picture group, and the like can be added. And various pieces of information on views can be used for generating and managing a reference picture list in a decoded picture buffer.
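The conditional parsing described above can be sketched as follows. This is a hedged sketch only: the value assigned to MULTI_VIEW_PROFILE and the field names (num_views and the List0/1 reference counts) are assumptions for illustration, not the normative syntax element names.

```python
# Hypothetical sketch: read multi-view configuration from an SPS only when
# the profile identifier indicates a multi-view bitstream.
MULTI_VIEW_PROFILE = 118  # assumed identifier value, for illustration only

def parse_sps(fields):
    """fields: dict of already-decoded syntax elements (simplified model)."""
    config = {"profile_idc": fields["profile_idc"]}
    if fields["profile_idc"] == MULTI_VIEW_PROFILE:
        # Multi-view configuration is added only for a multi-view bitstream.
        config["num_views"] = fields["num_views"]
        config["anchor_refs_l0"] = fields["anchor_refs_l0"]          # inter-view picture group, List0
        config["anchor_refs_l1"] = fields["anchor_refs_l1"]          # inter-view picture group, List1
        config["non_anchor_refs_l0"] = fields["non_anchor_refs_l0"]  # non-inter-view picture group, List0
        config["non_anchor_refs_l1"] = fields["non_anchor_refs_l1"]  # non-inter-view picture group, List1
    return config
```

A plain AVC stream (e.g., profile_idc 66) yields only the profile entry, while a multi-view stream also carries the view configuration.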
Fig. 1 is a schematic block diagram of an apparatus for decoding a video signal according to the present invention.
Referring to Fig. 1, the apparatus for decoding a video signal according to the present invention includes a NAL parser 100, an entropy decoding unit 200, an inverse quantization/inverse transform unit 300, an intra-prediction unit 400, a deblocking filter unit 500, a decoded picture buffer unit 600, an inter-prediction unit 700, and the like.
The decoded picture buffer unit 600 includes a reference picture storing unit 610, a reference picture list constructing unit 620, a reference picture managing unit 650, and the like. The reference picture list constructing unit 620 includes a variable deriving unit 625, a reference picture list initializing unit 630, and a reference picture list reordering unit 640.
And the inter-prediction unit 700 includes a motion compensation unit 710, an illumination compensation unit 720, an illumination difference prediction unit 730, a view synthesis prediction unit 740, and the like.
The NAL parser 100 carries out parsing by NAL units to decode a received video sequence. In general, at least one sequence parameter set and at least one picture parameter set are transferred to a decoder before a slice header and slice data are decoded. In this case, various kinds of configuration information can be included in a NAL header area or an extension area of a NAL header. Since MVC is an amendment technique of the conventional AVC technique, it may be more efficient to add the configuration information only in the case of an MVC bitstream rather than unconditionally. For instance, flag information for identifying the presence or absence of an MVC bitstream can be added in the NAL header area or the extension area of the NAL header. Only if an input bitstream is a multi-view video coded bitstream according to the flag information can configuration information on a multi-view video be added. For instance, the configuration information can include temporal level information, view level information, inter-view picture group identification information, view identification information, and the like. This is explained in detail with reference to Fig. 2 below.
Fig. 2 is a diagram of configuration information on a multi-view video added to a multi-view video coded bitstream according to one embodiment of the present invention. Details of the configuration information on a multi-view video are explained in the following description.
First of all, temporal level information indicates information on a hierarchical structure for providing temporal scalability from a video signal (1). Through the temporal level information, sequences in various time zones can be provided to a user.
View level information indicates information on a hierarchical structure for providing view scalability from a video signal (2). In a multi-view video, it is necessary to define a level for time and a level for view in order to provide a user with sequences at various times and views. In case the above level information is defined, temporal scalability and view scalability can be used. Hence, a user can select a sequence at a specific time and view, or a selected sequence may be restricted by a condition.
The level information can be set in various ways according to specific conditions. For instance, the level information can be set differently according to camera position or camera alignment. And the level information can be determined by considering view dependency. For instance, a level for a view having an I-picture in an inter-view picture group is set to 0, a level for a view having a P-picture in the inter-view picture group is set to 1, and a level for a view having a B-picture in the inter-view picture group is set to 2. Moreover, the level information can also be set arbitrarily without being based on specific conditions. The view level information will be explained in detail with reference to Fig. 4 and Fig. 5 later.
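The I/P/B rule above amounts to a tiny lookup. This is a sketch of one possible assignment only (the function name and interface are assumptions), since the paragraph notes that levels may also be set by camera position or arbitrarily.

```python
# Illustrative view-level assignment by view dependency, following the rule
# above: I-picture views -> level 0, P-picture views -> 1, B-picture views -> 2.
def view_level(anchor_picture_type):
    """anchor_picture_type: 'I', 'P', or 'B' within the inter-view picture group."""
    return {"I": 0, "P": 1, "B": 2}[anchor_picture_type]
```

Under this scheme the level-0 (I-picture) view can serve as the base view for decoders without view scalability.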
Inter-view picture group identification information means information for identifying whether a coded picture of a current NAL unit is an inter-view picture group (3). In this case, an inter-view picture group means a coded picture in which all slices refer only to slices having the same picture order count. For instance, an inter-view picture group means a coded picture that refers only to slices in a different view without referring to slices in the current view. In the decoding process of a multi-view video, inter-view random access may be needed, and the inter-view picture group identification information may be necessary to realize efficient inter-view random access. And inter-view reference information may be necessary for inter-view prediction, so the inter-view picture group identification information can be used to obtain the inter-view reference information. Moreover, the inter-view picture group identification information can be used to add reference pictures for inter-view prediction in constructing a reference picture list. Besides, it can be used to manage the added reference pictures for inter-view prediction. For instance, the reference pictures may be classified into inter-view picture groups and non-inter-view picture groups, and the classified reference pictures can then be marked so that reference pictures unusable for inter-view prediction shall not be used. Meanwhile, the inter-view picture group identification information is applicable to a hypothetical reference decoder. Details of the inter-view picture group identification information will be explained with reference to Fig. 6 later.
View identification information means information for distinguishing a picture in a current view from a picture in a different view (4). In coding a video signal, POC (picture order count) or "frame_num" can be used to identify each picture. In case of a multi-view video sequence, inter-view prediction can be carried out, so identification information to discriminate a picture in a current view from a picture in another view is needed. Thus, it is necessary to define view identification information for identifying a view of a picture. The view identification information can be obtained from a header area of a video signal. For instance, the header area can be a NAL header area, an extension area of a NAL header, or a slice header area. Information on a picture in a view different from that of a current picture is obtained using the view identification information, and the video signal can be decoded using the information on the picture in the different view. The view identification information is applicable to the overall encoding/decoding process of the video signal. And the view identification information can be applied to multi-view video coding using a "frame_num" that considers a view, instead of considering a specific view identifier.
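Because pictures captured at the same instant in different views share the same output time, a decoder-side picture store may need a view-aware key. The sketch below assumes a simplified model in which a (view_id, POC) pair identifies each stored picture; the function names and dictionary layout are illustrative, not the patent's implementation.

```python
# Hypothetical multi-view picture store keyed by (view_id, poc), so that a
# picture at the same output time in a different view can be retrieved for
# inter-view prediction.
dpb = {}

def store_picture(view_id, poc, picture):
    dpb[(view_id, poc)] = picture

def fetch_inter_view_reference(current_view_id, poc, reference_view_id):
    # Same output time (POC), different view: POC alone would be ambiguous.
    assert reference_view_id != current_view_id
    return dpb.get((reference_view_id, poc))
```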
Meanwhile, the entropy decoding unit 200 carries out entropy decoding on a parsed bitstream, and a coefficient of each macroblock, a motion vector, and the like are then extracted. The inverse quantization/inverse transform unit 300 obtains a transformed coefficient value by multiplying a received quantized value by a constant and then reconstructs a pixel value by inversely transforming the coefficient value. Using the reconstructed pixel value, the intra-prediction unit 400 performs intra-prediction from a decoded sample within the current picture. Meanwhile, the deblocking filter unit 500 is applied to each coded macroblock to reduce block distortion. The filter smooths block edges to enhance the image quality of a decoded frame. Selection of the filtering process depends on boundary strength and the gradient of the image samples around the edge. Pictures that have passed through filtering are output or stored in the decoded picture buffer unit 600 to be used as reference pictures.
The decoded picture buffer unit 600 plays the role of storing or opening previously coded pictures to perform inter-picture prediction. In this case, to store pictures in the decoded picture buffer unit 600 or to open them, the "frame_num" and POC (picture order count) of each picture are used. So, since among the previously coded pictures there exist pictures in views different from that of the current picture, view information for identifying the view of a picture may be used together with "frame_num" and POC. The decoded picture buffer unit 600 includes the reference picture storing unit 610, the reference picture list constructing unit 620, and the reference picture managing unit 650. The reference picture storing unit 610 stores pictures that will be referred to for the coding of the current picture. The reference picture list constructing unit 620 constructs a list of reference pictures for inter-picture prediction. In multi-view video coding, inter-view prediction may be needed. So, if a current picture refers to a picture in another view, it may be necessary to construct a reference picture list for inter-view prediction. In this case, the reference picture list constructing unit 620 can use information on views in generating the reference picture list for inter-view prediction. Details of the reference picture list constructing unit 620 will be explained with reference to Fig. 3 below.
Fig. 3 is an internal block diagram of the reference picture list constructing unit 620 according to one embodiment of the present invention.
The reference picture list constructing unit 620 includes the variable deriving unit 625, the reference picture list initializing unit 630, and the reference picture list reordering unit 640.
The variable deriving unit 625 derives variables used for reference picture list initialization. For instance, the variables can be derived using "frame_num" indicating a picture identification number. In particular, the variables FrameNum and FrameNumWrap may be usable for each short-term reference picture. First of all, the variable FrameNum is equal to the value of the syntax element frame_num. The variable FrameNumWrap can be used for the decoded picture buffer unit 600 to assign a small number to each reference picture, and it can be derived from the variable FrameNum. The derived variable FrameNumWrap can then be used to derive a variable PicNum. In this case, the variable PicNum can mean the identification number of a picture used by the decoded picture buffer unit 600. In case of indicating a long-term reference picture, a variable LongTermPicNum can be used.
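The FrameNum/FrameNumWrap/PicNum chain above follows the H.264/AVC rule that a stored frame_num greater than the current picture's frame_num must have wrapped around the modulus. A minimal sketch under that assumption (restricted to short-term reference frames; function names are illustrative):

```python
# Simplified H.264/AVC short-term variable derivation (frame coding only).
def frame_num_wrap(frame_num, current_frame_num, max_frame_num):
    # A stored frame_num greater than the current one is treated as having
    # wrapped around, so it is shifted down by MaxFrameNum.
    if frame_num > current_frame_num:
        return frame_num - max_frame_num
    return frame_num

def pic_num(frame_num, current_frame_num, max_frame_num):
    # For short-term reference frames, PicNum equals FrameNumWrap.
    return frame_num_wrap(frame_num, current_frame_num, max_frame_num)
```

For example, with MaxFrameNum 16 and a current frame_num of 3, a stored frame_num of 14 maps to -2, correctly ordering it before the current picture.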
To construct a reference picture list for inter-view prediction, a first variable (e.g., ViewNum) can be derived. For instance, a second variable (e.g., ViewId) can be derived using "view_id" for identifying the view of a picture. First of all, the second variable can be equal to the value of the syntax element "view_id". And a third variable (e.g., ViewIdWrap) can be used for the decoded picture buffer unit 600 to assign a small view identification number to each reference picture, and it can be derived from the second variable. In this case, the first variable ViewNum can mean the view identification number of a picture used by the decoded picture buffer unit 600. Yet, since the number of reference pictures used for inter-view prediction in multi-view video coding may be relatively smaller than the number used for temporal prediction, another variable to indicate the view identification number of a long-term reference picture may not be defined.
The reference picture list initializing unit 630 initializes a reference picture list using the above-mentioned variables. In this case, the initialization process for the reference picture list may differ according to the slice type. For instance, in case of decoding a P-slice, a reference index can be assigned based on decoding order. In case of decoding a B-slice, a reference index can be assigned based on picture output order. In case of initializing a reference picture list for inter-view prediction, an index can be assigned to reference pictures based on the first variable, i.e., the variable derived from the view information.
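The slice-type-dependent initialization just described can be sketched as below. The model is a deliberate simplification: pictures are plain dicts carrying a decoding-order key ("pic_num") and an output-order key ("poc"), and B-slice list 1 and long-term pictures are omitted.

```python
# Sketch of reference picture list initialization by slice type.
def init_list_p(short_term):
    # P-slice: decoding order, i.e., descending PicNum (most recent first).
    return sorted(short_term, key=lambda p: p["pic_num"], reverse=True)

def init_list0_b(short_term, current_poc):
    # B-slice list 0: past pictures by descending POC, then future pictures
    # by ascending POC (closest output times come first).
    past = sorted((p for p in short_term if p["poc"] < current_poc),
                  key=lambda p: p["poc"], reverse=True)
    future = sorted((p for p in short_term if p["poc"] > current_poc),
                    key=lambda p: p["poc"])
    return past + future
```

An inter-view list, by analogy, would be ordered on the view-derived variable (ViewNum) instead of PicNum or POC.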
The reference picture list reordering unit 640 plays a role in enhancing compression efficiency by assigning a smaller index to a picture referred to frequently in the initialized reference picture list. This is because fewer bits are assigned as the reference index used for encoding becomes smaller.
And the reference picture list reordering unit 640 includes a slice type checking unit 642, a reference picture list 0 reordering unit 643, and a reference picture list 1 reordering unit 645. If an initialized reference picture list is input, the slice type checking unit 642 checks the type of the slice to be decoded and then decides whether to reorder reference picture list 0 or reference picture list 1. So, the reference picture list 0/1 reordering units 643 and 645 perform reordering of reference picture list 0 if the slice type is not an I-slice, and additionally perform reordering of reference picture list 1 if the slice type is a B-slice. Thus, after the reordering process has ended, the reference picture list is constructed.
The reference picture list 0/1 reordering units 643 and 645 respectively include identification information obtaining units 643A and 645A and reference index allocation changing units 643B and 645B. If reordering of the reference picture list is carried out according to flag information indicating whether to execute the reordering, the identification information obtaining units 643A and 645A receive identification information (reordering_of_pic_nums_idc) indicating a reference index assigning method. And, the reference index allocation changing units 643B and 645B reorder the reference picture list by changing the reference index assignment according to the identification information.
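As a rough sketch of the reordering step, each command received by the identification information obtaining unit can be modeled as naming one picture to be pulled forward; the picture is moved to the next smallest free index and the remaining pictures shift back, as in H.264/AVC-style list modification. The fixed-point `abs_diff_pic_num_minus1` arithmetic of the real syntax is deliberately omitted here.

```python
def reorder_list(ref_list, commands):
    """Reorder a reference picture list (simplified sketch).

    `commands` stands in for the sequence of reordering_of_pic_nums_idc
    operations: each entry identifies one picture, which is moved to the
    current (smallest unassigned) index; pictures behind it shift back.
    """
    lst = list(ref_list)
    idx = 0
    for pic in commands:
        lst.remove(pic)      # take the named picture out of its old slot
        lst.insert(idx, pic)  # place it at the next smallest free index
        idx += 1
    return lst
```

After the loop, frequently referenced pictures occupy the small indexes, so the entropy coder spends fewer bits on them.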
And, the reference picture list reordering unit 640 can operate in another way. For instance, reordering can be carried out by checking a NAL unit type transferred prior to passing through the slice type checking unit 642 and then classifying the NAL unit type into an MVC NAL case and a non-MVC NAL case.
The reference picture managing unit 650 manages reference pictures to perform inter-prediction more flexibly. For instance, a memory management control operation method and a sliding window method can be used. This unifies a reference picture memory and a non-reference picture memory into one memory, and realizes efficient memory management with a small memory. In multi-view video coding, since pictures in a view direction have the same picture order count, information identifying the view of each picture can be used to mark the pictures along the view direction. And, reference pictures managed in the above manner can be used by the inter-prediction unit 700.
The inter-prediction unit 700 carries out inter-prediction using the reference pictures stored in the decoded picture buffer unit 600. An inter-coded macroblock can be divided into macroblock partitions. And, each macroblock partition can be predicted from one or two reference pictures. The inter-prediction unit 700 includes a motion compensation unit 710, an illumination compensation unit 720, an illumination difference prediction unit 730, a view synthesis prediction unit 740, a weighted prediction unit 750, and the like.
The motion compensation unit 710 compensates for the motion of a current block using information transferred from the entropy decoding unit 200. Motion vectors of blocks neighboring the current block are extracted from a video signal, and a motion vector predictor of the current block is derived from the motion vectors of the neighboring blocks. And, the motion of the current block is compensated using the derived motion vector predictor and a differential motion vector extracted from the video signal. And, the motion compensation can be performed using one reference picture or a plurality of pictures. In multi-view video coding, when a current picture refers to pictures in different views, the motion compensation can be performed using reference picture list information for inter-view prediction stored in the decoded picture buffer unit 600. And, the motion compensation can also be performed using view information identifying the view of the reference picture. A direct mode is a coding mode for predicting motion information of the current block from motion information of an already coded block. Since this method saves the number of bits required to encode the motion information, compression efficiency is improved. For instance, a temporal direct mode predicts motion information of the current block using the correlation of motion information in a temporal direction. Using a method similar to this one, the present invention can predict motion information of the current block using the correlation of motion information in a view direction.
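The motion vector reconstruction step just described can be illustrated with the component-wise median predictor used by H.264/AVC; this is a minimal sketch under that assumption, with sub-pel interpolation and partition handling omitted.

```python
def median(a, b, c):
    """Middle value of three numbers."""
    return sorted((a, b, c))[1]

def reconstruct_mv(neighbor_mvs, mvd):
    """Reconstruct a block's motion vector (sketch).

    The motion vector predictor is the component-wise median of the three
    neighboring blocks' motion vectors; the transmitted differential motion
    vector (mvd) is then added to it.
    """
    mvp_x = median(*(mv[0] for mv in neighbor_mvs))
    mvp_y = median(*(mv[1] for mv in neighbor_mvs))
    return (mvp_x + mvd[0], mvp_y + mvd[1])
```

For inter-view prediction the same mechanics apply; only the reference picture pointed to by the reference index lies in a different view rather than a different time.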
Meanwhile, in case of an input bitstream corresponding to multi-view video, since each view sequence is obtained by a different camera, an illumination difference is generated due to internal and external factors of the cameras. To compensate for this, the illumination compensation unit 720 compensates for the illumination difference. In performing the illumination compensation, flag information indicating whether to perform illumination compensation on a specific layer of the video signal can be used. For instance, the illumination compensation can be performed using flag information indicating whether to perform illumination compensation on a corresponding slice or macroblock. In performing the illumination compensation using the flag information, the illumination compensation is applicable to various macroblock types (e.g., inter 16x16 mode, B-skip mode, direct mode, etc.).
In performing the illumination compensation, information on a neighboring block or information on a block in a view different from that of the current block can be used to reconstruct the current block, and an illumination difference value of the current block can also be used. In this case, if the current block refers to a block in a different view, the illumination compensation can be performed using the reference picture list information for inter-view prediction stored in the decoded picture buffer unit 600. Here, the illumination difference value of the current block indicates the difference between an average pixel value of the current block and an average pixel value of a reference block corresponding to the current block. As an example of using the illumination difference value, an illumination difference prediction value of the current block is obtained using neighboring blocks of the current block, and a difference value (illumination difference residual) between the illumination difference value and the illumination difference prediction value is used. Hence, the decoding unit can reconstruct the illumination difference value of the current block using the illumination difference residual and the illumination difference prediction value. In obtaining the illumination difference prediction value of the current block, information on the neighboring blocks can be used. For instance, the illumination difference value of the current block can be predicted using an illumination difference value of a neighboring block. Prior to the prediction, it is checked whether the reference index of the current block equals that of the neighboring block; according to the check result, it is then decided which neighboring block or value will be used.
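The two quantities in this paragraph can be sketched directly: the illumination difference as a difference of block means, and its reconstruction from the transmitted residual plus a predictor taken from a neighbor whose reference index matches. The reference-index check used to pick the predictor is simplified here to a single neighbor.

```python
def illum_difference(cur_block, ref_block):
    """Illumination difference = mean(current block) - mean(reference block)."""
    mean = lambda b: sum(b) / len(b)
    return mean(cur_block) - mean(ref_block)

def reconstruct_ic(ic_residual, neighbor_ic, cur_ref_idx, neighbor_ref_idx):
    """Reconstruct the current block's illumination difference (sketch).

    The neighbor's illumination difference serves as the prediction value
    only when its reference index equals the current block's; otherwise the
    predictor falls back to 0 (an assumed fallback for illustration).
    """
    pred = neighbor_ic if cur_ref_idx == neighbor_ref_idx else 0
    return ic_residual + pred
```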
The view synthesis prediction unit 740 is used to synthesize a picture in a virtual view using pictures in views neighboring the view of a current picture, and to predict the current picture using the synthesized picture in the virtual view. The decoding unit can decide whether to synthesize a picture in a virtual view according to an inter-view synthesis prediction identifier transferred from an encoding unit. For instance, if view_synthesize_pred_flag = 1 or view_syn_pred_flag = 1, a slice or macroblock in a virtual view is synthesized. In this case, when the inter-view synthesis prediction identifier informs that a virtual view will be generated, a picture in the virtual view can be generated using the view information identifying the view of a picture. And, in predicting the current picture from the synthesized picture in the virtual view, the view information can be used to employ the picture in the virtual view as a reference picture.
The weighted prediction unit 750 is used to compensate for a phenomenon in which the picture quality of a sequence is considerably degraded in encoding the sequence whose brightness varies temporarily. In MVC, weighted prediction can be performed to compensate for an illumination difference from a sequence in a different view as well as for a sequence whose brightness varies temporarily. For instance, weighted prediction methods can be classified into an explicit weighted prediction method and an implicit weighted prediction method.
In particular, the explicit weighted prediction method can use one reference picture or two reference pictures. In case of using one reference picture, a prediction signal is generated by multiplying a prediction signal corresponding to motion compensation by a weight coefficient. In case of using two reference pictures, a prediction signal is generated by adding an offset value to a value resulting from multiplying the prediction signals corresponding to motion compensation by weight coefficients.
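The two explicit cases can be written out in H.264/AVC-style integer arithmetic; this is a sketch under that assumption (`lwd` plays the role of the log weight denominator, and the final clipping to the sample range is omitted).

```python
def weighted_pred_1(p0, w0, o0, lwd=5):
    """Explicit weighted prediction, one reference (sketch):
    scale the motion-compensated sample by the weight, round, add the offset."""
    return ((p0 * w0 + (1 << (lwd - 1))) >> lwd) + o0

def weighted_pred_2(p0, w0, p1, w1, o0, o1, lwd=5):
    """Explicit weighted prediction, two references (sketch):
    weighted sum of both motion-compensated samples plus the rounded offsets."""
    return ((p0 * w0 + p1 * w1 + (1 << lwd)) >> (lwd + 1)) + ((o0 + o1 + 1) >> 1)
```

With `lwd = 5` a weight of 32 is the identity scale, so `weighted_pred_1(p, 32, 0)` returns `p` unchanged.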
And, the implicit weighted prediction performs weighted prediction using a distance from a reference picture. As a method of obtaining the distance from the reference picture, for example, POC (picture order count), which indicates a picture output order, can be used. In this case, the POC can be obtained by considering the identification of the view of each picture. In obtaining a weight coefficient for a picture in a different view, the view information identifying the view of a picture can be used to obtain a distance between the views of the respective pictures.
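A distance-based weight derivation can be sketched as follows. The closer reference receives the larger weight; H.264/AVC derives the same proportions through a fixed-point `DistScaleFactor`, which this simplified version replaces with direct rational arithmetic. For inter-view weighting, the POC arguments would be replaced by view distances derived from the view information.

```python
def implicit_weights(poc_cur, poc_ref0, poc_ref1):
    """Derive implicit prediction weights from POC distances (sketch).

    Returns (w0, w1) summing to 64: the weight of each reference is
    proportional to the current picture's distance from the *other*
    reference, so the nearer reference is weighted more heavily.
    """
    td = poc_ref1 - poc_ref0   # distance between the two references
    tb = poc_cur - poc_ref0    # distance from reference 0 to the current picture
    w1 = round(64 * tb / td)
    w0 = 64 - w1
    return w0, w1
```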
In video signal coding, depth information can be used for a specific application or another purpose. In this case, the depth information can mean information indicating an inter-view disparity difference. For instance, a disparity vector can be obtained by inter-view prediction, and the obtained disparity vector should be transferred to a decoding apparatus for disparity compensation of a current block. Yet, if a depth map is obtained and transferred to the decoding apparatus, the disparity vector can be derived from the depth map (or disparity map) without transferring the disparity vector to the decoding apparatus. In this case, it is advantageous that the number of bits of the depth information to be transferred to the decoding apparatus can be reduced. So, by deriving the disparity vector from the depth map, a new disparity compensation method can be provided. And, in using a picture in a different view in the course of deriving the disparity vector from the depth map, the view information identifying the view of a picture can be used.
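The patent does not fix a conversion formula, but a common way to derive disparity from depth is the pinhole-camera relation d = f·B / Z; the following sketch assumes that relation and rectified, horizontally aligned cameras, so the disparity vector has only a horizontal component.

```python
def disparity_from_depth(depth, focal_length, baseline):
    """Derive a horizontal disparity value from a depth value (sketch).

    Assumes the standard pinhole relation d = f * B / Z for rectified views;
    `focal_length` (pixels) and `baseline` (same units as depth) are
    camera parameters that would come from calibration data.
    """
    return focal_length * baseline / depth

def disparity_vector(depth, focal_length, baseline):
    """Full disparity vector for rectified cameras: vertical component is 0."""
    return (disparity_from_depth(depth, focal_length, baseline), 0.0)
```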
The inter-predicted or intra-predicted pictures obtained through the above-explained processes are selected according to a prediction mode to reconstruct a current picture. In the following description, various embodiments of a decoding/encoding method for providing an efficient video signal are explained.
FIG. 4 is a diagram of a hierarchical structure of level information for providing view scalability of a video signal according to one embodiment of the present invention.
Referring to FIG. 4, level information for each view can be decided by considering inter-view reference information. For instance, since a P-picture and a B-picture can be decoded based on an I-picture, "level = 0" can be assigned to a view whose inter-view picture group is an I-picture, "level = 1" to a view whose inter-view picture group is a P-picture, and "level = 2" to a view whose inter-view picture group is a B-picture. Yet, the level information can also be decided arbitrarily according to a specific standard.
Level information can be decided arbitrarily or based on a specific standard. For instance, in case of deciding level information based on views, a view V0 as a base view can be set to view level 0, a view using pictures predicted from pictures in one view can be set to view level 1, and a view using pictures predicted from pictures in a plurality of views can be set to view level 2. In this case, at least one view sequence may need to be compatible with a conventional decoder (e.g., H.264/AVC, MPEG-2, MPEG-4, etc.). This base view becomes a basis of multi-view coding, and it can correspond to a reference view for predicting another view. In MVC (multi-view video coding), a sequence corresponding to the base view can be configured into an independent bitstream by being encoded with a conventional sequential coding scheme (MPEG-2, MPEG-4, H.263, H.264, etc.). The sequence corresponding to the base view may or may not be compatible with H.264/AVC. Yet, a sequence in a view compatible with H.264/AVC corresponds to the base view.
As can be seen in FIG. 4, a view V2 using pictures predicted from pictures in the view V0, a view V4 using pictures predicted from pictures in the view V2, a view V6 using pictures predicted from pictures in the view V4, and a view V7 using pictures predicted from pictures in the view V6 can be set to view level 1. And, a view V1 using pictures predicted from pictures in the views V0 and V2, and views V3 and V5 predicted in the same manner, can be set to view level 2. So, when a user's decoder is unable to view a multi-view video sequence, it decodes only the sequences in the view corresponding to view level 0. When the user's decoder is restricted by profile information, it can decode only the information of the restricted view levels. In this case, a profile means that technical elements of an algorithm in a video encoding/decoding process are standardized. In particular, a profile is a set of technical elements required for decoding a bit sequence of a compressed sequence and can be a sort of sub-standard.
According to another embodiment of the present invention, level information can vary according to camera position. For instance, assuming that views V0 and V1 are sequences obtained by a camera located in front, views V2 and V3 are sequences obtained by a camera located behind, views V4 and V5 are sequences obtained by a camera located on the left, and views V6 and V7 are sequences obtained by a camera located on the right, the views V0 and V1 can be set to view level 0, the views V2 and V3 to view level 1, the views V4 and V5 to view level 2, and the views V6 and V7 to view level 3. Alternatively, the level information can vary according to camera alignment. Alternatively, the level information can be decided arbitrarily, not based on a specific standard.
FIG. 5 is a diagram of a NAL unit configuration including level information within an extension area of a NAL header according to one embodiment of the present invention.
Referring to FIG. 5, a NAL unit basically includes a NAL header and an RBSP. The NAL header includes flag information (nal_ref_idc) indicating whether a slice becoming a reference picture of the NAL unit is included, and an identifier (nal_unit_type) indicating a type of the NAL unit. And, the NAL header can further include level information (view_level) indicating information on a hierarchical structure to provide view scalability.
Compressed raw data is stored in the RBSP, and an RBSP trailing portion is added to the last part of the RBSP to represent the length of the RBSP as a multiple of 8 bits. As types of the NAL unit, there are IDR (instantaneous decoding refresh), SPS (sequence parameter set), PPS (picture parameter set), SEI (supplemental enhancement information), and the like.
The NAL header includes information on a view identifier. And, in performing decoding according to a view level, a video sequence of the corresponding view level is decoded with reference to the view identifier.
The NAL unit includes a NAL header 51 and a slice layer 53. The NAL header 51 includes a NAL header extension 52. And, the slice layer 53 includes a slice header 54 and slice data 55.
The NAL header 51 includes an identifier (nal_unit_type) indicating a type of the NAL unit. For instance, the identifier indicating the NAL unit type may be an identifier for both scalable coding and multi-view video coding. In this case, the NAL header extension 52 can include flag information discriminating whether a current NAL is a NAL for scalable video coding or a NAL for multi-view video coding. And, according to the flag information, the NAL header extension 52 can include extension information for the current NAL. For instance, in case the current NAL is a NAL for multi-view video coding according to the flag information, the NAL header extension 52 can include level information (view_level) indicating information on a hierarchical structure to provide view scalability.
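A minimal bit-level parse of such a header can be sketched as follows. The base-header layout (1-bit forbidden_zero_bit, 2-bit nal_ref_idc, 5-bit nal_unit_type) follows H.264/AVC; the extension layout here, a 1-bit svc_mvc_flag followed by a 3-bit view_level, and the extension NAL unit type value 20, are assumptions for illustration only.

```python
class BitReader:
    """Read big-endian bit fields from a byte string."""
    def __init__(self, data):
        self.data, self.pos = data, 0

    def read(self, n):
        val = 0
        for _ in range(n):
            byte = self.data[self.pos // 8]
            val = (val << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return val

def parse_nal_header(data):
    """Parse a NAL header plus a hypothetical MVC extension (sketch)."""
    r = BitReader(data)
    hdr = {
        "forbidden_zero_bit": r.read(1),
        "nal_ref_idc": r.read(2),
        "nal_unit_type": r.read(5),
    }
    if hdr["nal_unit_type"] == 20:  # assumed extension NAL unit type
        hdr["svc_mvc_flag"] = r.read(1)  # SVC vs. MVC discriminating flag
        hdr["view_level"] = r.read(3)    # level information for view scalability
    return hdr
```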
FIG. 6 is a diagram of an overall prediction structure of a multi-view video signal according to one embodiment of the present invention, to explain the concept of an inter-view picture group.
Referring to FIG. 6, T0 to T100 on a horizontal axis indicate frames according to time, and S0 to S7 on a vertical axis indicate frames according to view. For instance, pictures at T0 mean frames captured by different cameras at the same time zone T0, while pictures at S0 mean sequences captured by a single camera at different time zones. And, arrows in the drawing indicate prediction directions and prediction orders of the respective pictures. For instance, a picture P0 in a view S2 at a time zone T0 is a picture predicted from I0, which becomes a reference picture of a picture P0 in a view S4 at the time zone T0. And, it becomes a reference picture of pictures B1 and B2 at time zones T4 and T2 in the view S2, respectively.
For a multi-view video decoding process, inter-view random access may be required. So, access to a random view should be possible while minimizing the decoding effort. In this case, the concept of an inter-view picture group may be needed to realize efficient access. An inter-view picture group means a coded picture in which all slices refer only to slices having the same picture order count. For instance, an inter-view picture group means a coded picture that refers only to slices in different views, without referring to slices in a current view. In FIG. 6, if a picture I0 in a view S0 at a time zone T0 is an inter-view picture group, all pictures in different views at the same time zone, i.e., the time zone T0, become inter-view picture groups. For another instance, if a picture I0 in the view S0 at a time zone T8 is an inter-view picture group, all pictures in different views at the same time zone, i.e., the time zone T8, are inter-view picture groups. Likewise, all pictures at T16, ..., T96 and T100 become inter-view picture groups as well.
FIG. 7 is a diagram of a prediction structure according to one embodiment of the present invention, to explain the concept of a newly defined inter-view picture group.
In an overall prediction structure of MVC, a GOP can begin with an I-picture. And, the I-picture is compatible with H.264/AVC. So, all inter-view picture groups compatible with H.264/AVC can always become I-pictures. Yet, in case the I-pictures are replaced by P-pictures, more efficient coding is enabled. In particular, a prediction structure enabling a GOP to begin with a P-picture compatible with H.264/AVC allows more efficient coding.
In this case, if the inter-view picture group is redefined, all slices become a coded picture capable of referring not only to a slice in a frame at the same time zone but also to a slice in the same view at a different time zone. Yet, in case of referring to a slice at a different time zone in the same view, it can be restricted to the inter-view picture group compatible with H.264/AVC only. For instance, a P-picture at the timing point T8 in the view S0 in FIG. 6 can become a newly defined inter-view picture group. Likewise, a P-picture at the timing point T96 in the view S0 or a P-picture at the timing point T100 in the view S0 can become a newly defined inter-view picture group. And, the inter-view picture group can be defined only when it is a base view.
After the inter-view picture group has been decoded, all of the sequentially coded pictures are decoded from the pictures decoded ahead of the inter-view picture group in output order, without inter-prediction.
Considering the overall coding structure of the multi-view video shown in FIG. 6 and FIG. 7, since inter-view reference information of an inter-view picture group differs from that of a non-inter-view picture group, it is necessary to distinguish the inter-view picture group and the non-inter-view picture group from each other according to inter-view picture group identification information.
The inter-view reference information means information capable of recognizing a prediction structure between inter-view pictures. This can be obtained from a data area of a video signal. For instance, it can be obtained from a sequence parameter set area. And, the inter-view reference information can be recognized using the number of reference pictures and view information of the reference pictures. For instance, the number of total views is obtained, and then view information identifying each view can be obtained based on the number of the total views. And, the number of reference pictures for a reference direction of each view can be obtained. According to the number of the reference pictures, view information of each reference picture can be obtained. In this manner, the inter-view reference information can be obtained. And, the inter-view reference information can be recognized by distinguishing an inter-view picture group and a non-inter-view picture group from each other. This can be done using inter-view picture group identification information indicating whether a coded slice in a current NAL is an inter-view picture group. Details of the inter-view picture group identification information are explained with reference to FIG. 8 as follows.
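The reading order in this paragraph (total number of views, then per-view view_id and reference counts with their view_ids) can be sketched over a parsed SPS-like structure. The container and field names (`num_views`, `anchor_refs_l0`, `anchor_refs_l1`) are hypothetical; only the traversal order mirrors the text.

```python
def parse_interview_reference_info(sps):
    """Collect inter-view reference information (sketch).

    For each of the `num_views` views, record the view_id and, per
    reference direction (list 0 / list 1), the view_ids of its inter-view
    reference pictures.
    """
    info = {}
    for v in range(sps["num_views"]):
        view = sps["views"][v]
        info[view["view_id"]] = {
            "refs_l0": list(view["anchor_refs_l0"]),
            "refs_l1": list(view["anchor_refs_l1"]),
        }
    return info
```

A decoder would keep one such table for inter-view picture groups and another for non-inter-view picture groups, selecting between them with the identification information described next.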
FIG. 8 is a schematic block diagram of an apparatus for decoding a multi-view video using inter-view picture group identification information according to one embodiment of the present invention.
Referring to FIG. 8, a decoding apparatus according to one embodiment of the present invention includes a bitstream deciding unit 81, an inter-view picture group identification information obtaining unit 82, and a multi-view video decoding unit 83.
If a bitstream is input, the bitstream deciding unit 81 decides whether the input bitstream is a coded bitstream for scalable video coding or a coded bitstream for multi-view video coding. This can be decided by flag information included in the bitstream.
If the input bitstream is the bitstream for multi-view video coding as a result of the decision, the inter-view picture group identification information obtaining unit 82 can obtain inter-view picture group identification information. If the obtained inter-view picture group identification information is "true", it means that a coded slice of a current NAL is an inter-view picture group. If the obtained inter-view picture group identification information is "false", it means that a coded slice of a current NAL is a non-inter-view picture group. The inter-view picture group identification information can be obtained from an extension area of a NAL header or a slice layer area.
The multi-view video decoding unit 83 decodes a multi-view video according to the inter-view picture group identification information. According to the overall coding structure of a multi-view video sequence, inter-view reference information of an inter-view picture group differs from that of a non-inter-view picture group. So, the inter-view picture group identification information can be used, for instance, in adding reference pictures for inter-view prediction to generate a reference picture list. And, the inter-view picture group identification information can also be used to manage the reference pictures for the inter-view prediction. Moreover, the inter-view picture group identification information can be applied to a hypothetical reference decoder.
As another example of using the inter-view picture group identification information, in case of using information in a different view for each decoding process, the inter-view reference information included in a sequence parameter set can be used. In this case, information for discriminating whether a current picture is an inter-view picture group or a non-inter-view picture group, i.e., the inter-view picture group identification information, may be required. So, different inter-view reference information can be used for each decoding process.
FIG. 9 is a flowchart of a process for generating a reference picture list according to one embodiment of the present invention.
Referring to FIG. 9, the decoded picture buffer unit 600 plays a role in storing or releasing previously coded pictures to perform inter-picture prediction.
First of all, pictures coded prior to a current picture are stored in the reference picture storing unit 610 to be used as reference pictures (S91).
In multi-view video coding, since some of the previously coded pictures are located in views different from that of the current picture, view information for identifying the view of a picture can be used to utilize these pictures as reference pictures. So, the decoder should obtain the view information for identifying the view of a picture (S92). For instance, the view information can include "view_id" for identifying the view of a picture.
The decoded picture buffer unit 600 needs to derive a variable used therein to generate a reference picture list. Since inter-view prediction may be required for multi-view video coding, if the current picture refers to a picture in a different view, a reference picture list for inter-view prediction needs to be generated. In this case, the decoded picture buffer unit 600 needs to derive the variable used to generate the reference picture list for inter-view prediction using the obtained view information (S93).
A reference picture list for temporal prediction or a reference picture list for inter-view prediction can be generated by a different method according to a slice type of a current slice (S94). For instance, if the slice type is a P/SP slice, reference picture list 0 is generated (S95). If the slice type is a B-slice, reference picture list 0 and reference picture list 1 are generated (S96). In this case, reference picture list 0 or 1 can include the reference picture list for temporal prediction only, or both the reference picture list for temporal prediction and the reference picture list for inter-view prediction. This will be explained in detail with reference to FIG. 8 and FIG. 9 later.
The initialized reference picture list undergoes a process of assigning a smaller number to a frequently referred picture to further enhance a compression rate (S97). This can be called a reordering process for the reference picture list, which will be explained in detail with reference to FIGs. 12 to 19 later. The current picture is decoded using the reordered reference picture list, and the decoded picture buffer unit 600 needs to manage the decoded reference pictures to operate the buffer more efficiently (S98). The reference pictures managed by the above process are read by the inter-prediction unit 700 to be used for inter-prediction. In multi-view video coding, the inter-prediction can include inter-view prediction. In this case, the reference picture list for inter-view prediction can be used.
Detailed examples of a method of generating a reference picture list according to a slice type are explained with reference to FIG. 10 and FIG. 11 as follows.
FIG. 10 is a diagram to explain a method of initializing a reference picture list when a current slice is a P-slice according to one embodiment of the present invention.
Referring to FIG. 10, time is indicated by T0, T1, ..., TN, while views are indicated by V0, V1, ..., V4. For instance, a current picture indicates a picture at a time T3 in a view V4. And, the slice type of the current picture is a P-slice. "PN" is an abbreviation of a variable PicNum, "LPN" is an abbreviation of a variable LongTermPicNum, and "VN" is an abbreviation of a variable ViewNum. A numeral attached to the end portion of each of the variables indicates an index of the time of each picture (for PN or LPN) or the view of each picture (for VN). This applies to FIG. 11 in the same manner.
A reference picture list for temporal prediction or a reference picture list for inter-view prediction can be generated in a different way according to a slice type of a current slice. For instance, the slice type in FIG. 10 is a P/SP slice. In this case, reference picture list 0 is generated. In particular, reference picture list 0 can include a reference picture list for temporal prediction and/or a reference picture list for inter-view prediction. In the present embodiment, it is assumed that the reference picture list includes both the reference picture list for temporal prediction and the reference picture list for inter-view prediction.
There are various methods of aligning reference pictures. For instance, the reference pictures can be aligned in order of decoding or picture output. Alternatively, they can be aligned based on a variable derived using view information. Alternatively, the reference pictures can be aligned according to inter-view reference information indicating an inter-view prediction structure.
In case of the reference picture list for temporal prediction, short-term reference pictures and long-term reference pictures can be aligned based on a decoding order. For instance, they can be aligned according to the value of a variable PicNum or LongTermPicNum derived from a value indicating a picture identifier (e.g., frame_num or Longtermframeidx). First of all, the short-term reference pictures can be initialized prior to the long-term reference pictures. The order of aligning the short-term reference pictures can be set from a reference picture having the highest value of the variable PicNum to a reference picture having the lowest value. For instance, among PN0 to PN2, the short-term reference pictures can be aligned in order of PN1 having the highest variable, PN2 having an intermediate variable, and PN0 having the lowest variable. The order of aligning the long-term reference pictures can be set from a reference picture having the lowest value of the variable LongTermPicNum to a reference picture having the highest value. For instance, the long-term reference pictures can be aligned in order of LPN0 having the lowest variable and LPN1 having the highest variable.
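The temporal alignment rule just stated, short-term pictures first (highest PicNum to lowest), then long-term pictures (lowest LongTermPicNum to highest), can be sketched as follows; the `(name, value)` pair representation of a reference picture is an assumption for illustration.

```python
def align_temporal_list(short_term, long_term):
    """Initialize the temporal part of a reference picture list (sketch).

    short_term: (name, PicNum) pairs, aligned in descending PicNum order.
    long_term:  (name, LongTermPicNum) pairs, aligned in ascending order.
    Short-term references always precede long-term references.
    """
    st = sorted(short_term, key=lambda p: p[1], reverse=True)
    lt = sorted(long_term, key=lambda p: p[1])
    return st + lt
```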
In case of the reference picture list for inter-view prediction, the reference pictures can be aligned based on the first variable ViewNum derived using view information. In particular, the reference pictures can be aligned in order from a reference picture having the highest first variable (ViewNum) value to a reference picture having the lowest first variable (ViewNum) value. For instance, among VN0, VN1, VN2 and VN3, the reference pictures can be aligned in order of VN3 having the highest variable, VN2, VN1, and VN0 having the lowest variable.
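The inter-view alignment is a single descending sort on the derived ViewNum variable; a minimal sketch under the same hypothetical picture representation as above:

```python
def align_interview_list(pictures):
    """Initialize the inter-view part of a reference picture list (sketch).

    Pictures are aligned from the highest ViewNum to the lowest, so the
    reference with the largest first-variable value receives index 0.
    """
    return sorted(pictures, key=lambda p: p["view_num"], reverse=True)
```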
Hence, the reference picture list for temporal prediction and the reference picture list for inter-view prediction can be managed together as one reference picture list. Alternatively, the reference picture list for temporal prediction and the reference picture list for inter-view prediction can each be managed as a separate reference picture list. In the case of managing them as one reference picture list, they can be initialized in sequence or simultaneously. For example, in the case of initializing them in sequence, the reference picture list for temporal prediction is initialized first, and the reference picture list for inter-view prediction is then initialized after it. This concept is also applicable to Figure 11.
A case where the slice type of the current picture is a B-slice is explained below with reference to Figure 11.
Figure 11 is a diagram for explaining a method of initializing a reference picture list when the current slice is a B-slice, according to one embodiment of the present invention.
Referring to Figure 11, when the slice type is a B-slice, a reference picture list 0 and a reference picture list 1 are generated. In this case, reference picture list 0 or reference picture list 1 can include only a reference picture list for temporal prediction, or can include both a reference picture list for temporal prediction and a reference picture list for inter-view prediction.
In the case of the reference picture list for temporal prediction, the method of arranging short-term reference pictures can differ from the method of arranging long-term reference pictures. For example, in the case of short-term reference pictures, reference pictures can be arranged according to picture order count (hereinafter abbreviated POC). In the case of long-term reference pictures, reference pictures can be arranged according to the value of a variable (LongTermPicNum). And short-term reference pictures can be initialized ahead of long-term reference pictures.
In the order of arranging the short-term reference pictures of reference picture list 0, reference pictures are arranged preferentially from the reference picture having the highest POC value to the reference picture having the lowest POC value among the reference pictures having POC values lower than that of the current picture, and are then arranged from the reference picture having the lowest POC value to the reference picture having the highest POC value among the reference pictures having POC values greater than that of the current picture. For instance, reference pictures can be arranged preferentially from PN1, having the highest POC value among the reference pictures PN0 and PN1 having POC values lower than that of the current picture, to PN0, and then arranged from PN3, having the lowest POC value among the reference pictures PN3 and PN4 having POC values greater than that of the current picture, to PN4.
In the order of arranging the long-term reference pictures of reference picture list 0, reference pictures are arranged from the reference picture having the lowest value of the variable LongTermPicNum to the reference picture having the highest value. For instance, reference pictures are arranged from LPN0, having the lowest value among LPN0 and LPN1, to LPN1 having the next lowest value.
In the case of the reference picture list for inter-view prediction, reference pictures can be arranged based on the first variable ViewNum derived using view information. For instance, in the case of reference picture list 0 for inter-view prediction, reference pictures can be arranged from the reference picture having the highest first variable value, among the reference pictures having first variable values lower than that of the current picture, to the reference picture having the lowest first variable value. Reference pictures are then arranged from the reference picture having the lowest first variable value, among the reference pictures having first variable values greater than that of the current picture, to the reference picture having the highest first variable value. For instance, reference pictures are arranged preferentially from VN1, having the highest first variable value among VN0 and VN1 having first variable values lower than that of the current picture, to VN0, and then arranged from VN3, having the lowest first variable value among VN3 and VN4 having first variable values greater than that of the current picture, to VN4.
In the case of reference picture list 1, the method of arranging reference picture list 0 explained above can be applied in a similar manner.
First, in the case of the reference picture list for temporal prediction, in the order of arranging the short-term reference pictures of reference picture list 1, reference pictures are arranged preferentially from the reference picture having the lowest POC value to the reference picture having the highest POC value among the reference pictures having POC values greater than that of the current picture, and are then arranged from the reference picture having the highest POC value to the reference picture having the lowest POC value among the reference pictures having POC values lower than that of the current picture. For instance, reference pictures can be arranged preferentially from PN3, having the lowest POC value among the reference pictures PN3 and PN4 having POC values greater than that of the current picture, to PN4, and then arranged from PN1, having the highest POC value among the reference pictures PN0 and PN1 having POC values lower than that of the current picture, to PN0.
In the order of arranging the long-term reference pictures of reference picture list 1, reference pictures are arranged from the reference picture having the lowest value of the variable LongTermPicNum to the reference picture having the highest value. For instance, reference pictures are arranged from LPN0, having the lowest value among LPN0 and LPN1, to LPN1.
In the case of the reference picture list for inter-view prediction, reference pictures can be arranged based on the first variable ViewNum derived using view information. For instance, in the case of reference picture list 1 for inter-view prediction, reference pictures can be arranged from the reference picture having the lowest first variable value to the reference picture having the highest first variable value among the reference pictures having first variable values greater than that of the current picture. Reference pictures are then arranged from the reference picture having the highest first variable value to the reference picture having the lowest first variable value among the reference pictures having first variable values lower than that of the current picture. For instance, reference pictures are arranged preferentially from VN3, having the lowest first variable value among VN3 and VN4 having first variable values greater than that of the current picture, to VN4, and then arranged from VN1, having the highest first variable value among VN0 and VN1 having first variable values lower than that of the current picture, to VN0.
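For the B-slice case, the below-then-above ordering for both lists can be sketched with one helper; since list 1 simply mirrors list 0, the same pattern also covers the ViewNum-based inter-view orderings. The function name and the plain list-of-numbers representation are illustrative assumptions, not the decoder's actual interface.

```python
def init_b_slice(values, cur, list1=False):
    """Arrange reference values (POCs, or view numbers in the
    inter-view case) around the current picture's value `cur`:
    list 0 takes the values below `cur` in descending order, then
    the values above it in ascending order; list 1 is the mirror."""
    below = sorted((v for v in values if v < cur), reverse=True)
    above = sorted(v for v in values if v > cur)
    return (above + below) if list1 else (below + above)
```

With POCs {1, 2, 4, 5} for PN0, PN1, PN3, PN4 and a current POC of 3, this reproduces the PN1, PN0, PN3, PN4 order for list 0 and the PN3, PN4, PN1, PN0 order for list 1 described above; passing view numbers instead reproduces the VN1, VN0, VN3, VN4 and VN3, VN4, VN1, VN0 orders.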
The reference picture list initialized by the above process is forwarded to the reference picture list reordering unit 640. The initialized reference picture list is then reordered for more efficient coding. The reordering process reduces the bit rate by operating the decoded picture buffer so that a small reference index is assigned to the reference picture having the highest probability of being selected as a reference picture. Various methods of reordering a reference picture list are explained below with reference to Figures 12 to 19.
Figure 12 is an internal block diagram of the reference picture list reordering unit 640 according to one embodiment of the present invention.
Referring to Figure 12, the reference picture list reordering unit 640 basically includes a slice type checking unit 642, a reference picture list 0 reordering unit 643, and a reference picture list 1 reordering unit 645.
In particular, the reference picture list 0 reordering unit 643 includes a first identification information obtaining unit 643A and a first reference index assignment changing unit 643B. And the reference picture list 1 reordering unit 645 includes a second identification information obtaining unit 645A and a second reference index assignment changing unit 645B.
The slice type checking unit 642 checks the slice type of the current slice. Whether to reorder reference picture list 0 and/or reference picture list 1 is then decided according to the slice type. For instance, if the slice type of the current slice is an I-slice, neither reference picture list 0 nor reference picture list 1 is reordered. If the slice type of the current slice is a P-slice, only reference picture list 0 is reordered. If the slice type of the current slice is a B-slice, both reference picture list 0 and reference picture list 1 are reordered.
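The slice-type decision reduces to a small lookup; a sketch for clarity, with the slice type given as a one-letter string purely for illustration:

```python
def lists_to_reorder(slice_type):
    """Which lists the slice type check marks for reordering:
    I-slice: none; P-slice: only list 0; B-slice: both lists."""
    return {"I": [], "P": [0], "B": [0, 1]}[slice_type]
```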
If the flag information for executing the reordering of reference picture list 0 is 'true' and if the slice type of the current slice is not an I-slice, the reference picture list 0 reordering unit 643 is activated. The first identification information obtaining unit 643A obtains identification information indicating a reference index assignment changing method. The first reference index assignment changing unit 643B changes the reference index assigned to each reference picture of reference picture list 0 according to the identification information.
Likewise, if the flag information for executing the reordering of reference picture list 1 is 'true' and if the slice type of the current slice is a B-slice, the reference picture list 1 reordering unit 645 is activated. The second identification information obtaining unit 645A obtains identification information indicating a reference index assignment changing method. The second reference index assignment changing unit 645B changes the reference index assigned to each reference picture of reference picture list 1 according to the identification information.
Thus, reference picture list information used for actual inter prediction is generated through the reference picture list 0 reordering unit 643 and the reference picture list 1 reordering unit 645.
A method of changing the reference index assigned to each reference picture by the first or second reference index assignment changing unit 643B or 645B is explained below with reference to Figure 13.
Figure 13 is an internal block diagram of the reference index assignment changing unit 643B or 645B according to one embodiment of the present invention. In the following description, the reference picture list 0 reordering unit 643 and the reference picture list 1 reordering unit 645 shown in Figure 12 are explained together.
Referring to Figure 13, each of the first and second reference index assignment changing units 643B and 645B includes a reference index assignment changing unit for temporal prediction 644A, a reference index assignment changing unit for a long-term reference picture 644B, a reference index assignment changing unit for inter-view prediction 644C, and a reference index assignment change terminating unit 644D. According to the identification information obtained by the first and second identification information obtaining units 643A and 645A, the corresponding parts within the first and second reference index assignment changing units 643B and 645B are activated, respectively. And the reordering process keeps being executed until identification information for terminating the reference index assignment change is input.
For instance, if identification information for changing the assignment of a reference index for temporal prediction is received from the first or second identification information obtaining unit 643A or 645A, the reference index assignment changing unit for temporal prediction 644A is activated. The reference index assignment changing unit for temporal prediction 644A obtains a picture number difference according to the received identification information. In this case, the picture number difference means the difference between the picture number of the current picture and a predicted picture number. And the predicted picture number can indicate the number of the reference picture assigned just before. So the assignment of the reference index can be changed using the obtained picture number difference. In this case, the picture number difference can be added to or subtracted from the predicted picture number according to the identification information.
As another example, if identification information for changing the assignment of a reference index to a designated long-term reference picture is received, the reference index assignment changing unit for a long-term reference picture 644B is activated. The reference index assignment changing unit for a long-term reference picture 644B obtains the long-term picture number of a designated picture according to the identification information.
As another example, if identification information for changing the assignment of a reference index for inter-view prediction is received, the reference index assignment changing unit for inter-view prediction 644C is activated. The reference index assignment changing unit for inter-view prediction 644C obtains a view information difference according to the identification information. In this case, the view information difference means the difference between the view number of the current picture and a predicted view number. And the predicted view number can indicate the view number of the reference picture assigned just before. So the assignment of the reference index can be changed using the obtained view information difference. In this case, the view information difference can be added to or subtracted from the predicted view number according to the identification information.
As another example, if identification information for terminating the reference index assignment change is received, the reference index assignment change terminating unit 644D is activated. The reference index assignment change terminating unit 644D terminates the reference index assignment change according to the received identification information. Thus, the reference picture list reordering unit 640 generates reference picture list information.
Thus, reference pictures used for inter-view prediction can be managed together with reference pictures used for temporal prediction. Alternatively, reference pictures used for inter-view prediction can be managed independently of the reference pictures used for temporal prediction. For this, new information for managing the reference pictures used for inter-view prediction may be required. This will be explained later with reference to Figures 15 to 19.
Details of the reference index assignment changing unit for inter-view prediction 644C are explained below with reference to Figure 14.
Figure 14 is a diagram for explaining a process of reordering a reference picture list using view information according to one embodiment of the present invention.
Referring to Figure 14, if the view number VN of the current picture is 3, if the size DPBsize of the decoded picture buffer is 4, and if the slice type of the current slice is a P-slice, the reordering process for reference picture list 0 is described as follows.
Initially, the predicted view number is '3', the view number of the current picture. And the initial arrangement of reference picture list 0 for inter-view prediction is '4, 5, 6, 2' (1). In this case, if identification information for changing the assignment of a reference index for inter-view prediction by subtracting a view information difference is received, '1' is obtained as the view information difference according to the received identification information. A new predicted view number (=2) is calculated by subtracting the view information difference (=1) from the predicted view number (=3). In particular, the first index of reference picture list 0 for inter-view prediction is assigned to the reference picture having view number 2. And the picture previously assigned to the first index can be moved to the rearmost part of reference picture list 0. So the reordered reference picture list 0 is '2, 5, 6, 4' (2). Subsequently, if identification information for changing the assignment of a reference index for inter-view prediction by subtracting a view information difference is received again, '-2' is obtained as the view information difference according to the identification information. A new predicted view number (=4) is then calculated by subtracting the view information difference (=-2) from the predicted view number (=2). In particular, the second index of reference picture list 0 for inter-view prediction is assigned to the reference picture having view number 4. Hence, the reordered reference picture list 0 is '2, 4, 6, 5' (3). Subsequently, if identification information for terminating the reference index assignment change is received, reference picture list 0 with the final reordering is generated according to the received identification information (4). Accordingly, the order of the finally generated reference picture list 0 for inter-view prediction is '2, 4, 6, 5'.
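The Figure 14 walk-through can be reproduced in a few lines. This sketch assumes the list holds bare view numbers and that, as in steps (1) to (3) above, each received view information difference is subtracted from the predicted view number and the displaced picture is moved to the rear of the list; the helper name is illustrative.

```python
def reorder_interview_list(ref_list, cur_view_num, diffs):
    """Apply view-information-difference reordering commands to an
    inter-view reference picture list of view numbers."""
    lst = list(ref_list)
    pred = cur_view_num              # initial predicted view number
    for idx, diff in enumerate(diffs):
        pred -= diff                 # new predicted view number
        old = lst[idx]               # picture currently at this index
        if old != pred:
            lst.remove(pred)         # pull the target picture out
            lst.insert(idx, pred)    # assign it to index idx
            lst.remove(old)          # displaced picture goes to the rear
            lst.append(old)
    return lst
```

Starting from '4, 5, 6, 2' with current view number 3, the differences 1 and -2 give the intermediate list '2, 5, 6, 4' and the final list '2, 4, 6, 5', as in the example.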
As another example of reordering the remaining pictures after the first index of reference picture list 0 for inter-view prediction has been assigned, the picture assigned to each index can be moved to the position immediately behind the corresponding picture. In particular, the second index is assigned to the picture having view number 4, the third index is assigned to the picture (view number 5) to which the second index had been assigned, and the fourth index is assigned to the picture (view number 6) to which the third index had been assigned. Hence, the reordered reference picture list 0 becomes '2, 4, 5, 6'. And the subsequent reordering process can be executed in the same manner.
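The shift-behind variant amounts to a remove-and-insert over the same list of bare view numbers; again a sketch, not the actual decoder routine:

```python
def reorder_shift(ref_list, idx, view_num):
    """Assign `view_num` to index `idx` and shift each following
    picture one position back, instead of moving a single displaced
    picture to the rear of the list."""
    lst = list(ref_list)
    lst.remove(view_num)
    lst.insert(idx, view_num)
    return lst
```

Assigning view number 2 to the first index of '4, 5, 6, 2' yields '2, 4, 5, 6', matching the example above.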
The reference picture list generated by the above-explained process is used for inter prediction. A reference picture list for inter-view prediction and a reference picture list for temporal prediction can be managed as one reference picture list. Alternatively, each of the reference picture list for inter-view prediction and the reference picture list for temporal prediction can be managed as a separate reference picture list. This is explained below with reference to Figures 15 to 19.
Figure 15 is an internal block diagram of a reference picture list reordering unit 640 according to another embodiment of the present invention.
Referring to Figure 15, in order to manage a reference picture list for inter-view prediction as a separate reference picture list, new information may be needed. For instance, in some cases, the reference picture list for temporal prediction is reordered first, and the reference picture list for inter-view prediction is then reordered.
The reference picture list reordering unit 640 basically includes a reference picture list reordering unit for temporal prediction 910, a NAL type checking unit 960, and a reference picture list reordering unit for inter-view prediction 970.
The reference picture list reordering unit for temporal prediction 910 includes a slice type checking unit 642, a third identification information obtaining unit 920, a third reference index assignment changing unit 930, a fourth identification information obtaining unit 940, and a fourth reference index assignment changing unit 950. The third reference index assignment changing unit 930 includes a reference index assignment changing unit for temporal prediction 930A, a reference index assignment changing unit for a long-term reference picture 930B, and a reference index assignment change terminating unit 930C. Likewise, the fourth reference index assignment changing unit 950 includes a reference index assignment changing unit for temporal prediction 950A, a reference index assignment changing unit for a long-term reference picture 950B, and a reference index assignment change terminating unit 950C.
The reference picture list reordering unit for temporal prediction 910 reorders the reference pictures used for temporal prediction. The operations of the reference picture list reordering unit for temporal prediction 910 are identical to those of the aforesaid reference picture list reordering unit 640 shown in Figure 10, except for the information on the reference pictures used for inter-view prediction. So details of the reference picture list reordering unit for temporal prediction 910 are omitted from the following description.
The NAL type checking unit 960 checks the NAL type of a received bitstream. If the NAL type is a NAL for multi-view video coding, the reference pictures used for inter-view prediction are reordered by the reference picture list reordering unit for inter-view prediction 970. The generated reference picture list for inter-view prediction is used for inter prediction together with the reference picture list generated by the reference picture list reordering unit for temporal prediction 910. Yet, if the NAL type is not a NAL for multi-view video coding, the reference picture list for inter-view prediction is not reordered. In this case, only a reference picture list for temporal prediction is generated. And the reference picture list reordering unit for inter-view prediction 970 reorders the reference pictures used for inter-view prediction. This is explained in detail below with reference to Figure 16.
Figure 16 is an internal block diagram of the reference picture list reordering unit for inter-view prediction 970 according to one embodiment of the present invention.
Referring to Figure 16, the reference picture list reordering unit for inter-view prediction 970 includes a slice type checking unit 642, a fifth identification information obtaining unit 971, a fifth reference index assignment changing unit 972, a sixth identification information obtaining unit 973, and a sixth reference index assignment changing unit 974.
The slice type checking unit 642 checks the slice type of the current slice. It is then decided according to the slice type whether to execute the reordering of reference picture list 0 and/or reference picture list 1. Details of the slice type checking unit 642 can be inferred from Figure 10, and are omitted from the following description.
Each of the fifth and sixth identification information obtaining units 971 and 973 obtains identification information indicating a reference index assignment changing method. And each of the fifth and sixth reference index assignment changing units 972 and 974 changes the reference index assigned to each reference picture of reference picture list 0 and/or 1. In this case, the reference index can mean only the view number of a reference picture. And the identification information indicating the reference index assignment changing method can be flag information. For instance, if the flag information is true, the assignment of view numbers is changed. If the flag information is false, the reordering process for view numbers can be terminated. If the flag information is true, each of the fifth and sixth reference index assignment changing units 972 and 974 can obtain a view number difference according to the flag information. In this case, the view number difference means the difference between the view number of the current picture and the view number of a predicted picture. And the view number of the predicted picture can mean the view number of the reference picture assigned just before. The assignment of view numbers can then be changed using the view number difference. In this case, the view number difference can be added to or subtracted from the view number of the predicted picture according to the identification information.
Thus, in order to manage the reference picture list for inter-view prediction as a separate reference picture list, it is necessary to newly define a syntax structure. As one embodiment of the content explained in Figures 15 and 16, the syntax is explained below with reference to Figures 17, 18 and 19.
Figures 17 and 18 are diagrams of syntax for reference picture list reordering according to one embodiment of the present invention.
Referring to Figure 17, the operations of the reference picture list reordering unit for temporal prediction 910 shown in Figure 15 are represented as syntax. Compared to the blocks shown in Figure 15, the slice type checking unit 642 corresponds to S1 and S6, and the fourth identification information obtaining unit 940 corresponds to S7. The internal blocks of the third reference index assignment changing unit 930 correspond to S3, S4 and S5, respectively. And the internal blocks of the fourth reference index assignment changing unit 950 correspond to S8, S9 and S10, respectively.
Referring to Figure 18, the operations of the NAL type checking unit 960 and the inter-view reference picture list reordering unit 970 are represented as syntax. Compared to the blocks shown in Figures 15 and 16, the NAL type checking unit 960 corresponds to S11, the slice type checking unit 642 corresponds to S13 and S16, the fifth identification information obtaining unit 971 corresponds to S14, and the sixth identification information obtaining unit 973 corresponds to S17. The fifth reference index assignment changing unit 972 corresponds to S15, and the sixth reference index assignment changing unit 974 corresponds to S18.
Figure 19 is a diagram of syntax for reference picture list reordering according to another embodiment of the present invention.
Referring to Figure 19, the operations of the NAL type checking unit 960 and the inter-view reference picture list reordering unit 970 are represented as syntax. Compared to the blocks shown in Figures 15 and 16, the NAL type checking unit 960 corresponds to S21, the slice type checking unit 642 corresponds to S22 and S25, the fifth identification information obtaining unit 971 corresponds to S23, and the sixth identification information obtaining unit 973 corresponds to S26. The fifth reference index assignment changing unit 972 corresponds to S24, and the sixth reference index assignment changing unit 974 corresponds to S27.
As described in the foregoing explanation, the reference picture list for inter-view prediction can be used by the inter prediction unit 700, and can also be used in executing illumination compensation. Illumination compensation can be applied in the course of executing motion estimation/motion compensation. In the case that a current picture uses a reference picture in a different view, illumination compensation can be executed more efficiently using the reference picture list for inter-view prediction. Illumination compensation according to embodiments of the present invention is described below.
Figure 20 is a diagram of a process of obtaining an illumination difference value of a current block according to one embodiment of the present invention.
Illumination compensation means a process of decoding a motion-compensated video signal adaptively with respect to illumination variation. And it is applicable to the prediction structures of a video signal, for instance, inter-view prediction, intra-view prediction and the like.
Illumination compensation means a process of decoding a video signal using an illumination difference residual and an illumination difference prediction value corresponding to a block to be decoded. In this case, the illumination difference prediction value can be obtained from a neighboring block of the current block. The process of obtaining the illumination difference prediction value from the neighboring block can be decided using reference information of the neighboring block, and the sequence and direction of searching the neighboring blocks can be taken into consideration. The neighboring block means an already decoded block, and also means a block decoded by considering redundancy within the same picture, or a sequence decoded by considering redundancy between different pictures, in view or in time.
In comparing the similarity between a current block and a candidate reference block, the illumination difference between the two blocks should be taken into consideration. To compensate for the illumination difference, new motion estimation/motion compensation is executed. The new SAD can be found using Formula 1 and Formula 2.
[Formula 1]
$$M_{cur} = \frac{1}{S \times T}\sum_{i=m}^{m+S-1}\sum_{j=n}^{n+T-1} f(i,j), \qquad M_{ref}(p,q) = \frac{1}{S \times T}\sum_{i=p}^{p+S-1}\sum_{j=q}^{q+T-1} r(i,j)$$
[Formula 2]
$$\mathrm{NewSAD}(x,y) = \sum_{i=m}^{m+S-1}\sum_{j=n}^{n+T-1}\left|\{f(i,j)-M_{cur}\} - \{r(i+x,j+y)-M_{ref}(m+x,n+y)\}\right|$$
In this case, M_cur indicates the average pixel value of the current block, and M_ref indicates the average pixel value of the reference block. f(i,j) indicates a pixel value of the current block, and r(i+x, j+y) indicates a pixel value of the reference block. By executing motion estimation based on the new SAD according to Formula 2, the average pixel value difference between the current block and the reference block can be obtained. And the obtained average pixel value difference can be called the illumination difference value (IC_offset).
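Formulas 1 and 2 together define a mean-removed SAD: subtracting each block's average first means a constant illumination offset between the two views does not penalize a match. A small sketch over nested lists of pixel values follows; the function name and block geometry are illustrative assumptions.

```python
def mean_removed_sad(cur, ref, x, y):
    """NewSAD(x, y) per Formula 2, with M_cur and M_ref computed
    per Formula 1. `cur` is an S-by-T block of pixel values; `ref`
    is the reference frame sampled at displacement (x, y)."""
    S, T = len(cur), len(cur[0])
    m_cur = sum(cur[i][j] for i in range(S) for j in range(T)) / (S * T)
    m_ref = sum(ref[x + i][y + j] for i in range(S) for j in range(T)) / (S * T)
    return sum(abs((cur[i][j] - m_cur) - (ref[x + i][y + j] - m_ref))
               for i in range(S) for j in range(T))
```

A candidate block that is a uniformly brightened copy of the current block scores a NewSAD of zero, and the residual mean difference M_cur - M_ref is exactly the IC_offset carried forward into Formula 3.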
In the case of executing motion estimation to which illumination compensation is applied, an illumination difference value and a motion vector are generated. And illumination compensation is executed using the illumination difference value and the motion vector according to Formula 3.
[Formula 3]
$$\begin{aligned}\mathrm{NewR}(i,j) &= \{f(i,j)-M_{cur}\} - \{r(i+x',j+y')-M_{ref}(m+x',n+y')\}\\ &= \{f(i,j)-r(i+x',j+y')\} - \{M_{cur}-M_{ref}(m+x',n+y')\}\\ &= \{f(i,j)-r(i+x',j+y')\} - IC\_offset\end{aligned}$$
In this case, NewR(i,j) indicates an illumination-compensated error value (residual), and (x', y') indicates a motion vector.
The illumination difference value (M_cur - M_ref) should be transferred to the decoding unit. The decoding unit executes illumination compensation in the following manner.
[Formula 4]
$$\begin{aligned}f'(i,j) &= \{\mathrm{NewR}''(x',y',i,j) + r(i+x',j+y')\} + \{M_{cur}-M_{ref}(m+x',n+y')\}\\ &= \{\mathrm{NewR}''(x',y',i,j) + r(i+x',j+y')\} + IC\_offset\end{aligned}$$
In Formula 4, NewR''(i,j) indicates a reconstructed illumination-compensated error value (residual), and f'(i,j) indicates a pixel value of the reconstructed current block.
To reconstruct the current block, the illumination difference value should be transferred to the decoding unit. And the illumination difference value can be predicted from information of neighboring blocks. To further reduce the number of bits needed to code the illumination difference value, only the difference (RIC_offset) between the illumination difference value of the current block (IC_offset) and the illumination difference value of a neighboring block (predIC_offset) can be sent. This is expressed as Formula 5.
[Formula 5]
RIC_offset=IC_offset-predIC_offset
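As a hedged sketch (not the patent's normative codec), Formulas 3 to 5 can be expressed in a few lines of Python: the encoder removes the block means before computing the residual, and the decoder adds the transmitted IC_offset back during reconstruction. All function names are illustrative.

```python
def block_mean(block):
    """Average pixel value of a block (Mcurr / Mref in the text)."""
    return sum(sum(row) for row in block) / (len(block) * len(block[0]))

def ic_offset(cur_block, ref_block):
    """Illumination difference: IC_offset = Mcurr - Mref."""
    return block_mean(cur_block) - block_mean(ref_block)

def encode_residual(cur_block, ref_block):
    """Formula 3: NewR(i,j) = {f(i,j) - r(i+x',j+y')} - IC_offset."""
    off = ic_offset(cur_block, ref_block)
    return [[f - r - off for f, r in zip(frow, rrow)]
            for frow, rrow in zip(cur_block, ref_block)]

def decode_block(residual, ref_block, off):
    """Formula 4: f'(i,j) = NewR''(i,j) + r(i+x',j+y') + IC_offset."""
    return [[n + r + off for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(residual, ref_block)]

def ric_offset(ic_off, pred_ic_off):
    """Formula 5: only the difference from the predictor is transmitted."""
    return ic_off - pred_ic_off
```

With a lossless residual, the round trip is exact: `decode_block(encode_residual(f, r), r, ic_offset(f, r))` recovers the current block.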
Figure 21 is a flowchart of a process for performing illumination compensation of a current block according to one embodiment of the present invention.
Referring to Figure 21, first of all, an illumination difference of a neighboring block, which indicates the average pixel difference between the neighboring block of the current block and a block referred to by the neighboring block, is extracted from the video signal (S2110).
Subsequently, an illumination difference prediction value for the illumination compensation of the current block is obtained using the extracted illumination difference (S2120). The illumination difference of the current block can then be reconstructed using the obtained illumination difference prediction value.
Various methods are usable in obtaining the illumination difference prediction value. For instance, before the illumination difference of the current block is predicted from the illumination difference of the neighboring block, it is checked whether the reference index of the current block is equal to that of the neighboring block; which neighboring block or value to use can then be decided according to the result of the check. For another instance, flag information (IC_flag) indicating whether to perform illumination compensation of the current block can be used in obtaining the illumination difference prediction value, and the flag information for the current block can likewise be predicted using information of the neighboring blocks. For a further instance, the illumination difference prediction value can be obtained using both the reference-index checking method and the flag-information prediction method. These are explained in detail with reference to Figures 22 to 24 below.
Figure 22 is a block diagram of a process for obtaining an illumination difference prediction value of a current block using information on neighboring blocks according to one embodiment of the present invention.
Referring to Figure 22, in obtaining the illumination difference prediction value of the current block, information on neighboring blocks can be used. In the present disclosure, a block can include a macroblock or a sub-macroblock. For instance, the illumination difference of the current block can be predicted using illumination differences of the neighboring blocks. Prior to this, it is checked whether the reference index of the current block is equal to those of the neighboring blocks; according to the result of the check, which neighboring block or value to use can be decided. In Figure 22, "refIdxLX" indicates the reference index of the current block, and "refIdxLXN" indicates the reference index of block-N, where "N" is a label for the block A, B, or C adjacent to the current block. "PredIC_offsetN" indicates the illumination difference for the illumination compensation of the neighboring block-N. If block-C located at the upper-right of the current block is unavailable, block-D can be used instead of block-C; in particular, the information on block-D can be used as the information on block-C. If neither block-B nor block-C is available, block-A can be used instead; that is, the information on block-A can be used as the information on block-B or block-C.
For another instance, in obtaining the illumination difference prediction value, the flag information (IC_flag) indicating whether to perform illumination compensation of the current block can be used. Alternatively, both the reference-index checking method and the flag-information prediction method can be used in obtaining the illumination difference prediction value. In this case, if the flag information of a neighboring block indicates that illumination compensation is not performed, i.e., if IC_flag == 0, the illumination difference "PredIC_offsetN" of that neighboring block is set to 0.
Figure 23 is a flowchart of a process for performing illumination compensation using information on neighboring blocks according to one embodiment of the present invention.
Referring to Figure 23, a decoding unit extracts an average pixel value of a reference block, a reference index of the current block, a reference index of the reference block, and the like from the video signal, and can then obtain an illumination difference prediction value of the current block using the extracted information. The decoding unit obtains a difference (illumination difference residual) between the illumination difference of the current block and the illumination difference prediction value, and can then reconstruct the illumination difference of the current block using the obtained illumination difference residual and the illumination difference prediction value. In doing so, information on neighboring blocks can be used to obtain the illumination difference prediction value of the current block. For instance, the illumination difference of the current block can be predicted using the illumination difference of a neighboring block. Prior to this, it is checked whether the reference index of the current block is equal to that of the neighboring block; according to the result of the check, which neighboring block or value to use can be decided.
In particular, an illumination difference of a neighboring block, which indicates the average pixel difference between the neighboring block of the current block and a block referred to by the neighboring block, is extracted from the video signal (S2310).
Subsequently, it is checked whether the reference index of the current block is equal to a reference index of one of a plurality of neighboring blocks (S2320).
As a result of the check in step S2320, if there exists at least one neighboring block having the same reference index as the current block, it is checked whether there exists exactly one such neighboring block (S2325).
As a result of the check in step S2325, if there exists only one neighboring block having the same reference index as the current block, the illumination difference of that neighboring block is assigned to the illumination difference prediction value of the current block (S2330). In particular, "PredIC_offset = PredIC_offsetN".
If, as a result of the check in step S2320, no neighboring block has the same reference index as the current block, or if, as a result of the check in step S2325, there exist at least two neighboring blocks having the same reference index as the current block, the median of the illumination differences of the neighboring blocks (PredIC_offsetN, N = A, B, or C) is assigned to the illumination difference prediction value of the current block (S2340). In particular, "PredIC_offset = Median(PredIC_offsetA, PredIC_offsetB, PredIC_offsetC)".
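The decision of steps S2320 through the median assignment can be sketched as follows. This is an illustrative rendering that assumes all three neighbors A, B, and C are available; the function name and data layout are hypothetical.

```python
def predict_ic_offset(cur_ref_idx, neighbors):
    """neighbors maps 'A'/'B'/'C' to (reference index, illumination difference).

    Exactly one neighbor sharing the current block's reference index ->
    use its offset (S2330); zero or two-plus matches -> median of all
    three neighbor offsets.
    """
    labels = ('A', 'B', 'C')
    matches = [n for n in labels if neighbors[n][0] == cur_ref_idx]
    if len(matches) == 1:
        return neighbors[matches[0]][1]   # PredIC_offset = PredIC_offsetN
    offs = sorted(neighbors[n][1] for n in labels)
    return offs[1]                        # Median(PredIC_offsetA, B, C)
```

For example, with neighbors `{'A': (0, 2), 'B': (1, 5), 'C': (1, 9)}` a current reference index of 0 matches only block-A and returns 2, while an index of 1 matches two blocks and falls back to the median, 5.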
Figure 24 is a flowchart of a process for performing illumination compensation using information on neighboring blocks according to another embodiment of the present invention.
Referring to Figure 24, a decoding unit has to reconstruct the illumination difference of the current block in order to perform illumination compensation. In doing so, information on neighboring blocks can be used to obtain the illumination difference prediction value of the current block. For instance, the illumination difference of the current block can be predicted using the illumination differences of the neighboring blocks. Prior to this, it is checked whether the reference index of the current block is equal to those of the neighboring blocks; according to the result of the check, which neighboring block or value to use can be decided.
In particular, an illumination difference of a neighboring block, which indicates the average pixel difference between the neighboring block of the current block and a block referred to by the neighboring block, is extracted from the video signal (S2410).
Subsequently, it is checked whether the reference index of the current block is equal to a reference index of one of a plurality of neighboring blocks (S2420).
As a result of the check in step S2420, if there exists at least one neighboring block having the same reference index as the current block, it is checked whether there exists exactly one such neighboring block (S2430).
As a result of the check in step S2430, if there exists only one neighboring block having the same reference index as the current block, the illumination difference of that neighboring block is assigned to the illumination difference prediction value of the current block (S2440). In particular, "PredIC_offset = PredIC_offsetN".
If, as a result of the check in step S2420, no neighboring block has the same reference index as the current block, the illumination difference prediction value of the current block is set to 0 (S2460). In particular, "PredIC_offset = 0".
If, as a result of the check in step S2430, there exist at least two neighboring blocks having the same reference index as the current block, the neighboring blocks having a reference index different from that of the current block are set to 0, and the median of the illumination differences of the neighboring blocks, including the values set to 0, is assigned to the illumination difference prediction value of the current block (S2450). In particular, "PredIC_offset = Median(PredIC_offsetA, PredIC_offsetB, PredIC_offsetC)". Yet, in the case that there exists a neighboring block having a reference index different from that of the current block, the value "0" can be included among PredIC_offsetA, PredIC_offsetB, or PredIC_offsetC.
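The Figure 24 variant differs only in its fallbacks: no matching neighbor yields a predictor of 0, and with two or more matches the mismatched neighbors contribute 0 to the median. A sketch under the same illustrative assumptions as before:

```python
def predict_ic_offset_v2(cur_ref_idx, neighbors):
    """Figure 24 rule; step labels (S2440/S2450/S2460) as in the text."""
    labels = ('A', 'B', 'C')
    matches = [n for n in labels if neighbors[n][0] == cur_ref_idx]
    if not matches:                       # S2460: PredIC_offset = 0
        return 0
    if len(matches) == 1:                 # S2440: take the single match
        return neighbors[matches[0]][1]
    # S2450: mismatched neighbors contribute 0, then take the median
    offs = sorted(neighbors[n][1] if n in matches else 0 for n in labels)
    return offs[1]
```

With neighbors `{'A': (0, 2), 'B': (1, 5), 'C': (1, 9)}` and a current reference index of 1, block-A contributes 0, so the median is taken over {0, 5, 9}, giving 5.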
Meanwhile, the view information for identifying the view of a picture and the reference picture list for inter-view prediction are also applicable to synthesizing a picture in a virtual view. In the process of synthesizing a picture in a virtual view, a picture in a different view may be referred to. So, if the view information and the reference picture list for inter-view prediction are used, a picture in a virtual view can be synthesized more efficiently. In the following description, methods of synthesizing a picture in a virtual view according to embodiments of the present invention are explained.
Figure 25 is a block diagram of a process for predicting a current picture using a picture in a virtual view according to one embodiment of the present invention.
Referring to Figure 25, in performing inter-view prediction in multi-view video coding, a current picture can be predicted using, as a reference picture, a picture in a view different from that of the current picture. Yet, a picture in a virtual view can be obtained using pictures in views adjacent to the view of the current picture, and the current picture can then be predicted using the obtained picture in the virtual view. If so, a more accurate prediction can be performed. In this case, a view identifier indicating the view of a picture can be utilized to use a picture in an adjacent view or a picture in a specific view. In the case of generating the virtual view, there has to exist specific syntax indicating whether to generate the virtual view. If this syntax indicates that the virtual view shall be generated, the virtual view can be generated using the view identifier. The picture in the virtual view obtained by the view synthesis prediction unit 740 is usable as a reference picture; in this case, the view identifier can be assigned to the picture in the virtual view. In the process of performing motion vector prediction to transmit a motion vector, a neighboring block of the current block can refer to the picture obtained by the view synthesis prediction unit 740. In this case, to use the picture in the virtual view as a reference picture, the view identifier indicating the view of a picture can be utilized.
Figure 26 is a flowchart of a process for synthesizing a picture in a virtual view in performing inter-view prediction in MVC according to one embodiment of the present invention.
Referring to Figure 26, a picture in a virtual view is synthesized using pictures in views adjacent to the view of the current picture. The current picture is then predicted using the synthesized picture in the virtual view. If so, a more accurate prediction can be achieved. In the case of synthesizing a picture in a virtual view, there exists specific syntax indicating whether to carry out the prediction of the current picture by synthesizing the picture in the virtual view. If it is decided whether to carry out the prediction of the current picture in this way, more efficient coding is possible. This specific syntax is defined as an inter-view synthesis prediction identifier, which is explained as follows. For instance, "view_synthesize_pred_flag", which indicates whether to carry out the prediction of the current picture by synthesizing a picture in a virtual view at the slice layer, is defined. And, "view_syn_pred_flag", which indicates whether to carry out the prediction of the current picture by synthesizing a picture in a virtual view at the macroblock layer, is defined. If "view_synthesize_pred_flag == 1", the current slice synthesizes a slice in a virtual view using slices in views adjacent to the view of the current slice; the current slice can then be predicted using the synthesized slice. If "view_synthesize_pred_flag == 0", a slice in a virtual view is not synthesized. Likewise, if "view_syn_pred_flag == 1", the current macroblock synthesizes a macroblock in a virtual view using macroblocks in views adjacent to the view of the current macroblock; the current macroblock can then be predicted using the synthesized macroblock. If "view_syn_pred_flag == 0", a macroblock in a virtual view is not synthesized. Therefore, in the present invention, the inter-view synthesis prediction identifier, which indicates whether a picture in a virtual view will be obtained, is extracted from the video signal. The picture in the virtual view can then be obtained using the inter-view synthesis prediction identifier.
As mentioned in the foregoing description, the inter prediction unit 700 can use the view information for identifying the view of a picture and the reference picture list for inter-view prediction. They can also be used in performing weighted prediction. Weighted prediction is applicable to the process of performing motion compensation. In doing so, if the current picture uses a reference picture in a different view, the weighted prediction can be carried out more efficiently using the view information and the reference picture list for inter-view prediction. Weighted prediction methods according to embodiments of the present invention are explained below.
Figure 27 is a flowchart of a method of performing weighted prediction according to a slice type in video signal coding according to the present invention.
Referring to Figure 27, weighted prediction is a method of scaling samples of motion-compensated prediction data within a macroblock of a P-slice or a B-slice. Weighted prediction methods include an explicit mode for performing weighted prediction for a current picture using weighting coefficient information obtained from information on reference pictures, and an implicit mode for performing weighted prediction for a current picture using weighting coefficient information obtained from information on the distance between the current picture and one of the reference pictures. The weighted prediction methods can be differently applied according to the slice type of the current macroblock. For instance, in the explicit mode, the weighting coefficient information can be changed according to whether the current macroblock, on which the weighted prediction is carried out, is a macroblock of a P-slice or a macroblock of a B-slice. The weighting coefficients of the explicit mode are decided by the encoder and can be transmitted by being included in the slice header. On the other hand, in the implicit mode, the weighting coefficients can be obtained based on the relative temporal positions of the List 0 and List 1 reference pictures. For instance, if a reference picture is temporally close to the current picture, a large weighting coefficient is applicable; if a reference picture is temporally distant from the current picture, a small weighting coefficient is applicable.
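For illustration only, the implicit-mode idea (the temporally closer reference receives the larger weight) can be sketched with a simple linear rule over picture order; this is a simplification for exposition, not the exact fixed-point derivation of H.264.

```python
def implicit_weights(poc_cur, poc_list0, poc_list1):
    """Return (w0, w1) so that the temporally closer reference picture
    gets the larger weight.  POC = picture order count."""
    td = poc_list1 - poc_list0            # distance between the two references
    if td == 0:
        return 0.5, 0.5                   # degenerate case: equal weights
    tb = poc_cur - poc_list0              # distance from List 0 ref to current
    w1 = tb / td                          # weight on the List 1 prediction
    return 1.0 - w1, w1                   # weight on the List 0 prediction
```

For example, with the current picture at POC 1 between references at POC 0 and POC 4, the List 0 reference is closer and receives the larger weight, 0.75.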
First of all, a slice type of a macroblock to which the weighted prediction is to be applied is extracted from the video signal (S2710).
Subsequently, the weighted prediction can be performed on the macroblock according to the extracted slice type (S2720).
In this case, the slice type can include a macroblock to which inter-view prediction is applied. Inter-view prediction means that a current picture is predicted using information on a picture in a view different from that of the current picture. For instance, the slice type can include a macroblock to which temporal prediction, performing a prediction using information on a picture in the same view as that of the current picture, is applied, a macroblock to which inter-view prediction is applied, and a macroblock to which both temporal prediction and inter-view prediction are applied. The slice type can also include only a macroblock to which temporal prediction is applied, only a macroblock to which inter-view prediction is applied, or only a macroblock to which both predictions are applied. Moreover, the slice type can include two of the macroblock types or all three of the macroblock types. This will be explained in detail with reference to Figure 28 later. Thus, in the case of extracting from the video signal a slice type including a macroblock to which inter-view prediction is applied, the weighted prediction is carried out using information on a picture in a view different from that of the current picture. In doing so, a view identifier for identifying the view of a picture can be utilized in using the information on the picture in the different view.
Figure 28 is a table of macroblock types allowable in a slice type in video signal coding according to one embodiment of the present invention.
Referring to Figure 28, if a P-slice type for inter-view prediction is defined as VP (View_P), then for the P-slice type for inter-view prediction an intra macroblock I, a macroblock P predicted from one picture in the current view, or a macroblock VP predicted from one picture in a different view is allowable (2810).
In the case that a B-slice type for inter-view prediction is defined as VB (View_B), a macroblock P or B predicted from at least one picture in the current view, or a macroblock VP or VB predicted from at least one picture in a different view, is allowable (2820).
In the case that a slice type, on which a prediction is carried out using temporal prediction, inter-view prediction, or both temporal prediction and inter-view prediction, is defined as "Mixed", then for the mixed slice type an intra macroblock I, a macroblock P or B predicted from at least one picture in the current view, a macroblock VP or VB predicted from at least one picture in a different view, or a macroblock "Mixed" predicted using both a picture in the current view and a picture in a different view is allowable (2830). In this case, a view identifier for identifying the view of a picture can be used in order to use the picture in the different view.
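The allowances just described can be captured in a small lookup. This is an illustrative sketch built only from the paragraph above, not a reproduction of the exact contents of Figure 28.

```python
# Macroblock types allowable per newly defined slice type (ref. 2810-2830)
ALLOWED_MB_TYPES = {
    'VP':    {'I', 'P', 'VP'},
    'VB':    {'P', 'B', 'VP', 'VB'},
    'Mixed': {'I', 'P', 'B', 'VP', 'VB', 'Mixed'},
}

def mb_type_allowed(slice_type, mb_type):
    """True if the macroblock type is allowable in the given slice type."""
    return mb_type in ALLOWED_MB_TYPES.get(slice_type, set())
```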
Figures 29 and 30 are tables of syntax for carrying out weighted prediction according to the newly defined slice types according to one embodiment of the present invention.
As mentioned in the foregoing description of Figure 28, if the slice type is decided as VP, VB, or Mixed, the syntax for carrying out conventional (e.g., H.264) weighted prediction can be modified as in Figure 29 or Figure 30. For instance, for slices on which inter-view prediction is carried out, the if-statement can be modified into "if (slice_type == VP ‖ slice_type == VB)" (2910).
If the slice type is a B-slice on which temporal prediction is carried out, the if-statement can be modified into "if (slice_type == B ‖ slice_type == Mixed)" (2920).
By newly defining the VP slice type and the VB slice type, tables similar to those of Figure 29 can be newly added (2930, 2940). In this case, since information on views is added, each syntax element includes a "view" part. For instance, there are "luma_log2_view_weight_denom" and "chroma_log2_view_weight_denom".
Figure 31 is a flowchart of a method of performing weighted prediction using flag information indicating whether to perform inter-view weighted prediction in video signal coding according to the present invention.
Referring to Figure 31, in video signal coding to which the present invention is applied, more efficient coding is possible in the case of using flag information indicating whether the weighted prediction will be carried out.
The flag information can be defined based on the slice type. For instance, there can exist flag information indicating whether the weighted prediction will be applied to a P-slice or an SP-slice, or flag information indicating whether the weighted prediction will be applied to a B-slice.
In particular, the flag information can be defined as "weighted_pred_flag" or "weighted_bipred_idc". If "weighted_pred_flag == 0", it indicates that the weighted prediction is not applied to P-slices and SP-slices; if "weighted_pred_flag == 1", it indicates that the weighted prediction is applied to P-slices and SP-slices. If "weighted_bipred_idc == 0", it indicates that default weighted prediction is applied to B-slices; if "weighted_bipred_idc == 1", it indicates that explicit weighted prediction is applied to B-slices; and if "weighted_bipred_idc == 2", it indicates that implicit weighted prediction is applied to B-slices.
In multi-view video coding, flag information indicating whether the weighted prediction will be carried out using information on an inter-view picture can be defined based on the slice type.
First of all, a slice type and flag information indicating whether inter-view weighted prediction will be carried out are extracted from the video signal (S3110, S3120). In this case, the slice type can include a macroblock to which temporal prediction, performing a prediction using information on a picture in the same view as that of the current picture, is applied, and a macroblock to which inter-view prediction, performing a prediction using information on a picture in a view different from that of the current picture, is applied.
A weighted prediction mode can then be decided based on the extracted slice type and the extracted flag information (S3130).
Subsequently, the weighted prediction can be carried out according to the decided weighted prediction mode (S3140). In this case, the flag information can include flag information indicating whether the weighted prediction will be carried out using information on a picture in a view different from that of the current picture, as well as the aforementioned "weighted_pred_flag" and "weighted_bipred_flag". This will be explained in detail with reference to Figure 32 later.
Thus, in the case that the slice type of the current macroblock is a slice type including a macroblock to which inter-view prediction is applied, more efficient coding is possible when the flag information indicating whether the weighted prediction will be carried out using information on a picture in a different view is used, compared with the case of not using it.
Figure 32 is a table for explaining a weighted prediction method according to flag information indicating whether the weighted prediction will be carried out using information on a picture in a view different from that of a current picture according to one embodiment of the present invention.
Referring to Figure 32, for instance, the flag information indicating whether the weighted prediction will be carried out using information on a picture in a view different from that of the current picture can be defined as "view_weighted_pred_flag" or "view_weighted_bipred_flag".
If "view_weighted_pred_flag == 0", it indicates that the weighted prediction is not applied to VP-slices; if "view_weighted_pred_flag == 1", explicit weighted prediction is applied to VP-slices. If "view_weighted_bipred_flag == 0", it indicates that default weighted prediction is applied to VB-slices; if "view_weighted_bipred_flag == 1", it indicates that explicit weighted prediction is applied to VB-slices; and if "view_weighted_bipred_flag == 2", it indicates that implicit weighted prediction is applied to VB-slices.
In the case that implicit weighted prediction is applied to a VB-slice, the weighting coefficients can be obtained from the relative distance between the current view and the different view. In that case, the weighted prediction can be carried out using a view identifier identifying the view of a picture or using a picture order count (POC) rendered by taking the distinction of each view into consideration.
The above flag information can be included in a picture parameter set (PPS). In this case, the picture parameter set (PPS) means header information indicating a coding mode of all of the pictures (e.g., entropy coding mode, quantization parameter initial value by picture unit, etc.). Yet, the picture parameter set is not attached to every picture. If a picture parameter set does not exist, the picture parameter set existing right before is used as the header information.
Figure 33 is a table of syntax for carrying out weighted prediction according to the newly defined flag information according to one embodiment of the present invention.
Referring to Figure 33, in multi-view video coding to which the present invention is applied, in the case that a slice type including a macroblock to which inter-view prediction is applied and flag information indicating whether the weighted prediction will be carried out using information on a picture in a view different from that of the current picture are defined, it is necessary to decide which weighted prediction will be carried out according to the slice type.
For instance, as shown in Figure 33, if the slice type extracted from the video signal is a P-slice or an SP-slice, the weighted prediction can be carried out if "weighted_pred_flag == 1". In the case that the slice type is a B-slice, the weighted prediction can be carried out if "weighted_bipred_flag == 1". In the case that the slice type is a VP-slice, the weighted prediction can be carried out if "view_weighted_pred_flag == 1". And, in the case that the slice type is a VB-slice, the weighted prediction can be carried out if "view_weighted_bipred_flag == 1".
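The per-slice-type decision just described can be sketched as a dispatch on the flag names given in the text; the flag values are assumed to have been parsed from the bitstream elsewhere, and the function name is illustrative.

```python
def weighted_pred_enabled(slice_type, flags):
    """True if, per the Figure 33 decision, weighted prediction is
    carried out for this slice type given the parsed flag values."""
    required = {
        'P':  'weighted_pred_flag',
        'SP': 'weighted_pred_flag',
        'B':  'weighted_bipred_flag',
        'VP': 'view_weighted_pred_flag',
        'VB': 'view_weighted_bipred_flag',
    }.get(slice_type)
    return required is not None and flags.get(required) == 1
```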
Figure 34 is a flowchart of a method of performing weighted prediction according to a NAL (Network Abstraction Layer) unit according to one embodiment of the present invention.
Referring to Figure 34, first of all, a NAL unit type (nal_unit_type) is extracted from the video signal (S3410). In this case, the NAL unit type means an identifier indicating the type of the NAL unit. For instance, if "nal_unit_type == 5", the NAL unit is a slice of an IDR picture. An IDR (Instantaneous Decoding Refresh) picture means a head picture of a video sequence.
Subsequently, it is checked whether the extracted NAL unit type is a NAL unit type for multi-view video coding (S3420).
If the NAL unit type is a NAL unit type for multi-view video coding, the weighted prediction is carried out using information on a picture in a view different from that of the current picture (S3430). The NAL unit type can be a NAL unit type applicable to both scalable video coding and multi-view video coding, or a NAL unit type for multi-view video coding only. Thus, if the NAL unit type is one for multi-view video coding, the weighted prediction should be carried out using information on a picture in a view different from that of the current picture; so, it is necessary to define new syntax. This is explained in detail with reference to Figures 35 and 36 as follows.
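Steps S3410 through S3430 amount to a type check before choosing the prediction path. A minimal sketch follows; the concrete multi-view nal_unit_type values are an assumption made for illustration, not values taken from the text.

```python
MVC_NAL_TYPES = {20, 21}   # assumed example values, not from the text

def select_weighted_pred(nal_unit_type):
    """S3420/S3430: choose inter-view weighted prediction for multi-view
    NAL units, conventional weighted prediction otherwise."""
    if nal_unit_type in MVC_NAL_TYPES:
        return 'inter_view_weighted_prediction'
    return 'conventional_weighted_prediction'
```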
Figures 35 and 36 are tables of syntax for carrying out weighted prediction in the case that the NAL unit type is a NAL unit type for multi-view video coding according to one embodiment of the present invention.
First of all, if the NAL unit type is a NAL unit type for multi-view video coding, the syntax for carrying out conventional (e.g., H.264) weighted prediction can be modified into the syntax shown in Figure 35 or Figure 36. For instance, reference numeral 3510 indicates a syntax portion for carrying out conventional weighted prediction, and reference numeral 3520 indicates a syntax portion for carrying out weighted prediction in multi-view video coding. So, the weighted prediction is carried out by the syntax portion 3520 only if the NAL unit type is a NAL unit type for multi-view video coding. In this case, since information on views is added, each syntax element includes a "view" part; for instance, there are "luma_view_log2_weight_denom", "chroma_view_log2_weight_denom", and the like. Likewise, reference numeral 3530 in Figure 36 indicates a syntax portion for carrying out conventional weighted prediction, and reference numeral 3540 in Figure 36 indicates a syntax portion for carrying out weighted prediction in multi-view video coding. So, the weighted prediction is carried out by the syntax portion 3540 only if the NAL unit type is a NAL unit type for multi-view video coding. Likewise, since information on views is added, each syntax element includes a "view" part; for instance, there are "luma_view_weight_l1_flag", "chroma_view_weight_l1_flag", and the like. Therefore, if a NAL unit type for multi-view video coding is defined, more efficient coding is possible in a manner of carrying out the weighted prediction using information on a picture in a view different from that of the current picture.
Figure 37 is a block diagram of an apparatus for decoding a video signal according to one embodiment of the present invention.
Referring to Figure 37, the apparatus for decoding a video signal according to the present invention includes a slice type extraction unit 3710, a prediction mode extraction unit 3720, and a decoding unit 3730.
Figure 38 is a flowchart of a method of decoding a video signal in the decoding apparatus shown in Figure 37 according to one embodiment of the present invention.
Referring to Figure 38, the method of decoding a video signal according to one embodiment of the present invention includes a step S3810 of extracting a slice type and a macroblock prediction mode, and a step S3820 of decoding a current macroblock according to the slice type and/or the macroblock prediction mode.
At first, explain the prediction scheme that embodiments of the invention use, to help to understand the present invention.Prediction scheme can be classified into prediction (for example, the prediction between the picture in identical view) and inter-view prediction (for example, the prediction between the picture in different views) in the view.And prediction can be the prediction scheme identical with common time prediction in the view.
According to the present invention, the slice type extraction unit 3710 extracts the slice type of the slice that includes the current macroblock (S3810).
In this case, a slice type field (slice_type) indicating a slice type for intra-view prediction and/or a slice type field (view_slice_type) indicating a slice type for inter-view prediction is provided as part of the video signal syntax to convey the slice type. This will be described in further detail below with respect to Fig. 6(a) and 6(b). Each of the slice type (slice_type) for intra-view prediction and the slice type (view_slice_type) for inter-view prediction can indicate, for example, an I-slice type (I_SLICE), a P-slice type (P_SLICE), or a B-slice type (B_SLICE).
For example, if the "slice_type" of a specific slice is a B-slice and its "view_slice_type" is a P-slice, then a macroblock in that specific slice is decoded by the B-slice (B_SLICE) coding scheme in the intra-view direction (i.e., the temporal direction) and/or by the P-slice (P_SLICE) coding scheme in the view direction.
Meanwhile, the slice types can include a P-slice type (VP) for inter-view prediction, a B-slice type (VB) for inter-view prediction, and a mixed slice type (Mixed) for prediction obtained by mixing the two prediction types. That is, the mixed slice type provides prediction using a combination of intra-view and inter-view prediction.
In this case, the P-slice type for inter-view prediction refers to the case where each macroblock or macroblock partition included in a slice is predicted from one picture in the current view or one picture in a different view. The B-slice type for inter-view prediction refers to the case where each macroblock or macroblock partition included in a slice is predicted from "one or two pictures in the current view" or from "one picture in a different view or two pictures in respectively different views". And the mixed slice type, for prediction obtained by mixing the two predictions, refers to the case where each macroblock or macroblock partition included in a slice is predicted from "one or two pictures in the current view", from "one picture in a different view or two pictures in respectively different views", or from "one or two pictures in the current view together with one picture in a different view or two pictures in respectively different views".
In other words, the referenced pictures and the allowed macroblock types differ for each slice type, which will be explained in detail with reference to Figure 43 and Figure 44 later.
And the syntax for the foregoing embodiments of the slice type will be explained in detail with reference to Figure 40 and Figure 41 later.
The prediction mode extraction unit 3720 can extract a macroblock prediction mode identifier that indicates whether the current macroblock is a macroblock using intra-view prediction, a macroblock using inter-view prediction, or a macroblock using prediction obtained by mixing the two prediction types (S3820). For this purpose, the present invention defines a macroblock prediction mode (mb_pred_mode). One embodiment of the macroblock prediction mode will be explained in detail with reference to Figure 39, Figure 40, and Figure 41 later.
The decoding unit 3730 decodes the current macroblock according to the received/determined slice type and/or macroblock prediction mode of the current macroblock (S3820). In this case, the current macroblock can be decoded according to the macroblock type of the current macroblock determined from macroblock type information. And the macroblock type can be determined according to the macroblock prediction mode and the slice type.
In case the macroblock prediction mode is a mode for intra-view prediction, the macroblock type is determined according to the slice type for intra-view prediction, and the current macroblock is then decoded by intra-view prediction according to the determined macroblock type.
In case the macroblock prediction mode is a mode for inter-view prediction, the macroblock type is determined according to the slice type for inter-view prediction, and the current macroblock is then decoded by inter-view prediction according to the determined macroblock type.
In case the macroblock prediction mode is a mode for prediction obtained by mixing the two predictions, the macroblock types are determined according to both the slice type for intra-view prediction and the slice type for inter-view prediction, and the current macroblock is then decoded, according to each of the determined macroblock types, by the prediction obtained by mixing the two predictions.
In this case, the macroblock type depends on the macroblock prediction mode and the slice type. In particular, the prediction scheme to be used for the macroblock type can be determined from the macroblock prediction mode, and the macroblock type is then determined from the macroblock type information by the slice type according to that prediction scheme. That is, one or both of the extracted slice_type and view_slice_type are selected based on the macroblock prediction mode.
For example, if the macroblock prediction mode is a mode for inter-view prediction, the macroblock type can be determined from the macroblock table (I, P, B) corresponding to the slice type for inter-view prediction (view_slice_type). The relation between the macroblock prediction mode and the macroblock type will be explained in detail with reference to Figure 39, Figure 40, and Figure 41 later.
Figure 39 is a diagram of the macroblock prediction mode according to example embodiments of the present invention.
Figure 39(a) shows a table corresponding to one embodiment of the macroblock prediction mode (mb_pred_mode) according to the present invention.
In case only intra-view prediction (i.e., temporal prediction) is used for a macroblock, the value "0" is assigned to "mb_pred_mode". In case only inter-view prediction is used for a macroblock, the value "1" is assigned. In case both temporal and inter-view prediction are used for a macroblock, the value "2" is assigned.
In this case, if the value of "mb_pred_mode" is "1", that is, if "mb_pred_mode" indicates inter-view prediction, a view-direction List0 (ViewList0) or a view-direction List1 (ViewList1) is defined as the reference picture list for the inter-view prediction.
Figure 39(b) shows the relation between the macroblock prediction mode and the macroblock type according to another embodiment.
If the value of "mb_pred_mode" is "0", only temporal prediction is used, and the macroblock type is determined according to the slice type (slice_type) for intra-view prediction.
If the value of "mb_pred_mode" is "1", only inter-view prediction is used, and the macroblock type is determined according to the slice type (view_slice_type) for inter-view prediction.
If the value of "mb_pred_mode" is "2", hybrid prediction mixing temporal and inter-view prediction is used, and two macroblock types are determined according to the slice type (slice_type) for intra-view prediction and the slice type (view_slice_type) for inter-view prediction.
Based on the macroblock prediction mode, the macroblock type is provided based on the slice type as shown in Tables 1-3 below. [Tables 7-12 to 7-14 of N6540 are to be inserted here as Tables 1-3]
In other words, in this embodiment, the prediction scheme to be used for the macroblock and the slice type to be referenced are decided by the macroblock prediction mode, and the macroblock type is then decided according to that slice type.
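The dispatch just described — mb_pred_mode selecting which slice-type field(s) govern macroblock-type derivation — can be sketched as follows. This is an illustrative sketch only; the constants and function name are not part of the MVC syntax.

```python
# Hypothetical sketch of the mb_pred_mode dispatch described above.
# Names and values are illustrative, not taken from the standard text.

INTRA_VIEW, INTER_VIEW, MIXED = 0, 1, 2

def slice_types_for_mode(mb_pred_mode, slice_type, view_slice_type):
    """Return the slice type(s) that govern macroblock-type derivation."""
    if mb_pred_mode == INTRA_VIEW:      # temporal prediction only
        return (slice_type,)
    if mb_pred_mode == INTER_VIEW:      # inter-view prediction only
        return (view_slice_type,)
    if mb_pred_mode == MIXED:           # both: two macroblock types result
        return (slice_type, view_slice_type)
    raise ValueError("mb_pred_mode must be 0, 1 or 2")
```

As a usage example, a mode value of 2 yields two governing slice types, matching the "two macroblock types" case of Figure 39(b).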
Figures 40 and 41 are diagrams of example embodiments of the syntax of the portion of the video signal received by the apparatus for decoding a video signal. As shown, this syntax carries slice type and macroblock prediction mode information according to embodiments of the present invention.
Figure 40 shows one example syntax. In this syntax, the fields "slice_type" and "view_slice_type" provide the slice types, and the field "mb_pred_mode" provides the macroblock prediction mode.
According to the present invention, the "slice_type" field provides the slice type for intra-view prediction and the "view_slice_type" field provides the slice type for inter-view prediction. Each slice type can be an I-slice type, a P-slice type, or a B-slice type. If the value of "mb_pred_mode" is "0" or "1", one macroblock type is determined. However, in case the value of "mb_pred_mode" is "2", it can be seen that a further macroblock type (i.e., two types in total) is determined. In other words, the syntax shown in (a) of Figure 40 indicates that "view_slice_type" is added so that the traditional slice types (I, P, B) can further be applied to multi-view video coding.
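As a rough illustration of how a decoder might pull the three fields of Figure 40's syntax out of a bitstream, consider the following sketch. The BitReader helper and the fixed field widths are assumptions of this example, not the actual MVC entropy coding.

```python
# Minimal sketch of reading the example slice-header fields discussed above.
# BitReader and the 4/4/2-bit field widths are assumed for illustration only;
# the real syntax uses the standard's own descriptors.

class BitReader:
    def __init__(self, data):
        self.data, self.pos = data, 0
    def u(self, n):                       # read n bits, MSB first
        v = 0
        for _ in range(n):
            byte = self.data[self.pos >> 3]
            v = (v << 1) | ((byte >> (7 - (self.pos & 7))) & 1)
            self.pos += 1
        return v

def parse_slice_header(r):
    """Read the example fields of Figure 40's syntax (widths are assumed)."""
    return {
        "slice_type": r.u(4),       # slice type for intra-view prediction
        "view_slice_type": r.u(4),  # slice type for inter-view prediction
        "mb_pred_mode": r.u(2),     # 0: intra-view, 1: inter-view, 2: mixed
    }
```

A decoder would then consult mb_pred_mode to decide which of the two slice-type fields to use, as described above.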
Figure 41 shows another example syntax. In this syntax, the "slice_type" field provides the slice type and the "mb_pred_mode" field provides the macroblock prediction mode.
According to the present invention, the "slice_type" field can include, among others, a slice type (VP) for inter-view prediction, a B-slice type (VB) for inter-view prediction, and a mixed slice type (Mixed) for prediction obtained by mixing intra-view and inter-view prediction.
If the value of the "mb_pred_mode" field is "0" or "1", one macroblock type is determined. However, in case the value of the "mb_pred_mode" field is "2", it can be seen that a further macroblock type (i.e., two types in total) is determined. In this embodiment, the slice type information is present in the slice header, which will be explained in detail with respect to Figure 42. In other words, the syntax shown in Figure 41 indicates that VP, VB, and the mixed slice type are added to the traditional slice types (slice_type).
Figure 42 is a diagram of examples of using the slice types shown in Figure 41.
The diagram in Figure 42(a) shows that, in a slice header, a P-slice type (VP) for inter-view prediction, a B-slice type (VB) for inter-view prediction, and a mixed slice type (Mixed) for prediction obtained by mixing the two predictions can exist as slice types in addition to the other slice types. In particular, according to one example embodiment, the slice types VP, VB, and Mixed are added to the slice types that can exist in an ordinary slice header.
The diagram in Figure 42(b) shows that, in a slice header for multi-view video coding (MVC), a P-slice type (VP) for inter-view prediction, a B-slice type (VB) for inter-view prediction, and a mixed slice type (Mixed) for prediction obtained by mixing the two predictions can exist as slice types. In particular, according to an example embodiment, these slice types are defined in a slice header dedicated to multi-view video coding.
The diagram in Figure 42(c) shows that, in a slice header for scalable video coding (SVC), a P-slice type (VP) for inter-view prediction, a B-slice type (VB) for inter-view prediction, and a mixed slice type (Mixed) for prediction obtained by mixing the two predictions can exist as slice types in addition to the slice types already existing for scalable video coding. In particular, according to an example embodiment, the slice types VP, VB, and Mixed are added to the slice types that can exist in the slice header of the scalable video coding (SVC) standard.
Figure 43 is a diagram of examples of the various slice types included among the slice types shown in Figure 41.
Figure 43(a) shows the case where a slice is predicted from one picture in a different view. The slice type then becomes the slice type (VP) for inter-view prediction.
Figure 43(b) shows the case where a slice is predicted from two pictures in respectively different views. The slice type then becomes the B-slice type (VB) for inter-view prediction.
Figures 43(c) and 43(f) show the case where a slice is predicted from one or two pictures in the current view and one picture in a different view. The slice type then becomes the mixed slice type (Mixed) for prediction obtained by mixing the two predictions. And Figures 43(d) and 43(e) show the case where a slice is predicted from one or two pictures in the current view and two pictures in respectively different views. The slice type then also becomes the mixed slice type (Mixed).
Figure 44 is a diagram of the macroblocks allowed for the slice types shown in Figure 41.
Referring to Figure 44, the P-slice type (VP) for inter-view prediction allows an intra macroblock (I), a macroblock (P) predicted from one picture in the current view, or a macroblock (VP) predicted from one picture in a different view.
The B-slice type (VB) for inter-view prediction allows an intra macroblock (I), a macroblock (P or B) predicted from one or two pictures in the current view, or a macroblock (VP or VB) predicted from one picture in a different view or two pictures in respectively different views.
And the mixed slice type (Mixed) allows an intra macroblock (I); a macroblock (P or B) predicted from one or two pictures in the current view; a macroblock (VP or VB) predicted from one picture in a different view or two pictures in respectively different views; or a macroblock (Mixed) predicted from one or two pictures in the current view together with one picture in a different view or two pictures in respectively different views.
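The allowed-macroblock rules of Figure 44 can be captured as a simple lookup table, as in the sketch below. The string labels mirror the text above and are illustrative only.

```python
# Sketch of the allowed-macroblock table of Figure 44 as a lookup.
# Labels are the shorthand used in the text above, not syntax elements.

ALLOWED_MB_TYPES = {
    "VP":    {"I", "P", "VP"},                        # inter-view P-slice
    "VB":    {"I", "P", "B", "VP", "VB"},             # inter-view B-slice
    "Mixed": {"I", "P", "B", "VP", "VB", "Mixed"},    # mixed slice
}

def is_mb_allowed(slice_type, mb_type):
    """Check whether a macroblock type may appear in the given slice type."""
    return mb_type in ALLOWED_MB_TYPES.get(slice_type, set())
```

For instance, a VB macroblock is permitted in VB and Mixed slices but not in a VP slice, matching the constraints described above.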
Figures 45-47 are diagrams of macroblock types for a macroblock existing in the mixed slice type (Mixed) according to one embodiment of the present invention.
Figures 45(a) and 45(b) respectively show allocation schemes for the macroblock type (mb_type) and the sub-macroblock type (sub_mb_type) of a macroblock existing in a mixed slice.
Figures 46 and 47 respectively show the prediction directions of a macroblock existing in a mixed slice and binary representations of the actual prediction directions of the mixed slice.
According to an embodiment of the present invention, the macroblock type (mb_type) is prepared by considering both the size of a macroblock partition (Partition_Size) and the prediction direction of the macroblock partition (Direction).
And the sub-macroblock type (sub_mb_type) is prepared by considering both the size of a sub-macroblock partition (Sub_Partition_Size) and the prediction direction of each sub-macroblock partition (Sub_Direction).
Referring to Figure 45(a), "Direction0" and "Direction1" indicate the prediction direction of the first macroblock partition and the prediction direction of the second macroblock partition, respectively. In particular, in the case of an 8x16 macroblock, "Direction0" indicates the prediction direction for the left 8x16 macroblock partition and "Direction1" indicates the prediction direction for the right 8x16 macroblock partition. The configuration principle of the macroblock type (mb_type) is explained in detail below. First, the first two bits indicate the partition size (Partition_Size) of the corresponding macroblock, and the values 0 to 3 are available for these first two bits. And, in case the macroblock is divided into partitions, the four bits following the first two bits indicate the prediction direction (Direction).
For example, in the case of a 16x16 macroblock, four bits indicating the prediction direction of the macroblock are appended after the first two bits. In the case of a 16x8 macroblock, the four bits following the first two bits indicate the prediction direction (Direction0) of the first partition, and four further bits are appended after those four to indicate the prediction direction (Direction1) of the second partition. Likewise, in the case of an 8x16 macroblock, eight bits are appended after the first two bits. In this case, of these eight bits, the first four indicate the prediction direction of the first partition and the last four indicate the prediction direction of the second partition.
Referring to Figure 45(b), the prediction direction (Sub_Direction) of a sub-macroblock is used in the same manner as the prediction direction (Direction) of the macroblock partition shown in Figure 45(a). The configuration principle of the sub-macroblock type (sub_mb_type) is explained in detail below.
First, the first two bits indicate the partition size (Partition_Size) of the corresponding macroblock, and the two bits following them indicate the partition size (Sub_Partition_Size) of a sub-macroblock of the corresponding macroblock. The values 0-3 are available for each of these two-bit fields. Subsequently, in case the macroblock is divided into sub-macroblock partitions, four bits indicating the prediction direction (Sub_Direction) are appended after the second pair of bits. For example, if the partition size (Partition_Size) of the macroblock is 8x8 and the partition size (Sub_Partition_Size) of the sub-macroblock is 4x8, then the first two bits have the value 3, the next two bits have the value 2, the first four bits following them indicate the prediction direction for the left 4x8 block of the two 4x8 blocks, and the next four bits after those indicate the prediction direction for the right 4x8 block.
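The bit layout described above — a 2-bit partition size followed by one 4-bit direction field per partition — can be sketched as a simple packing routine. The size codes and bit ordering here are assumptions chosen for illustration, not the exact code assignment of the embodiment.

```python
# Illustrative packing of the mb_type layout described above: 2 bits of
# Partition_Size followed by one 4-bit Direction field per partition.
# The size-code mapping and bit order are assumptions of this sketch.

SIZE_CODE = {"16x16": 0, "16x8": 1, "8x16": 2, "8x8": 3}

def pack_mb_type(partition_size, directions):
    """Pack Partition_Size plus one 4-bit Direction per partition."""
    value = SIZE_CODE[partition_size]
    for d in directions:              # one entry for 16x16, two for 16x8/8x16
        assert 0 <= d < 16            # each Direction is a 4-bit field
        value = (value << 4) | d
    return value
```

A 16x8 macroblock thus carries two 4-bit direction fields after its size bits, while a 16x16 macroblock carries only one.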
Referring to Figure 46, the prediction direction of a macroblock is constructed with four bits. And it can be seen that each binary digit becomes "1" according to whether a picture at the left (L), top (T), right (R), or bottom (B) position relative to the current picture is referenced.
Referring to Figure 47, for example, in case the prediction direction is top (T), a picture located at the top of the current picture in the view direction is referenced. In case the prediction direction corresponds to all directions (LTRB), it can be seen that pictures in all directions (L, T, R, B) relative to the current picture are referenced.
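The 4-bit LTRB direction mask of Figures 46 and 47 can be decoded as in the sketch below; the bit order (L as the most significant bit) is an assumption of this example.

```python
# Sketch of the 4-bit LTRB direction mask described above; the bit order
# (L as the most significant bit) is an assumption, not from the figures.

L, T, R, B = 0b1000, 0b0100, 0b0010, 0b0001

def referenced_directions(direction):
    """Decode a 4-bit Direction field into the referenced view positions."""
    names = []
    for bit, name in ((L, "L"), (T, "T"), (R, "R"), (B, "B")):
        if direction & bit:
            names.append(name)
    return names
```

For example, a mask with only the T bit set corresponds to referencing the picture above the current picture in the view arrangement, while a full LTRB mask references all four neighbors.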
Figure 48 is a block diagram of an apparatus for encoding a video signal according to an embodiment of the present invention.
Referring to Figure 48, an apparatus for encoding a video signal according to an embodiment of the present invention is described. The apparatus includes a macroblock type determining unit 4810, a macroblock generating unit 4820, and an encoding unit 4830.
Figure 49 is a flowchart of a method of encoding a video signal in the encoding apparatus shown in Figure 48 according to an embodiment of the present invention.
Referring to Figure 49, the method of encoding a video signal according to an embodiment of the present invention includes a step S4910 of determining a first macroblock type for intra-view prediction and a second macroblock type for inter-view prediction, a step S4920 of generating a first macroblock having the first macroblock type and a second macroblock having the second macroblock type, a step S4930 of generating a third macroblock using the first and second macroblocks, and a step S4940 of encoding the macroblock type and the macroblock prediction mode of the current macroblock.
According to the present invention, the macroblock type determining unit 4810 determines the first macroblock type for intra-view prediction and the second macroblock type for inter-view prediction as described in detail above (S4910).
Subsequently, the macroblock generating unit 4820 generates the first macroblock having the first macroblock type and the second macroblock having the second macroblock type using known prediction techniques (S4920), and then generates the third macroblock using the first and second macroblocks (S4930). In this case, the third macroblock is generated according to the average value of the first and second macroblocks.
Finally, the encoding unit 4830 encodes the macroblock type (mb_type) and the macroblock prediction mode (mb_pred_mode) of the current macroblock by comparing the coding efficiencies of the first to third macroblocks (S4940).
In this case, various methods exist for measuring coding efficiency. In particular, this embodiment of the present invention uses a method based on the RD (rate-distortion) cost. As is known, in the RD cost method the corresponding cost is calculated from two components: the number of coding bits generated by encoding the corresponding block, and a distortion value indicating the error from the actual sequence.
The first and second macroblock types can be determined by selecting the macroblock type having the minimum RD cost as explained above. For example, the macroblock type having the minimum RD cost among the intra-view prediction macroblock types is determined as the first macroblock type, and the macroblock type having the minimum RD cost among the inter-view prediction macroblock types is determined as the second macroblock type.
In the step of encoding the macroblock type and the macroblock prediction mode, the macroblock type and prediction mode associated with whichever of the first and second macroblocks has the smaller RD cost can be selected. Subsequently, the RD cost of the third macroblock is determined. Finally, the macroblock type and macroblock prediction mode of the current macroblock are encoded by comparing the RD cost of the selected first or second macroblock with the RD cost of the third macroblock.
If the RD cost of the selected first or second macroblock is equal to or less than the RD cost of the third macroblock, the macroblock type becomes the macroblock type corresponding to the selected first or second macroblock.
For example, if the RD cost of the first macroblock is less than the RD costs of the second and third macroblocks, the current macroblock is set to the first macroblock type. And the macroblock prediction mode (i.e., intra-view prediction) becomes the prediction scheme of the macroblock corresponding to that RD cost.
Likewise, if the RD cost of the second macroblock is less than the RD costs of the first and third macroblocks, the inter-view prediction scheme, being the prediction scheme of the second macroblock, becomes the macroblock prediction mode of the current macroblock.
Meanwhile, if the RD cost of the third macroblock is less than the RD costs of the first and second macroblocks, the macroblock type corresponds to both the first and second macroblock types. In particular, the intra-view prediction and inter-view prediction macroblock types both become macroblock types of the current macroblock. And the macroblock prediction mode becomes the hybrid prediction scheme obtained by mixing intra-view and inter-view prediction.
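The three-way RD-cost decision just described can be sketched as follows. The cost values are assumed to be precomputed (bits plus a Lagrangian-weighted distortion), and the mode labels are illustrative shorthand, not syntax elements.

```python
# Sketch of the RD-cost mode decision described above. rd_* values are
# assumed precomputed costs of the first (intra-view), second (inter-view)
# and third (averaged/mixed) candidate macroblocks.

def choose_mode(rd_intra, rd_inter, rd_mixed):
    """Return (mb_pred_mode, label) for the candidate with the smallest cost."""
    best = min(rd_intra, rd_inter, rd_mixed)
    if best == rd_intra:
        return 0, "intra-view"      # first macroblock wins
    if best == rd_inter:
        return 1, "inter-view"      # second macroblock wins
    return 2, "mixed"               # third (averaged) macroblock wins
```

On ties the sketch prefers the intra-view candidate, which matches the rule above that the selected first or second macroblock is kept when its cost is equal to the third macroblock's.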
Accordingly, the present invention provides at least the following effects or advantages.
By means of various inter-view prediction schemes and information such as the slice type, macroblock type, and macroblock prediction mode, the present invention can eliminate redundant information between views, thereby improving encoding/decoding performance.
Industrial Applicability
While the present invention has been described and illustrated herein with reference to its preferred embodiments, it will be apparent to those skilled in the art that various modifications and variations can be made therein without departing from the spirit and scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention that come within the scope of the appended claims and their equivalents.

Claims (8)

1. A method of decoding a video signal, comprising the steps of:
checking an encoding scheme of the video signal;
obtaining configuration information for the video signal according to the encoding scheme;
recognizing a total number of views using the configuration information;
recognizing inter-view reference information based on the total number of the views; and
decoding the video signal based on the inter-view reference information,
wherein the configuration information includes at least view information for identifying a view of the video signal.
2. The method of claim 1, further comprising the step of checking a level for view scalability of the video signal, wherein the video signal is decoded based on the level.
3. The method of claim 1, wherein the configuration information further includes a temporal level, an inter-view picture group identifier, and an IDR identifier.
4. The method of claim 2, wherein in the decoding step, if the video signal corresponds to a base view according to the level, the video signal of the base view is decoded.
5. The method of claim 1, further comprising the step of obtaining information for a picture in a view different from a view of a current picture using the view information, wherein the information for the picture in the different view is used in decoding the video signal.
6. The method of claim 5, wherein the video signal decoding step further comprises the step of performing motion compensation using the information for the picture in the different view, wherein the information for the picture in the different view includes motion vector information and reference picture information.
7. The method of claim 5, wherein the video signal decoding step further comprises the steps of:
extracting, from the video signal, an inter-view synthesis prediction identifier indicating whether a picture in a virtual view is obtained using pictures of views adjacent to the view of the current picture; and
obtaining the picture in the virtual view according to the inter-view synthesis prediction identifier,
wherein the information for the picture in the different view is used in obtaining the picture in the virtual view.
8. The method of claim 5, wherein the video signal decoding step further comprises the step of obtaining depth information between the different views using the information for the picture in the different view.
CN 200780019504 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal Pending CN101455084A (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US78717106P 2006-03-30 2006-03-30
US60/787,171 2006-03-30
US60/801,398 2006-05-19
US60/810,642 2006-06-05
US60/830,601 2006-07-14
US60/832,153 2006-07-21
US60/837,925 2006-08-16
US60/840,032 2006-08-25
US60/842,152 2006-09-05

Publications (1)

Publication Number Publication Date
CN101455084A true CN101455084A (en) 2009-06-10

Family

ID=40735978

Family Applications (5)

Application Number Title Priority Date Filing Date
CN 200780019504 Pending CN101455084A (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN200780019790.6A Expired - Fee Related CN101491095B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN200780018834.3A Expired - Fee Related CN101455082B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN 200780018827 Pending CN101449585A (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN200780019560.XA Expired - Fee Related CN101461242B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal

Family Applications After (4)

Application Number Title Priority Date Filing Date
CN200780019790.6A Expired - Fee Related CN101491095B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN200780018834.3A Expired - Fee Related CN101455082B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN 200780018827 Pending CN101449585A (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal
CN200780019560.XA Expired - Fee Related CN101461242B (en) 2006-03-30 2007-03-30 A method and apparatus for decoding/encoding a video signal

Country Status (1)

Country Link
CN (5) CN101455084A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103718561A (en) * 2011-07-28 2014-04-09 高通股份有限公司 Multiview video coding
CN103843340A (en) * 2011-09-29 2014-06-04 瑞典爱立信有限公司 Reference picture list handling
CN103875249A (en) * 2011-08-09 2014-06-18 三星电子株式会社 Method for multiview video prediction encoding and device for same, and method for multiview video prediction decoding and device for same
CN104396251A (en) * 2012-04-23 2015-03-04 三星电子株式会社 Method for encoding multiview video using reference list for multiview video prediction and device therefor, and method for decoding multiview video using refernece list for multiview video prediction and device therefor
CN104396252A (en) * 2012-04-25 2015-03-04 三星电子株式会社 Multiview video encoding method using reference picture set for multiview video prediction and device thereof, and multiview video decoding method using reference picture set for multiview video prediction and device thereof
CN104620585A (en) * 2012-09-09 2015-05-13 Lg电子株式会社 Image decoding method and apparatus using same
CN104854866A (en) * 2012-11-13 2015-08-19 英特尔公司 Content adaptive, feature compensated prediction for next generation video
CN105245876A (en) * 2010-01-14 2016-01-13 三星电子株式会社 Apparatus for encoding video
CN103765900B (en) * 2011-06-30 2016-12-21 瑞典爱立信有限公司 Absolute or explicit reference picture signalisation
US9635355B2 (en) 2011-07-28 2017-04-25 Qualcomm Incorporated Multiview video coding
CN108769712A (en) * 2012-04-16 2018-11-06 韩国电子通信研究院 Video coding and coding/decoding method, the storage and method for generating bit stream

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011050998A1 (en) 2009-10-29 2011-05-05 Thomas Sikora Method and device for processing a video sequence
US9357229B2 (en) 2010-07-28 2016-05-31 Qualcomm Incorporated Coding motion vectors in video coding
US9066102B2 (en) * 2010-11-17 2015-06-23 Qualcomm Incorporated Reference picture list construction for generalized P/B frames in video coding
SI3554079T1 (en) 2011-01-07 2023-01-31 Lg Electronics Inc. Method for encoding video information, method of decoding video information and decoding apparatus for decoding video information
SG10201902300VA (en) * 2011-06-24 2019-04-29 Mitsubishi Electric Corp Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
US10237565B2 (en) 2011-08-01 2019-03-19 Qualcomm Incorporated Coding parameter sets for various dimensions in video coding
CA2845548C (en) 2011-08-25 2018-04-24 Panasonic Corporation Methods and apparatuses for encoding and decoding video using periodic buffer description
KR101909308B1 (en) 2011-09-07 2018-10-17 선 페이턴트 트러스트 Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
CN107592528B (en) 2011-09-09 2020-05-12 株式会社Kt Method for decoding video signal
CA2827278C (en) 2011-09-19 2018-11-27 Panasonic Corporation Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US9338474B2 (en) 2011-09-23 2016-05-10 Qualcomm Incorporated Reference picture list construction for video coding
PL3742735T3 (en) 2011-10-19 2022-11-14 Sun Patent Trust Image decoding method and image decoding apparatus
US9131217B2 (en) * 2011-12-09 2015-09-08 Qualcomm Incorporated Reference picture list modification for view synthesis reference pictures
CN104054351B (en) * 2012-01-17 2017-07-14 瑞典爱立信有限公司 Reference picture list processing
CA3206389A1 (en) * 2012-01-17 2013-07-25 Gensquare Llc Method of applying edge offset
EP4224845A1 (en) 2012-01-19 2023-08-09 VID SCALE, Inc. Method and apparatus for signaling and construction of video coding reference picture lists
US9503702B2 (en) * 2012-04-13 2016-11-22 Qualcomm Incorporated View synthesis mode for three-dimensional video coding
MY198312A (en) 2012-04-16 2023-08-23 Samsung Electronics Co Ltd Method And Apparatus For Determining Reference Picture Set Of Image
US9319679B2 (en) * 2012-06-07 2016-04-19 Qualcomm Incorporated Signaling data for long term reference pictures for video coding
WO2014091933A1 (en) 2012-12-11 2014-06-19 Sony Corporation Encoding device and encoding method, and decoding device and decoding method
US9924197B2 (en) * 2012-12-27 2018-03-20 Nippon Telegraph And Telephone Corporation Image encoding method, image decoding method, image encoding apparatus, image decoding apparatus, image encoding program, and image decoding program
US9930363B2 (en) * 2013-04-12 2018-03-27 Nokia Technologies Oy Harmonized inter-view and view synthesis prediction for 3D video coding
CN105335298B (en) * 2014-08-13 2018-10-09 TCL Corporation Storage method and device for luminance compensation values of a display panel
GB2531271A (en) 2014-10-14 2016-04-20 Nokia Technologies Oy An apparatus, a method and a computer program for image sequence coding and decoding
CN106507106B (en) * 2016-11-08 2018-03-06 University of Science and Technology of China Video inter-prediction encoding method based on reference plate

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
BR0206500A (en) * 2001-01-16 2004-01-13 Matsushita Electric Ind Co Ltd Data recording medium and data recording apparatus and method thereof
KR100491530B1 (en) * 2002-05-03 2005-05-27 LG Electronics Inc. Method of determining motion vector

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105245876A (en) * 2010-01-14 2016-01-13 Samsung Electronics Co Ltd Apparatus for encoding video
CN105245876B (en) * 2010-01-14 2018-09-18 Samsung Electronics Co Ltd Video decoding apparatus
CN103765900B (en) * 2011-06-30 2016-12-21 Telefonaktiebolaget LM Ericsson (Publ) Absolute or explicit reference picture signalisation
CN103718561A (en) * 2011-07-28 2014-04-09 Qualcomm Incorporated Multiview video coding
US9635355B2 (en) 2011-07-28 2017-04-25 Qualcomm Incorporated Multiview video coding
CN103718561B (en) * 2011-07-28 2017-03-08 Qualcomm Incorporated Multiview video coding
US9973778B2 (en) 2011-08-09 2018-05-15 Samsung Electronics Co., Ltd. Method for multiview video prediction encoding and device for same, and method for multiview video prediction decoding and device for same
CN103875249A (en) * 2011-08-09 2014-06-18 Samsung Electronics Co Ltd Method for multiview video prediction encoding and device for same, and method for multiview video prediction decoding and device for same
CN103843340B (en) * 2011-09-29 2018-01-19 Telefonaktiebolaget LM Ericsson (Publ) Reference picture list handling
US11825081B2 (en) 2011-09-29 2023-11-21 Telefonaktiebolaget Lm Ericsson (Publ) Reference picture list handling
US11196990B2 (en) 2011-09-29 2021-12-07 Telefonaktiebolaget Lm Ericsson (Publ) Reference picture list handling
CN103843340A (en) * 2011-09-29 2014-06-04 Telefonaktiebolaget LM Ericsson (Publ) Reference picture list handling
CN108769686A (en) * 2012-04-16 2018-11-06 Electronics and Telecommunications Research Institute Video encoding and decoding method, and method of storing and generating a bit stream
US11490100B2 (en) 2012-04-16 2022-11-01 Electronics And Telecommunications Research Institute Decoding method and device for bit stream supporting plurality of layers
US12028538B2 (en) 2012-04-16 2024-07-02 Electronics And Telecommunications Research Institute Image information decoding method, image decoding method, and device using same
US11949890B2 (en) 2012-04-16 2024-04-02 Electronics And Telecommunications Research Institute Decoding method and device for bit stream supporting plurality of layers
CN108769712A (en) * 2012-04-16 2018-11-06 Electronics and Telecommunications Research Institute Video encoding and decoding method, and method of storing and generating a bit stream
US20230035462A1 (en) 2012-04-16 2023-02-02 Electronics And Telecommunications Research Institute Image information decoding method, image decoding method, and device using same
US11483578B2 (en) 2012-04-16 2022-10-25 Electronics And Telecommunications Research Institute Image information decoding method, image decoding method, and device using same
US10958918B2 (en) 2012-04-16 2021-03-23 Electronics And Telecommunications Research Institute Decoding method and device for bit stream supporting plurality of layers
US10958919B2 (en) 2012-04-16 2021-03-23 Electronics And Telecommunications Research Institute Image information decoding method, image decoding method, and device using same
CN108769686B (en) * 2012-04-16 2021-07-27 Electronics and Telecommunications Research Institute Video encoding and decoding method, and method of storing and generating a bit stream
CN108769712B (en) * 2012-04-16 2021-11-19 Electronics and Telecommunications Research Institute Video encoding and decoding method, and method of storing and generating a bit stream
CN104396251A (en) * 2012-04-23 2015-03-04 Samsung Electronics Co Ltd Method for encoding multiview video using reference list for multiview video prediction and device therefor, and method for decoding multiview video using reference list for multiview video prediction and device therefor
CN104396252A (en) * 2012-04-25 2015-03-04 Samsung Electronics Co Ltd Multiview video encoding method using reference picture set for multiview video prediction and device thereof, and multiview video decoding method using reference picture set for multiview video prediction and device thereof
CN104396252B (en) * 2012-04-25 2018-05-04 Samsung Electronics Co Ltd Multiview video decoding method using reference picture set for multiview video prediction, and device thereof
CN104620585A (en) * 2012-09-09 2015-05-13 LG Electronics Inc. Image decoding method and apparatus using same
CN104854866B (en) * 2012-11-13 2019-05-31 Intel Corporation Content adaptive, feature compensated prediction for next generation video
CN104854866A (en) * 2012-11-13 2015-08-19 Intel Corporation Content adaptive, feature compensated prediction for next generation video
US9762929B2 (en) 2012-11-13 2017-09-12 Intel Corporation Content adaptive, characteristics compensated prediction for next generation video

Also Published As

Publication number Publication date
CN101455082A (en) 2009-06-10
CN101449585A (en) 2009-06-03
CN101491095B (en) 2013-07-10
CN101491095A (en) 2009-07-22
CN101461242A (en) 2009-06-17
CN101455082B (en) 2013-02-13
CN101461242B (en) 2011-08-03

Similar Documents

Publication Publication Date Title
CN101461242B (en) A method and apparatus for decoding/encoding a video signal
KR100934673B1 (en) A method and apparatus for decoding/encoding a video signal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20090610