CN1882093B - Transcoding system using encoding history information - Google Patents
- Publication number
- CN1882093B (application CN200610099782A)
- Authority
- CN
- China
- Prior art keywords
- data
- quantization scale
- image
- encoding process
- code
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The present invention provides a transcoder for changing the GOP structure and bit rate of an encoded bitstream obtained as the result of an encoding process based on the MPEG standards. According to the transcoder provided by the present invention, encoding parameters generated in past encoding processes can be transmitted as history information to the MPEG encoder that performs the present encoding process. Optimum encoding parameters commensurate with the present encoding process are selected from the transmitted encoding parameters, and the selected encoding parameters are reused in the present encoding process. As a result, picture quality does not deteriorate even if decoding and encoding processes are carried out repeatedly.
Description
This patent is a divisional application of the following patent application:
Application number: 200410088170.6
Filing date: March 10, 1999
Title of invention: Transcoding system using encoding history information
Technical field
The present invention relates to a transcoding system, video encoder, stream processing system, and video decoding apparatus for changing the GOP (Group of Pictures) structure and bit rate of a coded bitstream obtained as the result of an encoding process based on the MPEG (Moving Picture Experts Group) standards.
Background Art
In recent years, broadcasting stations that produce and transmit television programs have commonly used MPEG technology to compress and encode video data. In particular, MPEG technology is becoming the de facto standard for recording video data on tape or random-access recording media and for transmitting video data by cable or satellite.
The process by which a video program produced at a broadcasting station is transmitted to homes is briefly described below. First, the encoder employed in a camcorder (a device integrating a camera and a VTR) encodes the source video data and records the encoded data on the VTR tape. At that time, the encoder encodes the source video data into a coded bitstream suited to the recording format of the VTR tape. Typically, the GOP structure of the MPEG bitstream recorded on the tape is one in which a GOP consists of two frames; an example of such a GOP structure is a sequence of pictures of the types I, B, I, B, I, B, and so on. The bit rate of the MPEG bitstream recorded on the tape is 18 Mbps. Subsequently, the central broadcasting station carries out editing processing on the video bitstream recorded on the tape. For this purpose, the GOP structure of the video bitstream recorded on the tape is converted into a GOP structure suited to editing. A GOP structure suited to editing is one in which a GOP consists of a single frame; more specifically, all pictures of such a GOP structure are I-pictures. This is because I-pictures, which have no correlation with other pictures, are optimal for editing in frame units. In actual operation, the video bitstream recorded on the tape is first decoded into baseband video data. The baseband video data is then re-encoded so that all pictures become I-pictures. By carrying out such decoding and re-encoding processes, a bitstream with a GOP structure suited to editing can be produced.
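The GOP conversions described above can be pictured as reassigning a picture type to each frame before re-encoding. The following sketch is purely illustrative; the function name and pattern strings are ours, not part of the patent:

```python
def assign_picture_types(num_frames, gop_size, pattern):
    """Assign a picture type to each frame, restarting the
    type pattern at every GOP boundary (illustrative only)."""
    return [pattern[i % gop_size] for i in range(num_frames)]

# Editing: one-frame GOPs, every picture an I-picture.
editing = assign_picture_types(6, 1, "I")

# Inter-station transmission: 15-frame GOPs of I B B P B B P ... pictures.
transmission = assign_picture_types(30, 15, "IBBPBBPBBPBBPBB")
```

Converting between the two structures then amounts to decoding the stream and re-encoding each frame under the new type assignment, which is exactly the repeated decode/encode cycle the following paragraphs describe.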
Next, in order to transmit the edited video program obtained as the result of the editing processing to local broadcasting stations, the GOP structure and bit rate of the edited program's bitstream must be converted into a GOP structure and bit rate suited to transmission from the central broadcasting station. A GOP structure suited to transmission between broadcasting stations is one in which a GOP consists of 15 frames; an example of such a GOP structure is a sequence of pictures of the types I, B, B, P, B, B, P, and so on. As for the bit rate suited to transmission between broadcasting stations, a high bit rate of at least 50 Mbps is desirable, because dedicated lines with a high transmission capacity, such as optical fiber, are usually installed between broadcasting stations. In practice, the bitstream of the edited video program is first decoded back into baseband video data. The baseband video data is then re-encoded to obtain the GOP structure and bit rate suited to transmission between broadcasting stations described above.
At a local broadcasting station, the video program received from the central broadcasting station usually undergoes editing processing so that commercials specific to the local station's district are inserted into the program. Much as in the editing processing carried out at the central broadcasting station, the bitstream of the video program received from the central broadcasting station is first decoded into baseband video data. The baseband video data is then re-encoded so that all pictures become I-pictures. As a result, a bitstream with a GOP structure suited to editing can be produced.
Subsequently, in order to transmit the video program edited at the local broadcasting station to homes by cable or satellite, the GOP structure and bit rate of the bitstream are converted into a GOP structure and bit rate suited to transmission to homes. The GOP structure suited to transmission to homes is one in which a GOP consists of 15 frames; an example of such a GOP structure is a sequence of pictures of the types I, B, B, P, B, B, P, and so on. The bit rate suited to transmission to homes is typically as low as about 5 Mbps. The bitstream of the edited video program is again decoded back into baseband video data, which is then re-encoded into the GOP structure and bit rate suited to transmission to homes.
As seen from the above, a video program transmitted from the central broadcasting station to homes repeatedly undergoes decoding and encoding processes along the way. In fact, broadcasting stations carry out a variety of signal processing besides that described above, and decoding and encoding processes are often carried out for each kind of signal processing. Thus, decoding and encoding processes inevitably have to be repeated.
However, encoding and decoding processes based on the MPEG standards are generally known not to be completely inverse to each other. More specifically, the baseband video data subjected to an encoding process is not identical to the video data obtained as the result of the decoding process carried out in the previous generation of transcoding. Therefore, decoding and encoding processes cause picture quality to deteriorate, so that deterioration of picture quality occurs every time encoding and decoding processes are carried out. In other words, every time decoding and encoding processes are repeated, the deterioration of picture quality accumulates.
Summary of the invention
It is therefore an object of the present invention to address the problems described above by providing a transcoding system, video encoder, stream processing system, and video decoding apparatus that do not cause deterioration of picture quality even when decoding and encoding processes are repeatedly carried out on a bitstream produced by an encoding process based on the MPEG standards in order to change the GOP structure and bit rate of the bitstream.
In order to achieve the above object, the transcoder provided by the present invention can utilize, in the present encoding process, the coding parameters generated and used in previous encoding processes. As a result, picture quality does not deteriorate even if decoding and encoding processes are repeated; in other words, it is possible to alleviate the cumulative deterioration of picture quality caused by repeated encoding processes.
According to the transcoder provided by the present invention, the coding parameters generated and used in previous encoding processes are described in the user data area of the coded bitstream obtained as the result of the present encoding process, and this coded bitstream conforms to the MPEG standards. It is therefore possible to decode the coded bitstream with any existing decoder. In addition, no dedicated line needs to be provided for transmitting the coding parameters generated and used in previous encoding processes. As a result, the existing stream transmission environment can be used to transmit the coding parameters generated and used in previous encoding processes.
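As a simplified illustration of carrying history inside the stream itself, the sketch below packs past coding parameters into a user-data payload and recovers them again. The `HIST:` marker and the key=value encoding are our own stand-ins; the patent defines a dedicated history_stream() syntax for this purpose (see Figs. 40 to 47):

```python
def embed_history(user_data: bytes, params: dict) -> bytes:
    """Append past coding parameters to a user-data payload
    (simplified stand-in for the patent's history_stream() syntax)."""
    body = ";".join(f"{k}={v}" for k, v in sorted(params.items()))
    return user_data + b"HIST:" + body.encode("ascii")

def extract_history(user_data: bytes) -> dict:
    """Recover the embedded parameters from a user-data payload."""
    _, _, body = user_data.partition(b"HIST:")
    return dict(kv.split("=") for kv in body.decode("ascii").split(";"))
```

Because the payload lives in the user data area, a decoder that does not understand the marker can simply skip it, which is why the stream stays decodable by existing decoders.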
According to the transcoder provided by the present invention, only selected coding parameters generated and used in previous encoding processes are described in the user data area of the coded bitstream obtained as the result of the present encoding process. As a result, the coding parameters generated and used in past encoding processes can be transmitted without significantly increasing the bit rate of the output bitstream.
According to the transcoder provided by the present invention, only the coding parameters optimum for the present encoding process are selected from the coding parameters generated and used in previous encoding processes, to be used in the present encoding process. As a result, picture quality never deteriorates cumulatively even if decoding and encoding processes are repeated.
According to the transcoder provided by the present invention, the coding parameters optimum for the present encoding process are selected, in accordance with the picture types included in the previous coding parameters, from the coding parameters generated and used in previous encoding processes, to be used in the present encoding process. As a result, picture quality never deteriorates cumulatively even if decoding and encoding processes are repeated.
According to the transcoder provided by the present invention, the decision as to whether to reuse the coding parameters generated and used in previous encoding processes is made in accordance with the picture types included in those coding parameters. Optimum encoding processing can thus be carried out.
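A minimal sketch of such a reuse decision, assuming the history is a list of parameter dictionaries saved by earlier encoding generations (the dictionary keys and the first-match policy are hypothetical illustrations, not the patent's actual selection rule):

```python
def select_reusable_params(history, current_picture_type):
    """From parameter sets saved by past encoding generations, pick the
    first whose picture type matches the present encoding; return None
    when no generation matches, i.e. encode from scratch."""
    for params in history:              # e.g. oldest generation first
        if params.get("picture_type") == current_picture_type:
            return params
    return None
```

Reusing a matching generation's parameters (motion vectors, quantization scale, and so on) is what prevents the quantization error of each generation from being compounded by fresh, slightly different encoding decisions.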
Description of drawings
For a more complete understanding of the present invention, reference is made to the following description and the accompanying drawings, in which:
Fig. 1 is an explanatory diagram used to describe the principle of highly efficient coding;
Fig. 2 is an explanatory diagram showing picture types used in compressing picture data;
Fig. 3 is an explanatory diagram showing picture types used in compressing picture data;
Fig. 4 is an explanatory diagram used to describe the principle of the processing to encode a moving-picture video signal;
Fig. 5 is a block diagram showing the configuration of an apparatus for encoding and decoding a moving-picture video signal;
Figs. 6A to 6C are explanatory diagrams used to describe format conversion;
Fig. 7 is a block diagram showing the configuration of the encoder 18 employed in the apparatus shown in Fig. 5;
Fig. 8 is an explanatory diagram used to describe the operation of the prediction mode switching circuit 52 employed in the encoder 18 shown in Fig. 7;
Fig. 9 is an explanatory diagram used to describe the operation of the prediction mode switching circuit 52 employed in the encoder 18 shown in Fig. 7;
Fig. 10 is an explanatory diagram used to describe the operation of the prediction mode switching circuit 52 employed in the encoder 18 shown in Fig. 7;
Fig. 11 is an explanatory diagram used to describe the operation of the prediction mode switching circuit 52 employed in the encoder 18 shown in Fig. 7;
Fig. 12 is a block diagram showing the configuration of the decoder employed in the apparatus shown in Fig. 5;
Fig. 13 is an explanatory diagram used to describe SNR control based on picture type;
Fig. 14 is a block diagram showing the configuration of a transcoder 101 provided by the present invention;
Fig. 15 is a block diagram showing the configuration of the transcoder 101 shown in Fig. 14 in more detail;
Fig. 16 is a block diagram showing the configuration of the decoder 111 employed in the decoding apparatus 102 of the transcoder 101 shown in Fig. 14;
Fig. 17 is an explanatory diagram showing the pixels of a macroblock;
Fig. 18 is an explanatory diagram showing areas used for recording coding parameters;
Fig. 19 is a block diagram showing the configuration of the encoder 121 employed in the encoding apparatus 106 of the transcoder 101 shown in Fig. 14;
Fig. 20 is a block diagram showing a typical configuration of the history formatter 211 employed in the transcoder 101 shown in Fig. 15;
Fig. 21 is a block diagram showing a typical configuration of the history decoder 203 employed in the transcoder 101 shown in Fig. 15;
Fig. 22 is a block diagram showing a typical configuration of the converter 212 employed in the transcoder 101 shown in Fig. 15;
Fig. 23 is a block diagram showing a typical configuration of the stuffing circuit 323 employed in the converter 212 shown in Fig. 22;
Figs. 24A to 24I are timing charts used to explain the operation of the converter 212 shown in Fig. 22;
Fig. 25 is a block diagram showing a typical configuration of the converter 202 employed in the transcoder 101 shown in Fig. 15;
Fig. 26 is a block diagram showing a typical configuration of the delete circuit 343 employed in the converter 202 shown in Fig. 25;
Fig. 27 is a block diagram showing another typical configuration of the converter 212 employed in the transcoder 101 shown in Fig. 15;
Fig. 28 is a block diagram showing another typical configuration of the converter 202 employed in the transcoder 101 shown in Fig. 15;
Fig. 29 is a block diagram showing a typical configuration of the user-data formatter 213 employed in the transcoder 101 shown in Fig. 15;
Fig. 30 is a block diagram showing the configuration of an actual system employing a plurality of the transcoders 101 shown in Fig. 14;
Fig. 31 is a diagram showing areas used for recording coding parameters;
Fig. 32 is a flowchart used to explain processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14 to determine changeable picture types;
Fig. 33 is a diagram showing an example of changing picture types;
Fig. 34 is a diagram showing another example of changing picture types;
Fig. 35 is an explanatory diagram used to describe quantization control processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14;
Fig. 36 is a flowchart used to explain quantization control processing carried out by the encoding apparatus 106 employed in the transcoder 101 shown in Fig. 14;
Fig. 37 is a block diagram showing the configuration of a tightly coupled transcoder 101;
Fig. 38 is an explanatory diagram used to describe the syntax of an MPEG stream;
Fig. 39 is an explanatory diagram used to describe the configuration of the syntax shown in Fig. 38;
Fig. 40 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 41 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 42 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 43 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 44 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 45 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 46 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a fixed length;
Fig. 47 is an explanatory diagram used to describe the syntax of history_stream() for recording history information with a variable length;
Fig. 48 is an explanatory diagram used to describe the syntax of sequence_header();
Fig. 49 is an explanatory diagram used to describe the syntax of sequence_extension();
Fig. 50 is an explanatory diagram used to describe the syntax of extension_and_user_data();
Fig. 51 is an explanatory diagram used to describe the syntax of user_data();
Fig. 52 is an explanatory diagram used to describe the syntax of group_of_picture_header();
Fig. 53 is an explanatory diagram used to describe the syntax of picture_header();
Fig. 54 is an explanatory diagram used to describe the syntax of picture_coding_extension();
Fig. 55 is an explanatory diagram used to describe the syntax of extension_data();
Fig. 56 is an explanatory diagram used to describe the syntax of quant_matrix_extension();
Fig. 57 is an explanatory diagram used to describe the syntax of copyright_extension();
Fig. 58 is an explanatory diagram used to describe the syntax of picture_display_extension();
Fig. 59 is an explanatory diagram used to describe the syntax of picture_data();
Fig. 60 is an explanatory diagram used to describe the syntax of slice();
Fig. 61 is an explanatory diagram used to describe the syntax of macroblock();
Fig. 62 is an explanatory diagram used to describe the syntax of macroblock_modes();
Fig. 63 is an explanatory diagram used to describe the syntax of motion_vectors(s);
Fig. 64 is an explanatory diagram used to describe the syntax of motion_vector(r, s);
Fig. 65 is an explanatory diagram used to describe variable-length codes for the macroblock type of I-pictures;
Fig. 66 is an explanatory diagram used to describe variable-length codes for the macroblock type of P-pictures;
Fig. 67 is an explanatory diagram used to describe variable-length codes for the macroblock type of B-pictures;
Embodiment
Before the transcoder provided by the present invention is described, the processing to compress and encode a moving-picture video signal is explained. It should be noted that the technical term 'system' used in this specification means a whole system comprising a plurality of apparatuses and means.
As described above, in systems for transmitting a moving-picture video signal to a remote destination, such as video conference systems and videophone systems, the video signal is compressed and encoded by utilizing line correlation within the video signal and inter-frame correlation, so as to allow the transmission line to be used highly efficiently. By utilizing line correlation, the video signal can be compressed through, typically, DCT (Discrete Cosine Transform) processing.
By utilizing inter-frame correlation, the video signal can be compressed and encoded further. Assume that frame pictures PC1, PC2, and PC3 are generated at times t1, t2, and t3 respectively, as shown in Fig. 1. In this case, the difference in the picture signal between frame pictures PC1 and PC2 is computed to produce frame picture PC12. Likewise, the difference in the picture signal between frame pictures PC2 and PC3 is computed to produce frame picture PC23. Usually, the difference in the picture signal between frame pictures adjacent to each other along the time axis is small. Therefore, the amount of information contained in frame pictures PC12 and PC23 is small, and the amount of code in the difference signal obtained as a result of encoding the difference is also small.
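The information reduction from inter-frame differencing can be illustrated with a toy example; plain Python lists stand in for frames, and the names PC1, PC2, PC12 follow the description above:

```python
def frame_difference(prev, curr):
    """Per-pixel difference between two frames given as lists of pixel rows."""
    return [[c - p for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

PC1 = [[10, 10], [10, 10]]
PC2 = [[10, 12], [10, 10]]      # only one pixel changed between frames
PC12 = frame_difference(PC1, PC2)
```

Because adjacent frames differ little, most entries of PC12 are zero, so encoding PC12 takes far less code than encoding PC2 itself.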
However, the original picture cannot be recovered by transmitting only the difference signal. In order to obtain the original picture, frame pictures are classified into three types, namely I-, P-, and B-pictures, all of which are used as the minimum processing units in the compression and encoding of the video signal.
Assume that the GOP (Group of Pictures) of Fig. 2 comprises 17 frames, namely frames F1 to F17, each processed as a minimum unit of video signal processing. More specifically, the first frame F1, the second frame F2, and the third frame F3 are processed as I-, B-, and P-pictures respectively. The subsequent frames, namely the fourth to seventeenth frames, are processed alternately as B- and P-pictures.
In the case of an I-picture, the video signal of the entire frame is transmitted. In the case of a P- or B-picture, on the other hand, only a difference in the video signal is transmitted instead of the entire frame. More specifically, in the case of the third frame F3, a P-picture shown in Fig. 2, only the difference in the video signal between the P-picture and the temporally preceding I- or P-picture is transmitted as the video signal. In the case of the second frame F2, a B-picture shown in Fig. 3, for example, only the difference in the video signal between the B-picture and the temporally preceding frame, the succeeding frame, or the mean value of the preceding and succeeding frames is transmitted as the video signal.
Fig. 4 is a diagram showing the technique of encoding a moving-picture video signal in accordance with the above. As shown in Fig. 4, the first frame F1 is processed as an I-picture. Therefore, the video signal of the entire frame F1 is transmitted to the transmission line as data F1X (intra-picture coding). On the other hand, the second frame F2 is processed as a B-picture. In this case, the difference between the second frame F2 and the preceding frame F1, the succeeding frame F3, or the mean value of the preceding frame F1 and the succeeding frame F3 is transmitted as data F2X.
Described in more detail, B-picture processing can be classified into four types. In the first type, the data of the original frame F2, indicated by symbol SP1 in Fig. 4, is transmitted as data F2X, just as in the case of an I-picture; the first type is thus the intra-picture coding mentioned above. In the second type, the difference between the second frame F2 and the succeeding third frame F3, indicated by symbol SP2, is transmitted as data F2X. Since the succeeding frame is taken as the reference, or prediction, picture, this processing is called backward predictive coding. In the third type, the difference between the second frame F2 and the preceding frame F1, indicated by symbol SP3, is transmitted as data F2X, as in the case of a P-picture. Since the preceding frame is taken as the prediction picture, this processing is called forward predictive coding. In the fourth type, the difference between the second frame F2 and the mean value of the succeeding third frame F3 and the preceding first frame F1, indicated by symbol SP4, is transmitted as data F2X. Since both the preceding and succeeding frames are taken as prediction pictures, this processing is called forward-and-backward predictive coding. In practice, the one of the above four types of processing that produces the smallest amount of transmission data is selected.
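The selection among the four B-picture modes can be sketched as a minimum-cost choice over the candidate predictions. The cost measures below (sum of absolute residuals for the predicted modes, and simply the sum of absolute sample values for the intra case) are crude stand-ins for illustration only, with toy 1-D arrays in place of frames:

```python
def choose_b_mode(curr, prev, succ):
    """Pick the B-picture mode with the smallest residual cost
    (illustrative stand-in for 'smallest amount of transmission data')."""
    def cost(pred):
        return sum(abs(c - p) for c, p in zip(curr, pred))
    avg = [(p + s) / 2 for p, s in zip(prev, succ)]
    candidates = {
        "intra": sum(abs(c) for c in curr),   # SP1: send the frame itself
        "backward": cost(succ),               # SP2: predict from succeeding frame
        "forward": cost(prev),                # SP3: predict from preceding frame
        "bidirectional": cost(avg),           # SP4: predict from the mean
    }
    return min(candidates, key=candidates.get)
```

A frame identical to its predecessor selects forward prediction, while a frame lying midway between its neighbors selects the bidirectional mode, matching the intuition behind the four cases.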
It should be noted that, when a difference obtained in the second, third, or fourth type described above is transmitted, the motion vector between the current picture and the frame picture used in computing the difference (the prediction picture) is also transmitted along with the difference. More specifically, in the case of forward predictive coding, the motion vector is the vector X1 between frame F1 and frame F2. In the case of backward predictive coding, the motion vector is the vector X2 between frame F2 and frame F3. In the case of forward-and-backward predictive coding, both motion vectors X1 and X2 are transmitted.
Much like the B-picture described above, in the case of frame F3, a P-picture, forward predictive coding or intra-picture processing is selected so that the amount of transmission data produced as a result is minimized. If forward predictive coding is selected, the difference between the third frame F3 and the preceding first frame F1, indicated by symbol SP3, is transmitted as data F3X together with the motion vector X3. If intra-picture processing is selected, on the other hand, the data F3X of the original frame F3, indicated by symbol SP1, is transmitted.
Fig. 5 is a block diagram showing a typical configuration of a system for encoding a moving-picture video signal and transmitting and decoding the encoded signal based on the principles described above. A signal encoding apparatus 1 encodes the input video signal and transmits the encoded video signal to a signal decoding apparatus 2 through a recording medium 3 serving as the transmission line. The signal decoding apparatus 2 plays back the encoded signal recorded on the recording medium 3 and decodes the played-back signal into an output signal.
In the signal encoding apparatus 1, the input video signal is supplied to a preprocessing circuit 11, which separates it into luminance and chrominance signals. In the case of this embodiment, the chrominance signal is a color-difference signal. The analog luminance signal and color-difference signal are then supplied to A/D converters 12 and 13 respectively, where each is converted into a digital video signal. The digital video signals obtained from the A/D conversion are subsequently supplied to a frame memory unit 14 for storage. The frame memory unit 14 comprises a luminance-signal frame memory 15 for storing the luminance signal and a color-difference-signal frame memory 16 for storing the color-difference signal.
A format conversion circuit 17 converts the frame-format signals stored in the frame memory unit 14 into block-format signals as shown in Figs. 6A to 6C. In detail, the video signal is stored in the frame memory unit 14 as frame-format data as shown in Fig. 6A. As shown in Fig. 6A, the frame format is a set of V lines each comprising H dots. The format conversion circuit 17 divides one frame of the signal into N slices each comprising 16 lines, as shown in Fig. 6B. Each slice is then divided into M macroblocks, as shown in Fig. 6B. As shown in Fig. 6C, a macroblock comprises a luminance signal Y corresponding to 16×16 pixels (dots). This luminance signal Y is further divided into blocks Y[1] to Y[4], each comprising 8×8 dots. The 16×16-dot luminance signal is associated with an 8×8-dot Cb signal and an 8×8-dot Cr signal.
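The partitioning of Figs. 6A to 6C can be sketched numerically: an H×V frame yields V/16 slices of H/16 macroblocks, and each macroblock holds four 8×8 luminance blocks plus one 8×8 Cb and one 8×8 Cr block. The frame size below is an arbitrary example, not one taken from the patent:

```python
def partition_counts(h_dots, v_lines):
    """Number of slices (N) and macroblocks per slice (M) for an H-by-V
    frame divided into 16-line slices of 16x16 macroblocks."""
    n_slices = v_lines // 16        # N slices of 16 lines each (Fig. 6B)
    mb_per_slice = h_dots // 16     # M macroblocks per slice (Fig. 6B)
    return n_slices, mb_per_slice

def macroblock_samples():
    """Samples per macroblock: four 8x8 luminance blocks Y[1]..Y[4]
    plus one 8x8 Cb and one 8x8 Cr block (Fig. 6C)."""
    return {"Y": 4 * (8 * 8), "Cb": 8 * 8, "Cr": 8 * 8}
```

For example, a hypothetical 720×480 frame gives 30 slices of 45 macroblocks, each macroblock carrying 256 luminance samples and 64 samples for each color-difference block.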
The block-format data obtained as a result of the format conversion carried out by the format conversion circuit 17 described above is supplied to an encoder 18 for encoding. The configuration of the encoder 18 will be described in detail later with reference to Fig. 7.
The signal obtained as a result of the encoding carried out by the encoder 18 is output to the transmission line as a bitstream. Typically, the encoded signal is supplied as a digital signal to a recording circuit 19 for recording the encoded signal on the recording medium 3, which serves as the transmission line.
A playback circuit 30 employed in the signal decoding apparatus 2 plays back data from the recording medium 3 and supplies the data to a decoder 31 of the decoding apparatus to be decoded. The configuration of the decoder 31 will be described in detail later with reference to Fig. 12.
The data obtained as a result of the decoding carried out by the decoder 31 is supplied to a format conversion circuit 32, which converts the data from the block format back into the frame format. Subsequently, the luminance signal in the frame format is supplied to a luminance-signal frame memory 34 of a frame memory unit 33 for storage, while the color-difference signal in the frame format is supplied to a color-difference-signal frame memory 35 of the frame memory unit 33 for storage. The luminance signal is read out from the luminance-signal frame memory 34 and supplied to a D/A converter 36. Likewise, the color-difference signal is read out from the color-difference-signal frame memory 35 and supplied to a D/A converter 37. The D/A converters 36 and 37 convert these signals into analog signals, which are then supplied to a postprocessing circuit 38 to be synthesized into a composite output.
Next, the configuration of the encoder 18 is described with reference to Fig. 7. The picture data to be encoded is supplied to a motion vector detection circuit 50 in macroblock units. The motion vector detection circuit 50 processes the picture data of each frame as an I-, P-, or B-picture in accordance with a predetermined sequence set in advance. More specifically, the picture data of a GOP, typically comprising frames F1 to F17 as shown in Figs. 2 and 3, is processed as a sequence of I-, B-, P-, B-, P-, ..., B-, and P-pictures.
The picture data of a frame processed by the motion vector detection circuit 50 as an I-picture, such as frame F1 shown in Fig. 3, is supplied to a forward source picture area 51a of a frame memory unit 51 for storage. The picture data of a frame processed by the motion vector detection circuit 50 as a B-picture, such as frame F2, is supplied to a reference source picture area 51b of the frame memory unit 51 for storage. The picture data of a frame processed by the motion vector detection circuit 50 as a P-picture, such as frame F3, is supplied to a backward source picture area 51c of the frame memory unit 51 for storage.
When the picture data of the next two frames, such as frames F4 and F5, is supplied in succession to the motion vector detection circuit 50 to be processed as B- and P-pictures respectively, the areas 51a, 51b, and 51c are updated as follows. When the picture data of frame F4 is processed by the motion vector detection circuit 50, the picture data of frame F3 stored in the backward source picture area 51c is transferred to the forward source picture area 51a, overwriting the picture data of frame F1 previously stored there. The picture data of the processed frame F4 is stored in the reference source picture area 51b, overwriting the picture data of frame F2 previously stored there. Then, the picture data of the processed frame F5 is stored in the backward source picture area 51c, overwriting the picture data of frame F3, which has been transferred to the forward source picture area 51a. The operations described above are repeated to process the subsequent frames of the GOP.
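The rotation of the three source picture areas can be sketched as follows; the dictionary keys and frame labels are illustrative, not terms from the patent:

```python
def update_areas(areas, new_b_frame, new_p_frame):
    """Rotate the three source picture areas as described above: the old
    backward frame becomes the forward reference, the new B-frame fills
    the reference area, and the new P-frame fills the backward area."""
    areas["forward"] = areas["backward"]    # e.g. F3 moves into area 51a
    areas["reference"] = new_b_frame        # e.g. F4 overwrites F2 in 51b
    areas["backward"] = new_p_frame         # e.g. F5 overwrites F3 in 51c
    return areas

areas = {"forward": "F1", "reference": "F2", "backward": "F3"}
update_areas(areas, "F4", "F5")
```

After the update, the areas hold F3, F4, and F5, ready for the next B/P pair of the GOP.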
The signal of each picture stored in the frame memory 51 is read out by the prediction mode switching circuit 52 and undergoes a preparatory operation for the frame prediction mode or the field prediction mode, that is, for the type of processing to be carried out by the processing unit 53.
Then, under the control of the intra-picture/forward/backward/bidirectional prediction determining circuit 54, the signals are subjected to arithmetic operations for intra-picture predictive coding, forward prediction, backward prediction or bidirectional prediction. The type of processing carried out by the processing unit 53 is determined according to prediction error signals, each representing the difference between a reference picture and the predicted picture for that reference picture. The reference picture is the picture undergoing the processing, and the predicted picture is derived from a picture preceding or succeeding the reference picture. For this purpose, the motion vector detecting circuit 50 (strictly speaking, the prediction mode switching circuit 52 employed in the motion vector detecting circuit 50, to be described later) generates the sums of the absolute values of the prediction error signals used in determining the type of processing to be carried out by the processing unit 53. Instead of the sums of the absolute values of the prediction error signals, the sums of their squares may also be used for this determination.
The prediction mode switching circuit 52 carries out the following preparatory operations for the processing executed by the processing unit 53 in the frame prediction mode and the field prediction mode.
The prediction mode switching circuit 52 receives the four luminance blocks [Y1] to [Y4] supplied thereto by the motion vector detecting circuit 50. In each block, the line data of the odd fields is mixed with the line data of the even fields as shown in Fig. 8. The data may be passed on to the processing unit 53 as it is. Processing carried out by the processing unit 53 on data in which each macroblock comprising four luminance blocks has the configuration shown in Fig. 8, with one motion vector corresponding to the four luminance blocks, is referred to as processing in the frame prediction mode.
The prediction mode switching circuit 52 may instead rearrange the signal supplied by the motion vector detecting circuit 50. In place of a signal with the configuration shown in Fig. 8, a signal with the configuration shown in Fig. 9 may be sent to the processing unit 53. As shown in Fig. 9, the two luminance blocks [Y1] and [Y2] are composed of points on the lines of the odd fields, while the other two luminance blocks [Y3] and [Y4] are composed of points on the lines of the even fields. Processing carried out by the processing unit 53 on data with the configuration shown in Fig. 9, with one motion vector corresponding to the two luminance blocks [Y1] and [Y2] and another motion vector corresponding to the other two luminance blocks [Y3] and [Y4], is referred to as processing in the field prediction mode.
The prediction mode switching circuit 52 selects the data with the configuration shown in Fig. 8 or Fig. 9 to be supplied to the processing unit 53 as follows. The prediction mode switching circuit 52 calculates the sum of the absolute values of the prediction errors for the frame prediction mode, that is, for the data supplied by the motion vector detecting circuit 50 with the configuration shown in Fig. 8, and the sum of the absolute values of the prediction errors for the field prediction mode, that is, for the data with the configuration shown in Fig. 9 obtained by rearranging the data with the configuration shown in Fig. 8. It should be noted that the prediction errors are described in detail later. The prediction mode switching circuit 52 then compares the calculated sums to determine which configuration produces the smaller sum, and selects the frame prediction mode or the field prediction mode, that is, the configuration of Fig. 8 or of Fig. 9, that produces the smaller sum. Finally, the prediction mode switching circuit 52 outputs the data arranged in the selected configuration to the processing unit 53, which processes the data in the mode corresponding to the selected configuration.
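The comparison carried out by the prediction mode switching circuit 52 can be sketched as follows; this is an illustrative model in which the blocks are simple lists of pixel rows, and the function names are assumptions rather than terms from the patent.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size blocks
    (lists of rows of pixel values)."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def choose_prediction_structure(frame_sad, field_sad):
    """Select whichever arrangement (Fig. 8 or Fig. 9) yields the
    smaller prediction-error absolute-value sum."""
    return 'frame' if frame_sad <= field_sad else 'field'

ref = [[10, 12], [11, 13]]
pred = [[9, 12], [11, 15]]
print(sad(ref, pred))                      # 1 + 0 + 0 + 2 = 3
print(choose_prediction_structure(3, 7))   # 'frame'
```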
It should be noted that, in actuality, the prediction mode switching circuit 52 is included in the motion vector detecting circuit 50. That is to say, the preparation of the data with the configuration shown in Fig. 9, the calculation of the sums of absolute values, the comparison of those sums, the selection of a data configuration and the output of the data with the selected configuration to the processing unit 53 are all carried out by the motion vector detecting circuit 50, and the prediction mode switching circuit 52 merely outputs the signal supplied by the motion vector detecting circuit 50 to the processing unit 53 at the following stage.
It should be noted that, in the frame prediction mode, the color difference signals are supplied to the processing unit 53 with the line data of the odd fields mixed with the line data of the even fields as shown in Fig. 8. In the field prediction mode shown in Fig. 9, on the other hand, the upper four lines of the color difference block Cb are used as the color difference signal of the odd fields corresponding to the luminance blocks [Y1] and [Y2], while the lower four lines of the color difference block Cb are used as the color difference signal of the even fields corresponding to the luminance blocks [Y3] and [Y4]. Likewise, the upper four lines of the color difference block Cr are used as the color difference signal of the odd fields corresponding to the luminance blocks [Y1] and [Y2], while the lower four lines of the color difference block Cr are used as the color difference signal of the even fields corresponding to the luminance blocks [Y3] and [Y4].
As described above, the motion vector detecting circuit 50 outputs the sums of the absolute values of the prediction errors used by the prediction determining circuit 54 to determine whether the processing unit 53 should carry out intra-picture prediction, forward prediction, backward prediction or bidirectional prediction.
To describe this in detail, the sums of the absolute values of the prediction errors are obtained as follows. For intra-picture prediction, the motion vector detecting circuit 50 calculates the difference between the absolute value |ΣAij| of the sum ΣAij of the macroblock signals Aij of the reference picture and the sum Σ|Aij| of the absolute values |Aij| of the same macroblock signals. For forward prediction, the sum of the absolute values of the prediction errors is the sum Σ|Aij−Bij| of the absolute values |Aij−Bij| of the differences (Aij−Bij) between the macroblock signals Aij of the reference picture and the macroblock signals Bij of the forward-predicted picture, that is, the preceding picture. The sum of the absolute values of the prediction errors for backward prediction is obtained in the same way as for forward prediction, except that the predicted picture used is the backward-predicted picture, that is, the succeeding picture. As for bidirectional prediction, the averages of the macroblock signals Bij of the forward-predicted picture and of the backward-predicted picture are used in the summation.
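The measures described above can be sketched as follows. Note that the order of subtraction in the intra-picture measure is an assumption chosen so that the measure is nonnegative and comparable with the inter-picture sums; the text only names the two quantities being subtracted.

```python
def intra_measure(A):
    """Sum of absolute values minus absolute value of the sum of the
    macroblock signals Aij (order of subtraction is an assumption)."""
    flat = [a for row in A for a in row]
    return sum(abs(a) for a in flat) - abs(sum(flat))

def inter_measure(A, B):
    """Sum of |Aij - Bij| against a predicted macroblock Bij
    (forward or backward prediction)."""
    return sum(abs(a - b) for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def bidirectional_measure(A, Bf, Bb):
    """As inter_measure, but the predicted signal is the mean of the
    forward- and backward-predicted macroblock signals."""
    return sum(abs(a - (f + b) / 2)
               for ra, rf, rb in zip(A, Bf, Bb)
               for a, f, b in zip(ra, rf, rb))

A  = [[1, -2], [3, -4]]
Bf = [[0,  0], [4, -4]]
Bb = [[2, -4], [2, -4]]
print(intra_measure(A))                  # 10 - 2 = 8
print(inter_measure(A, Bf))              # 1 + 2 + 1 + 0 = 4
print(bidirectional_measure(A, Bf, Bb))  # 0.0 (the mean predicts A exactly)
```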
The sum of the absolute values of the prediction errors of each prediction technique is supplied to the prediction determining circuit 54, which selects the smallest of the sums obtained for forward prediction, backward prediction and bidirectional prediction as the sum of inter-picture prediction. The prediction determining circuit 54 further compares this smallest sum with the sum obtained for intra-picture prediction and selects the prediction mode with the smaller of the two. More specifically, if the sum for intra-picture prediction is found to be smaller than the smallest inter-picture sum, the prediction determining circuit 54 selects intra-picture prediction as the type of processing to be carried out by the processing unit 53. If, on the other hand, the smallest inter-picture sum is found to be smaller than the sum for intra-picture prediction, the prediction determining circuit 54 selects the mode that produced that smallest sum, that is, forward prediction, backward prediction or bidirectional prediction, as the type of processing to be carried out by the processing unit 53. A prediction mode is determined in this way for each macroblock, while the choice between frame prediction and field prediction is made as described above.
As described above, the motion vector detecting circuit 50 outputs, through the prediction mode switching circuit 52, the macroblock signal of the reference picture in the frame prediction mode or the field prediction mode selected by the prediction mode switching circuit 52 to the processing unit 53. At the same time, the motion vector detecting circuit 50 detects the motion vector between the reference picture and the predicted picture for the one of the four prediction modes selected by the prediction determining circuit 54, and outputs this motion vector to the variable-length coding circuit 58 and the motion compensation circuit 64. In this way, the motion vector detecting circuit 50 outputs the motion vector that yields the smallest sum of the absolute values of the prediction errors for the selected prediction mode as described above.
When the motion vector detecting circuit 50 reads out the image data of the first frame of a GOP, an I-picture, from the forward original-picture area 51a, the prediction determining circuit 54 sets intra-picture prediction (strictly speaking, intra-frame or intra-field prediction) as the prediction mode, and sets the switch 53d employed in the processing unit 53 to the contact point a. With the switch 53d set to this position, the I-picture data is supplied to the DCT mode switching circuit 55. As will be described later, the intra-picture prediction mode is a mode in which no motion compensation is carried out.
The DCT mode switching circuit 55 receives the data transferred thereto from the prediction mode switching circuit 52 through the switch 53d in the mixed state, that is, in the frame DCT mode shown in Fig. 10. The DCT mode switching circuit 55 may convert the data into the separated state, that is, into the field DCT mode shown in Fig. 11. In the frame DCT mode, the line data of the odd and even fields is mixed in each of the four luminance blocks. In the field DCT mode, on the other hand, the lines of the odd fields are put into two of the four luminance blocks and the lines of the even fields into the other two blocks. The data of the I-picture is supplied to the DCT circuit 56 in either the mixed or the separated state.
Before supplying the data to the DCT circuit 56, the DCT mode switching circuit 55 compares the coding efficiency of DCT processing of the data with the lines of the odd and even fields mixed with each other against the coding efficiency of DCT processing of the data with the lines of the odd and even fields separated from each other, and selects the data with the higher efficiency. The frame DCT mode or the field DCT mode corresponding to the selected data is thereby determined as the DCT mode.
The coding efficiencies are compared with each other as follows. In the case of the data with the lines of the odd and even fields mixed with each other as shown in Fig. 10, the differences between the signals of the lines of the even fields and the signals of the vertically adjacent lines of the odd fields are calculated. Then, the sum of the absolute values, or the sum of the squares, of these differences between vertically adjacent even and odd lines is obtained.
In the case of the data with the lines of the odd and even fields separated from each other as shown in Fig. 11, the differences between the signals of vertically adjacent lines of the even fields and the differences between the signals of vertically adjacent lines of the odd fields are calculated. Then, the sum of the absolute values, or the sum of the squares, of all the differences between vertically adjacent even lines and between vertically adjacent odd lines is obtained.
The sum calculated for the data shown in Fig. 10 is compared with the sum calculated for the data shown in Fig. 11 in order to select the DCT mode. More specifically, if the former is found to be smaller than the latter, the frame DCT mode is selected. If, on the other hand, the latter is found to be smaller than the former, the field DCT mode is selected.
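The DCT mode decision described in the preceding paragraphs can be sketched as follows, using sums of absolute differences; the function names are assumptions, and a macroblock is modeled as a list of pixel rows in display order.

```python
def frame_dct_measure(mb):
    """Sum of absolute differences between vertically adjacent lines of
    opposite field parity (the mixed arrangement of Fig. 10)."""
    return sum(abs(a - b)
               for r0, r1 in zip(mb, mb[1:])
               for a, b in zip(r0, r1))

def field_dct_measure(mb):
    """Sum of absolute differences between vertically adjacent lines of
    the same field parity (the separated arrangement of Fig. 11)."""
    def same_parity(rows):
        return sum(abs(a - b)
                   for r0, r1 in zip(rows, rows[1:])
                   for a, b in zip(r0, r1))
    return same_parity(mb[0::2]) + same_parity(mb[1::2])

def choose_dct_mode(mb):
    """Pick the arrangement with the smaller difference sum."""
    return 'frame' if frame_dct_measure(mb) < field_dct_measure(mb) else 'field'

# Interlaced-looking block: odd and even fields differ strongly.
interlaced = [[0, 0], [10, 10]] * 4
print(choose_dct_mode(interlaced))   # 'field'
```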
Finally, the data with the configuration corresponding to the selected DCT mode is supplied to the DCT circuit 56, while a DCT flag indicating the selected DCT mode is supplied to the variable-length coding circuit 58 and the motion compensation circuit 64.
As is evident from comparing the frame prediction and field prediction modes of Figs. 8 and 9, determined by the prediction mode switching circuit 52, with the DCT modes of Figs. 10 and 11, determined by the DCT mode switching circuit 55, the data structures of the frame prediction mode and of the field prediction mode are, as far as the luminance blocks are concerned, substantially identical to those of the frame DCT mode and of the field DCT mode respectively.
If the prediction mode switching circuit 52 selects the frame prediction mode, in which the odd and even lines are mixed with each other, the DCT mode switching circuit 55 is likely to select the frame DCT mode, in which the odd and even lines are likewise mixed. Similarly, if the prediction mode switching circuit 52 selects the field prediction mode, in which the odd and even lines are separated from each other, the DCT mode switching circuit 55 is likely to select the field DCT mode, in which the odd and even lines are likewise separated.
It should be noted, however, that the selected DCT mode does not always correspond to the selected prediction mode. In any case, the prediction mode switching circuit 52 selects the frame prediction mode or the field prediction mode that yields the smaller sum of the absolute values of the prediction errors, while the DCT mode switching circuit 55 selects the DCT mode that yields the better coding efficiency.
As described above, the I-picture data is output by the DCT mode switching circuit 55 to the DCT circuit 56, which converts the data supplied thereto into DCT coefficients that are subsequently supplied to the quantization circuit 57. The quantization circuit 57 then carries out quantization with a quantization scale adjusted to the amount of data stored in the transmission buffer 59; this quantization scale is fed back to the quantization circuit 57 as described later. The quantized I-picture data is then supplied to the variable-length coding circuit 58.
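The feedback of the quantization scale from the transmission buffer can be sketched as follows. The linear mapping and the 1..31 scale range are assumptions for illustration; the text only states that the scale is adjusted to the amount of buffered data.

```python
def quantization_scale(buffer_occupancy, buffer_size, q_min=1, q_max=31):
    """Derive a quantization scale from the transmission-buffer occupancy:
    the fuller the buffer, the coarser the quantization (assumed linear)."""
    q = q_min + (q_max - q_min) * buffer_occupancy / buffer_size
    return max(q_min, min(q_max, round(q)))

def quantize(dct_coeffs, scale):
    """Integer quantization of DCT coefficients by the scale
    (truncation toward zero)."""
    return [int(c / scale) for c in dct_coeffs]

print(quantization_scale(0, 1000))      # 1  (empty buffer: fine quantization)
print(quantization_scale(1000, 1000))   # 31 (full buffer: coarse quantization)
print(quantize([64, -33, 8], 8))        # [8, -4, 1]
```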
The variable-length coding circuit 58 receives the I-picture data supplied by the quantization circuit 57 and converts this data into a variable-length code such as a Huffman code in accordance with the quantization scale, which is also supplied by the quantization circuit 57. The variable-length code is then stored in the transmission buffer 59.
In addition to the image data and the quantization scale supplied by the quantization circuit 57, the variable-length coding circuit 58 also receives the prediction mode information from the prediction determining circuit 54, the motion vector from the motion vector detecting circuit 50, the prediction flag from the prediction mode switching circuit 52 and the DCT flag from the DCT mode switching circuit 55. The prediction mode information indicates which of intra-picture coding, forward predictive coding, backward predictive coding and bidirectional predictive coding is carried out by the processing unit 53. The prediction flag indicates whether the data supplied to the processing unit 53 by the prediction mode switching circuit 52 is set in the frame prediction mode or the field prediction mode. The DCT flag indicates whether the data supplied to the DCT circuit 56 by the DCT mode switching circuit 55 is set in the frame DCT mode or the field DCT mode.
Subsequently, the data stored in the transmission buffer 59 is read out at a predetermined timing and supplied to the recording circuit 19, which records the data onto the recording medium 3 serving as a transmission line.
The I-picture data output by the quantization circuit 57, together with the quantization scale, is also supplied to the dequantization circuit 60, which dequantizes the data with a dequantization scale corresponding to the quantization scale. The data output by the dequantization circuit 60 is then supplied to the IDCT (inverse discrete cosine transform) circuit 61, which carries out an inverse discrete cosine transform. Finally, the data output by the IDCT circuit 61 is supplied through the arithmetic circuit 62 to the frame memory 63 and stored in the forward-prediction picture area 63a of the frame memory 63.
Assume here that the sequence of pictures of a GOP supplied to the motion vector detecting circuit 50 is I, B, P, B, P, B and so on. In this case, after the data of the first frame is processed as the I-picture described above, the data of the third frame, a P-picture, is processed before the data of the second frame, a B-picture. This is because the B-picture may involve backward prediction from the succeeding P-picture and thus cannot be encoded unless that P-picture has been processed in advance. It should be noted that the P-picture data is transferred from the prediction mode switching circuit 52 to the processing unit 53 in the frame prediction or field prediction form set by the prediction mode switching circuit 52, whereas the I-picture is always processed by the processing unit 53 in the intra-frame prediction mode described above.
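The reordering from display order to coding order described above can be sketched as follows; the string labels and the function are illustrative assumptions.

```python
def coding_order(display_order):
    """Reorder a display-order GOP so that each B-picture is processed
    after the I- or P-picture that follows it in display order, which
    the B-picture may need for backward prediction."""
    out, pending_b = [], []
    for pic in display_order:
        if pic.endswith('B'):
            pending_b.append(pic)   # hold B-pictures back
        else:
            out.append(pic)         # emit the anchor picture (I or P) first
            out.extend(pending_b)   # then the held-back B-pictures
            pending_b = []
    return out + pending_b

print(coding_order(['1I', '2B', '3P', '4B', '5P']))
# ['1I', '3P', '2B', '5P', '4B']
```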
For the reason described above, after processing the first frame as the I-picture, the motion vector detecting circuit 50 starts processing the P-picture stored in the backward original-picture area 51c. Then, using the macroblock as the unit of each prediction mode, the prediction mode switching circuit 52 calculates the sums of the absolute values of the prediction errors, that is, of the frame-to-frame differences with respect to the I-picture data supplied thereto by the motion vector detecting circuit 50, and supplies these sums to the prediction determining circuit 54 described above. The prediction determining circuit 54 determines the prediction mode in which the data of the P-picture is to be processed by the processing unit 53; that is, according to the sums of the absolute values of the prediction errors calculated by the prediction mode switching circuit 52 for each prediction mode, it selects intra-picture, forward, backward or bidirectional prediction as the type of processing to be carried out by the processing unit 53. Strictly speaking, in the case of a P-picture, the type of processing can only be the intra-picture or the forward prediction mode described above.
In the first case, the intra-picture prediction mode, the processing unit 53 sets the switch 53d to the contact point a. The P-picture data is thereby transmitted to the transmission line through the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length coding circuit 58 and the transmission buffer 59, just as in the case of the I-picture. The P-picture data is also supplied through the quantization circuit 57, the dequantization circuit 60, the IDCT circuit 61 and the arithmetic circuit 62 to the frame memory 63 and stored in its backward-prediction picture area 63b.
In the second case, the forward prediction mode, the processing unit 53 sets the switch 53d to the contact point b, and the motion compensation circuit 64 reads out the data from the forward-prediction picture area 63a of the frame memory 63 and carries out motion compensation on the data according to the motion vector supplied to the motion compensation circuit 64 by the motion vector detecting circuit 50. In this case, the data stored in the forward-prediction picture area 63a is the I-picture data. In other words, when the forward prediction mode is determined by the prediction determining circuit 54, the motion compensation circuit 64 generates the forward-predicted picture data by reading the I-picture data from a read address in the forward-prediction picture area 63a that is shifted from the position of the macroblock currently being output by the motion vector detecting circuit 50 by a distance corresponding to the motion vector.
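The shifted read described above can be sketched as follows; the reference picture is modeled as a list of pixel rows, and boundary handling is omitted for brevity (an assumption of this sketch).

```python
def motion_compensated_block(ref, mb_top, mb_left, mv, size=4):
    """Read a size-by-size block from reference picture `ref` at the
    current macroblock position shifted by the motion vector
    mv = (dy, dx)."""
    top, left = mb_top + mv[0], mb_left + mv[1]
    return [row[left:left + size] for row in ref[top:top + size]]

# 6x6 reference picture whose pixel value encodes its (row, col) position.
ref = [[10 * r + c for c in range(6)] for r in range(6)]
print(motion_compensated_block(ref, 0, 0, (1, 2), size=2))
# [[12, 13], [22, 23]]
```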
The forward-predicted picture data read out by the motion compensation circuit 64 is associated with the P-picture data serving as the reference picture, and is supplied to the arithmetic unit 53a employed in the processing unit 53. The arithmetic unit 53a subtracts the forward-predicted picture data supplied by the motion compensation circuit 64 from the macroblock data of the reference picture, that is, the P-picture data, supplied by the prediction mode switching circuit 52, in order to obtain the difference, or prediction error. This difference data is transmitted to the transmission line through the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length coding circuit 58 and the transmission buffer 59. The difference data is also locally decoded by the dequantization circuit 60 and the IDCT circuit 61, and the result of the local decoding is supplied to the arithmetic circuit 62.
The forward-predicted picture data supplied to the arithmetic unit 53a by the motion compensation circuit 64 is also supplied to the arithmetic circuit 62. In the arithmetic circuit 62, the data of the forward-predicted picture is added to the difference data output by the IDCT circuit 61 in order to reproduce the original P-picture data. The data of the original P-picture is then stored in the backward-prediction picture area 63b of the frame memory 63.
After the data of the I-picture and of the P-picture have been stored in the forward-prediction picture area 63a and the backward-prediction picture area 63b respectively as described above, the motion vector detecting circuit 50 begins processing the second frame, a B-picture. The B-picture is processed by the prediction mode switching circuit 52 in the same way as the P-picture described above, except that in the case of a B-picture the type of processing determined by the prediction determining circuit 54 can also be the backward prediction mode or the bidirectional prediction mode, in addition to the intra-picture prediction mode and the forward prediction mode.
In the case of the intra-picture prediction mode or the forward prediction mode, the switch 53d is set to the contact point a or b respectively, as in the case of the P-picture described above. In these cases, the B-picture data output by the prediction mode switching circuit 52 is processed and transmitted in the same way as the P-picture described above.
In the case of the backward prediction mode or the bidirectional prediction mode, on the other hand, the switch 53d is set to the contact point c or d respectively.
In the backward prediction mode, with the switch 53d set to the contact point c, the motion compensation circuit 64 reads out the data from the backward-prediction picture area 63b of the frame memory 63 and carries out motion compensation on the data according to the motion vector supplied to the motion compensation circuit 64 by the motion vector detecting circuit 50. In this case, the data stored in the backward-prediction picture area 63b is the P-picture data. In other words, since the prediction determining circuit 54 indicates the backward prediction mode, the motion compensation circuit 64 generates the backward-predicted picture data by reading the P-picture data from a read address in the backward-prediction picture area 63b that is shifted from the position of the macroblock currently being output by the motion vector detecting circuit 50 by a distance corresponding to the motion vector.
The backward-predicted picture data read out by the motion compensation circuit 64 is associated with the B-picture data serving as the reference picture, and is supplied to the arithmetic unit 53b employed in the processing unit 53. The arithmetic unit 53b subtracts the backward-predicted picture data supplied by the motion compensation circuit 64 from the macroblock data of the reference picture, that is, the B-picture data, supplied by the prediction mode switching circuit 52, in order to obtain the difference, or prediction error. This difference data is transmitted to the transmission line through the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length coding circuit 58 and the transmission buffer 59.
In the bidirectional prediction mode, on the other hand, with the switch 53d set to the contact point d, the motion compensation circuit 64 reads out the I-picture data from the forward-prediction picture area 63a and the P-picture data from the backward-prediction picture area 63b of the frame memory 63, and carries out motion compensation on the data according to the motion vectors supplied to the motion compensation circuit 64 by the motion vector detecting circuit 50.
In other words, since the prediction determining circuit 54 indicates the bidirectional prediction mode, the motion compensation circuit 64 generates the forward- and backward-predicted picture data by reading the I-picture data and the P-picture data from read addresses in the forward-prediction picture area 63a and the backward-prediction picture area 63b respectively, each shifted from the position of the macroblock currently being output by the motion vector detecting circuit 50 by a distance corresponding to the relevant motion vector. In this case there are two motion vectors, namely the motion vectors for the forward-predicted picture and for the backward-predicted picture.
The forward- and backward-predicted picture data read out by the motion compensation circuit 64 is associated with the B-picture data serving as the reference picture, and is supplied to the arithmetic unit 53c employed in the processing unit 53. The arithmetic unit 53c subtracts the average of the predicted picture data supplied by the motion compensation circuit 64 from the macroblock data of the reference picture supplied by the prediction mode switching circuit 52 employed in the motion vector detecting circuit 50, in order to obtain the difference, or prediction error. This difference data is transmitted to the transmission line through the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable-length coding circuit 58 and the transmission buffer 59.
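The bidirectional prediction error can be sketched as follows. Averaging the two predicted blocks is the usual MPEG behavior and is assumed here; the translated text only states that the predicted picture data is subtracted.

```python
def bidirectional_difference(ref_mb, fwd_mb, bwd_mb):
    """Prediction error for the bidirectional mode: the mean of the
    forward- and backward-predicted blocks is subtracted from the
    reference (B-picture) block."""
    return [[a - (f + b) / 2 for a, f, b in zip(ra, rf, rb)]
            for ra, rf, rb in zip(ref_mb, fwd_mb, bwd_mb)]

print(bidirectional_difference([[10, 6]], [[4, 6]], [[8, 2]]))
# [[4.0, 2.0]]
```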
Since a B-picture is never used as the predicted picture of another frame, it is not stored in the frame memory 63.
It should be noted that the forward-prediction picture area 63a and the backward-prediction picture area 63b of the frame memory 63 are implemented as memory banks that can be switched from one to the other as necessary. Thus, for an operation of reading the forward-predicted picture, one bank of the frame memory 63 is designated as the forward-prediction picture area 63a, while for an operation of reading the backward-predicted picture, the other bank is designated as the backward-prediction picture area 63b.
While the description above has concentrated on the luminance blocks, the color difference signals are also processed and transmitted in the macroblock units shown in Figs. 8 to 11, in the same way as the luminance blocks. It should be noted that, as the motion vector used in processing a color difference block, the vertical and horizontal components of the motion vector of the associated luminance block are each halved before use.
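The halving of the motion vector components for the color difference blocks can be sketched as follows. Truncation toward zero is assumed here; the text only says that each component is halved.

```python
def chroma_motion_vector(luma_mv):
    """Halve the vertical and horizontal components of the luminance
    motion vector for use with the color difference blocks
    (truncation toward zero is an assumption of this sketch)."""
    dy, dx = luma_mv
    return int(dy / 2), int(dx / 2)

print(chroma_motion_vector((6, -3)))   # (3, -1)
```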
Fig. 12 is a block diagram showing the configuration of the decoder 31 employed in the moving picture encoding/decoding apparatus shown in Fig. 5. The encoded image data transmitted through the transmission line implemented by the recording medium 3 is received by the decoder 31 through the playback circuit 30 of the moving picture encoding/decoding apparatus and temporarily stored in the reception buffer 81 employed in the decoder 31. The image data is then supplied to the variable-length decoding circuit 82 employed in the decoding circuit 90 of the decoder 31. The variable-length decoding circuit 82 carries out variable-length decoding on the image data read out from the reception buffer 81, outputting the motion vector, the prediction mode information, the frame/field prediction flag and the frame/field DCT flag to the motion compensation circuit 87, and the quantization scale and the decoded image data to the dequantization circuit 83.
In the case of an I-picture, the image data output by the IDCT circuit 84 is passed on by the arithmetic circuit 85 to the frame memory 86 as it is, and stored in the forward-prediction picture area 86a of the frame memory 86. The I-picture data stored in the forward-prediction picture area 86a is used later to generate forward-predicted picture data for the image data of a P- or B-picture supplied to the arithmetic circuit 85 after the I-picture in the forward prediction mode. The I-picture data is also output to the format conversion circuit 32 of the moving picture encoding/decoding apparatus shown in Fig. 5.
When the image data supplied by the IDCT circuit 84 is P-picture data having the I-picture of the preceding frame as its predicted picture, the I-picture data is read out from the forward-prediction picture area 86a of the frame memory 86 by the motion compensation circuit 87. In the motion compensation circuit 87, the I-picture data is subjected to motion compensation according to the motion vector supplied by the variable-length decoding circuit 82. The motion-compensated data is then supplied to the arithmetic circuit 85, which adds it to the difference data supplied by the IDCT circuit 84. The result of the addition, that is, the decoded P-picture data, is supplied to the frame memory 86 and stored in the backward-prediction picture area 86b of the frame memory 86 described above. The P-picture data stored in the backward-prediction picture area 86b is used later to generate backward-predicted picture data for the image data of a B-picture supplied to the arithmetic circuit 85 in the backward prediction mode.
On the other hand, P-picture data that has been encoded by the signal encoding apparatus 1 in the intra-frame prediction mode is passed on by the arithmetic circuit 85 without any processing, just as in the case of an I-picture, and stored in the backward-prediction picture area 86b.
Since the P-picture is to be displayed after the B-picture that follows it, the P-picture is not output to the format conversion circuit 32 at this moment. Much like the encoder 18, the decoder 31 processes and transfers the P-picture before the B-picture, even though the P-picture is displayed after the B-picture.
The image data of a B-picture output by the IDCT circuit 84 is processed by the arithmetic circuit 85 according to the prediction mode information supplied by the variable-length decoding circuit 82. More specifically, the arithmetic circuit 85 may output the image data as it is in the intra-picture prediction mode, as in the case of an I-picture, or process it in the forward prediction, backward prediction or bidirectional prediction mode. In the forward prediction, backward prediction and bidirectional prediction modes, the motion compensation circuit 87 reads out the I-picture data stored in the area 86a, the P-picture data stored in the area 86b, or both the I- and P-picture data stored in the areas 86a and 86b of the frame memory 86, respectively. The motion compensation circuit 87 then carries out motion compensation on the data read out from the frame memory 86 according to the motion vector output by the variable-length decoding circuit 82, in order to generate the predicted picture. In the intra-picture prediction mode described above, no predicted picture is generated, since the arithmetic circuit 85 does not require one.
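The mode-dependent selection of stored pictures described above can be sketched as follows; the function signature and the toy motion-compensation callable are illustrative assumptions.

```python
def decoder_prediction(mode, motion_compensate, area_86a, area_86b, mv_f, mv_b):
    """Choose which stored picture(s) the motion compensation circuit 87
    reads out, per the decoded prediction-mode information."""
    if mode == 'intra':
        return None                               # no predicted picture needed
    if mode == 'forward':
        return motion_compensate(area_86a, mv_f)  # I-picture from area 86a
    if mode == 'backward':
        return motion_compensate(area_86b, mv_b)  # P-picture from area 86b
    f = motion_compensate(area_86a, mv_f)         # bidirectional: read both
    b = motion_compensate(area_86b, mv_b)         # and average them
    return [(x + y) / 2 for x, y in zip(f, b)]

mc = lambda area, mv: [p + mv for p in area]      # toy motion compensation
print(decoder_prediction('forward', mc, [1, 2], [4, 6], 1, 0))   # [2, 3]
print(decoder_prediction('bidir', mc, [0, 0], [4, 6], 0, 0))     # [2.0, 3.0]
```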
In movement compensating circuit 87, the predicted picture that stands motion compensation is added to by processor 85 on the view data of B-image, strictly speaking, is to be added on the difference data of idct circuit 84 outputs.So the feeds of data of processor 85 outputs is to the format conversion circuit 32 as I-image situation.
But,, when producing predicted picture, do not need these data so because the data of processor 85 outputs are B-image view data.Therefore, the data of processor 85 outputs are not stored in the frame memory unit 86.
Exported after the B-view data, from 86b, read the P-view data and be fed to processor 85 by movement compensating circuit 87.But, specifically and since these data before being stored in back forecast image-region 86b, experienced motion compensation then these data without undergoing motion compensation.
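The formation of the predicted block for a B-picture in each of the prediction modes described above can be sketched as follows. This is a minimal illustration of the arithmetic, not of the circuits themselves; the function and mode names are chosen for the example:

```python
def predict_b_block(mode, fwd_ref, bwd_ref):
    """Form the predicted block for a B-picture macroblock.
    fwd_ref / bwd_ref are already motion-compensated reference
    blocks (flat lists of pixel values)."""
    if mode == "intra":
        return None                              # no prediction needed
    if mode == "forward":
        return fwd_ref                           # I-picture reference (area 86a)
    if mode == "backward":
        return bwd_ref                           # P-picture reference (area 86b)
    # bidirectional: average of forward and backward predictions
    return [(f + b) // 2 for f, b in zip(fwd_ref, bwd_ref)]

fwd = [100, 102, 104]
bwd = [104, 106, 100]
assert predict_b_block("bidirectional", fwd, bwd) == [102, 104, 102]
assert predict_b_block("forward", fwd, bwd) == fwd
```

The decoded B-picture is then the sum of this predicted block and the difference data output by the IDCT circuit.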
While the description given above concentrates on the luminance signal, the color-difference signals are likewise processed and transmitted in units of the macroblocks shown in Figs. 8 to 11, in the same way as the luminance signal. It should be noted that, as the motion vector used in processing a color-difference signal, the vertical and horizontal components of the motion vector of the associated luminance signal are each cut in half.

Fig. 13 is a diagram showing the quality of encoded pictures in terms of SNR (signal-to-noise ratio). As shown in the figure, the picture quality varies with the picture type. More specifically, the I- and P-pictures transmitted have high quality, whereas the B-pictures have lower quality. Deliberately varying the picture quality as shown in Fig. 13 is a technique that exploits characteristics of human visual perception. That is to say, by varying the quality in this manner, the overall quality appears better than it would if all pictures were encoded with the same average quality. The control that varies the picture quality is carried out by the quantization circuit 57 of the encoder 18 shown in Fig. 7.
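As a rough illustration of how an SNR figure per picture, of the kind plotted in Fig. 13, can be computed, the following sketch calculates a peak signal-to-noise ratio for 8-bit pixel data. The pixel values are made up for the example; finer quantization of I- and P-pictures yields smaller reconstruction error and hence a higher SNR than the coarser quantization of B-pictures:

```python
import math

def psnr(original, decoded, peak=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of 8-bit pixel values."""
    assert len(original) == len(decoded)
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical pictures
    return 10 * math.log10(peak * peak / mse)

i_pic = [100, 120, 140, 160]
i_dec = [101, 119, 141, 159]   # fine quantization: small error
b_dec = [104, 116, 144, 156]   # coarse quantization: larger error
assert psnr(i_pic, i_dec) > psnr(i_pic, b_dec)
```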
Figs. 14 and 15 are diagrams showing the configuration of the transcoder 101 provided by the present invention, with Fig. 15 showing the configuration of Fig. 14 in more detail. The transcoder 101 converts the GOP structure and the bit rate of the encoded video bitstream supplied to the video decoding apparatus 102 into a GOP structure and a bit rate which the operator specifies by way of a host machine. In order to make the explanation easy to understand, the transcoder 101 is explained on the assumption that three other transcoders, each having the same functions as the transcoder 101, are connected at stages preceding the transcoder 101. That is, in order to convert the GOP structure and the bit rate of a bitstream into one of a plurality of GOP structures and one of a plurality of bit rates, respectively, first-, second- and third-generation transcoders are connected in series, and the fourth-generation transcoder 101 shown in Fig. 15 is connected after the series connection of the first-, second- and third-generation transcoders. It should be noted that the first-, second- and third-generation transcoders themselves are not shown in Fig. 15.

In the following description of the present invention, the encoding process carried out by the first-generation transcoder is referred to as the first-generation encoding process, and the encoding process carried out by the second-generation transcoder connected after the first-generation transcoder is referred to as the second-generation encoding process. Likewise, the encoding process carried out by the third-generation transcoder connected after the second-generation transcoder is referred to as the third-generation encoding process, and the encoding process carried out by the fourth-generation transcoder, that is, the transcoder 101 shown in Fig. 15, is referred to as the fourth-generation encoding process. In addition, the coding parameters used in and obtained as a result of the first-generation encoding process are referred to as first-generation coding parameters, and the coding parameters used in and obtained as a result of the second-generation encoding process are referred to as second-generation coding parameters. Similarly, the coding parameters used in and obtained as a result of the third-generation encoding process are referred to as third-generation coding parameters, and the coding parameters used in and obtained as a result of the fourth-generation encoding process are referred to as fourth-generation coding parameters or current coding parameters.
First of all, the third-generation encoded video bitstream ST(3rd), which is produced by the third-generation transcoder and supplied to the fourth-generation transcoder 101 shown in Fig. 15, is explained. The third-generation encoded video bitstream ST(3rd) is the encoded video bitstream obtained as a result of the third-generation encoding process carried out by the third-generation transcoder provided at the stage preceding the fourth-generation transcoder 101. In the third-generation encoded video bitstream ST(3rd), the coding parameters produced in the third-generation encoding process are described on the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the bitstream by means of the sequence_header() function, the sequence_extension() function, the group_of_pictures_header() function, the picture_header() function, the picture_coding_extension() function, the picture_data() function, the slice() function and the macroblock() function. The fact that the third-generation coding parameters used in the third-generation encoding process are described in the bitstream obtained from that process merely conforms to the MPEG-2 standard and discloses nothing novel.

What is distinctive about the transcoder 101 provided by the present invention is not the fact that the third-generation coding parameters are described in the third-generation encoded video bitstream ST(3rd), but the fact that the first- and second-generation coding parameters, obtained as results of the first- and second-generation encoding processes respectively, are also included in the third-generation encoded video bitstream ST(3rd). The first- and second-generation coding parameters are described as a history_stream() in the user-data area of the picture layer of the third-generation encoded video bitstream ST(3rd). In the present invention, the history stream described in the user-data area of the picture layer of the third-generation encoded video bitstream ST(3rd) is referred to as "history information", and the parameters described as the history stream are referred to as "history parameters". In another way of naming the parameters, the third-generation coding parameters described in the third-generation encoded video bitstream ST(3rd) may also be called the current coding parameters. In that case, the first- and second-generation coding parameters described as history_stream() in the user-data area of the picture layer are called "past coding parameters", because, seen from the third-generation encoding process, the first- and second-generation encoding processes are processes carried out in the past.

The reason why the first- and second-generation coding parameters, obtained as results of the first- and second-generation encoding processes respectively, are described in the third-generation encoded video bitstream ST(3rd) in addition to the third-generation coding parameters is to avoid deterioration of the picture quality even when the GOP structure and the bit rate of the encoded bitstream are changed repeatedly in transcoding. For example, a picture may be encoded as a P-picture in the first-generation encoding process and, in order to change the GOP structure of the first-generation encoded video bitstream, encoded as a B-picture in the second-generation encoding process. In order to further change the GOP structure of the second-generation encoded video bitstream, the picture is then encoded as a P-picture once more in the third-generation encoding process. Since encoding and decoding processes based on the MPEG standard are not 100% reversible, the picture quality deteriorates each time such processes are carried out, as is generally known. In a case like this, coding parameters such as the quantization scale, the motion vector and the prediction mode are not computed anew in the third-generation encoding process. Instead, the coding parameters such as the quantization scale, the motion vector and the prediction mode produced in the first-generation encoding process are reused. The coding parameters newly produced in the first-generation encoding process obviously have a precision higher than that of their counterparts newly produced in the third-generation encoding process. Thus, by reusing the parameters produced in the first-generation encoding process, the degree of deterioration of the picture quality can be reduced even if the encoding and decoding processes are carried out repeatedly.
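The benefit of reusing the first-generation quantization scale rather than computing a new one can be seen in a toy requantization example. The simplified nearest-integer quantizer and the numbers below are purely illustrative of why requantizing with a new scale compounds the error while requantizing with the old scale does not:

```python
def quantize(coeff, scale):
    # simplified nearest-integer quantizer (illustrative, not the
    # exact MPEG quantization arithmetic)
    return round(coeff / scale)

def dequantize(level, scale):
    return level * scale

# First-generation encoding with quantization scale 7
orig = 123
level1 = quantize(orig, 7)                      # 18
rec1 = dequantize(level1, 7)                    # 126: one generation of loss

# Re-encoding with a *new* scale 10 adds fresh error on top...
rec3_new = dequantize(quantize(rec1, 10), 10)   # 130
# ...while reusing the first-generation scale reproduces rec1 exactly.
rec3_reuse = dequantize(quantize(rec1, 7), 7)   # 126

assert rec3_reuse == rec1
assert abs(rec3_new - orig) > abs(rec3_reuse - orig)
```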
The decoding and encoding processes carried out by the fourth-generation transcoder 101 shown in Fig. 15 are explained in detail below as an illustration of processing according to the present invention. The video decoding apparatus 102 decodes the video signal included in the third-generation encoded video bitstream ST(3rd) by using the third-generation coding parameters so as to produce a decoded baseband digital video signal. In addition, the video decoding apparatus 102 also decodes the first- and second-generation coding parameters described as the history stream in the user-data area of the picture layer of the third-generation encoded video bitstream ST(3rd). The configuration and the operation of the video decoding apparatus 102 are described in detail by referring to Fig. 16 below.

Fig. 16 is a diagram showing the detailed configuration of the video decoding apparatus 102. As shown in the figure, the video decoding apparatus 102 comprises a buffer 81 for buffering the supplied encoded bitstream, a variable-length decoding circuit 112 for carrying out variable-length decoding on the encoded bitstream, an inverse quantization circuit 83 for carrying out inverse quantization on the variable-length-decoded data in accordance with the quantization scale supplied by the variable-length decoding circuit 112, an IDCT circuit 84 for carrying out an inverse discrete cosine transform on the inversely quantized DCT coefficients, a processor 85 for carrying out motion compensation processing, a frame memory unit 86 and a motion compensation circuit 87.

In order to decode the third-generation encoded video bitstream ST(3rd), the variable-length decoding circuit 112 extracts the third-generation coding parameters described on the picture layer, the slice layer and the macroblock layer of the third-generation encoded video bitstream ST(3rd). Typically, the third-generation coding parameters extracted by the variable-length decoding circuit 112 include picture_coding_type representing the picture type, quantiser_scale_code representing the quantization-scale step size, macroblock_type representing the prediction mode, motion_vector representing a motion vector, frame/field_motion_type representing a frame or field prediction mode, and dct_type representing a frame-DCT mode or field-DCT mode. The quantiser_scale_code coding parameter is fed to the inverse quantization circuit 83. The remaining coding parameters, such as picture_coding_type, macroblock_type, motion_vector, frame/field_motion_type and dct_type, are fed to the motion compensation circuit 87.
The variable-length decoding circuit 112 not only extracts the third-generation coding parameters required for decoding the third-generation encoded video bitstream ST(3rd) as described above, but also extracts, from the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the third-generation encoded video bitstream ST(3rd), all the other third-generation coding parameters to be transmitted as third-generation history information to a fifth-generation transcoder connected after the transcoder 101 shown in Fig. 15. It goes without saying that the third-generation coding parameters used in the third-generation processing, such as the picture_coding_type, quantiser_scale_code, macroblock_type, motion_vector, frame/field_motion_type and dct_type mentioned above, are also included in the third-generation history information. Which coding parameters are to be extracted as history information is determined in advance by the operator and the host machine in accordance with the transmission capacity.

In addition, the variable-length decoding circuit 112 also extracts the user data described in the user-data area of the picture layer of the third-generation encoded video bitstream ST(3rd), and supplies this user data to the history decoding apparatus 104.

The history decoding apparatus 104 extracts the first- and second-generation coding parameters described as history information from the user data extracted from the picture layer of the third-generation encoded video bitstream ST(3rd). More specifically, by analyzing the syntax of the user data received from the variable-length decoding circuit 112, the history decoding apparatus 104 can detect the History_Data_ID described in the user data and use it to extract the converted_history_stream(). Then, by removing the 1-bit flag bits inserted into the converted_history_stream() at predetermined intervals, the history decoding apparatus 104 can obtain the history_stream(). By analyzing the syntax of the history_stream(), the history decoding apparatus 104 can extract the first- and second-generation coding parameters recorded in the history_stream(). The configuration and the operation of the history decoding apparatus 104 will be described in detail later.
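The removal of the interval-inserted flag bits from the converted_history_stream() can be sketched as follows. The actual insertion interval belongs to the syntax described later, so the 8-bit interval used here is only an assumption made for illustration:

```python
def strip_marker_bits(bits, interval=8):
    """Remove the 1-bit flags inserted at fixed intervals into
    converted_history_stream(), recovering history_stream().
    `interval` (data bits per flag bit) is an illustrative assumption."""
    out = []
    i = 0
    while i < len(bits):
        out.extend(bits[i:i + interval])   # copy the data bits
        i += interval + 1                  # skip the inserted flag bit
    return out

data = [1, 0, 1, 1, 0, 0, 1, 0] * 2        # 16 history bits
conv = data[:8] + [1] + data[8:] + [1]     # a flag bit after every 8 data bits
assert strip_marker_bits(conv) == data
```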
In order to finally supply the first-, second- and third-generation coding parameters to the video encoder 106, which carries out the fourth-generation encoding process, the history information multiplexing apparatus 103 multiplexes the first-, second- and third-generation coding parameters into the baseband video data decoded by the video decoding apparatus 102. The history information multiplexing apparatus 103 receives the baseband video data from the video decoding apparatus 102, the third-generation coding parameters from the variable-length decoding circuit 112 employed in the video decoding apparatus 102, and the first- and second-generation coding parameters from the history decoding apparatus 104, and multiplexes the first-, second- and third-generation coding parameters into the baseband video data. The baseband video data with the first-, second- and third-generation coding parameters multiplexed therein is then fed to the coding parameter separating apparatus 105.
Next, the technique for multiplexing the first-, second- and third-generation coding parameters into the baseband video data is explained by referring to Figs. 17 and 18. Fig. 17 is a diagram showing a macroblock defined by the MPEG standard, comprising a luminance-signal portion and a color-difference-signal portion each composed of 16 pixels × 16 pixels. One of the 16 × 16-pixel portions is composed of the sub-blocks Y[0], Y[1], Y[2] and Y[3] of the luminance signal, while the other portion is composed of the sub-blocks Cr[0], Cr[1], Cb[0] and Cb[1] of the color-difference signals. The sub-blocks Y[0], Y[1], Y[2] and Y[3] as well as the sub-blocks Cr[0], Cr[1], Cb[0] and Cb[1] each comprise 8 pixels × 8 pixels.

Fig. 18 is a diagram showing the format of the video data. This format, defined in accordance with Recommendation 601 of the ITU, is the so-called D1 format used in the broadcasting industry. Since the D1 format was standardized as a format for transmitting video data, one pixel of video data is represented by 10 bits.

Baseband video data decoded in accordance with the MPEG standard, however, is 8 bits long. In the transcoder provided by the present invention, the baseband video data decoded in accordance with the MPEG standard is transmitted by using the 8 high-order bits D9 to D2 of the 10-bit D1 format shown in Fig. 18. Accordingly, the 2 low-order bits D1 and D0 of the D1 format are left unallocated by the 8-bit decoded video data. The transcoder provided by the present invention utilizes the unallocated area formed by these unallocated bits to transmit the history information.

The data block shown in Fig. 18 is a data block used for transmitting one pixel of each of the 8 sub-blocks of a macroblock. Since each sub-block actually comprises 64 (= 8 × 8) pixels as described above, 64 data blocks like the one shown in Fig. 18 are required to transmit the amount of data of one macroblock comprising 8 sub-blocks. As described above, a YUV macroblock comprises 8 sub-blocks each composed of 64 (= 8 × 8) pixels, so a YUV macroblock comprises 8 × 64 pixels = 512 pixels. Since 2 bits of each pixel are left unallocated as described above, a YUV macroblock provides 512 pixels × 2 unallocated bits/pixel = 1,024 unallocated bits. Incidentally, one generation of history information is 256 bits long. Therefore, up to four (= 1,024/256) generations of history information can be superimposed on the video data of one YUV macroblock. In the example shown in Fig. 18, the first-, second- and third-generation history information is superimposed on the video data of the macroblock by utilizing the 2 low-order bits D1 and D0.
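The superimposition of history bits onto the unallocated low-order bits D1 and D0 of the 10-bit D1 samples can be sketched as follows. This is a simplified model of the multiplexing performed by the history information multiplexing apparatus 103; the helper names are illustrative:

```python
def superimpose_history(pixels10, history_bits):
    """Pack history bits, two per pixel, into the unused low-order
    bits D1 and D0 of 10-bit D1-format samples; the decoded 8-bit
    video occupies bits D9..D2."""
    out = []
    for i, p in enumerate(pixels10):
        b1 = history_bits[2 * i] if 2 * i < len(history_bits) else 0
        b0 = history_bits[2 * i + 1] if 2 * i + 1 < len(history_bits) else 0
        out.append((p & ~0b11) | (b1 << 1) | b0)
    return out

def extract_history(pixels10, nbits):
    bits = []
    for p in pixels10:
        bits += [(p >> 1) & 1, p & 1]   # D1 then D0 of each sample
    return bits[:nbits]

# A 512-pixel YUV macroblock offers 512 * 2 = 1024 spare bits, so four
# 256-bit generations of history (1024 / 256) fit in one macroblock.
video = [v << 2 for v in range(64)]        # 8-bit video placed in D9..D2
hist = [(i * 7) % 2 for i in range(128)]   # 128 illustrative history bits
mixed = superimpose_history(video, hist)
assert extract_history(mixed, 128) == hist
assert [p >> 2 for p in mixed] == list(range(64))   # video bits untouched
```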
The coding parameter separating apparatus 105 extracts the baseband video data from the 8 high-order bits of the D1-format data transmitted to it, and the history information from the 2 low-order bits. In the example shown in Fig. 15, the coding parameter separating apparatus 105 extracts the baseband video data from the transmitted data and supplies the baseband video data to the video encoder 106. At the same time, the coding parameter separating apparatus 105 extracts the history information, comprising the first-, second- and third-generation coding parameters, from the transmitted data and supplies this history information to the video encoder 106 and the history encoding apparatus 107.

In the embodiment shown in Fig. 15, the baseband video data supplied to the video encoder 106 thus carries the history information of the first-, second- and third-generation coding parameters superimposed on it. By selectively reusing pieces of this history information, the video encoder 106 can therefore carry out the fourth-generation encoding process in such a way that the degree of deterioration of the picture quality is reduced.
Fig. 19 is a diagram showing the configuration of the encoder 121 employed in the video encoder 106. As shown in the figure, the encoder 121 comprises a motion vector detection circuit 50, a prediction mode switching circuit 52, a processor 53, a DCT mode switching circuit 55, a DCT circuit 56, a quantization circuit 57, a variable-length coding circuit 58, a transmission buffer 59, an inverse quantization circuit 60, an inverse DCT circuit 61, a processor 62, a frame memory 63 and a motion compensation circuit 64. The functions of these circuits are almost the same as those employed in the encoder 18 shown in Fig. 7, so their description is not repeated here. The following description concentrates on the differences between the encoder 121 and the encoder 18 shown in Fig. 7.

In addition, the controller 70 also receives the pieces of history information output by the coding parameter separating apparatus 105, so that it can encode the reference picture by reusing this history information. The functions of the controller 70 are described below.

First of all, the controller 70 forms a judgment as to whether the picture type of the reference picture, which is determined from the GOP structure specified by the operator or the host machine, matches a picture type included in the history information. That is to say, the controller 70 forms a judgment as to whether the reference picture was encoded in the past with the same picture type as the picture type now assigned to it.

The judgment described above can be explained by using the example shown in Fig. 15. The controller 70 forms a judgment as to whether the picture type assigned to the reference picture in the fourth-generation encoding process is identical to the picture type of the reference picture in the first-generation encoding process, the picture type of the reference picture in the second-generation encoding process, or the picture type of the reference picture in the third-generation encoding process.

If the outcome of the judgment indicates that the picture type assigned to the reference picture in the fourth-generation encoding process is different from the picture types of the reference picture in all the previous generations of encoding processes, the controller 70 carries out a normal encoding process. This outcome means that the reference picture was never encoded, during the first-, second- and third-generation encoding processes, with the picture type now assigned to it in the fourth-generation encoding process. On the other hand, if the outcome of the judgment indicates that the picture type assigned to the reference picture in the fourth-generation encoding process is identical to the picture type of the reference picture in one of the previous generations of encoding processes, the controller 70 carries out a parameter-reuse encoding process by reusing the parameters of that previous generation. This outcome means that the reference picture was already encoded, during the first-, second- or third-generation encoding process, with the picture type now assigned to it in the fourth-generation encoding process.
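The picture-type judgment formed by the controller 70 can be sketched as a simple lookup over the generations recorded in the history information. The data layout below is an assumption made for illustration only:

```python
def select_encoding_mode(assigned_type, history):
    """Decide between normal encoding and parameter-reuse encoding.
    `history` maps a generation name to the coding parameters
    (including the picture type) recorded for that generation."""
    for generation, params in history.items():
        if params["picture_type"] == assigned_type:
            return ("reuse", generation)   # reuse that generation's parameters
    return ("normal", None)                # type never used before

hist = {"1st": {"picture_type": "I"},
        "2nd": {"picture_type": "P"},
        "3rd": {"picture_type": "B"}}
# The fourth generation assigns an I-picture: this matches the first
# generation, so its parameters are reused.
assert select_encoding_mode("I", hist) == ("reuse", "1st")
# With no matching history at all, normal encoding is carried out.
assert select_encoding_mode("P", {}) == ("normal", None)
```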
First of all, the normal encoding process carried out by the controller 70 is explained. In order to allow the controller 70 to make a decision as to which of the frame prediction mode and the field prediction mode is to be selected, the motion vector detection circuit 50 detects the prediction error in the frame prediction mode and the prediction error in the field prediction mode, and supplies the prediction-error values to the controller 70. The controller 70 compares the values with each other so as to select the prediction mode with the smaller prediction error. The prediction mode switching circuit 52 then carries out signal processing corresponding to the prediction mode selected by the controller 70, and supplies the signal obtained as a result to the processor 53. When the frame prediction mode is selected, the prediction mode switching circuit 52 carries out signal processing so as to supply the luminance signal to the processor 53 as it is received, and carries out signal processing on the color-difference signals so that odd-field lines and even-field lines are mixed, as described earlier by referring to Fig. 8. When the field prediction mode is selected, on the other hand, the prediction mode switching circuit 52 carries out signal processing on the luminance signal so that the luminance sub-blocks Y[1] and Y[2] comprise odd-field lines while the luminance sub-blocks Y[3] and Y[4] comprise even-field lines, and carries out signal processing on the color-difference signals so that the upper four lines comprise odd-field lines while the lower four lines comprise even-field lines, as described earlier by referring to Fig. 9.

In addition, in order to allow the controller 70 to make a decision as to which of the intra-picture prediction mode, the forward prediction mode, the backward prediction mode and the bidirectional prediction mode is to be selected, the motion vector detection circuit 50 produces the prediction error for each of these prediction modes and supplies the prediction errors to the controller 70. The controller 70 selects, from the forward prediction mode, the backward prediction mode and the bidirectional prediction mode, the mode with the smallest prediction error as the inter-picture prediction mode. Then, the controller 70 compares the smallest prediction error of the selected inter-picture prediction mode with the prediction error of the intra-picture prediction mode, and selects whichever of the selected inter-picture prediction mode and the intra-picture prediction mode has the smaller prediction error as the prediction mode. In more detail, if the prediction error of the intra-picture prediction mode is found to be smaller, the intra-picture prediction mode is established. If the prediction error of the inter-picture prediction mode is found to be smaller, on the other hand, the selected one of the forward prediction, backward prediction and bidirectional prediction modes, that is, the one with the smallest prediction error, is established. The controller 70 then controls the processor 53 and the motion compensation circuit 64 to operate in the established prediction mode.

Furthermore, in order to allow the controller 70 to make a decision as to which of the frame DCT mode and the field DCT mode is to be selected, the DCT mode switching circuit 55 converts the data of the four luminance sub-blocks both into a signal with the frame DCT mode format, in which odd-field and even-field lines are mixed, and into a signal with the field DCT mode format, in which odd-field and even-field lines are separated, and supplies the converted signals to the DCT circuit 56. The DCT circuit 56 computes the coding efficiency of DCT processing of the signal with the mixed odd-field and even-field lines and the coding efficiency of DCT processing of the signal with the separated odd-field and even-field lines, and supplies the computed coding efficiencies to the controller 70. The controller 70 compares the coding efficiencies with each other so as to select the DCT mode with the higher efficiency, and then controls the DCT mode switching circuit 55 to operate in the selected DCT mode.

The controller 70 also receives a signal representing the target bit rate, that is, the bit rate desired by the operator or specified by the host machine, and a signal representing the amount of data buffered in the transmission buffer 59, that is, the size of the remaining free area of the buffer 59, and produces a feedback_q_scale_code for controlling the quantization step size used by the quantization circuit 57 in accordance with the target bit rate and the size of the remaining free area of the buffer 59. The feedback_q_scale_code is a control signal produced in accordance with the size of the remaining free area of the transmission buffer 59 so as to prevent the buffer 59 from overflowing or underflowing and to make the bitstream output from the transmission buffer 59 at the target bit rate. More specifically, if the amount of data buffered in the transmission buffer 59 becomes small, for example, the quantization step size is reduced so that the number of bits of the next picture to be encoded increases. If the amount of data buffered in the transmission buffer 59 becomes large, on the other hand, the quantization step size is increased so that the number of bits of the next picture to be encoded decreases. It should be noted that the quantization step size is proportional to the feedback_q_scale_code. That is to say, when the feedback_q_scale_code increases, the quantization step size also increases, and when the feedback_q_scale_code decreases, the quantization step size also decreases.
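The feedback quantization control can be sketched as follows. The linear mapping from buffer occupancy to step size and the 1-to-31 clamping range are illustrative simplifications of the feedback_q_scale_code behaviour described above, not the exact control law:

```python
def feedback_q_scale(buffer_occupancy, buffer_size):
    """Map transmission-buffer occupancy to a quantization step size:
    a fuller buffer yields a coarser step (fewer bits for the next
    picture), an emptier buffer a finer one."""
    fullness = buffer_occupancy / buffer_size
    return max(1, min(31, round(1 + 30 * fullness)))

# Buffer nearly empty -> small step -> more bits for the next picture.
assert feedback_q_scale(1_000, 100_000) < feedback_q_scale(90_000, 100_000)
# The step size never leaves the permitted range.
assert 1 <= feedback_q_scale(0, 100_000) <= 31
assert 1 <= feedback_q_scale(100_000, 100_000) <= 31
```

The clamping prevents the control loop from demanding step sizes the quantizer cannot realize, which is what keeps the buffer from overflowing or underflowing at the extremes.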
Below, explain with code converter 101 to be that the parameter of utilizing coding parameter again of feature is used encoding process again.For example is understood easily, suppose that reference picture is encoded as the I-image in the first generation encoding process, P-image in the second generation encoding process and the B-image in the third generation encoding process, and be encoded to I-image in current the 4th generation encoding process once more.In this case, since before the reference picture with distribute to the 4th generation encoding process the desired image type of I-image in first generation encoding process, encode, controller 70 utilizes first generation coding parameter rather than utilizes the new coding parameter that produces from the video data that provides to carry out encoding process so.The representative of this coding parameter that will in the 4th generation encoding process, utilize again comprise quantizer scale code, the expression predictive mode of expression quantifications-scale step sizes macro block (mb) type, expression motion vector motion vector, expression frame prediction mode or field prediction pattern frame/field type of sports and represent frame DCT pattern or the dct type of field DCT pattern.Controller 70 no longer utilizes all coding parameters that receive as historical information.But 70 utilizations of controller are used as the coding parameter that utilizes judgement again and are newly produced the previous coding parameter to its coding parameter that is not suitable for utilizing again.
Below, explain that by concentrating on the difference of handling with aforementioned normal encoding the parameter of utilizing coding parameter again uses encoding process again.In normal encoding was handled, motion vector detecting circuit 50 detected the motion vector of reference picture.On the other hand, use in the encoding process in the parameter of utilizing coding parameter again, motion vector detecting circuit 50 does not detect the motion vector of reference picture again.But motion vector detecting circuit 50 utilizes the motion-vector that transmits as first generation historical information again.Use the reason of first generation motion vector will be performed as follows explanation.Because as the encoding process of third generation coded bit stream at least three encoding process of base band video data experience of gained as a result, then compare its picture quality with original video data obviously bad.The motion vector that detects from the video data with bad picture quality is with regard to inaccuracy.More specifically, offer as first generation historical information the 4th from generation to generation the motion vector of code converter 101 have the accuracy that is higher than the motion vector that in the 4th generation encoding process, detects certainly.By utilize again as the 4th generation the motion vector that receives of coding parameter, picture quality does not worsen during the 4th generation encoding process.Controller 70 is presented the motion vector that receives as first generation historical information to motion compensation 64 and variable length code circuit 58 so that with the motion vector that is made in the reference picture of encoding in the 4th generation encoding process.
In normal process, prediction error in the motion vector detecting circuit 50 detection frame prediction modes and the prediction error in the field prediction pattern are so that selection or frame prediction mode or field prediction pattern.On the other hand, using in the coding parameter based on the parameter of utilizing coding parameter again, motion vector detecting circuit 50 neither detects the also prediction error in the checkout area predictive mode not of prediction error in the frame prediction mode again.But the frame/field type of sports that receives as first generation historical information is so that represent to utilize frame prediction mode or field prediction pattern again.This is because the prediction error of each predictive mode of detecting in first generation encoding process has the accuracy of the prediction error that is higher than each predictive mode that detects in the 4th generation encoding process.Therefore, the predictive mode of selecting based on the prediction error that all has pinpoint accuracy will allow to carry out more excellent encoding process.More specifically, controller 70 control signal of presenting frame/field type of sports that expression receives as first generation historical information arrives predictive mode commutation circuit 53.This control signal drives predictive mode commutation circuit 52 and carries out signal processing according to the frame that utilizes again/field type of sports.
In the normal encoding process, the motion vector detecting circuit 50 also detects the prediction error in each of the intra-picture prediction mode, the forward prediction mode, the backward prediction mode, and the forward-and-backward (bidirectional) prediction mode in order to select one of these modes. In the parameter-reuse encoding process, on the other hand, the motion vector detecting circuit 50 does not detect the prediction errors of these modes. Instead, the prediction mode indicated by the macroblock type received as first-generation history information is selected from among the intra-picture, forward, backward and bidirectional prediction modes. This is because the prediction error of each mode detected in the first-generation encoding process is more accurate than the prediction error of each mode that would be detected in the fourth-generation encoding process, and a prediction mode selected on the basis of the more accurate prediction errors allows a more efficient encoding process to be carried out. More specifically, the controller 70 selects the prediction mode indicated by the macroblock type included in the first-generation history information and controls the processing circuit 53 and the motion compensation circuit 64 so that they operate in the selected prediction mode.
In the normal encoding process, the DCT mode switching circuit 55 supplies both a signal converted into the frame-DCT format and a signal converted into the field-DCT format to the DCT circuit 56 so that the coding efficiency in the frame DCT mode can be compared with the coding efficiency in the field DCT mode. In the parameter-reuse encoding process, on the other hand, neither the signal converted into the frame-DCT format nor the signal converted into the field-DCT format is generated; only the processing in the DCT mode indicated by the DCT type included in the first-generation history information is carried out. More specifically, the controller 70 uses the DCT type included in the first-generation history information to control the DCT mode switching circuit 55 so that signal processing is carried out according to the DCT mode indicated by that DCT type.
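The reuse decisions described above (motion vectors, frame/field motion type, macroblock type and DCT type taken from the history rather than re-detected) can be summarized in a short behavioural sketch. The fragment below is illustrative only; the names HistoryParams and choose_parameters are assumptions for exposition, not part of the circuit implementation described in the figures.

```python
# Illustrative sketch: when first-generation history is available, the
# encoder reuses its decisions instead of repeating the searches.
from dataclasses import dataclass

@dataclass
class HistoryParams:            # hypothetical container for one macroblock's history
    motion_vectors: tuple       # e.g. ((dx, dy), ...)
    frame_field_motion: str     # "frame" or "field" prediction
    macroblock_type: str        # "intra", "forward", "backward", "bidirectional"
    dct_type: str               # "frame" or "field" DCT

def choose_parameters(history, full_search):
    """Reuse the first-generation decisions when history exists; otherwise
    fall back to the normal (expensive) detection, modelled by full_search."""
    if history is not None:
        return (history.motion_vectors, history.frame_field_motion,
                history.macroblock_type, history.dct_type)
    return full_search()        # normal encoding: detect everything anew
```

Under this sketch, a macroblock whose history carries a forward-predicted, frame-DCT decision is encoded with exactly those choices in the fourth-generation pass, and the motion-vector search is skipped entirely.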
In the normal encoding process, the controller 70 controls the quantization step size used in the quantization circuit 57 according to the target bit rate specified by the operator or a host computer and the size of the free area remaining in the transmission buffer 59. In the parameter-reuse encoding process, on the other hand, the controller 70 controls the quantization step size used in the quantization circuit 57 according to the target bit rate specified by the operator or the host computer, the size of the free area remaining in the transmission buffer 59 and, in addition, the past quantizer scales included in the history information. It should be noted that, in the following description, a past quantizer scale included in the history information is referred to as history_q_scale_code. In the history stream described later, the quantizer scale is referred to as quantiser_scale_code.
First, the controller 70 generates feedback_q_scale_code, representing the current quantizer scale, in the same way as in the normal encoding process. feedback_q_scale_code is set at a value determined from the size of the free area remaining in the transmission buffer 59 so that neither overflow nor underflow occurs in the transmission buffer 59. Then, history_q_scale_code, representing a previous quantizer scale included in the first-generation history stream, is compared with feedback_q_scale_code, representing the current quantizer scale, to determine which quantizer scale is larger. It should be noted that a large quantizer scale means a large quantization step. If feedback_q_scale_code, representing the current quantizer scale, is found to be larger than the largest history_q_scale_code among the previous quantizer scales, the controller 70 supplies feedback_q_scale_code, representing the current quantizer scale, to the quantization circuit 57. If, on the other hand, the history_q_scale_code representing the largest previous quantizer scale is found to be larger than feedback_q_scale_code, representing the current quantizer scale, the controller 70 supplies that history_q_scale_code to the quantization circuit 57. That is, the controller 70 selects the largest among the previous quantizer scales included in the history information and the current quantizer scale derived from the size of the free area remaining in the transmission buffer 59. In other words, the controller 70 controls the quantization circuit 57 so that quantization is carried out with the largest of the quantization steps used in the current encoding process (the fourth-generation encoding process) and the previous encoding processes (the first-, second- and third-generation encoding processes). The reason for this is described below.
Assume that the bit rate of the stream generated in the third-generation encoding process is 4 Mbps and that the target bit rate set for the encoder 121 carrying out the fourth-generation encoding process is 15 Mbps. Such a target bit rate, higher than the previous bit rate, cannot actually be attained simply by decreasing the quantization step. This is because carrying out the current encoding process with a small quantization step on a picture that was previously encoded with a large quantization step never improves the quality of the picture; it merely increases the resulting number of bits without contributing to the quality of the picture. Therefore, by using the largest of the quantization steps of the current encoding process (the fourth-generation process) and the previous encoding processes (the first-, second- and third-generation processes), the encoding can be carried out most efficiently.
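The selection rule for the quantizer scale can be expressed compactly: the encoder takes the maximum of the buffer-derived feedback scale and the largest scale found in the history. A minimal sketch, assuming the scale codes are plain integers; the function and variable names are illustrative, not taken from the patent:

```python
def select_q_scale(feedback_q_scale_code, history_q_scale_codes):
    """Return the quantizer scale to supply to the quantization circuit 57:
    the largest of the current feedback scale and all past scales.
    A larger scale code means a larger quantization step."""
    if not history_q_scale_codes:
        return feedback_q_scale_code          # normal encoding: no history
    return max(feedback_q_scale_code, max(history_q_scale_codes))

# Example: history from three earlier generations, current feedback scale 8.
# The largest past scale (14) wins, so re-quantization never uses a finer
# step than any earlier generation did.
print(select_q_scale(8, [10, 14, 12]))   # -> 14
print(select_q_scale(16, [10, 14, 12]))  # -> 16
```

This mirrors the comparison of history_q_scale_code against feedback_q_scale_code described above: whichever code is larger is the one fed to the quantization circuit.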
Next, the history decoding device 104 and the history encoding device 107 are explained by referring to Figure 15. As shown in the figure, the history decoding device 104 comprises a user data decoder 201, a converter 202 and a history decoder 203. The user data decoder 201 decodes user data supplied by the video decoding apparatus 102, the converter 202 converts the data output by the user data decoder 201, and the history decoder 203 reconstructs the history information from the data output by the converter 202.
The history encoding device 107, on the other hand, comprises a history formatter 211, a converter 212 and a user data formatter 213. The history formatter 211 formats the coding parameters of the three generations supplied by the coding parameter separating device 105, the converter 212 converts the data output by the history formatter 211, and the user data formatter 213 formats the data output by the converter 212 into user data.
The user data decoder 201 decodes the user data supplied by the video decoding apparatus 102 and supplies the decoded result to the converter 202. Details of the user data will be described later. At any rate, the user data represented by user_data() comprises a user-data start code and the user data proper. According to the MPEG standard, the occurrence of 23 consecutive "0" bits in the user data is prohibited in order to prevent a start code from being detected incorrectly. Since the history information may contain 23 or more consecutive "0" bits, it must be converted into converted_history_stream(), described later with reference to Figure 38. The component that carries out this conversion, by inserting "1" bits, is the converter 212 employed in the history encoding device 107. The converter 202 employed in the history decoding device 104, on the other hand, carries out the reverse of the conversion performed by the converter 212 of the history encoding device 107; that is, it carries out a conversion that deletes the inserted bits.
The history formatter 211 employed in the history encoding device 107, on the other hand, converts the coding parameters of the three generations supplied by the coding parameter separating device 105 into the history-information format. The history-information format may have a fixed length, as shown in Figures 40 to 46 described later, or a variable length, as shown in Figure 47, also described later.
The history information formatted by the history formatter 211 is converted by the converter 212 into converted_history_stream() in order to prevent a start code of user_data() from being detected incorrectly, as described above. That is to say, while the history information may contain 23 or more consecutive "0" bits, the MPEG standard prohibits the occurrence of 23 consecutive "0" bits in user data. The converter 212 therefore converts the history information, by inserting "1" bits, in accordance with this restriction into the converted history stream described later.
In accordance with the syntax of Figure 38, described later, the user data formatter 213 adds Data_ID and user_data_start_code to the converted_history_stream() supplied by the converter 212 so as to generate user data that can be inserted into video_stream(), and outputs this user data to the video encoding device 106.
Figure 20 is a block diagram showing a typical configuration of the history formatter 211. As shown in Figure 20, a code word converter 301 and a code length converter 305 receive item data and item-number data from the coding parameter separating device 105. The item data is a coding parameter, that is, specifically, a coding parameter to be transmitted as history information. The item-number data is information used to identify the stream that contains the coding parameter; examples of the item-number data are the name of a syntax and the name of a sequence header described later. The code word converter 301 converts the coding parameter supplied to it into a code word conforming to the specified syntax and outputs the code word to a barrel shifter 302. The barrel shifter 302 shifts the code word supplied to it by a shift amount corresponding to information supplied by an address generating circuit 306 and outputs the shifted code in byte units to a switch 303. The switch 303, which has as many contact pairs as the bit count delivered by the barrel shifter 302, is switched by a bit-select signal output by the address generating circuit 306. The code passed by the switch 303 is transferred to a RAM unit 304 to be stored there at a write address specified by the address generating circuit 306. The code stored in the RAM unit 304 is read out from a read address specified by the address generating circuit 306 and supplied to the converter 212 provided at the following stage. If necessary, the code read out from the RAM unit 304 is supplied through the switch 303 to the RAM unit 304 again to be stored once more.
As described above, the history formatter 211 functions as a so-called variable-length encoder that applies variable-length-coding processing, selected according to the syntax, to the coding parameters supplied to it and outputs the result of this variable-length-coding processing.
Figure 22 is a block diagram showing a typical configuration of the converter 212. In this typical configuration, an 8-bit D-type flip-flop (D-FF) 321 reads 8-bit data from a read address in a buffer memory 320, which is provided between the history formatter 211 and the converter 212, and holds the data. The read address is generated by a controller 326. The 8-bit data read out from the D-type flip-flop 321 is supplied to a stuffing circuit 323 and to an 8-bit D-type flip-flop 322, which holds it. The 8-bit data read out from the D-type flip-flop 322 is also supplied to the stuffing circuit 323. In more detail, the 8-bit data read out from the D-type flip-flop 321 is concatenated with the 8-bit data read out from the D-type flip-flop 322 to form 16-bit parallel data, which is then supplied to the stuffing circuit 323.
A barrel shifter 324 shifts the data supplied by the stuffing circuit 323 by a shift amount indicated by a signal received from the controller 326 and extracts 8-bit data from the shifted data. The extracted data is output to an 8-bit D-type flip-flop 325 to be held there. The data held in the D-type flip-flop 325 is finally output through a buffer memory 327 to the user data formatter 213 provided at the following stage. That is, the data is temporarily stored in the buffer memory 327, provided between the converter 212 and the user data formatter 213, at a write address generated by the controller 326.
Figure 23 is a block diagram showing a typical configuration of the stuffing circuit 323. In this configuration, the 16-bit data received from the D-type flip-flops 321 and 322 is supplied to contact points a of switches 331-1 to 331-16. The piece of data supplied to contact point a of switch 331-i is also supplied to contact point c of the switch adjacent to switch 331-i on its LSB side (the lower side in the figure). For example, the thirteenth bit from the LSB, supplied to contact point a of switch 331-13, is also supplied to contact point c of switch 331-12, the switch adjacent to switch 331-13 on its LSB side; likewise, the fourteenth bit from the LSB, supplied to contact point a of switch 331-14, is also supplied to contact point c of switch 331-13.
Contact point a of the switch 331-0 provided below switch 331-1, however, is left open, because there is no input bit below the one corresponding to the LSB to supply to it. Similarly, contact point c of the switch 331-16 provided above switch 331-15 is also left open, because there is no input bit above the one corresponding to the MSB to supply to it.
Data " 1 " are fed to the contact point b of switch 331-0 to 331-16.Decoder 332 is transformed into contact point b with one of switch 331-0 to 331-16 so that insert data " 1 " to this filling position at the filling position of the filling position signal indication that is received by controller 326.Be transformed into its contact point C and be transformed into its contact point a at the switch 331-0 to 331-16 of the switch LSB at filling position place end at the switch 331 of the switch MSB at filling position place end.
Figure 23 shows an example in which data "1" is inserted at the thirteenth bit from the LSB. In this case, therefore, the switches 331-0 to 331-12 are switched to their contact points c and the switches 331-14 to 331-16 are switched to their contact points a, while the switch 331-13 is switched to its contact point b.
With the configuration described above, the converter 212 shown in Figure 22 converts a 22-bit code into a 23-bit code that includes the inserted data "1", and outputs the 23-bit result of the conversion.
Figure 24 is a timing chart showing the pieces of data output by various components of the converter 212 shown in Figure 22. When the controller 326 employed in the converter 212 generates a read address, shown in Figure 24A, synchronously with the clock signal in byte units, the byte of data stored at that read address is read out from the buffer memory 320 and temporarily held in the D-type flip-flop 321. The data read out from the D-type flip-flop 321, shown in Figure 24B, is then supplied to the stuffing circuit 323 and to the D-type flip-flop 322 to be held there. The data read out from the D-type flip-flop 322, shown in Figure 24C, is concatenated with the data of Figure 24B read out from the D-type flip-flop 321, and the data obtained as the concatenation result, shown in Figure 24D, is supplied to the stuffing circuit 323.
Thus, at the timing of read address A1, the first byte D0 of the data of Figure 24B read out from the D-type flip-flop 321 is supplied to the stuffing circuit 323 as the first byte of the data shown in Figure 24D. Then, at the timing of read address A2, the second byte D1 of the data of Figure 24B read out from the D-type flip-flop 321, concatenated with the first byte D0 of the data of Figure 24C read out from the D-type flip-flop 322, is supplied to the stuffing circuit 323 as the second two bytes of the data shown in Figure 24D. Subsequently, at the timing of read address A3, the third byte D2 of the data of Figure 24B read out from the D-type flip-flop 321, concatenated with the second byte D1 of the data of Figure 24C read out from the D-type flip-flop 322, is supplied to the stuffing circuit 323 as the third two bytes of the data shown in Figure 24D, and so on.
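The role of the two flip-flops is simply to present, at every byte clock, the newly read byte concatenated with the byte held one clock earlier as a 16-bit window. A rough behavioural model, not a gate-level description, might look like this (taking the older byte as the high-order byte is an assumption made here for illustration):

```python
def sixteen_bit_windows(byte_stream):
    """Behavioural model of the D-FF 321/322 pair: at each byte clock the
    newly read byte (D-FF 321) is concatenated with the byte held one clock
    earlier (D-FF 322) to form 16-bit parallel data for the stuffing circuit.
    Taking the older byte as the high-order byte is an assumption."""
    previous = None
    for current in byte_stream:
        if previous is not None:
            yield (previous << 8) | current
        previous = current

# With bytes D0, D1, D2 as in Figure 24, the 16-bit windows are
# (D0,D1) and then (D1,D2).
print([hex(w) for w in sixteen_bit_windows([0xD0, 0xD1, 0xD2])])
```

Each successive window overlaps the previous one by one byte, which is what allows the stuffing circuit to insert a bit at an arbitrary position without losing data at the byte boundary.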
The data output from the D-type flip-flop 325 includes the data "1" inserted at the position following 22 bits of data. Therefore, the number of consecutive "0" bits never exceeds 22, even when all the bits between one inserted data "1" and the next data "1" are "0".
Figure 25 is a block diagram showing a typical configuration of the converter 202. In the converter 202, the components ranging from a D-type flip-flop 341 to a controller 346 are identical with the components of the converter 212 shown in Figure 22 ranging from the D-type flip-flop 321 to the controller 326, so the configurations before and after them are substantially the same. The converter 202 differs from the converter 212 in that, in the former, a deletion circuit 343 is employed in place of the latter's stuffing circuit 323. Otherwise, the configuration of the converter 202 is identical with the configuration of the converter 212 shown in Figure 22.
The deletion circuit 343 employed in the converter 202 deletes the bit at the deletion position indicated by a signal output by the controller 346. The deletion position corresponds to the stuffing position at which the stuffing circuit 323 described with reference to Figure 23 inserts the data "1".
The remaining operations of the converter 202 are identical with the operations carried out by the converter 212 shown in Figure 22.
Figure 26 is a block diagram showing a typical configuration of the deletion circuit 343. In this configuration, of the 16-bit data received from the D-type flip-flops 341 and 342, the 15 bits on the LSB side are supplied to contact points a of switches 351-0 to 351-14. The piece of data supplied to contact point a of switch 351-i (i = 1 to 14) is also supplied to contact point b of switch 351-(i-1); switch 351-i is the switch adjacent to switch 351-(i-1) on its MSB side (the upper side in the figure). For example, the thirteenth bit from the LSB, supplied to contact point a of switch 351-13 adjacent to switch 351-12 on its MSB side, is also supplied to contact point b of switch 351-12; likewise, the fourteenth bit from the LSB, supplied to contact point a of switch 351-14 adjacent to switch 351-13 on its MSB side, is also supplied to contact point b of switch 351-13. A decoder 352 deletes the bit at the deletion position indicated by the signal output by the controller 346, and the remaining 15 bits of data, excluding the deleted bit, are output.
Figure 26 shows the state in which the thirteenth input bit from the LSB (input bit 12) is deleted. In this case, therefore, the switches 351-0 to 351-11 are switched to their contact points a so that the first to twelfth input bits from the LSB (input bits 0 to 11) are output as they are. On the other hand, the switches 351-12 to 351-14 are switched to their contact points b so that the fourteenth to sixteenth input bits (input bits 13 to 15) are output as the thirteenth to fifteenth output bits (output bits 12 to 14) respectively. In this state, the thirteenth input bit from the LSB (input bit 12) is not connected to any output line.
16-bit data is supplied to the stuffing circuit 323 shown in Figure 23 and to the deletion circuit 343 shown in Figure 26 for the following reason: the data supplied to the stuffing circuit 323 is the result of concatenating the pieces of data output by the 8-bit D-type flip-flops 321 and 322 employed in the converter 212 shown in Figure 22, and, likewise, the data supplied to the deletion circuit 343 is the result of concatenating the pieces of data output by the 8-bit D-type flip-flops 341 and 342 employed in the converter 202 shown in Figure 25. The barrel shifter 324 employed in the converter 212 shown in Figure 22 shifts the 17-bit data supplied by the stuffing circuit 323 by a shift amount indicated by the signal received from the controller 326, finally extracting typically 8-bit data from the shifted data. Likewise, the barrel shifter 344 employed in the converter 202 shown in Figure 25 shifts the 15-bit data supplied by the deletion circuit 343 by a shift amount indicated by the signal received from the controller 346, finally extracting typically 8-bit data from the shifted data.
Figure 21 is a block diagram showing a typical configuration of the history decoder 203, which decodes data that has completed the history-format processing in the converter 202. The coding-parameter data supplied to the history decoder 203 by the converter 202 is transferred to a RAM unit 311 to be stored there at a write address specified by an address generating circuit 315. The address generating circuit 315 also outputs a read address with predetermined timing to the RAM unit 311. At that time, the data stored in the RAM unit 311 at the read address is output to a barrel shifter 312. The barrel shifter 312 shifts the data supplied to it by a shift amount corresponding to information supplied by the address generating circuit 315 and outputs the shifted data to an inverse code length converter 313 and an inverse code word converter 314.
The inverse code length converter 313 and the inverse code word converter 314 also receive the name of the syntax of the stream containing the coding parameters from the converter 202. The inverse code length converter 313 determines, in accordance with that syntax, the code length of the coding parameter contained in the data supplied to it, and outputs the code-length information to the address generating circuit 315.
The inverse code word converter 314, on the other hand, decodes (inversely encodes) the data supplied by the barrel shifter 312 in accordance with the syntax, and outputs the decoded result to the history information multiplexing device 103.
In addition, the inverse code word converter 314 also extracts information required to identify what kind of code word is included, that is, information required to determine the delimitations of the codes, and outputs this information to the address generating circuit 315. The address generating circuit 315 generates the write and read addresses and the shift amount in accordance with this information and the code length received from the inverse code length converter 313, outputting the write/read addresses to the RAM unit 311 and the shift amount to the barrel shifter 312.
Figure 27 is a block diagram showing another typical configuration of the converter 212. A counter 361 employed in this configuration counts the number of consecutive "0" bits in the data supplied to it and outputs the count result to the controller 326. When the number of consecutive "0" bits supplied to the counter 361 reaches 22, the controller 326 outputs a signal indicating the stuffing position to the stuffing circuit 323. At the same time, the controller 326 resets the counter 361 so that the counter 361 starts counting the number of consecutive "0" bits again from 0. The rest of the configuration and operation is identical with that of the converter 212 shown in Figure 22.
Figure 28 is a block diagram showing another typical configuration of the converter 202. A counter 371 employed in this configuration likewise counts the number of consecutive "0" bits in the data supplied to it and outputs the count result to the controller 346. When the number of consecutive "0" bits supplied to the counter 371 reaches 22, the controller 346 outputs a signal indicating the deletion position to the deletion circuit 343. At the same time, the controller 346 resets the counter 371 so that the counter 371 starts counting the number of consecutive "0" bits again from 0. The rest of the configuration and operation is identical with that of the converter 202 shown in Figure 25.
As described above, in the typical configurations shown in Figures 27 and 28, data "1" is inserted as a marker bit, and when the counter detects the predetermined code pattern comprising the predetermined number of consecutive "0" bits, the data "1" is deleted accordingly. The typical configurations shown in Figures 27 and 28 allow the processing to be carried out with higher efficiency than the configurations shown in Figures 22 and 25 respectively.
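Functionally, the converter pair amounts to a simple bit-level rule: the encoder-side converter 212 inserts a marker "1" after every 22 data bits, so that no run of "0" bits can reach 23, and the decoder-side converter 202 deletes the bit at the corresponding positions. The sketch below implements this periodic form on lists of bits; it is a functional illustration of the conversion, not a model of the circuits in Figures 22 to 28.

```python
def stuff_marker_bits(bits, period=22):
    """Converter-212 behaviour: insert a '1' marker bit after every
    `period` data bits, so no run of '0's can exceed `period`."""
    out = []
    for i, b in enumerate(bits):
        out.append(b)
        if (i + 1) % period == 0:
            out.append(1)                    # marker bit
    return out

def unstuff_marker_bits(bits, period=22):
    """Converter-202 behaviour: delete the marker bit that follows every
    `period` data bits, restoring the original stream."""
    out = []
    i = 0
    while i < len(bits):
        out.extend(bits[i:i + period])       # keep the payload bits
        i += period + 1                      # skip the marker bit, if any
    return out

data = [0] * 50                              # 50 zero bits of history data
stuffed = stuff_marker_bits(data)
print(len(stuffed), stuffed[22], stuffed[45])   # markers after bits 22 and 44
print(unstuff_marker_bits(stuffed) == data)     # round trip restores the data
```

The round trip shows why the MPEG user-data restriction is satisfied: even an all-zero history stream emerges with a "1" in every 23-bit span.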
Figure 29 is a block diagram showing a typical configuration of the user data formatter 213. In this configuration, when a controller 383 outputs a read address to the buffer memory provided between the converter 212 and the user data formatter 213, data is read out from that read address and supplied to contact point a of a switch 382 employed in the user data formatter 213. It should be noted that the buffer memory itself is not shown in the figure. A ROM unit 381 stores data required for generating user_data(), such as the user-data start code and Data_ID. The controller 383 switches the switch 382 to contact point a or contact point b with predetermined timing, so that the switch 382 selects either the data stored in the ROM unit 381 or the data supplied by the converter 212 and passes on the selected data. In this way, data in the user_data() format is output to the video encoding device 106.
It is worth noting that the user data decoder 201 can be implemented by means of a switch for selecting input/output data which deletes the inserted data read out from a ROM unit similar to the ROM unit 381 employed in the user data formatter 213 shown in Figure 29. The configuration of the user data decoder 201 itself is not shown in the figures.
Figure 30 is a block diagram showing a state of implementation in which a plurality of transcoders 101-1 to 101-N are connected in series for video-studio use. The history information multiplexing device 103-i employed in the transcoder 101-i (i = 1 to N) writes the coding parameters actually used most recently into the area, among the areas used for recording coding parameters, allocated to the current coding parameters. As a result, the baseband image data contains the generation history information, that is, the coding parameters of the four most recent generations associated with the macroblocks of the image data.
The variable-length coding circuit 58 employed in the encoder 121-i of Figure 19, used in the video encoding device 106-i, encodes the video data received from the quantization circuit 57 in accordance with the current coding parameters received from the coding parameter separating circuit 105-i. As a result, the current coding parameters are normally multiplexed into the picture header, picture_header(), included in the bit stream generated by the variable-length coding circuit 58.
In addition, the variable-length coding circuit 58 also multiplexes user data, which includes the generation history information and is received from the history encoding device 107-i, into the output bit stream. This multiplexing is not carried out as the embedding processing shown in Figure 18; instead, the user data is multiplexed into the bit stream. The bit stream output by the video encoding device 106-i is then supplied through an SDTI 108-i to the transcoder 101-(i+1) at the following stage.
The configurations of the transcoders 101-i and 101-(i+1) are identical with the configuration shown in Figure 15; the processing they carry out can therefore be explained by referring to Figure 15. If, in an encoding operation utilizing the actual coding-parameter history, it is desired to change the current picture type from an I-picture to a P- or B-picture, the history of the previous coding parameters is searched for the previously used P- or B-picture type. If a P- or B-picture history is found, its parameters, including the motion vectors, are used to change the picture type. If, on the other hand, no P- or B-picture history is found, a change of picture type without motion detection is abandoned. It is needless to say that the picture type can be changed by carrying out motion detection even if no coding parameters of a P- or B-picture are found in the history.
In the format shown in Figure 18, the coding parameters of four generations are embedded in the image data. As an alternative, the parameters of each of the I-, P- and B-pictures can instead be embedded, as in the format shown in Figure 31. In the example shown in Figure 31, the coding parameters of one generation are recorded for each picture type appearing in the previous encoding operations, so that changes in the picture type of the same macroblock can be followed. In this case, the decoder 111 shown in Figure 16 outputs the coding parameters of one generation each for the I-, P- and B-pictures, rather than the coding parameters of the most recent, first, second and third preceding generations, to be supplied to the encoder 121 shown in Figure 19.
In addition, since Cb[1][x] and Cr[1][x] are not used, the present invention can also be applied to image data in the 4:2:0 format, which does not have the Cb[1][x] and Cr[1][x] areas. In the case of this example, the decoding device 102 extracts the coding parameters in the course of the decoding process and identifies the picture type. The decoding device 102 writes (multiplexes) the coding parameters into positions corresponding to the picture type in the picture signal and outputs the multiplexed picture signal to the coding parameter separating device 105. The coding parameter separating device 105 separates the coding parameters from the image data and, by utilizing the separated coding parameters while taking into account the picture types to which a change is possible, can carry out the subsequent decoding and encoding processing with a changed picture type.
These operations are explained by referring to the flowchart shown in Figure 32. As shown in Figure 32, the flow of the processing begins with step S1, at which the coding parameters for each picture type, that is, the generation-of-picture history information, are supplied to the controller 70 of the encoder 121. The flow of the processing then proceeds to step S2, at which the controller 70 forms a judgment as to whether or not the picture history information includes coding parameters to be used for a change to a B-picture. If the picture history information includes the coding parameters to be used for a change to a B-picture, the flow of the processing goes on to step S3.
At step S3, the controller 70 forms a judgment as to whether or not the picture history information includes coding parameters to be used for a change to a P-picture. If the picture history information includes the coding parameters to be used for a change to a P-picture, the flow of the processing goes on to step S4.
At step S4, controller 70 determines that the variable image type is I-, P-and B-image.On the other hand, change to the coding parameter that uses in the P image if the result of determination presentation video historical information that forms at step S3 is not included in, handling process enters step S5 so.
At step S5, controller 70 determines that the variable image type is I-and B-image.In addition, controller 70 determines that by only utilizing forward prediction vector and the non-back forecast vector in the historical information that is included in the B image to carry out particular procedure it also is possible that the puppet of P-image changes.On the other hand, change to the coding parameter that uses in the B image if the result of determination presentation video historical information that forms at step S2 is not included in, handling process enters step S6 so.
At step S6, whether controller 70 is included in for image history information changes to judgement of coding parameter formation of using in the P image.Change to the coding parameter that uses in the P image if image history information is included in, handling process enters step S7 so.
At step S7, controller 70 determines that the variable image type is I-and P-image.In addition, coding parameter separation equipment 105 determines that by only utilizing the forward prediction vector to carry out particular procedure with the non-back forecast vector in the historical information that is included in the B image variation of P-image also is possible.
On the other hand, change to the coding parameter that uses in the P image if the result of determination presentation video historical information that forms at step S6 is not included in, handling process enters step S8 so.At step S8, it is the I image that controller 70 determines to have only the variable image type, because there is not motion vector.The I image can not change to any other characteristic type except that the I image.
After completing steps S4, S5, S7 or the S8, handling process enters S9, in this step, notifies the user this variable image type on the display unit that controller 70 does not show in the drawings.
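The decision logic of steps S2 through S8 can be sketched in Python as follows; the function and argument names are ours, not part of the patent, and serve only to illustrate the branches of the flowchart:

```python
def changeable_picture_types(has_b_params: bool, has_p_params: bool):
    """Return the picture types a frame may be changed into, following
    the decision steps S2-S8 of the flowchart in Figure 32."""
    if has_b_params:                      # step S2
        if has_p_params:                  # step S3
            return ["I", "P", "B"]        # step S4
        # step S5: a pseudo change to P is possible using only the
        # forward prediction vectors stored in the B-picture history
        return ["I", "B", "P (pseudo, forward vectors only)"]
    if has_p_params:                      # step S6
        return ["I", "P"]                 # step S7
    return ["I"]                          # step S8: no motion vectors
```

As the sketch makes explicit, the I-picture is always reachable, since it requires no motion vectors at all.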
Figure 33 is a diagram showing typical changes in picture type. When the picture type is changed, the number of frames composing the GOP structure also changes. To put it in detail, in this example, a long GOP structure of the first generation is changed into a short GOP structure in the second generation. The second-generation GOP structure is then changed back into a long GOP in the third generation. The long GOP structure has N=15 and M=3, where N is the number of frames composing the GOP and M is the period, expressed in frames, at which P-pictures appear. The short GOP, on the other hand, has N=1 and M=1, where M here is the period, expressed in frames, at which I-pictures appear. It should be noted that each dotted line shown in the figure represents the boundary between two adjacent GOPs.
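As an illustrative sketch (not part of the patent), one common way the picture-type pattern of a GOP follows from N and M can be written as below; the helper name is ours, and the display-order/coding-order distinction is deliberately ignored:

```python
def gop_pattern(n: int, m: int) -> str:
    """Picture types of one GOP (sketch): the GOP holds n frames,
    anchor pictures (I or P) recur every m frames, and the remaining
    frames are B-pictures."""
    types = []
    for i in range(n):
        if i % m != 0:
            types.append("B")      # non-anchor frame
        elif i == 0:
            types.append("I")      # the GOP opens with an I-picture
        else:
            types.append("P")      # subsequent anchors are P-pictures
    return "".join(types)
```

With this sketch, the long GOP of Figure 33 (N=15, M=3) yields the pattern IBBPBBPBBPBBPBB, while the short GOP (N=1, M=1) degenerates to a lone I-picture.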
When the first-generation GOP structure is changed into the second-generation GOP structure, the picture types of all frames can be changed into I-pictures, as is obvious from the processing described above for determining the changeable picture types. When these picture types are changed, all the motion vectors computed when the source video signal was encoded in the first generation are stored, or retained. Then, the short GOP structure is changed back into a long GOP structure in the third generation. At that time, the motion vectors of each picture type saved at the first-generation encoding of the source video signal are reused, even though the picture types change again; the change back into a long GOP structure is thus made possible while deterioration of the picture quality is avoided.

Figure 34 is a diagram showing another example of changes in picture type. In this example, a long GOP structure with N=14 and M=2 is changed into a short GOP structure with N=2 and M=2 in the second generation, then into a short structure with N=1 and M=1 in the third generation, and finally into a random GOP with an indeterminate frame count N in the fourth generation.

In this example as well, the motion vectors of each picture type, computed when the source video signal was encoded as the first generation, are saved up to the fourth generation. As a result, by reusing the saved coding parameters, the deterioration of picture quality can be kept to a minimum even when the picture types change in the complex manner shown in Figure 34. In addition, if the quantization scales among the saved coding parameters are used effectively, encoding processing that causes only a small amount of picture-quality deterioration can be implemented.
The reuse of the quantization scale is explained by referring to Figure 35. Figure 35 shows a case in which a certain reference frame is always encoded as an I-picture from the first generation through the fourth generation. Only the bit rate changes: from 4 Mbps in the first generation to 18 Mbps in the second generation, then to 50 Mbps in the third generation, and finally back to 4 Mbps in the fourth generation.

When the 4-Mbps bit stream produced in the first generation is changed to the 18-Mbps bit rate of the second generation, carrying out the subsequent decoding and encoding processing with the finer quantization scale that accompanies the increased bit rate does not improve the picture quality. This is because the data discarded by the coarse quantization step of the past is no longer stored. Thus, in processing like that shown in Figure 35, quantization with the fine quantization step accompanying the raised bit rate only increases the amount of information without improving the picture quality. For this reason, if control is executed so that the coarsest, that is, the largest, quantization scale used in the past is maintained, the encoding processing can be implemented most efficiently with minimum loss.

As described above, when the bit rate changes, the encoding processing can be implemented most efficiently by utilizing the history of past quantization scales.
This quantization control processing is explained by referring to the flowchart shown in Figure 36. As shown in the figure, the flow begins with step S11, at which the controller 70 forms a judgment as to whether the input picture history information includes the coding parameters of the picture type into which the picture is now to be changed. If the judgment result indicates that the input picture history information includes the coding parameters of the picture type to be changed into, the flow proceeds to step S12.

At step S12, the controller 70 extracts history_q_scale_code from the coding parameters included in the picture history information.

The flow then proceeds to step S13, at which the controller 70 computes a candidate value of feedback_q_scale_code from the fullness of the transmission buffer 59.

The flow then proceeds to step S14, at which the controller 70 forms a judgment as to whether history_q_scale_code is larger, that is, coarser, than feedback_q_scale_code. If the judgment result indicates that history_q_scale_code is larger, or coarser, than feedback_q_scale_code, the flow proceeds to step S15.

At step S15, the controller 70 feeds history_q_scale_code to the quantization circuit 57 as the quantization scale, and the quantization circuit 57 then carries out the quantization processing by using history_q_scale_code.

The flow then proceeds to step S16, at which the controller 70 forms a judgment as to whether all the macroblocks included in the frame have been quantized. If the judgment result indicates that not all the macroblocks included in the frame have been quantized, the flow returns to step S13, and the processing of steps S13 to S16 is carried out repeatedly until all the macroblocks included in the frame have been quantized.
If, on the other hand, the judgment formed at step S14 indicates that history_q_scale_code is not larger than feedback_q_scale_code, that is, history_q_scale_code is smaller, or finer, than feedback_q_scale_code, the flow proceeds to step S17.

At step S17, the controller 70 feeds feedback_q_scale_code to the quantization circuit 57 as the quantization scale, and the quantization circuit 57 then carries out the quantization processing by using feedback_q_scale_code.

If, on the other hand, the judgment formed at step S11 indicates that the input picture history information does not include the coding parameters of the picture type to be changed into, the flow proceeds to step S18.

At step S18, the quantization circuit 57 receives the candidate value of feedback_q_scale_code from the controller 70.

The flow then proceeds to step S19, at which the quantization circuit 57 carries out the quantization processing by using the feedback Q value.

The flow then proceeds to step S20, at which the controller 70 forms a judgment as to whether all the macroblocks included in the frame have been quantized. If the judgment result indicates that not all the macroblocks included in the frame have been quantized, the flow returns to step S18, and the processing of steps S18 to S20 is carried out repeatedly until all the macroblocks included in the frame have been quantized.
In the transcoder 101 explained earlier with reference to Figure 15, the past coding parameters are multiplexed into the baseband video data, and the past first-, second- and third-generation coding parameters are then fed to the video encoder 106. In the present invention, however, the technique of multiplexing the past coding parameters into the baseband video data is not an absolute requirement. For example, the past coding parameters may be transmitted over a transmission line, such as a data bus provided separately from the baseband video data line, as shown in Figure 37.

The variable-length decoding circuit 112 employed in the video decoding apparatus 102 extracts the third-generation coding parameters from the sequence layer, the GOP layer, the picture layer, the slice layer and the macroblock layer of the third-generation encoded video bit stream ST(3rd), and feeds these parameters to the history encoding apparatus 107 and the controller 70 employed in the video encoder 106. The history encoding apparatus 107 converts the third-generation coding parameters fed to it into a converted history stream, converted_history_stream(), which can be described in the user-data area of the picture layer, and feeds this converted history stream to the variable-length coding circuit 58 employed in the video encoder 106 as user data.

In addition, the variable-length decoding circuit 112 also extracts, from the user-data area of the picture layer of the third-generation encoded video bit stream ST(3rd), the user data, user_data, that includes the first- and second-generation coding parameters, and feeds this user data to the history decoding apparatus 104 and the variable-length coding circuit 58 employed in the video encoder 106. The history decoding apparatus 104 extracts the first- and second-generation coding parameters from the history stream described in the user data as the converted history stream, converted_history_stream(), and feeds these parameters to the controller 70 employed in the video encoder 106. The controller 70 of the video encoder 106 controls the encoding processing carried out by the video encoder 106 in accordance with the first- and second-generation coding parameters received from the history decoding apparatus 104 and the third-generation coding parameters received from the video decoding apparatus 102.

Meanwhile, the variable-length coding circuit 58 employed in the video encoder 106 receives the user data, user_data, including the first- and second-generation coding parameters from the video decoding apparatus 102 and the user data, user_data, including the third-generation coding parameters from the history encoding apparatus 107, and describes these pieces of user data as history information in the user-data area of the picture layer of the fourth-generation encoded video bit stream.
Figure 38 is a diagram showing the syntax used for decoding an MPEG video data stream. The decoder decodes an MPEG bit stream in accordance with this syntax so as to extract a plurality of meaningful data items, or data elements, from the bit stream. In the syntax explained below, functions and conditional statements are represented by strings of ordinary characters, while data elements are represented by strings of bold characters. Each data item is described by a mnemonic representing its name; in some cases, the mnemonic also indicates the bit length of the data item and the type of the data item.

First, the functions used in the syntax shown in Figure 38 are explained. next_start_code() is a function used to search the bit stream for a start code described in the bit stream. In the syntax shown in Figure 38, the next_start_code() function is followed by a sequence_header() function and a sequence_extension() function laid out in order, indicating that the bit stream includes data elements defined by the sequence_header() and sequence_extension() functions. Accordingly, a start code, a kind of data element described at the beginning of the sequence_header() and sequence_extension() functions, is found in the bit stream by the next_start_code() function in the course of decoding the bit stream. The start code is then used as a reference for further finding the sequence_header() and sequence_extension() functions and decoding the data elements defined by the sequence_header() and sequence_extension() functions.

It should be noted that the sequence_header() function is a function used to define the header data of the sequence layer of the MPEG bit stream, while the sequence_extension() function is a function used to define the extension data of the sequence layer of the MPEG bit stream.
A do{}while statement is described after the sequence_extension() function. The do{}while statement comprises a {} block following the do statement and a while statement following the {} block. As long as the condition defined by the while statement is true, the data elements represented by the functions in the {} block following the do statement are extracted from the bit stream. That is to say, as long as the condition defined by the while statement is true, the do{}while statement defines decoding processing for extracting the data elements represented by the functions in the {} block following the do statement.

The nextbits() function used in the while statement is a function for comparing a bit or a string of bits appearing in the bit stream with the next data element to be decoded. In the syntax example shown in Figure 38, the nextbits() function compares a string of bits appearing in the bit stream with sequence_end_code, the code used to indicate the end of the video sequence. The condition defined by the while statement is true if the string of bits appearing in the bit stream does not match sequence_end_code. Thus, the do{}while statement described after the sequence_extension() function indicates that, as long as sequence_end_code indicating the end of the video sequence does not appear in the bit stream, data elements defined by the functions in the {} block following the do statement are described in the bit stream.

After the data elements defined by the sequence_extension() function, the data elements defined by an extension_and_user_data(0) function are described in the bit stream. The extension_and_user_data(0) function is a function used to define the extension data and user data of the sequence layer of the MPEG bit stream.

The do{}while statement after the extension_and_user_data(0) function extracts from the bit stream the data elements represented by the functions in the {} block following the do statement as long as the condition defined by the while statement is true. The nextbits() function used in this while statement is a function for forming a judgment as to whether a bit or a string of bits appearing in the bit stream matches picture_start_code or group_start_code, by comparing the string of bits with the start codes specified in the function. If the string of bits appearing in the bit stream matches picture_start_code or group_start_code, the condition defined by the while statement is considered true. Accordingly, if picture_start_code or group_start_code appears in the bit stream, the codes of the data elements defined by the functions in the block following the do statement are described after that start code. Therefore, by finding the start code represented by picture_start_code or group_start_code, the data elements defined by the functions in the {} block of the do statement can be extracted from the bit stream.
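A minimal sketch of the start-code search that next_start_code() performs is shown below; the function signature and the byte-string representation of the stream are our illustrative assumptions, while the 0x000001 prefix and the code values for picture_start_code and group_start_code are as defined by MPEG:

```python
def next_start_code(stream: bytes, pos: int = 0):
    """Sketch of next_start_code(): scan the byte stream for the MPEG
    start-code prefix 0x000001 and return (offset, start-code value),
    or None when no further start code exists."""
    i = stream.find(b"\x00\x00\x01", pos)
    if i < 0 or i + 3 >= len(stream):
        return None
    # The byte following the 3-byte prefix identifies the start code.
    return i, stream[i + 3]

PICTURE_START_CODE = 0x00
GROUP_START_CODE = 0xB8
```

A decoder loop would repeatedly call such a routine, branching into GOP-header or picture-header parsing according to whether the returned value is GROUP_START_CODE or PICTURE_START_CODE.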
The if statement described at the beginning of the {} block of the do statement states the condition "if group_start_code appears in the bit stream". A true (satisfied) condition of this if statement indicates that the data elements defined by the group_of_picture_header(1) function and the extension_and_user_data(1) function are described after group_start_code.

The group_of_picture_header(1) function is a function used to define the header data of the GOP layer of the MPEG bit stream, and the extension_and_user_data(1) function is a function used to define the extension data, named extension_data, and/or the user data, named user_data, of the GOP layer of the MPEG bit stream.

In addition, in this bit stream, the data elements defined by the picture_header() function and the picture_coding_extension() function are described after the data elements defined by the group_of_picture_header(1) function and the extension_and_user_data(1) function. Of course, if the condition defined by the if statement is not true, the data elements defined by the group_of_picture_header(1) function and the extension_and_user_data(1) function are not described; in that case, the data elements defined by the picture_header() function and the picture_coding_extension() function are described after the data elements defined by the extension_and_user_data(0) function.

The picture_header() function is a function used to define the header data of the picture layer of the MPEG bit stream, and the picture_coding_extension() function is a function used to define the first extension data of the picture layer of the MPEG bit stream.
The next while statement defines a condition. As long as the condition defined by this while statement is true, the condition defined by each of the if statements described in the block after the while statement's condition is judged true or false. The nextbits() function used in the while statement is a function for forming a judgment as to whether a string of bits appearing in the bit stream matches extension_start_code or user_data_start_code. If the string of bits appearing in the bit stream matches extension_start_code or user_data_start_code, the condition defined by the while statement is considered true.

The first if statement in the {} block after the while statement is a function for forming a judgment as to whether a string of bits appearing in the bit stream matches extension_start_code. A string of bits appearing in the bit stream that matches the 32-bit extension_start_code indicates that the data elements defined by the extension_data(2) function are described after extension_start_code in the bit stream.

The second if statement is a function for forming a judgment as to whether a string of bits appearing in the bit stream matches user_data_start_code. If a string of bits appearing in the bit stream matches the 32-bit user_data_start_code, the condition defined by the third if statement is judged true or false. user_data_start_code is a start code used to indicate the beginning of the user-data area of the picture layer of the MPEG bit stream.

The third if statement in the {} block after the while statement is a function for forming a judgment as to whether a string of bits appearing in the bit stream matches History_Data_ID. A string of bits appearing in the bit stream that matches the 8-bit History_Data_ID indicates that the data elements defined by the converted_history_stream() function are described after the code represented by the 8-bit History_Data_ID in the user-data area of the picture layer of the MPEG bit stream.

The converted_history_stream() function is a function used to describe, for transmission, the history information and history data comprising all the coding parameters used in past MPEG encoding processing. Details of the data elements defined by this converted_history_stream() function will be described later. History_Data_ID is a start code used to indicate the beginning of the description of this history information and history data in the user-data area of the picture layer of the MPEG bit stream.

The else statement is syntax for the case in which the condition defined by the third if statement is false. In this case, the data elements defined by the user_data() function, rather than the data elements defined by the converted_history_stream() function, are described in the user-data area of the picture layer of the MPEG bit stream.
The picture_data() function is a function used to describe the data elements related to the slice layer and the macroblock layer after the user data of the picture layer of the MPEG bit stream. Normally, the data elements defined by this picture_data() function are described after the data elements defined by the user_data() function, or by the converted_history_stream() function, described in the user-data area of the picture layer of the bit stream. If neither extension_start_code nor user_data_start_code exists in the bit stream containing the data elements of the picture layer, however, the data elements defined by this picture_data() function are described after the data elements defined by the picture_coding_extension() function.

After the data elements defined by this picture_data() function, the data elements defined by the sequence_header() function and the sequence_extension() function are described in order. The data elements described by these sequence_header() and sequence_extension() functions are exactly the same as the data elements defined by the sequence_header() and sequence_extension() functions described at the beginning of the sequence of the video data stream. The same pieces of data are defined in the stream again in order to prevent a situation in which the data of the sequence layer can no longer be received, and the stream can thus no longer be decoded, when a receiving apparatus starts receiving the bit stream partway through the data stream, for example at a part of the bit stream corresponding to the picture layer.

After the data elements defined by the last sequence_header() and sequence_extension() functions, that is, at the end of the data stream, the 32-bit sequence_end_code used to indicate the end of the sequence is described.
Figure 39 is a schematic diagram outlining the basic structure of the syntax described so far.

Next, the history stream defined by the converted_history_stream() function is described.
converted_history_stream() is a function for inserting a history stream representing the history information into the user-data area of the picture layer of the MPEG bit stream. It should be noted that the word "converted" indicates that this stream has completed a conversion process in which a marker bit is inserted at intervals of at least every 22 bits into the history stream, composed of the history data to be inserted into the user area, in order to avoid start-code emulation.
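As an illustrative sketch of this conversion (the exact placement of the marker bits in the patent's format may differ), the following inserts a "1" bit ahead of every 22-bit group of history data, so that no run of 23 zero bits, and hence no emulated start-code prefix, can occur:

```python
def insert_marker_bits(bits: str, interval: int = 22) -> str:
    """Sketch of the 'converted' step: place a '1' marker bit before
    every group of `interval` payload bits so that the history data can
    never contain the long run of zeros (23 consecutive '0' bits) that
    would emulate an MPEG start-code prefix."""
    out = []
    for i in range(0, len(bits), interval):
        out.append("1")                 # marker bit precedes each group
        out.append(bits[i:i + interval])
    return "".join(out)
```

Even a worst-case payload of all-zero history bits therefore carries at most 22 consecutive zeros after conversion, at the cost of roughly one extra bit per 22 payload bits.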
The converted_history_stream() function is described in the form of either a fixed-length history stream as shown in Figures 40 to 46 or a variable-length history stream as shown in Figure 47, to be described later. If the fixed-length history stream is selected on the encoder side, there is the advantage that the circuits and software employed in the decoder for decoding the data elements of the history stream become simple. If, on the other hand, the variable-length history stream is selected on the encoder side, the encoder can arbitrarily select the history information, or data elements, to be described in the user area of the picture layer whenever necessary. The amount of data of the history stream can thus be reduced and, as a result, the data rate of the bit stream as a whole can also be reduced.

The history information, history data and history parameters cited in the explanation of the present invention are the coding parameters, or data elements, used in past encoding processing, rather than the coding parameters used in the current, or last-stage, encoding processing. Consider a case in which a picture is encoded and transmitted as an I-picture in the first-generation encoding processing, as a P-picture in the second-generation encoding processing, and as a B-picture in the third-generation encoding processing. The coding parameters used in the third-generation encoding processing are described at predetermined positions of the sequence, GOP, picture, slice and macroblock layers of the encoded bit stream obtained as a result of the third-generation encoding processing. The coding parameters used in the first- and second-generation encoding processing, on the other hand, are not recorded in the sequence or GOP layers used for recording the coding parameters of the third-generation encoding processing, but are recorded in the user-data area of the picture layer as history information of the coding parameters.
First, the syntax of the fixed-length history stream is explained with reference to Figures 40 to 46.

In the first place, the coding parameters related to the sequence header of the sequence layer used in the previous encoding processing, normally the first- and second-generation encoding processing, are inserted as a history stream into the user-data area of the picture layer of the bit stream generated in the encoding processing carried out at the last stage, normally the third-generation encoding processing. It should be noted that the history information related to the sequence header of the sequence layer of the bit stream generated in the previous encoding processing is not inserted into the sequence header of the sequence layer of the bit stream generated in the encoding processing carried out at the last stage.

The data elements related to the sequence header used in the previous encoding processing comprise, as shown in Figure 40: sequence_header_code, sequence_header_present_flag, horizontal_size_value, vertical_size_value, aspect_ratio_information, frame_rate_code, bit_rate_value, marker_bit, VBV_buffer_size_value, constrained_parameter_flag, load_intra_quantizer_matrix, intra_quantizer_matrix, load_non_intra_quantizer_matrix and non_intra_quantizer_matrix.
The data elements listed above are described as follows. sequence_header_code is the start synchronization code of the sequence layer. sequence_header_present_flag is a flag used to indicate whether the data in the sequence header is valid or invalid. horizontal_size_value is data comprising the low-order 12 bits of the number of pixels of the picture in the horizontal direction. vertical_size_value is data comprising the low-order 12 bits of the number of pixels of the picture in the vertical direction. aspect_ratio_information is the aspect ratio of the picture, that is, the ratio of its height to its width, or the aspect ratio of the display screen. frame_rate_code is data representing the picture display period.

bit_rate_value is data comprising the low-order 18 bits of the bit rate for limiting the number of generated bits, rounded up in units of 400 bps. marker_bit is bit data inserted to prevent start-code emulation. VBV_buffer_size_value is data comprising the low-order 10 bits of a value for determining the size of the virtual buffer (video buffering verifier) used in controlling the amount of generated code. constrained_parameter_flag is a flag used to indicate whether the parameters are constrained. load_intra_quantizer_matrix is a flag used to indicate whether intra-MB quantization matrix data exists. intra_quantizer_matrix comprises the values of the intra-MB quantization matrix. load_non_intra_quantizer_matrix is a flag used to indicate whether non-intra-MB quantization matrix data exists. non_intra_quantizer_matrix comprises the values of the non-intra-MB quantization matrix.
Next, the data elements representing the sequence extension used in the previous encoding processing are described as a history stream in the user area of the picture layer of the bit stream generated in the encoding processing carried out at the last stage.

The data elements representing the sequence extension used in the previous encoding processing comprise, as shown in Figures 40 and 41: extension_start_code, extension_start_code_identifier, sequence_extension_present_flag, profile_and_level_indication, progressive_sequence, chroma_format, horizontal_size_extension, vertical_size_extension, bit_rate_extension, vbv_buffer_size_extension, low_delay, frame_rate_extension_n and frame_rate_extension_d.

The data elements listed above are described as follows. extension_start_code is the start synchronization code of the extension data. extension_start_code_identifier is data used to indicate which extension data is transmitted. sequence_extension_present_flag is a flag used to indicate whether the data in the sequence extension is valid or invalid. profile_and_level_indication is data specifying the profile and level of the video data. progressive_sequence is data indicating that the video data was obtained by progressive scanning. chroma_format is data specifying the chrominance format of the video data.

horizontal_size_extension comprises the two high-order bits to be appended to horizontal_size_value of the sequence header. vertical_size_extension comprises the two high-order bits to be appended to vertical_size_value of the sequence header. bit_rate_extension comprises the 12 high-order bits to be appended to bit_rate_value of the sequence header. vbv_buffer_size_extension comprises the 8 high-order bits to be appended to VBV_buffer_size_value of the sequence header. low_delay is data used to indicate that no B-pictures are included. frame_rate_extension_n is data used to obtain the frame rate in combination with frame_rate_code of the sequence header. frame_rate_extension_d is data used to obtain the frame rate in combination with frame_rate_code of the sequence header.
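The way the extension fields supply high-order bits to the header fields can be sketched as below; the dictionary layout and function name are our illustrative assumptions, while the bit widths follow the field descriptions above:

```python
def full_values(hdr: dict, ext: dict) -> dict:
    """Sketch: combine the low-order *_value bits of the sequence
    header with the high-order *_extension bits of the sequence
    extension to recover the full quantities."""
    return {
        # 2 high-order bits + 12 low-order bits
        "horizontal_size": (ext["horizontal_size_extension"] << 12)
                           | hdr["horizontal_size_value"],
        "vertical_size": (ext["vertical_size_extension"] << 12)
                         | hdr["vertical_size_value"],
        # 12 high-order bits + 18 low-order bits, in units of 400 bps
        "bit_rate": ((ext["bit_rate_extension"] << 18)
                     | hdr["bit_rate_value"]) * 400,
        # 8 high-order bits + 10 low-order bits
        "vbv_buffer_size": (ext["vbv_buffer_size_extension"] << 10)
                           | hdr["vbv_buffer_size_value"],
    }
```

For instance, a 5-Mbps stream carries bit_rate_value = 12500 (12500 x 400 bps) with bit_rate_extension = 0, and a 1920-pixel-wide picture fits entirely in the 12 low-order bits with horizontal_size_extension = 0.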
Next, the data elements representing the sequence display extension of the sequence layer used in the previous encoding processing are described as a history stream in the user area of the picture layer of the bit stream.

The data elements described as the sequence display extension comprise, as shown in Figure 41: extension_start_code, extension_start_code_identifier, sequence_display_extension_present_flag, video_format, colour_description, colour_primaries, transfer_characteristics, matrix_coefficients, display_horizontal_size and display_vertical_size.

The data elements listed above are described as follows. extension_start_code is the start synchronization code of the extension data. extension_start_code_identifier is data used to indicate which extension data is transmitted. sequence_display_extension_present_flag is a flag used to indicate whether the data elements in the sequence display extension are valid or invalid. video_format is data representing the video format of the source signal. colour_description is data indicating that detailed data of the colour space exists. colour_primaries is data representing details of the colour characteristics of the source signal. transfer_characteristics is data representing details of how the opto-electrical conversion was carried out. matrix_coefficients is data representing details of how the source signal was converted from the three primary colours of light. display_horizontal_size is data representing the horizontal size of the active region of the intended display. display_vertical_size is data representing the vertical size of the active region of the intended display.
Next, macroblock assignment data (named macroblock assignment in user data) expressing the phase information of the macroblocks generated in the preceding encoding process is described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 41, the macroblock assignment in user data expressing the macroblock phase information comprises data elements such as a macroblock assignment present flag, a v phase and an h phase.
The data elements listed above are described as follows. The macroblock assignment present flag is a flag indicating whether the macroblock assignment data elements in the user data are valid or invalid. The v phase data element is data expressing the phase information in the vertical direction obtained when the macroblocks are cut out of the image data. The h phase data element is data expressing the phase information in the horizontal direction obtained when the macroblocks are cut out of the image data.
Next, the data elements of the GOP header of the GOP layer used in the preceding encoding process are described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 41, the data elements expressing the GOP header are a group start code, a group-of-pictures header present flag, a time code, a closed gop flag (closed_gop) and a broken link flag.
The data elements listed above are described as follows. The group start code data element is the start synchronization code of the GOP layer. The group-of-pictures header present flag is a flag indicating whether the data elements of the GOP header are valid or invalid. The time code data element is a time code expressing the length of time measured from the beginning of the first image of the GOP. The closed gop data element is a flag indicating whether an image in one GOP can be replayed independently of another GOP. The broken link data element is a flag indicating that the B images at the beginning of the GOP cannot be replayed with high accuracy for reasons such as editing.
Next, the image header data elements of the image layer used in the preceding encoding process are described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figures 41 and 42, the data elements related to the image header are an image start code, a temporal reference, an image encoding type, a vbv delay, a full-pixel forward vector flag, a forward f code, a full-pixel backward vector flag and a backward f code.
The data elements listed above are described as follows. The image start code data element is the start synchronization code of the image layer. The temporal reference data element is a number expressing the display order of the image; this number is reset at the beginning of a GOP. The image encoding type data element is data expressing the image type. The vbv delay data element is data expressing the initial state of the virtual buffer at random access. The full-pixel forward vector data element is a flag indicating whether the accuracy of forward motion vectors is expressed in whole-pixel units or in half-pixel units. The forward f code data element is data expressing the forward motion vector search range. The full-pixel backward vector data element is a flag indicating whether the accuracy of backward motion vectors is expressed in whole-pixel units or in half-pixel units. The backward f code data element is data expressing the backward motion vector search range.
Next, the image encoding extension data elements of the image layer used in the preceding encoding process are described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 42, the data elements related to the image encoding extension are an extension start code, an extension start code identifier, f code [0] [0], f code [0] [1], f code [1] [0], f code [1] [1], an intra dc precision, a picture structure, a top field first flag, a frame predictive frame dct flag, a concealment motion vectors flag, a q scale type, an intra vlc format, an alternate scan flag, a repeat first field flag, a chroma 420 type, a progressive frame flag, a composite display flag, a v axis, a field sequence, a subcarrier, a burst amplitude and a subcarrier phase.
The data elements listed above are described as follows. The extension start code data element is a start code indicating the beginning of the extension data of the image layer. The extension start code identifier data element is a code indicating which extension data is transmitted. The f code [0] [0] data element is data expressing the horizontal motion vector search range in the forward direction. The f code [0] [1] data element is data expressing the vertical motion vector search range in the forward direction. The f code [1] [0] data element is data expressing the horizontal motion vector search range in the backward direction. The f code [1] [1] data element is data expressing the vertical motion vector search range in the backward direction.
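The relation between an f code and the motion vector search range it expresses can be sketched as follows. This follows the usual MPEG-style convention of half-pixel units; the helper name and the formula are illustrative, not taken from the patent text:

```python
def mv_search_range(f_code: int):
    """Half-pixel search range conventionally implied by an f code value (1..9)."""
    if not 1 <= f_code <= 9:
        raise ValueError("f code must be between 1 and 9")
    half = 16 << (f_code - 1)        # range doubles with each increment of f code
    return -half, half - 1

# f code 1 -> [-16, 15] half-pixels, i.e. roughly [-8, +7.5] pixels
```

Larger f code values widen the search range at the cost of longer motion vector codes, which is why the history stream carries a separate f code per direction and component.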
The intra dc precision data element is data expressing the precision of the DC coefficients. The picture structure data element is data indicating whether the structure of the data is a frame structure or a field structure; in the case of a field structure, it also indicates whether the field is the top field or the bottom field. The top field first data element is data indicating whether the first field of a frame structure is the top field or the bottom field. The frame predictive frame dct data element is data indicating that, in the case of a frame structure, only frame-mode prediction is carried out for frame-mode DCT. The concealment motion vectors data element is data indicating that intra macroblocks include motion vectors used for concealing transmission errors.
The q scale type data element is data indicating whether a linear quantization scale or a non-linear quantization scale is adopted. The intra vlc format data element is data indicating whether another two-dimensional VLC is used for intra macroblocks. The alternate scan data element is data expressing the choice between zigzag scanning and alternate scanning. The repeat first field data element is data used in the case of 2:3 pull-down. The chroma 420 type data element is data equal to the value of the next progressive frame data element in the case of the 4:2:0 signal format, and equal to 0 otherwise. The progressive frame data element is data indicating whether the image was obtained from sequential (progressive) scanning. The composite display flag data element is a flag indicating whether the source signal is a composite signal.
The v axis data element, the field sequence data element, the subcarrier data element, the burst amplitude data element and the subcarrier phase data element are data used in the case of a PAL source signal.
Next, the quantization matrix extension used in the preceding encoding process is described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 43, the data elements related to the quantization matrix extension are an extension start code, an extension start code identifier, a quantization matrix extension present flag, a load intra quantizer matrix, an intra quantizer matrix [64], a load non-intra quantizer matrix, a non-intra quantizer matrix [64], a load chroma intra quantizer matrix, a chroma intra quantizer matrix [64], a load chroma non-intra quantizer matrix and a chroma non-intra quantizer matrix [64].
The data elements listed above are described as follows. The extension start code data element is a start code indicating the beginning of the quantization matrix extension. The extension start code identifier data element is a code indicating which extension data is transmitted. The quantization matrix extension present flag is a flag indicating whether the data elements of the quantization matrix extension are valid or invalid. The load intra quantizer matrix data element is data indicating whether quantization matrix data for intra macroblocks exists. The intra quantizer matrix data element is data expressing the values of the quantization matrix for intra macroblocks.
The load non-intra quantizer matrix data element is data indicating whether quantization matrix data for non-intra macroblocks exists. The non-intra quantizer matrix data element is data expressing the values of the quantization matrix for non-intra macroblocks. The load chroma intra quantizer matrix data element is data indicating whether quantization matrix data for color-difference intra macroblocks exists. The chroma intra quantizer matrix data element is data expressing the values of the quantization matrix for color-difference intra macroblocks. The load chroma non-intra quantizer matrix data element is data indicating whether quantization matrix data for color-difference non-intra macroblocks exists. The chroma non-intra quantizer matrix data element is data expressing the values of the quantization matrix for color-difference non-intra macroblocks.
Next, the copyright extension used in the preceding encoding process is described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 43, the data elements related to the copyright extension are an extension start code, an extension start code identifier, a copyright extension present flag, a copyright flag, a copyright identifier, an original-or-copy flag, a copyright number 1, a copyright number 2 and a copyright number 3.
The data elements listed above are described as follows. The extension start code data element is a start code indicating the beginning of the copyright extension. The extension start code identifier data element is a code indicating which extension data is transmitted. The copyright extension present flag is a flag indicating whether the data elements of the copyright extension are valid or invalid. The copyright flag data element is a flag indicating whether a copyright has been given to the encoded video data in the range up to the next copyright extension or the end of the sequence.
The copyright identifier data element is predetermined data for identifying the copyright registration prescribed by ISO/IEC JTC/SC29. The original-or-copy data element is a flag indicating whether the data of the bit stream is original data or copied data. The copyright number 1 data element expresses bits 44 to 63 of the copyright number, the copyright number 2 data element expresses bits 22 to 43 of the copyright number, and the copyright number 3 data element expresses bits 0 to 21 of the copyright number.
Next, the image display extension (picture display extension) used in the preceding encoding process is described as a history stream in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
As shown in Figure 44, the data elements expressing the image display extension are an extension start code, an extension start code identifier, an image display extension present flag, a frame centre horizontal offset 1, a frame centre vertical offset 1, a frame centre horizontal offset 2, a frame centre vertical offset 2, a frame centre horizontal offset 3 and a frame centre vertical offset 3.
The data elements listed above are described as follows. The extension start code data element is a start code indicating the beginning of the image display extension. The extension start code identifier data element is a code indicating which extension data is transmitted. The image display extension present flag is a flag indicating whether the data elements of the image display extension are valid or invalid. The frame centre horizontal offset data elements are offsets of the display area in the horizontal direction, and the frame centre vertical offset data elements are offsets of the display area in the vertical direction. Up to three horizontal and vertical offset values can be defined respectively.
Next, as shown in Figure 44, user data is described as a history stream after the historical information of the image display extension explained above, in the user area of the image layer of the bit stream generated in the encoding process carried out in the final stage.
Following the user data, information on the macroblocks used in the preceding encoding process is described as a history stream, as shown in Figures 44 to 46.
The information on the macroblocks comprises data elements related to the position of the macroblock, data elements related to the macroblock mode, data elements related to the quantization step control, data elements related to motion compensation, data elements related to the macroblock pattern and data elements related to the amount of generated code, as shown in Figures 44 to 46. The data elements related to the position of the macroblock include, for example, a macroblock address h, a macroblock address v, a slice header present flag and a skipped macroblock flag. The data elements related to the macroblock mode include, for example, a macroblock quant flag, a macroblock motion forward flag, a macroblock motion backward flag, a macroblock pattern flag, a macroblock intra flag, a spatial temporal weight code flag, a frame motion type and a dct type. The data elements related to the quantization step control include, for example, a quantizer scale code. The data elements related to motion compensation include PMV[0][0][0], PMV[0][0][1], motion vertical field select [0][0], PMV[0][1][0], PMV[0][1][1], motion vertical field select [0][1], PMV[1][0][0], PMV[1][0][1], motion vertical field select [1][0], PMV[1][1][0], PMV[1][1][1] and motion vertical field select [1][1]. The data elements related to the macroblock pattern include, for example, a coded block pattern, and the data elements related to the amount of generated code are num_mv bits, num_coef bits and num_other bits.
The data elements related to the macroblocks are described in detail below.
The macroblock address h data element is data defining the present absolute position of the macroblock in the horizontal direction. The macroblock address v data element is data defining the present absolute position of the macroblock in the vertical direction. The slice header present flag is a flag indicating whether this macroblock is located at the beginning of a slice layer and is accompanied by a slice header. The skipped macroblock flag is a flag indicating whether this macroblock is to be skipped in the decoding process.
The macroblock quant data element is data obtained from the macroblock type shown in Figures 65 to 67; it indicates whether a quantizer scale code appears in the bit stream. The macroblock motion forward data element is data obtained from the macroblock type shown in Figures 65 to 67 and is used in the decoding process. The macroblock motion backward data element is data obtained from the macroblock type shown in Figures 65 to 67 and is used in the decoding process. The macroblock pattern data element is data obtained from the macroblock type shown in Figures 65 to 67; it indicates whether a coded block pattern appears in the bit stream.
The macroblock intra data element is data obtained from the macroblock type shown in Figures 65 to 67 and is used in the decoding process. The spatial temporal weight code flag is a flag obtained from the macroblock type shown in Figures 65 to 67; it indicates whether the bit stream contains a spatial temporal weight code, which expresses the upsampling technique for the lower-layer image in temporal scalability.
The frame motion type data element is a 2-bit code expressing the prediction type of the macroblocks of a frame. A frame motion type code of "00" indicates that there are two predictive vectors and the prediction type is a field-based prediction type. A frame motion type code of "01" indicates that there is one predictive vector and the prediction type is a field-based prediction type. A frame motion type code of "10" indicates that there is one predictive vector and the prediction type is a frame-based prediction type. A frame motion type code of "11" indicates that there is one predictive vector and the prediction type is a dual prime (dual_prime) prediction type. The field motion type data element is a 2-bit code expressing the motion prediction of the macroblocks of a field. A field motion type code of "01" indicates that there is one predictive vector and the prediction type is a field-based prediction type. A field motion type code of "10" indicates that there are two predictive vectors and the prediction type is a 16 x 8 macroblock-based prediction type. A field motion type code of "11" indicates that there is one predictive vector and the prediction type is a dual prime prediction type. The dct type data element is data indicating whether the DCT is carried out in the frame DCT mode or in the field DCT mode. The quantizer scale code data element expresses the quantization step size of the macroblock.
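The 2-bit motion type codes described above can be summarized in a small lookup table. This is a sketch built directly from the descriptions in this text (the dictionary names are illustrative); each entry records the number of predictive vectors and the prediction basis:

```python
# (number of predictive vectors, prediction basis) per 2-bit code,
# as described above for the macroblocks of a frame
FRAME_MOTION_TYPE = {
    0b00: (2, "field"),
    0b01: (1, "field"),
    0b10: (1, "frame"),
    0b11: (1, "dual prime"),
}

# the same, as described above for the macroblocks of a field
FIELD_MOTION_TYPE = {
    0b01: (1, "field"),
    0b10: (2, "16x8"),
    0b11: (1, "dual prime"),
}
```

A decoder reusing these history parameters could consult such a table to decide how many motion vectors to read back from the macroblock data.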
The data elements related to motion vectors are described below. In order to reduce the amount of code required for motion vectors in the decoding process, a particular motion vector is encoded by actually encoding its difference from a motion vector decoded previously. A decoder carrying out motion vector decoding must therefore maintain four motion vector predictors, each comprising a horizontal and a vertical component. These motion vector predictors are expressed by PMV[r][s][v]. The subscript [r] is a flag indicating whether the motion vector in the macroblock is the first or the second vector; more particularly, an [r] value of "0" indicates the first vector and an [r] value of "1" indicates the second vector. The subscript [s] is a flag indicating whether the direction of the motion vector in the macroblock is forward or backward; more particularly, an [s] value of "0" indicates a forward motion vector and an [s] value of "1" indicates a backward motion vector. The subscript [v] is a flag indicating whether the component of the motion vector in the macroblock is horizontal or vertical; more particularly, a [v] value of "0" indicates the horizontal component of the motion vector and a [v] value of "1" indicates the vertical component of the motion vector.
Therefore, PMV[0][0][0] is data expressing the horizontal component of the forward motion vector of the first vector. PMV[0][0][1] is data expressing the vertical component of the forward motion vector of the first vector. PMV[0][1][0] is data expressing the horizontal component of the backward motion vector of the first vector. PMV[0][1][1] is data expressing the vertical component of the backward motion vector of the first vector. PMV[1][0][0] is data expressing the horizontal component of the forward motion vector of the second vector. PMV[1][0][1] is data expressing the vertical component of the forward motion vector of the second vector. PMV[1][1][0] is data expressing the horizontal component of the backward motion vector of the second vector. PMV[1][1][1] is data expressing the vertical component of the backward motion vector of the second vector.
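The PMV[r][s][v] indexing and the differential update described above can be sketched as follows (the helper function is illustrative, not from the patent text):

```python
# PMV[r][s][v]: r = first/second vector, s = forward/backward, v = horizontal/vertical
PMV = [[[0, 0] for _ in range(2)] for _ in range(2)]

def update_predictor(pmv, r, s, delta_h, delta_v):
    """Add a decoded differential to the stored predictor and return the vector."""
    pmv[r][s][0] += delta_h   # v = 0: horizontal component
    pmv[r][s][1] += delta_v   # v = 1: vertical component
    return pmv[r][s][0], pmv[r][s][1]
```

Each decoded differential moves the stored predictor, so successive vectors of the same kind are reconstructed relative to their predecessor rather than transmitted at full magnitude.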
The motion vertical field select [r][s] is data indicating which reference field is used for the prediction. More particularly, a motion vertical field select [r][s] value of "0" indicates that the top reference field is used, and a value of "1" indicates that the bottom reference field is used.
In the motion vertical field select [r][s], the subscript [r] is a flag indicating whether the motion vector in the macroblock is the first or the second vector; more particularly, an [r] value of "0" indicates the first vector and an [r] value of "1" indicates the second vector. The subscript [s] is a flag indicating whether the direction of the motion vector in the macroblock is forward or backward; more particularly, an [s] value of "0" indicates a forward motion vector and an [s] value of "1" indicates a backward motion vector. Therefore, the motion vertical field select [0][0] indicates the reference field used in generating the forward motion vector of the first vector. The motion vertical field select [0][1] indicates the reference field used in generating the backward motion vector of the first vector. The motion vertical field select [1][0] indicates the reference field used in generating the forward motion vector of the second vector. The motion vertical field select [1][1] indicates the reference field used in generating the backward motion vector of the second vector.
The coded block pattern data element is variable-length data indicating which of the plurality of DCT blocks, each storing DCT coefficients, contains at least one non-zero DCT coefficient. The num_mv bits data element is data expressing the amount of code of the motion vectors in the macroblock. The num_coef bits data element is data expressing the amount of code of the DCT coefficients in the macroblock. The num_other bits data element, shown in Figure 46, is data expressing the amount of code in the macroblock other than the motion vectors and the DCT coefficients.
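Which DCT blocks of a macroblock carry coefficients can be read off such a coded block pattern as a bit mask. This is a minimal sketch, assuming the conventional ordering with block 0 in the most significant bit and six blocks for the 4:2:0 format (neither assumption is stated in the text above):

```python
def coded_blocks(cbp: int, block_count: int = 6):
    """Per DCT block, whether it contains at least one non-zero coefficient."""
    return [bool(cbp & (1 << (block_count - 1 - i))) for i in range(block_count)]
```

For example, a pattern with only the first and last bits set marks only the first luminance block and the last chrominance block as coded; the four blocks in between carry no coefficients and need not be stored.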
Next, the syntax for decoding the data elements from a history stream having a variable length is explained with reference to Figures 47 to 64.
As shown in Figure 47, the history stream having a variable length comprises data elements defined by a next_start_code () function, a sequence_header () function, a sequence_extension () function, an extension_and_user_data (0) function, a group_of_picture_header () function, an extension_and_user_data (1) function, a picture_header () function, a picture_coding_extension () function, an extension_and_user_data (2) function and a picture_data () function.
Since the next_start_code () function is a function used to search for a start code in the bit stream, the data elements defined by the sequence_header () function and used in the preceding encoding process are described at the beginning of the history stream, as shown in Figure 48.
As shown in Figure 48, the data elements defined by the sequence_header () function comprise a sequence header code, a sequence header present flag, a horizontal size value, a vertical size value, an aspect ratio information, a frame frequency code, a bit rate value, a marker bit, a VBV buffer size value, a constrained parameters flag, a load intra quantizer matrix, an intra quantizer matrix, a load non-intra quantizer matrix and a non-intra quantizer matrix.
The data elements listed above are described as follows. The sequence header code data element is the start synchronization code of the sequence layer. The sequence header present flag is a flag indicating whether the data in the sequence header is valid or invalid. The horizontal size value data element is data comprising the low-order 12 bits of the number of pixels of the image in the horizontal direction. The vertical size value data element is data comprising the low-order 12 bits of the number of pixels of the image in the vertical direction. The aspect ratio information data element is the aspect ratio of the image pixels, that is, the aspect ratio of the image or of the display screen. The frame frequency code data element is data expressing the display cycle of the image. The bit rate value data element is data comprising the low-order 18 bits of the bit rate for limiting the number of generated bits, expressed in units of 400 bps.
The marker bit data element is bit data inserted to prevent start code emulation. The VBV buffer size value data element is data comprising the low-order 10 bits of a value determining the size of the virtual buffer (video buffering verifier) used for controlling the amount of generated code. The constrained parameters flag data element is a flag indicating whether the parameters are constrained. The load intra quantizer matrix data element is a flag indicating whether intra-macroblock quantization matrix data exists. The intra quantizer matrix data element expresses the values of the intra-macroblock quantization matrix. The load non-intra quantizer matrix data element is a flag indicating whether non-intra-macroblock quantization matrix data exists. The non-intra quantizer matrix data element expresses the values of the non-intra-macroblock quantization matrix.
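As a numeric illustration of these low-order fields, the sketch below converts the coded values to physical units. The 400 bps unit for the bit rate is stated above; the 16 x 1024-bit unit for the VBV buffer size follows the usual MPEG convention and is an assumption here, as are the helper names:

```python
def decoded_bit_rate(bit_rate_value: int) -> int:
    """The bit rate field counts units of 400 bps."""
    return bit_rate_value * 400

def vbv_buffer_bits(vbv_buffer_size_value: int) -> int:
    """The VBV buffer size field conventionally counts units of 16 * 1024 bits."""
    return vbv_buffer_size_value * 16 * 1024

# e.g. a buffer size value of 112 corresponds to a 1 835 008-bit VBV buffer
```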
After the data elements defined by the sequence_header () function, the data elements defined by the sequence_extension () function are described as a history stream, as shown in Figure 49.
As shown in Figure 49, the data elements defined by the sequence_extension () function comprise an extension start code, an extension start code identifier, a sequence extension present flag, a profile and level identification, a progressive sequence, a chroma format, a horizontal size extension, a vertical size extension, a bit rate extension, a vbv buffer size extension, a low delay, a frame frequency extension n and a frame frequency extension d.
The data elements listed above are described as follows. The extension start code data element is the start synchronization code of the extension data. The extension start code identifier data element is data indicating which extension data is transmitted. The sequence extension present flag is a flag indicating whether the data elements of the sequence extension are valid or invalid. The profile and level identification data element is data specifying the profile and level of the video data. The progressive sequence data element is data indicating that the video data was obtained from sequential scanning. The chroma format data element is data specifying the color-difference format of the video data. The horizontal size extension data element is data to be added to the horizontal size value of the sequence header as its two high-order bits. The vertical size extension data element is data to be added to the vertical size value of the sequence header as its two high-order bits. The bit rate extension data element is data to be added to the bit rate value of the sequence header as its twelve high-order bits. The vbv buffer size extension data element is data to be added to the vbv buffer size value of the sequence header as its eight high-order bits.
The low delay data element is data indicating that no B image is included. The frame frequency extension n data element and the frame frequency extension d data element are data used to obtain the frame frequency in combination with the frame frequency code of the sequence header.
After the data elements defined by the sequence_extension () function, the data elements defined by an extension_and_user_data (0) function are described as a history stream, as shown in Figure 50. For values of (i) other than 2, the extension_and_user_data (i) function describes as a history stream only the data elements defined by a user_data () function, and not the data elements defined by an extension_data () function. Therefore, the extension_and_user_data (0) function describes only the data elements defined by the user_data () function as a history stream.
The user_data () function describes user data as a history stream according to the syntax shown in Figure 51.
After the data elements defined by the extension_and_user_data (0) function, the data elements defined by the group_of_picture_header () function shown in Figure 52 and the data elements defined by the extension_and_user_data (1) function shown in Figure 50 are described as a history stream. It should be noted, however, that the data elements defined by the group_of_picture_header () function and the data elements defined by the extension_and_user_data (1) function are described only if a group start code representing the start code of the GOP layer is described in the history stream.
As shown in Figure 52, the data elements defined by the group_of_picture_header () function comprise a group start code, a group-of-pictures header present flag, a time code, a closed gop flag and a broken link flag.
The data elements listed above are described as follows. The group start code data element is the start synchronization code of the GOP layer. The group-of-pictures header present flag is a flag indicating whether the data elements of the GOP header are valid or invalid. The time code data element is a time code expressing the length of time measured from the beginning of the first image of the GOP. The closed gop data element is a flag indicating whether an image in one GOP can be replayed independently of another GOP. The broken link data element is a flag indicating that the B images at the beginning of the GOP cannot be replayed with high accuracy for reasons such as editing.
Much like the extension_and_user_data (0) function shown in Figure 50, the extension_and_user_data (1) function describes only the data elements defined by the user_data () function as a history stream.
If no group start code representing the start code of the GOP layer is described in the history stream, neither the data elements defined by the group_of_picture_header () function nor the data elements defined by the extension_and_user_data (1) function are described in the history stream. In this case, the data elements defined by the picture_header () function are described after the data elements defined by the extension_and_user_data (0) function.
As shown in Figure 53, the data elements defined by the picture_header () function are an image start code, a temporal reference, an image encoding type, a vbv delay, a full-pixel forward vector flag, a forward f code, a full-pixel backward vector flag, a backward f code, an extra bit picture and an extra information picture.
The data elements listed above are described in detail as follows. The image start code data element is the start synchronization code of the image layer. The temporal reference data element is a number expressing the display order of the image; this number is reset at the beginning of a GOP. The image encoding type data element is data expressing the image type. The vbv delay data element is data expressing the initial state of the virtual buffer at random access. The full-pixel forward vector data element is a flag indicating whether the accuracy of forward motion vectors is expressed in whole-pixel units or in half-pixel units. The forward f code data element is data expressing the forward motion vector search range. The full-pixel backward vector data element is a flag indicating whether the accuracy of backward motion vectors is expressed in whole-pixel units or in half-pixel units. The backward f code data element is data expressing the backward motion vector search range. The extra bit picture data element is a flag indicating whether the following additional information exists; more particularly, an extra bit picture value of "0" indicates that no following additional information exists, and a value of "1" indicates that following additional information exists. The extra information picture data element is information reserved by the specification.
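The extra bit picture and extra information picture pair described above forms a simple continuation loop, which can be sketched as follows (the bit-reader interface `read_bits(n)` is an assumed callable returning the next n bits as an integer, not something defined in the patent text):

```python
def read_extra_information(read_bits):
    """Collect extra-information bytes while the preceding marker bit is 1."""
    extras = []
    while read_bits(1) == 1:        # extra bit picture == 1: another byte follows
        extras.append(read_bits(8))  # one extra information picture byte
    return extras                    # extra bit picture == 0 terminates the loop
```

This loop structure is why a decoder can always skip the reserved extra information: the terminating "0" bit is unambiguous regardless of how many bytes were inserted.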
After the data elements defined by the picture_header() function, the data elements defined by the picture_coding_extension() function shown in Figure 54 are described as a history stream.
The data elements defined by the picture_coding_extension() function are, as shown in Figure 54, the extension start code, extension start code identifier, f code [0][0], f code [0][1], f code [1][0], f code [1][1], intra dc precision, picture structure, top field first, frame pred frame dct, concealment motion vectors, q scale type, intra vlc format, alternate scan, repeat first field, chroma 420 type, progressive frame, composite display flag, v axis, field sequence, sub carrier, burst amplitude and sub carrier phase.
These data elements are described below. The extension start code data element is a start code indicating the beginning of the extension data of the picture layer. The extension start code identifier data element is a code indicating which extension data follows. The f code [0][0] data element indicates the horizontal motion vector search range in the forward direction. The f code [0][1] data element indicates the vertical motion vector search range in the forward direction. The f code [1][0] data element indicates the horizontal motion vector search range in the backward direction. The f code [1][1] data element indicates the vertical motion vector search range in the backward direction. The intra dc precision data element indicates the precision of the DC coefficients.
The picture structure data element indicates whether the picture has a frame structure or a field structure; in the case of a field structure, it also indicates whether the field is the top field or the bottom field. The top field first data element indicates whether, in a frame structure, the first field is the top field or the bottom field. The frame pred frame dct data element indicates that, in the case of a frame structure, only frame-mode DCT is carried out with frame-mode prediction. The concealment motion vectors data element indicates that intra macroblocks include motion vectors used for concealing transmission errors. The q scale type data element indicates whether a linear quantization scale or a nonlinear quantization scale is used. The intra vlc format data element indicates whether another 2-dimensional VLC is used for intra macroblocks.
The alternate scan data element indicates whether zigzag scan or alternate scan is selected. The repeat first field data element is data used in 2:3 pull-down. The chroma 420 type data element is set equal to the value of the next progressive frame data element in the case of the 4:2:0 signal format, and to 0 otherwise. The progressive frame data element indicates whether the picture was obtained by progressive scanning. The composite display flag data element is a flag indicating whether the source signal is a composite signal. The v axis, field sequence, sub carrier, burst amplitude and sub carrier phase data elements are each data used in the case of a PAL source signal.
After the data elements defined by the picture_coding_extension() function, the data elements defined by the extension_and_user_data(2) function shown in Figure 50 are described. Note, however, that the data elements defined by the extension_data() function are described by the extension_and_user_data(2) function only when an extension start code, i.e., a start code indicating an extension, exists in the bit stream. Likewise, the data elements defined by the user_data() function are described by the extension_and_user_data(2) function, after the data elements defined by the extension_data() function, only when a user data start code, i.e., a start code indicating user data as shown in Figure 50, exists in the bit stream. In other words, if neither an extension start code nor a user data start code exists in the bit stream, neither the data elements defined by the extension_data() function nor those defined by the user_data() function are described in the bit stream.
The extension_data() function is a function used for describing, as the history stream in the bit stream shown in Figure 55, the data element indicating the extension start code and the data elements defined by the quant_matrix_extension() function, the copyright_extension() function and the picture_display_extension() function.
The data elements defined by the quant_matrix_extension() function are, as shown in Figure 56, the extension start code, extension start code identifier, quant matrix extension present flag, load intra quantizer matrix, intra quantizer matrix [64], load non-intra quantizer matrix, non-intra quantizer matrix [64], load chroma intra quantizer matrix, chroma intra quantizer matrix [64], load chroma non-intra quantizer matrix and chroma non-intra quantizer matrix [64].
These data elements are described below. The extension start code data element is a start code indicating the beginning of the quant matrix extension. The extension start code identifier data element is a code indicating which extension data follows. The quant matrix extension present flag data element is a flag indicating whether the quant matrix extension data elements are valid or invalid. The load intra quantizer matrix data element indicates whether quantizer matrix data for intra macroblocks exists. The intra quantizer matrix data element indicates the values of the quantizer matrix for intra macroblocks.
The load non-intra quantizer matrix data element indicates whether quantizer matrix data for non-intra macroblocks exists. The non-intra quantizer matrix data element indicates the values of the quantizer matrix for non-intra macroblocks. The load chroma intra quantizer matrix data element indicates whether quantizer matrix data for chroma intra macroblocks exists. The chroma intra quantizer matrix data element indicates the values of the quantizer matrix for chroma intra macroblocks. The load chroma non-intra quantizer matrix data element indicates whether quantizer matrix data for chroma non-intra macroblocks exists. The chroma non-intra quantizer matrix data element indicates the values of the quantizer matrix for chroma non-intra macroblocks.
The data elements defined by the copyright_extension() function are, as shown in Figure 57, the extension start code, extension start code identifier, copyright extension present flag, copyright flag, copyright identifier, original or copy, copyright number 1, copyright number 2 and copyright number 3.
These data elements are described below. The extension start code data element is a start code indicating the beginning of the copyright extension. The extension start code identifier data element is a code indicating which extension data follows. The copyright extension present flag data element is a flag indicating whether the copyright extension data elements are valid or invalid.
The copyright flag data element is a flag indicating whether a copyright has been given to the coded video data in the range up to the next copyright extension or the end of the sequence. The copyright identifier data element is data identifying the registration authority specified by ISO/IEC JTC1/SC29. The original or copy data element is a flag indicating whether the data of the bit stream is original data or copy data. The copyright number 1 data element represents bits 44 to 63 of the copyright number, the copyright number 2 data element represents bits 22 to 43 of the copyright number, and the copyright number 3 data element represents bits 0 to 21 of the copyright number.
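Since the three copyright-number fields above carry a single 64-bit number split across fixed bit ranges, reassembly is a matter of shifting each part into place. A minimal sketch (the helper name is ours, not the patent's):

```python
def assemble_copyright_number(number_1, number_2, number_3):
    # number_1 holds bits 44..63, number_2 holds bits 22..43 and
    # number_3 holds bits 0..21 of the 64-bit copyright number.
    return (number_1 << 44) | (number_2 << 22) | number_3
```
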
The data elements defined by the picture_display_extension() function are, as shown in Figure 58, the extension start code identifier, frame centre horizontal offset and frame centre vertical offset.
These data elements are described as follows. The extension start code identifier data element is a code indicating which extension data follows. The frame centre horizontal offset data element is the horizontal offset of the display area, and the value of this horizontal offset can be defined by the frame centre offset value. The frame centre vertical offset data element is the vertical offset of the display area, and the value of this vertical offset can likewise be defined by the frame centre offset value.
As shown in the variable-length history stream of Figure 47, the data elements defined by the picture_data() function are described as a history stream after the data elements defined by the extension_and_user_data(2) function.
As shown in Figure 59, the data elements defined by the picture_data() function are the data elements defined by the slice() function. Note that if a slice start code, i.e., the start code of the slice() function, does not exist in the bit stream, the data elements defined by the slice() function are not described in the bit stream.
As shown in Figure 60, the slice() function is a function used for describing, as a history stream, data elements such as the slice start code, slice quantizer scale code, intra slice flag, intra slice, reserved bits, extra bit slice and extra information slice, as well as the data elements defined by the macroblock() function.
These data elements are described below. The slice start code data element is a start code indicating the data elements defined by the slice() function. The slice quantizer scale code data element is the quantization step size defined for the macroblocks existing in the slice layer. Note, however, that when a quantizer scale code is set for a macroblock, the quantizer scale code set for the macroblock is used preferentially.
The intra slice flag data element is a flag indicating whether the intra slice and reserved bits exist in the bit stream. The intra slice data element is a flag indicating whether a non-intra macroblock exists in the slice layer. More specifically, if any macroblock in the slice layer is a non-intra macroblock, the value of the intra slice data element is "0"; if all macroblocks in the slice layer are intra macroblocks, the value of the intra slice data element is "1". The reserved bits data element is 7-bit data having the value "0". The extra bit slice data element is a flag indicating whether the extra information slice data element, i.e., information added as a history stream, exists. More specifically, if the next extra information slice data element exists, the value of the extra bit slice flag is "1"; if it does not exist, the value of the extra bit slice flag is "0".
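Read as a rule, the intra slice value depends only on whether every macroblock in the slice layer is intra-coded. A one-line sketch of that rule, under our reading of the text (the helper name is ours):

```python
def intra_slice_value(macroblocks_are_intra):
    # "1" only when every macroblock in the slice layer is an intra
    # macroblock; "0" as soon as any non-intra macroblock is present.
    return 1 if all(macroblocks_are_intra) else 0
```
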
After the data elements defined by the slice() function, the data elements defined by the macroblock() function are described as a history stream.
As shown in Figure 61, the macroblock() function is a function used for describing data elements such as the macroblock escape, macroblock address increment and macroblock quantizer scale code, as well as the data elements defined by the macroblock_mode() function and the macroblock_vectors(s) function.
These data elements are described below. The macroblock escape data element is a fixed-length bit string indicating whether the horizontal difference between the reference macroblock and the preceding macroblock is 34 or more. If the horizontal difference between the reference macroblock and the preceding macroblock is 34 or more, 33 is added to the value of the macroblock address increment data element. The macroblock address increment data element is the horizontal difference between the reference macroblock and the preceding macroblock. If one macroblock escape data element exists before the macroblock address increment data element, the value obtained by adding 33 to the value of the macroblock address increment data element represents the actual horizontal difference between the reference macroblock and the preceding macroblock.
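The escape mechanism above means a horizontal difference of 1 or more is coded as some number of macroblock escapes, each worth 33, followed by an increment in the range 1 to 33. A sketch of the encoder-side split, under that reading (the function name is ours):

```python
def split_address_difference(diff):
    # How many macroblock escapes (each adding 33) precede the
    # macroblock address increment for a horizontal difference `diff`.
    assert diff >= 1
    escapes, rem = divmod(diff - 1, 33)
    return escapes, rem + 1  # (escape count, increment in 1..33)
```

So a difference of 34 is coded as one escape plus an increment of 1, matching the add-33 rule in the text.
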
The macroblock quantizer scale code data element is the quantization step size set for each macroblock. The slice quantizer scale code data element, which indicates the quantization step size of the slice layer, is also set for each slice layer; however, the macroblock quantizer scale code set for a macroblock takes precedence over the slice quantizer scale code.
After the macroblock address increment data element, the data elements defined by the macroblock_mode() function are described. As shown in Figure 62, the macroblock_mode() function is a function used for describing, as a history stream, data elements such as the macroblock type, frame motion type, field motion type and dct type.
These data elements are described below. The macroblock type data element indicates the coding type of the macroblock. Specifically, the macroblock type data element is variable-length data generated from flags such as the macroblock quant, dct type flag, macroblock motion forward and macroblock motion backward flags shown in Figures 65 to 67. The macroblock quant flag is a flag indicating whether the macroblock quantizer scale code, which sets the quantization step size for the macroblock, is set. If the macroblock quantizer scale code exists in the bit stream, the value of the macroblock quant flag is "1".
The dct type flag is a flag indicating whether the reference macroblock has been encoded in the frame DCT mode or the field DCT mode; in other words, the dct type flag indicates whether the dct type exists for the reference macroblock. If the dct type exists in the bit stream, the value of the dct type flag is "1". The macroblock motion forward flag indicates whether the reference macroblock has undergone forward prediction; if the reference macroblock has undergone forward prediction, the value of the macroblock motion forward flag is "1". Similarly, the macroblock motion backward flag indicates whether the reference macroblock has undergone backward prediction; if the reference macroblock has undergone backward prediction, the value of the macroblock motion backward flag is "1".
If the value of the macroblock motion forward flag or the macroblock motion backward flag is "1", the picture is transmitted in the frame prediction mode, and the value of the frame pred frame dct flag is "0", a data element indicating the frame motion type is described after the data element indicating the macroblock type. Note that the frame pred frame dct flag is a flag indicating whether the frame motion type exists in the bit stream.
The frame motion type data element is a 2-bit code indicating the prediction type of the macroblock of a frame. A frame motion type of "00" indicates that there are two prediction vectors and the prediction type is field-based. A frame motion type of "01" indicates that there is one prediction vector and the prediction type is field-based. A frame motion type of "10" indicates that there is one prediction vector and the prediction type is frame-based. A frame motion type of "11" indicates that there is one prediction vector and the prediction type is dual prime.
If the value of the macroblock motion forward flag or the macroblock motion backward flag is "1" and the picture is not transmitted in the frame prediction mode, a data element indicating the field motion type is described after the data element indicating the macroblock type.
The field motion type data element is a 2-bit code indicating the motion prediction of the macroblock of a field. A field motion type of "01" indicates that there is one prediction vector and the prediction type is field-based. A field motion type of "10" indicates that there are two prediction vectors and the prediction type is 16 x 8 macroblock-based. A field motion type of "11" indicates that there is one prediction vector and the prediction type is dual prime.
If the picture is transmitted in the frame prediction mode, the frame pred frame dct flag indicates that the frame motion type exists in the bit stream, and the frame pred frame dct flag also indicates that the dct type exists in the bit stream, a data element indicating the dct type is described after the data element indicating the macroblock type. Note that the dct type data element indicates whether DCT is carried out in the frame DCT mode or in the field DCT mode.
As shown in Figure 61, if the reference macroblock is a forward prediction macroblock or an intra macroblock subjected to concealment processing, the data elements defined by the motion_vectors(0) function are described. If the reference macroblock is a backward prediction macroblock, the data elements defined by the motion_vectors(1) function are described. Note that the motion_vectors(0) function is a function used for describing the data elements related to the first motion vector, and the motion_vectors(1) function is a function used for describing the data elements related to the second motion vector.
As shown in Figure 63, the motion_vectors(s) function is a function used for describing data elements related to motion vectors.
If there is only one motion vector and the dual prime prediction mode is not used, the data elements defined by motion vertical field select [0][s] and motion_vector(0, s) are described.
The motion vertical field select [r][s] data element is a flag indicating whether the first vector, used as a forward or backward prediction vector, is a vector formed by referencing the bottom field or the top field. The subscript [r] indicates the first or second vector, and the subscript [s] indicates whether the vector is a forward or backward prediction vector.
As shown in Figure 64, the motion_vector(r, s) function is a function used for describing a data array related to motion code [r][s][t], a data array related to motion residual [r][s][t] and data indicating dmvector[t].
These data elements are described below. The motion code [r][s][t] data element is variable-length data representing the magnitude of the motion vector by a value in the range -16 to +16. The motion residual [r][s][t] data element is variable-length data representing the residual of the motion vector. Therefore, by using the values of motion code [r][s][t] and motion residual [r][s][t] together, a detailed motion vector can be described. The dmvector[t] data element is data used in the dual prime prediction mode to generate a motion vector in one of the top and bottom fields (for example, the top field) by scaling an existing motion vector with the time distance, and to carry out correction in the vertical direction so as to reflect the vertical shift between the lines of the top and bottom fields. The subscript [r] indicates the first or second vector, the subscript [s] indicates a forward or backward prediction vector, and the subscript [t] indicates whether the motion vector is a component in the horizontal or vertical direction.
First, the motion_vector(r, s) function describes, as the history stream shown in Figure 64, the data array indicating motion code [r][s][0] in the horizontal direction. The number of bits of motion residual [0][s][t] and motion residual [1][s][t] is indicated by f code [s][t]; thus, a value of f code [s][t] other than "1" indicates that motion residual [r][s][t] exists in the bit stream. The fact that motion residual [r][s][0], i.e., the horizontal component, is not "1" and motion code [r][s][0], i.e., the horizontal component, is not "0" indicates that the data element representing motion residual [r][s][0] is included in the bit stream and that the horizontal component of the motion vector exists. In this case, the data element representing motion residual [r][s][0], i.e., the horizontal component, is described.
Next, the data array representing motion residual [r][s][1] in the vertical direction is described as a history stream. Likewise, the number of bits of motion residual [0][s][t] and motion residual [1][s][t] is indicated by f code [s][t], and a value of f code [s][t] other than "1" indicates that motion residual [r][s][t] exists in the bit stream. The fact that motion residual [r][s][1], i.e., the vertical component, is not "1" and motion code [r][s][1], i.e., the vertical component, is not "0" indicates that the data element representing motion residual [r][s][1] is included in the bit stream and that the vertical component of the motion vector exists. In this case, the data element representing motion residual [r][s][1], i.e., the vertical component, is described.
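The text only names the motion code, motion residual and f code fields; under the usual MPEG-2 reconstruction rule (an assumption here, not spelled out by the patent), the vector delta is recovered from the coded pair as follows:

```python
def motion_delta(motion_code, motion_residual, f_code):
    # Sketch of the standard MPEG-2 reconstruction: f_code fixes how
    # many residual bits widen each motion_code step.
    if motion_code == 0:
        return 0
    r_size = f_code - 1
    f = 1 << r_size  # scaling factor implied by f_code
    magnitude = (abs(motion_code) - 1) * f + motion_residual + 1
    return magnitude if motion_code > 0 else -magnitude
```

With f code equal to 1 the residual contributes nothing and the delta equals the motion code itself; larger f codes widen the search range exactly as the f code [s][t] elements above describe.
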
It should be noted that, in the variable-length format, items of history information can be eliminated in order to reduce the transfer rate of the transmitted bits.
For example, in order to transmit the macroblock type and motion_vectors() but not the quantizer scale code, the slice quantizer scale code is set to "00000" so as to reduce the bit rate.
Similarly, in order to transmit only the macroblock type, quantizer scale code and dct type but not motion_vectors(), "not coded" is used as the macroblock type so as to reduce the bit rate.
Likewise, in order to transmit only the picture coding type and none of the information following slice(), a picture_data() function having no slice start code is used so as to reduce the bit rate.
As described above, in order to prevent 23 consecutive bits of 0 from appearing in the user data, a "1" bit is inserted every 22 bits. It should be noted, however, that a "1" bit may also be inserted at intervals of fewer than 22 bits. In addition, instead of inserting a "1" bit by counting the number of consecutive 0 bits, a "1" bit can be inserted by examining Byte_allign.
Furthermore, MPEG prohibits the generation of 23 consecutive bits of 0. In practice, however, only a sequence of 23 such bits starting at the beginning of a byte presents a problem; a sequence of 23 bits of 0 not starting at the beginning of a byte does not. Therefore, a "1" bit may generally be inserted every 24 bits at a position other than the LSB.
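The basic insertion rule above can be sketched directly: placing a "1" after every 22 payload bits bounds any run of zeros at 22, so a forbidden 23-zero start-code emulation can never form (bit strings stand in here for the actual bit-level I/O):

```python
def insert_marker_bits(payload_bits, interval=22):
    # Insert a "1" bit after every `interval` payload bits so that no
    # 23 consecutive "0" bits can appear in the output string.
    out = []
    for i, bit in enumerate(payload_bits, start=1):
        out.append(bit)
        if i % interval == 0:
            out.append('1')
    return ''.join(out)
```
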
In addition, while the history information is formed into a format close to a video elementary stream as described above, the history information may also be formed into a format close to a packetized elementary stream or a transport stream. Furthermore, although the user data of the elementary stream is placed in front of the picture data according to the above description, the user data may also be placed at another position.
It should be noted that the program to be executed by a computer for carrying out the processing described above can be presented to the user through network presentation media such as the Internet or a digital satellite, as well as through information recording media such as magnetic disks and CD-ROMs.
Claims (19)
1. An encoding device for carrying out an encoding process on image data obtained by a decoding process of decoding an encoded data stream, the device comprising:
receiving means for receiving a history quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale used when the encoded data stream was generated, and the image data;
selecting means for selecting an optimum quantization scale, from the history quantization scale and the current quantization scale received by said receiving means, as the quantization scale to be used in the encoding process carried out on the image data; and
encoding means for carrying out the encoding process on the image data by using the quantization scale selected by said selecting means.
2. The encoding device according to claim 1, further comprising:
quantization scale computing means for computing a quantization scale as a present quantization scale, the quantization scale being generated when the encoding process is carried out on the image data by said encoding means,
wherein said selecting means selects the optimum quantization scale, as the quantization scale to be used in the encoding process, from the history quantization scale and the current quantization scale received by said receiving means and the present quantization scale computed by said quantization scale computing means.
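The claims leave the selection rule itself open. One plausible reading, given the stated goal of avoiding fresh quantization noise across generations, is to reuse the smallest candidate scale that is at least as coarse as the scale the current rate control computes; the rule and names below are illustrative assumptions, not the patent's definition:

```python
def select_quantization_scale(history_scales, current_scale, present_scale):
    # Candidates: scales from past generations plus the one used when
    # the stream was generated.  Reuse the smallest candidate that is
    # not finer than the freshly computed present_scale; if none
    # qualifies, fall back to the present_scale itself.
    candidates = [s for s in list(history_scales) + [current_scale]
                  if s >= present_scale]
    return min(candidates) if candidates else present_scale
```

Reusing a past scale in this way means the re-encoder quantizes on the same grid as an earlier generation, which is the mechanism by which repeated decode/encode cycles avoid accumulating error.
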
3. The encoding device according to claim 1, wherein
said encoding means carries out an encoding process in which all pictures are I pictures.
4. The encoding device according to claim 1, wherein
said encoding means changes the bit rate or the GOP structure and carries out said encoding process.
5. The encoding device according to claim 4, wherein
said encoding means carries out the encoding process in accordance with an MPEG method, said MPEG method having a sequence layer, a GOP layer, a picture layer, a slice layer and a macroblock layer.
6. The encoding device according to claim 1, wherein
said encoded data stream is in a format produced by a process of encoding all pictures as I pictures.
7. The encoding device according to claim 1, further comprising:
transmitting means for transmitting the quantization scale selected by said selecting means and the encoded image data encoded by said encoding means.
8. The encoding device according to claim 7, wherein
said transmitting means describes the quantization scale selected by said selecting means in the encoded image data encoded by said encoding means, and transmits them together.
9. An encoding method for an encoding device, for carrying out an encoding process on image data obtained by a decoding process of decoding an encoded data stream, the method comprising:
a receiving step of receiving a history quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale used when the encoded data stream was generated, and the image data;
a selecting step of selecting an optimum quantization scale, from the history quantization scale and the current quantization scale received in the processing of the receiving step, as the quantization scale to be used in the encoding process carried out on the image data; and
an encoding step of carrying out the encoding process on the image data received in the processing of the receiving step by using the quantization scale selected in the processing of the selecting step.
10. An encoding device for carrying out an encoding process on image data obtained by a decoding process of decoding an encoded data stream, the device comprising:
receiving means for receiving a history quantization scale used in a past encoding process or decoding process of the encoded data stream, and the image data;
selecting means for selecting an optimum quantization scale, from the history quantization scale received by said receiving means and a present quantization scale computed by quantization scale computing means, as the quantization scale to be used in the encoding process carried out on the image data;
encoding means for carrying out the encoding process on the image data received by said receiving means by using the quantization scale selected by said selecting means; and
the quantization scale computing means for computing a quantization scale as the present quantization scale, the quantization scale being generated when the encoding process is carried out on the image data by said encoding means.
11. A decoding device for carrying out a decoding process of decoding an encoded data stream, the device comprising:
receiving means for receiving a history quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale computed when the encoded data stream was generated, and the encoded data stream;
decoding means for carrying out the decoding process on the encoded data stream received by said receiving means by using the current quantization scale received by said receiving means, so as to generate image data; and
transmitting means for transmitting the history quantization scale, the current quantization scale and the image data generated by said decoding means, so that, when an encoding process is to be carried out again on the image data generated by said decoding means, an optimum quantization scale is selected from the history quantization scale and the current quantization scale received by said receiving means as the quantization scale to be used in the re-encoding process of the image data.
12. The decoding device according to claim 11, wherein
said transmitting means multiplexes the history quantization scale and the current quantization scale received by said receiving means into the image data generated by said decoding means, and transmits them together.
13. The decoding device according to claim 11, wherein
the history quantization scale and the current quantization scale are described in the encoded data stream, and
said receiving means obtains the history quantization scale and the current quantization scale from the encoded data stream.
14. The decoding device according to claim 13, wherein
the history quantization scale and the current quantization scale are described in different areas of the encoded data stream.
15. The decoding device according to claim 11, wherein
the encoded data stream is in a format encoded in accordance with an MPEG method, said MPEG method having a sequence layer, a GOP layer, a picture layer, a slice layer and a macroblock layer.
16. The decoding device according to claim 11, wherein
the encoded data stream is in a format produced by a process of encoding all pictures as I pictures.
17. A decoding method for a decoding device, for carrying out a decoding process of decoding an encoded data stream, the method comprising the steps of:
receiving a history quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale computed when the encoded data stream was generated, and the encoded data stream;
carrying out the decoding process on the received encoded data stream by using the received current quantization scale, so as to generate image data; and
transmitting the history quantization scale, the current quantization scale and the image data, so that, when an encoding process is to be carried out again on the generated image data, an optimum quantization scale is selected from the received history quantization scale and current quantization scale as the quantization scale to be used in the re-encoding process of the image data.
18. A decoding apparatus for carrying out a decoding process of decoding an encoded data stream, the apparatus comprising:
Receiving means for receiving a historical quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale computed when the encoded data stream was generated, and the encoded data stream;
Decoding means for carrying out, using the current quantization scale received by said receiving means, the decoding process of the encoded data stream received by said receiving means, to generate image data; and
Transmitting means for transmitting the historical quantization scale and the current quantization scale, so that, when the image data generated by said decoding means is to be re-encoded, an optimum quantization scale can be selected from the historical quantization scale and the current quantization scale received by said receiving means, for use as the quantization scale in the re-encoding of the image data.
19. A decoding method for use in a decoding apparatus, for carrying out a decoding process of decoding an encoded data stream, the method comprising the steps of:
Receiving a historical quantization scale used in a past encoding process or decoding process of the encoded data stream, a current quantization scale computed when the encoded data stream was generated, and the encoded data stream;
Carrying out, using the received current quantization scale, the decoding process of the received encoded data stream; and
Transmitting the historical quantization scale and the current quantization scale, so that, when the generated image data is to be re-encoded, an optimum quantization scale can be selected from the historical quantization scale and the current quantization scale, for use as the quantization scale in the re-encoding of the image data.
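The claims above all turn on the decoder passing its quantization-scale history downstream together with the decoded image data, so that a re-encoder can select an "optimum quantization scale" from the historical and current scales. The claims fix neither the container nor the selection rule; the sketch below assumes both for illustration only (`DecodedPicture`, `select_q_scale` and `target` are hypothetical names, and the reuse-a-past-scale rule is one plausible reading of why reusing earlier scales avoids new quantization error):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DecodedPicture:
    """Decoded image data multiplexed with its quantization-scale history
    (hypothetical container; claims 11-12 require only that the history
    travels together with the image data, not this representation)."""
    pixels: bytes        # image data generated by the decoding means
    history: List[int]   # scales used in past encoding/decoding generations
    current: int         # scale computed when the stream was generated


def select_q_scale(pic: DecodedPicture, target: int) -> int:
    """Pick the quantization scale for re-encoding `pic`.

    Assumed rule (the claims say only "optimum"): re-quantizing with a scale
    already used in a past generation, and no finer than what rate control
    now wants (`target`), adds no fresh quantization error -- so prefer the
    smallest such candidate; otherwise fall back to the rate-control target.
    """
    candidates = [q for q in set(pic.history) | {pic.current} if q >= target]
    return min(candidates) if candidates else target


pic = DecodedPicture(pixels=b"\x00" * 16, history=[8, 16], current=12)
assert select_q_scale(pic, target=10) == 12  # reuse the scale nearest the target
assert select_q_scale(pic, target=20) == 20  # no coarse-enough history: use target
```

Under an assumption like this one, a coefficient that was quantized at scale q in an earlier generation is reproduced exactly when re-quantized at q, which is the effect stated in the abstract: picture quality does not deteriorate even when decoding and encoding are carried out repeatedly.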
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP058118/98 | 1998-03-10 | ||
JP5811898 | 1998-03-10 | ||
JP157243/98 | 1998-06-05 | ||
JP15724398 | 1998-06-05 |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNA2004100881706A Division CN1599463A (en) | 1998-03-10 | 1999-03-10 | Transcoding system using encoding history information |
CNB991076648A Division CN1178516C (en) | 1998-03-10 | 1999-03-10 | Transcoding system using encoding history information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1882093A CN1882093A (en) | 2006-12-20 |
CN1882093B true CN1882093B (en) | 2011-01-12 |
Family
ID=37520034
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200610099782 Expired - Fee Related CN1882093B (en) | 1998-03-10 | 1999-03-10 | Transcoding system using encoding history information |
CN 200610099781 Expired - Fee Related CN1882092B (en) | 1998-03-10 | 1999-03-10 | Transcoding system using encoding history information |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN 200610099781 Expired - Fee Related CN1882092B (en) | 1998-03-10 | 1999-03-10 | Transcoding system using encoding history information |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN1882093B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8498330B2 (en) * | 2009-06-29 | 2013-07-30 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method and apparatus for coding mode selection |
JP5718453B2 (en) | 2010-04-13 | 2015-05-13 | Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung e.V. | Decoding method
BR122020007922B1 (en) | 2010-04-13 | 2021-08-31 | Ge Video Compression, Llc | INTERPLANE PREDICTION |
KR102333225B1 (en) | 2010-04-13 | 2021-12-02 | 지이 비디오 컴프레션, 엘엘씨 | Inheritance in sample array multitree subdivision |
CN105933715B (en) * | 2010-04-13 | 2019-04-12 | Ge视频压缩有限责任公司 | Across planar prediction |
CN105120287B (en) | 2010-04-13 | 2019-05-17 | Ge 视频压缩有限责任公司 | Decoder, encoder and the method for decoding and encoding |
KR102273670B1 (en) * | 2014-11-28 | 2021-07-05 | 삼성전자주식회사 | Data processing system modifying a motion compensation information, and method for decoding video data including the same |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1996024222A1 (en) * | 1995-01-30 | 1996-08-08 | Snell & Wilcox Limited | Video signal processing |
WO1996025823A2 (en) * | 1995-02-15 | 1996-08-22 | Philips Electronics N.V. | Method and device for transcoding video signals |
US5657086A (en) * | 1993-03-31 | 1997-08-12 | Sony Corporation | High efficiency encoding of picture signals |
US5675379A (en) * | 1994-08-31 | 1997-10-07 | Sony Corporation | Method and apparatus for encoding moving picture signals and recording medium for recording moving picture signals |
1999
- 1999-03-10 CN CN 200610099782 patent/CN1882093B/en not_active Expired - Fee Related
- 1999-03-10 CN CN 200610099781 patent/CN1882092B/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN1882092B (en) | 2012-07-18 |
CN1882092A (en) | 2006-12-20 |
CN1882093A (en) | 2006-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN1332565C (en) | Transcoding system using encoding history information | |
KR100571307B1 (en) | Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method | |
JP3724205B2 (en) | Decoding device and method, and recording medium | |
CN1882093B (en) | Transcoding system using encoding history information | |
JP3724203B2 (en) | Encoding apparatus and method, and recording medium | |
JP3724204B2 (en) | Encoding apparatus and method, and recording medium | |
JP3890838B2 (en) | Encoded stream conversion apparatus, encoded stream conversion method, and recording medium | |
JP4016290B2 (en) | Stream conversion device, stream conversion method, encoding device, encoding method, and recording medium | |
JP4139983B2 (en) | Encoded stream conversion apparatus, encoded stream conversion method, stream output apparatus, and stream output method | |
JP4539637B2 (en) | Stream recording apparatus and stream recording method, stream reproduction apparatus and stream reproduction method, stream transmission apparatus and stream transmission method, and program storage medium | |
JP4016294B2 (en) | Encoding apparatus and encoding method, stream conversion apparatus and stream conversion method, and recording medium | |
JP4543321B2 (en) | Playback apparatus and method | |
JP4478630B2 (en) | Decoding device, decoding method, program, and recording medium | |
JP4482811B2 (en) | Recording apparatus and method | |
JP3724202B2 (en) | Image data processing apparatus and method, and recording medium | |
JP4016349B2 (en) | Stream conversion apparatus, stream conversion method, and recording medium | |
JP4016293B2 (en) | Encoding apparatus, encoding method, and recording medium | |
JP2000059770A (en) | Data transmitter and its transmitting method, and providing medium | |
JP4016347B2 (en) | Stream conversion apparatus, stream conversion method, and recording medium | |
JP2000232646A (en) | Method and device for transmitting stream and served medium | |
JP2007124703A (en) | Decoder and decoding method, transmitter and transmitting method, and recording medium | |
JP2000232647A (en) | Coder, its method and served medium | |
JP2007124704A (en) | Decoder and decoding method, transmitter and transmitting method, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20110112; Termination date: 20150310 |
EXPY | Termination of patent right or utility model ||