US20050053145A1 - Macroblock information signaling for interlaced frames - Google Patents
- Publication number
- US20050053145A1 (application US10/934,929)
- Authority
- US
- United States
- Prior art keywords
- macroblock
- frame
- field
- interlaced
- motion vector
- Prior art date
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- All classifications fall under H—ELECTRICITY; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals. The assigned subgroups are:
- H04N19/93—Run-length coding
- H04N19/102—Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/112—Selection of coding mode or of prediction mode according to a given display mode, e.g. for interlaced or progressive display mode
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/16—Assigned coding mode for a given display mode, e.g. for interlaced or progressive display mode
- H04N19/172—Coding unit being an image region, the region being a picture, frame or field
- H04N19/176—Coding unit being an image region, the region being a block, e.g. a macroblock
- H04N19/18—Coding unit being a set of transform coefficients
- H04N19/184—Coding unit being bits, e.g. of the compressed video stream
- H04N19/186—Coding unit being a colour or a chrominance component
- H04N19/196—Adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/463—Embedding additional information by compressing encoding parameters before transmission
- H04N19/51—Motion estimation or motion compensation
- H04N19/52—Processing of motion vectors by predictive encoding
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/82—Filtering within a prediction loop
- H04N19/86—Pre- or post-processing involving reduction of coding artifacts, e.g. of blockiness
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/63—Transform coding using sub-band based transform, e.g. wavelets
- H04N19/70—Characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- an encoder signals macroblock mode information for macroblocks in an interlaced frame coded picture.
- a decoder performs corresponding decoding.
- a typical raw digital video sequence includes 15 or 30 pictures per second. Each picture can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits or more. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
- compression also called coding or encoding
- Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video.
- compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
- video compression techniques include “intra” compression and “inter” or predictive compression.
- Intra compression techniques compress individual pictures, typically called I-frames or key frames.
- Inter compression techniques compress frames with reference to preceding and/or following frames, and inter-compressed frames are typically called predicted frames, P-frames, or B-frames.
- Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder.
- the WMV8 encoder uses intra and inter compression
- the WMV8 decoder uses intra and inter decompression.
- Windows Media Video, Version 9 [“WMV9”] uses a similar architecture for many operations.
- FIGS. 1 and 2 illustrate the block-based inter compression for a predicted frame in the WMV8 encoder.
- FIG. 1 illustrates motion estimation for a predicted frame 110
- FIG. 2 illustrates compression of a prediction residual for a motion-compensated block of a predicted frame.
- the WMV8 encoder computes a motion vector for a macroblock 115 in the predicted frame 110 .
- the encoder searches in a search area 135 of a reference frame 130 .
- the encoder compares the macroblock 115 from the predicted frame 110 to various candidate macroblocks in order to find a candidate macroblock that is a good match.
- the encoder outputs information specifying the motion vector (entropy coded) for the matching macroblock.
- compression of the data used to transmit the motion vector information can be achieved by selecting a motion vector predictor based upon motion vectors of neighboring macroblocks and predicting the motion vector for the current macroblock using the motion vector predictor.
- the encoder can encode the differential between the motion vector and the predictor. After reconstructing the motion vector by adding the differential to the predictor, a decoder uses the motion vector to compute a prediction macroblock for the macroblock 115 using information from the reference frame 130 , which is a previously reconstructed frame available at the encoder and the decoder.
- the prediction is rarely perfect, so the encoder usually encodes blocks of pixel differences (also called the error or residual blocks) between the prediction macroblock and the macroblock 115 itself.
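To make the differential coding of motion vectors concrete, the sketch below shows the encode/decode symmetry described above. It is a minimal illustration, not the WMV8 syntax; the function and type names are placeholders, and the predictor is assumed to have already been derived from neighboring motion vectors.

```c
#include <stdio.h>

typedef struct { int x, y; } MV;

/* Encoder side: only the difference between the actual motion vector and
   the predictor is entropy coded. */
static MV encode_mv_differential(MV mv, MV predictor) {
    MV diff = { mv.x - predictor.x, mv.y - predictor.y };
    return diff;
}

/* Decoder side: the motion vector is rebuilt from the same predictor. */
static MV decode_mv(MV diff, MV predictor) {
    MV mv = { predictor.x + diff.x, predictor.y + diff.y };
    return mv;
}

int main(void) {
    MV predictor = { 4, -2 };   /* derived from neighboring macroblocks */
    MV actual    = { 5, -2 };
    MV diff = encode_mv_differential(actual, predictor);
    MV rec  = decode_mv(diff, predictor);
    printf("differential=(%d,%d) reconstructed=(%d,%d)\n",
           diff.x, diff.y, rec.x, rec.y);
    return 0;
}
```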
- FIG. 2 illustrates an example of computation and encoding of an error block 235 in the WMV8 encoder.
- the error block 235 is the difference between the predicted block 215 and the original current block 225 .
- the encoder applies a discrete cosine transform [“DCT”] 240 to the error block 235, resulting in an 8×8 block 245 of coefficients.
- the encoder then quantizes 250 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 255.
- the encoder scans 260 the 8×8 block 255 into a one-dimensional array 265 such that coefficients are generally ordered from lowest frequency to highest frequency.
- the encoder entropy encodes the scanned coefficients using a variation of run length coding 270 .
- the encoder selects an entropy code from one or more run/level/last tables 275 and outputs the entropy code.
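The transform/quantize/scan/run-length chain of FIG. 2 can be outlined as below. This is only a sketch under simplifying assumptions: the DCT step is left as a placeholder, a uniform quantizer step size stands in for the real quantizer, and the run/level symbols are printed rather than looked up in run/level/last tables.

```c
#include <stdio.h>

#define N 8

/* Build a zig-zag scan order for an NxN block (lowest to highest frequency). */
static void build_zigzag(int order[N * N]) {
    int idx = 0;
    for (int s = 0; s < 2 * N - 1; s++) {          /* anti-diagonals */
        if (s % 2 == 0) {                          /* walk up and to the right */
            for (int r = (s < N ? s : N - 1); r >= 0 && s - r < N; r--)
                order[idx++] = r * N + (s - r);
        } else {                                   /* walk down and to the left */
            for (int c = (s < N ? s : N - 1); c >= 0 && s - c < N; c--)
                order[idx++] = (s - c) * N + c;
        }
    }
}

int main(void) {
    int error[N * N], coeff[N * N], quant[N * N], order[N * N];
    for (int i = 0; i < N * N; i++) error[i] = (i < 3) ? 12 - 4 * i : 0;

    /* 1. Frequency transform (placeholder: a real encoder applies an 8x8 DCT here). */
    for (int i = 0; i < N * N; i++) coeff[i] = error[i];

    /* 2. Quantization with a uniform step size (simplified). */
    const int qstep = 4;
    for (int i = 0; i < N * N; i++) quant[i] = coeff[i] / qstep;

    /* 3. Zig-zag scan into a 1-D array ordered from low to high frequency,
       then 4. emit run/level symbols for run-length and entropy coding. */
    build_zigzag(order);
    int run = 0;
    for (int i = 0; i < N * N; i++) {
        int level = quant[order[i]];
        if (level == 0) { run++; continue; }
        printf("run=%d level=%d\n", run, level);   /* a real coder also signals 'last' */
        run = 0;
    }
    return 0;
}
```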
- FIG. 3 shows an example of a corresponding decoding process 300 for an inter-coded block.
- a decoder decodes ( 310 , 320 ) entropy-coded information representing a prediction residual using variable length decoding 310 with one or more run/level/last tables 315 and run length decoding 320 .
- the decoder inverse scans 330 a one-dimensional array 325 storing the entropy-decoded information into a two-dimensional block 335 .
- the decoder inverse quantizes and inverse discrete cosine transforms (together, 340 ) the data, resulting in a reconstructed error block 345 .
- the decoder computes a predicted block 365 using motion vector information 355 for displacement from a reference frame.
- the decoder combines 370 the predicted block 365 with the reconstructed error block 345 to form the reconstructed block 375 .
- the amount of change between the original and reconstructed frames is the distortion and the number of bits required to code the frame indicates the rate for the frame.
- the amount of distortion is roughly inversely proportional to the rate.
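One common way to express this trade-off in an encoder's mode decisions (a generic formulation, not something stated by this document) is a Lagrangian cost combining distortion and rate:

```c
/* Hypothetical mode-decision cost: smaller is better. 'lambda' controls the
   trade-off between distortion D and rate R (in bits). */
double rd_cost(double distortion, double rate_bits, double lambda) {
    return distortion + lambda * rate_bits;
}
```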
- a video frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame.
- a progressive I-frame is an intra-coded progressive video frame.
- a progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
- a typical interlaced video frame consists of two fields scanned starting at different times.
- an interlaced video frame 400 includes top field 410 and bottom field 420 .
- the even-numbered lines (top field) are scanned starting at one time (e.g., time t) and the odd-numbered lines (bottom field) are scanned starting at a different (typically later) time (e.g., time t+1).
- This timing can create jagged tooth-like features in regions of an interlaced video frame where motion is present because the two fields are scanned starting at different times. For this reason, interlaced video frames can be rearranged according to a field structure, with the odd lines grouped together in one field, and the even lines grouped together in another field.
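A minimal sketch of that field rearrangement, assuming an 8-bit single-plane frame buffer with the even lines belonging to the top field and the odd lines to the bottom field; names and buffer layout are illustrative only.

```c
#include <string.h>

/* Split an interlaced frame (line 0 in the top field, line 1 in the bottom
   field, and so on) into two field pictures of half the height. */
void split_fields(const unsigned char *frame, int width, int height,
                  unsigned char *top_field, unsigned char *bottom_field) {
    for (int y = 0; y < height; y++) {
        unsigned char *dst = (y % 2 == 0)
            ? top_field + (y / 2) * width
            : bottom_field + (y / 2) * width;
        memcpy(dst, frame + (size_t)y * width, (size_t)width);
    }
}
```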
- a typical progressive video frame consists of one frame of content with non-alternating lines. In contrast to interlaced video, progressive video does not divide video frames into separate fields, and an entire frame is scanned left to right, top to bottom starting at a single time.
- the encoder and decoder use progressive and interlace coding and decoding in P-frames.
- a motion vector is encoded in the encoder by computing a differential between the motion vector and a motion vector predictor, which is computed based on neighboring motion vectors.
- the motion vector is reconstructed by adding the motion vector differential to the motion vector predictor, which is again computed (this time in the decoder) based on neighboring motion vectors.
- a motion vector predictor for the current macroblock or field of the current macroblock is selected based on the candidates, and a motion vector differential is calculated based on the motion vector predictor.
- the motion vector can be reconstructed by adding the motion vector differential to the selected motion vector predictor at either the encoder or the decoder side.
- luminance motion vectors are reconstructed from the encoded motion information
- chrominance motion vectors are derived from the reconstructed luminance motion vectors.
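As an illustration of deriving chrominance motion vectors from reconstructed luminance motion vectors, the sketch below simply halves the components to account for the half-resolution chroma planes of a 4:2:0 macroblock. The exact scaling and rounding rule is codec-specific and is not taken from this document.

```c
typedef struct { int x, y; } MV;

/* Derive a chroma motion vector from a reconstructed luma motion vector for
   4:2:0 video. Truncating division is used here for simplicity; a real
   codec applies its own rounding rule. */
MV derive_chroma_mv(MV luma_mv) {
    MV chroma = { luma_mv.x / 2, luma_mv.y / 2 };
    return chroma;
}
```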
- progressive P-frames can contain macroblocks encoded in one motion vector (1MV) mode or in four motion vector (4MV) mode, or skipped macroblocks, with a decision generally made on a macroblock-by-macroblock basis.
- P-frames with only 1MV macroblocks (and, potentially, skipped macroblocks) are referred to as 1MV P-frames
- P-frames with both 1MV and 4MV macroblocks (and, potentially, skipped macroblocks) are referred to as Mixed-MV P-frames.
- One luma motion vector is associated with each 1MV macroblock, and up to four luma motion vectors are associated with each 4MV macroblock (one for each block).
- FIGS. 5A and 5B are diagrams showing the locations of macroblocks considered for candidate motion vector predictors for a macroblock in a 1MV progressive P-frame.
- the candidate predictors are taken from the left, top and top-right macroblocks, except in the case where the macroblock is the last macroblock in the row.
- Predictor B is taken from the top-left macroblock instead of the top-right.
- the predictor is always Predictor A (the top predictor).
- for macroblocks in the top row, where Predictors A and B are unavailable, the predictor is Predictor C (the left predictor).
- Various other rules address other special cases such as intra-coded predictors.
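A sketch of gathering the candidate predictors for a 1MV macroblock under the neighbor rules listed above. The array layout, availability handling, and function names are assumptions for illustration; intra-coded neighbors and other special cases are ignored.

```c
typedef struct { int x, y; } MV;

/* Gather candidate motion-vector predictors A (top), B (top-right, or
   top-left for the last macroblock in the row), and C (left) for a 1MV
   macroblock. 'mvs' is a row-major array of per-macroblock motion vectors. */
int collect_1mv_candidates(const MV *mvs, int mb_cols, int mb_row, int mb_col,
                           MV cand[3]) {
    int n = 0;
    if (mb_row > 0)                                   /* A: top neighbor */
        cand[n++] = mvs[(mb_row - 1) * mb_cols + mb_col];
    if (mb_row > 0 && mb_cols > 1) {                  /* B: top-right / top-left */
        int col_b = (mb_col == mb_cols - 1) ? mb_col - 1 : mb_col + 1;
        if (col_b >= 0)
            cand[n++] = mvs[(mb_row - 1) * mb_cols + col_b];
    }
    if (mb_col > 0)                                   /* C: left neighbor */
        cand[n++] = mvs[mb_row * mb_cols + mb_col - 1];
    return n;   /* for a one-macroblock-wide frame only A is collected */
}
```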
- FIGS. 6A-10 show the locations of the blocks or macroblocks considered for the up-to-three candidate motion vectors for a motion vector for a 1MV or 4MV macroblock in a Mixed-MV frame.
- the larger squares are macroblock boundaries and the smaller squares are block boundaries.
- the predictor is always Predictor A (the top predictor).
- Various other rules address other special cases such as top row blocks for top row 4MV macroblocks, top row 1MV macroblocks, and intra-coded predictors.
- FIGS. 6A and 6B are diagrams showing locations of blocks considered for candidate motion vector predictors for a 1MV current macroblock in a Mixed-MV frame.
- the neighboring macroblocks may be 1MV or 4MV macroblocks.
- FIGS. 6A and 6B show the locations for the candidate motion vectors assuming the neighbors are 4MV (i.e., predictor A is the motion vector for block 2 in the macroblock above the current macroblock, and predictor C is the motion vector for block 1 in the macroblock immediately to the left of the current macroblock). If any of the neighbors is a 1MV macroblock, then the motion vector predictor shown in FIGS. 5A and 5B is taken to be the motion vector predictor for the entire macroblock. As FIG. 6B shows, if the macroblock is the last macroblock in the row, then Predictor B is from block 3 of the top-left macroblock instead of from block 2 in the top-right macroblock as is the case otherwise.
- FIGS. 7A-10 show the locations of blocks considered for candidate motion vector predictors for each of the 4 luminance blocks in a 4MV macroblock.
- FIGS. 7A and 7B are diagrams showing the locations of blocks considered for candidate motion vector predictors for a block at position 0 ;
- FIGS. 8A and 8B are diagrams showing the locations of blocks considered for candidate motion vector predictors for a block at position 1 ;
- FIG. 9 is a diagram showing the locations of blocks considered for candidate motion vector predictors for a block at position 2 ;
- FIG. 10 is a diagram showing the locations of blocks considered for candidate motion vector predictors for a block at position 3 .
- the motion vector predictor for the macroblock is used for the blocks of the macroblock.
- if the current macroblock is the first macroblock in the row, Predictor B for block 0 is handled differently than for the remaining macroblocks in the row (see FIGS. 7A and 7B). In this case, Predictor B is taken from block 3 in the macroblock immediately above the current macroblock instead of from block 3 in the macroblock above and to the left of the current macroblock, as is the case otherwise.
- similarly, if the current macroblock is the last macroblock in the row, Predictor B for block 1 is handled differently (FIGS. 8A and 8B). In this case, the predictor is taken from block 2 in the macroblock immediately above the current macroblock instead of from block 2 in the macroblock above and to the right of the current macroblock, as is the case otherwise.
- for macroblocks in the first macroblock column, Predictor C for blocks 0 and 2 is set equal to 0.
- the encoder and decoder use a 4:1:1 macroblock format for interlaced P-frames, which can contain macroblocks encoded in field mode or in frame mode, or skipped macroblocks, with a decision generally made on a macroblock-by-macroblock basis.
- Two motion vectors are associated with each field-coded macroblock (one motion vector per field), and one motion vector is associated with each frame-coded macroblock.
- An encoder jointly encodes motion information, including horizontal and vertical motion vector differential components, potentially along with other signaling information.
- FIG. 11 shows examples of candidate predictors for motion vector prediction for frame-coded 4:1:1 macroblocks, and FIGS. 12 and 13 show examples for field-coded 4:1:1 macroblocks, in interlaced P-frames in the encoder and decoder.
- FIG. 11 shows candidate predictors A, B and C for a current frame-coded 4:1:1 macroblock in an interior position in an interlaced P-frame (not the first or last macroblock in a macroblock row, not in the top row).
- Predictors can be obtained from different candidate directions other than those labeled A, B, and C (e.g., in special cases such as when the current macroblock is the first macroblock or last macroblock in a row, or in the top row, since certain predictors are unavailable for such cases).
- predictor candidates are calculated differently depending on whether the neighboring macroblocks are field-coded or frame-coded.
- if the neighboring macroblock is frame-coded, its motion vector is simply taken as the predictor candidate.
- if the neighboring macroblock is field-coded, the candidate motion vector is determined by averaging its top and bottom field motion vectors.
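The sketch below captures that distinction for a single neighboring macroblock: a frame-coded neighbor contributes its motion vector directly, while a field-coded neighbor contributes the average of its two field motion vectors. The struct layout and the truncating average are illustrative assumptions; the actual rounding is codec-specific.

```c
typedef struct { int x, y; } MV;
typedef struct {
    int field_coded;       /* nonzero if the neighbor carries two field MVs */
    MV frame_mv;           /* valid when the neighbor is frame-coded */
    MV top_mv, bottom_mv;  /* valid when the neighbor is field-coded */
} NeighborMB;

/* Candidate predictor taken from one neighboring macroblock. */
MV frame_candidate_from_neighbor(const NeighborMB *nb) {
    if (!nb->field_coded)
        return nb->frame_mv;
    MV avg = { (nb->top_mv.x + nb->bottom_mv.x) / 2,
               (nb->top_mv.y + nb->bottom_mv.y) / 2 };
    return avg;
}
```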
- FIGS. 12 and 13 show candidate predictors A, B and C for a current field in a field-coded 4:1:1 macroblock in an interior position in the field.
- the current field is a bottom field, and the bottom field motion vectors in the neighboring macroblocks are used as candidate predictors.
- the current field is a top field, and the top field motion vectors in the neighboring macroblocks are used as candidate predictors.
- the number of motion vector predictor candidates for each field is at most three, with each candidate coming from the same field type (e.g., top or bottom) as the current field.
- various special cases apply when the current macroblock is the first macroblock or last macroblock in a row, or in the top row, since certain predictors are unavailable for such cases.
- given a set of candidate motion vector predictors, the encoder and decoder select a motion vector predictor using a selection algorithm such as a median-of-three algorithm.
- a procedure for median-of-three prediction is described in pseudo-code 1400 in FIG. 14 .
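The pseudo-code of FIG. 14 is not reproduced here, but a median-of-three selection can be sketched in the standard way, applied independently to the horizontal and vertical components of the three candidate predictors:

```c
/* Median of three integers (the standard formulation). */
int median3(int a, int b, int c) {
    if (a > b) { int t = a; a = b; b = t; }   /* ensure a <= b */
    if (b > c) b = c;                         /* b = min(max of first two, c) */
    return (a > b) ? a : b;                   /* median */
}

typedef struct { int x, y; } MV;

/* Component-wise median of three candidate motion vector predictors. */
MV median3_mv(MV a, MV b, MV c) {
    MV m = { median3(a.x, b.x, c.x), median3(a.y, b.y, c.y) };
    return m;
}
```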
- the encoder and decoder use progressive and interlaced B-frames.
- B-frames use two frames from the source video as reference (or anchor) frames rather than the one anchor used in P-frames.
- of the two anchor frames for a typical B-frame, one anchor frame is from the temporal past and one anchor frame is from the temporal future.
- a B-frame 1510 in a video sequence has a temporally previous reference frame 1520 and a temporally future reference frame 1530 .
- Encoded bit streams with B-frames typically use fewer bits than encoded bit streams with no B-frames, while providing similar visual quality.
- a decoder also can accommodate space and time restrictions by opting not to decode or display B-frames, since B-frames are not generally used as reference frames.
- macroblocks in forward-predicted frames have only one directional mode of prediction (forward, from previous I- or P-frames)
- macroblocks in B-frames can be predicted using five different prediction modes: forward, backward, direct, interpolated and intra.
- the encoder selects and signals different prediction modes in the bit stream.
- Forward mode is similar to conventional P-frame prediction.
- in forward mode, a macroblock is derived from a temporally previous anchor.
- in backward mode, a macroblock is derived from a temporally subsequent anchor.
- Macroblocks predicted in direct or interpolated modes use both forward and backward anchors for prediction.
- macroblocks in interlaced P-frames can be one of three possible types: frame-coded, field-coded and skipped.
- the macroblock type is indicated by a multi-element combination of frame-level and macroblock-level syntax elements.
- INTRLCMB is a bitplane-coded array that indicates the field/frame coding status for each macroblock in the picture.
- the decoded bitplane represents the interlaced status for each macroblock as an array of 1-bit values. A value of 0 for a particular bit indicates that a corresponding macroblock is coded in frame mode. A value of 1 indicates that the corresponding macroblock is coded in field mode.
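Interpreting the decoded bitplane is then a per-macroblock lookup, as sketched below. The bitplane decoding itself (e.g., the Norm-6/Diff-6 modes mentioned later) is not shown, and the raster-order byte-per-macroblock layout is an assumption for illustration.

```c
typedef enum { MB_FRAME_CODED = 0, MB_FIELD_CODED = 1 } MbFieldFrame;

/* One decoded bit per macroblock in raster order:
   0 = frame mode, 1 = field mode. */
MbFieldFrame mb_field_frame_mode(const unsigned char *decoded_bitplane,
                                 int mb_cols, int mb_row, int mb_col) {
    return decoded_bitplane[mb_row * mb_cols + mb_col]
               ? MB_FIELD_CODED : MB_FRAME_CODED;
}
```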
- the macroblock-level MVDATA element is associated with all blocks in the macroblock. MVDATA signals whether the blocks in the macroblock are intra-coded or inter-coded. If they are inter-coded, MVDATA also indicates the motion vector differential.
- a TOPMVDATA element is associated with the top field blocks in the macroblock, and a BOTMVDATA element is associated with the bottom field blocks in the macroblock.
- TOPMVDATA and BOTMVDATA are sent at the first block of each field.
- TOPMVDATA indicates whether the top field blocks are intra-coded or inter-coded.
- BOTMVDATA indicates whether the bottom field blocks are intra-coded or inter-coded.
- TOPMVDATA and BOTMVDATA also indicate motion vector differential information.
- the CBPCY element indicates coded block pattern (CBP) information for luminance and chrominance components in a macroblock.
- CBPCY element also indicates which fields have motion vector data elements present in the bitstream.
- CBPCY and the motion vector data elements are used to specify whether blocks have AC coefficients.
- CBPCY is present for a frame-coded macroblock of an interlaced P-frame if the “last” value decoded from MVDATA indicates that there are data following the motion vector to decode. If CBPCY is present, it decodes to a 6-bit field, one bit for each of the four Y blocks, one bit for both U blocks (top field and bottom field), and one bit for both V blocks (top field and bottom field).
- CBPCY is always present for a field-coded macroblock.
- CBPCY and the two field motion vector data elements are used to determine the presence of AC coefficients in the blocks of the macroblock.
- for bit positions 1, 3, 4 and 5, the meaning of CBPCY is the same as for frame-coded macroblocks. That is, these bits indicate the presence or absence of AC coefficients in the right top field Y block, right bottom field Y block, top/bottom U blocks, and top/bottom V blocks, respectively.
- for bit positions 0 and 2, the meaning is slightly different.
- a 0 in bit position 0 indicates that TOPMVDATA is not present and the motion vector predictor is used as the motion vector for the top field blocks. It also indicates that the left top field block does not contain any nonzero coefficients.
- a 1 in bit position 0 indicates that TOPMVDATA is present.
- TOPMVDATA indicates whether the top field blocks are inter or intra and, if they are inter, also indicates the motion vector differential. If the “last” value decoded from TOPMVDATA decodes to 1, then no AC coefficients are present for the left top field block, otherwise, there are nonzero AC coefficients for the left top field block. Similarly, the above rules apply to bit position 2 for BOTMVDATA and the left bottom field block.
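The per-bit meaning for a field-coded macroblock can be summarized in a small parser, as sketched below. The bit-position numbering follows the description above, but whether position 0 is the most- or least-significant of the six decoded bits is a bitstream detail assumed here for illustration; the coefficient status of the left field blocks additionally depends on the "last" values in TOPMVDATA/BOTMVDATA.

```c
#include <stdbool.h>

/* Assumption: bit position 0 is the most significant of the six CBPCY bits. */
static bool cbpcy_bit(unsigned cbpcy, int pos) {
    return (cbpcy >> (5 - pos)) & 1u;
}

typedef struct {
    bool top_mvdata_present;      /* bit 0: TOPMVDATA coded for the top field    */
    bool right_top_y_has_ac;      /* bit 1: right top field Y block              */
    bool bot_mvdata_present;      /* bit 2: BOTMVDATA coded for the bottom field */
    bool right_bottom_y_has_ac;   /* bit 3: right bottom field Y block           */
    bool u_blocks_have_ac;        /* bit 4: top/bottom field U blocks            */
    bool v_blocks_have_ac;        /* bit 5: top/bottom field V blocks            */
} FieldCbpcy;

FieldCbpcy parse_field_cbpcy(unsigned cbpcy) {
    FieldCbpcy r;
    r.top_mvdata_present    = cbpcy_bit(cbpcy, 0);
    r.right_top_y_has_ac    = cbpcy_bit(cbpcy, 1);
    r.bot_mvdata_present    = cbpcy_bit(cbpcy, 2);
    r.right_bottom_y_has_ac = cbpcy_bit(cbpcy, 3);
    r.u_blocks_have_ac      = cbpcy_bit(cbpcy, 4);
    r.v_blocks_have_ac      = cbpcy_bit(cbpcy, 5);
    return r;
}
```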
- the encoder and decoder use skipped macroblocks to reduce bitrate. For example, the encoder signals skipped macroblocks in the bitstream.
- when the decoder receives information (e.g., a skipped macroblock flag) in the bitstream indicating that a macroblock is skipped, the decoder skips decoding residual block information for the macroblock. Instead, the decoder uses corresponding pixel data from a co-located or motion compensated (with a motion vector predictor) macroblock in a reference frame to reconstruct the macroblock.
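Reconstruction of a skipped macroblock then reduces to copying (motion-compensated) reference pixels, as in the sketch below. Full-pel copying of a single 16×16 luma block is shown for brevity; sub-pel interpolation, chroma planes, and boundary clipping are omitted, and the names are illustrative.

```c
#include <string.h>

/* Reconstruct a skipped macroblock: no residual is decoded; the pixels come
   from the (motion-compensated) reference frame. */
void reconstruct_skipped_mb(const unsigned char *ref, int stride,
                            int mb_x, int mb_y, int mv_x, int mv_y,
                            unsigned char *dst /* 16x16 output block */) {
    const unsigned char *src = ref + (mb_y * 16 + mv_y) * stride
                                   + (mb_x * 16 + mv_x);
    for (int row = 0; row < 16; row++)
        memcpy(dst + row * 16, src + row * stride, 16);
}
```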
- the encoder and decoder select between multiple coding/decoding modes for encoding and decoding the skipped macroblock information.
- skipped macroblock information is signaled at frame level of the bitstream (e.g., in a compressed bitplane) or at macroblock level (e.g., with one “skip” bit per macroblock).
- for bitplane coding, the encoder and decoder select between different bitplane coding modes.
- One previous encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error.
- Another previous encoder and decoder define a skipped macroblock as a predicted macroblock with zero motion and zero residual error.
- Some international standards describe signaling of field/frame coding type (e.g., field-coding or frame-coding) for macroblocks in interlaced pictures.
- Section 7.3.4 describes a bitstream syntax where mb_field_decoding_flag is sent as an element of slice data in cases where a sequence parameter (mb_frame_field_adaptive_flag) indicates switching between frame and field decoding in macroblocks and a slice header element (pic_structure) identifies the picture structure as a progressive picture or an interlaced frame picture.
- dct_type is a macroblock-layer element that is only present in the MPEG-4 bitstream in interlaced content where the macroblock has a non-zero coded block pattern or is intra-coded.
- the dct_type element indicates whether a macroblock is frame DCT coded or field DCT coded.
- a decoder decodes one or more skipped macroblocks among plural macroblocks of an interlaced frame (e.g., an interlaced P-frame, interlaced B-frame, or a frame having interlaced P-fields and/or interlaced B-fields).
- Each of the one or more skipped macroblocks (1) is indicated by a skipped macroblock signal in a bitstream, (2) uses exactly one predicted motion vector (e.g., a frame motion vector) and has no motion vector differential information, and (3) lacks residual information.
- the skipped macroblock signal for each of the one or more skipped macroblocks indicates one-motion-vector motion-compensated decoding for the respective skipped macroblock.
- the skipped macroblock signal can be part of a compressed bitplane sent at frame layer in a bitstream having plural layers. Or, the skipped macroblock signal can be an individual bit sent at macroblock layer.
- a coding mode from a group of plural available coding modes is selected, and a bitplane is processed in an encoder or decoder according to the selected coding mode.
- the bitplane includes binary information signifying whether macroblocks in an interlaced frame are skipped or not skipped.
- a macroblock in the interlaced frame is skipped if the macroblock has only one motion vector, the only one motion vector is equal to a predicted motion vector for the macroblock, and the macroblock has no residual error.
- a macroblock is not skipped if it has plural motion vectors.
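Those conditions translate directly into a skip test, sketched here with illustrative names:

```c
#include <stdbool.h>

typedef struct { int x, y; } MV;

/* Skip test matching the definition above: a macroblock is skipped only if
   it is a one-motion-vector macroblock whose motion vector equals its
   predictor and which has no residual error. Macroblocks with plural motion
   vectors are not skipped. */
bool macroblock_is_skipped(int num_motion_vectors, MV mv, MV predicted_mv,
                           bool has_residual) {
    if (num_motion_vectors != 1 || has_residual)
        return false;
    return mv.x == predicted_mv.x && mv.y == predicted_mv.y;
}
```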
- an encoder selects a motion compensation type (e.g., 1MV, 4 Frame MV, 2 Field MV, or 4 Field MV) for a macroblock in an interlaced P-frame and selects a field/frame coding type (e.g., field-coded, frame-coded, or no coded blocks) for the macroblock.
- the encoder jointly encodes the motion compensation type and the field/frame coding type for the macroblock.
- the encoder also can jointly encode other information for the macroblock with the motion compensation type and the field/frame coding type (e.g., an indicator of the presence of a differential motion vector, such as for a one-motion-vector macroblock).
- a decoder receives macroblock information for a macroblock in an interlaced P-frame, including a joint code representing motion compensation type and field/frame coding type for the macroblock.
- the decoder decodes the joint code (e.g., a variable length code in a variable length coding table) to obtain both motion compensation type information and field/frame coding type information for the macroblock.
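Decoding the joint code is essentially a table lookup that yields both pieces of information at once. The sketch below shows the idea; the table entries and the enum values are illustrative placeholders and do not reproduce the actual variable-length coding table.

```c
typedef enum { MC_1MV, MC_4FRAMEMV, MC_2FIELDMV, MC_4FIELDMV } McType;
typedef enum { FF_FRAME_CODED, FF_FIELD_CODED, FF_NO_CODED_BLOCKS } FieldFrameType;

typedef struct {
    McType mc_type;            /* motion compensation type */
    FieldFrameType ff_type;    /* field/frame coding type */
    int mv_present;            /* 1 if a differential motion vector follows */
} MbModeEntry;

/* Illustrative mode table; a real decoder indexes such a table with the
   decoded VLC value after verifying it is in range. */
static const MbModeEntry kMbModeTable[] = {
    { MC_1MV,      FF_FRAME_CODED,     1 },
    { MC_1MV,      FF_FIELD_CODED,     1 },
    { MC_1MV,      FF_NO_CODED_BLOCKS, 0 },
    { MC_2FIELDMV, FF_FIELD_CODED,     1 },
    { MC_4FRAMEMV, FF_FRAME_CODED,     1 },
    { MC_4FIELDMV, FF_FIELD_CODED,     1 },
};

MbModeEntry decode_mb_mode(int vlc_index) {
    return kMbModeTable[vlc_index];
}
```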
- FIG. 1 is a diagram showing motion estimation in a video encoder according to the prior art.
- FIG. 2 is a diagram showing block-based compression for an 8×8 block of prediction residuals in a video encoder according to the prior art.
- FIG. 3 is a diagram showing block-based decompression for an 8×8 block of prediction residuals in a video decoder according to the prior art.
- FIG. 4 is a diagram showing an interlaced frame according to the prior art.
- FIGS. 5A and 5B are diagrams showing locations of macroblocks for candidate motion vector predictors for a 1MV macroblock in a progressive P-frame according to the prior art.
- FIGS. 6A and 6B are diagrams showing locations of blocks for candidate motion vector predictors for a 1MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art.
- FIGS. 7A, 7B, 8A, 8B, 9, and 10 are diagrams showing the locations of blocks for candidate motion vector predictors for a block at various positions in a 4MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art.
- FIG. 11 is a diagram showing candidate motion vector predictors for a current frame-coded macroblock in an interlaced P-frame according to the prior art.
- FIGS. 12 and 13 are diagrams showing candidate motion vector predictors for a current field-coded macroblock in an interlaced P-frame according to the prior art.
- FIG. 14 is a code diagram showing pseudo-code for performing a median-of-3 calculation according to the prior art.
- FIG. 15 is a diagram showing a B-frame with past and future reference frames according to the prior art.
- FIG. 16 is a block diagram of a suitable computing environment in conjunction with which several described embodiments may be implemented.
- FIG. 17 is a block diagram of a generalized video encoder system in conjunction with which several described embodiments may be implemented.
- FIG. 18 is a block diagram of a generalized video decoder system in conjunction with which several described embodiments may be implemented.
- FIG. 19 is a diagram of a macroblock format used in several described embodiments.
- FIG. 20A is a diagram of part of an interlaced video frame, showing alternating lines of a top field and a bottom field.
- FIG. 20B is a diagram of the interlaced video frame organized for encoding/decoding as a frame
- FIG. 20C is a diagram of the interlaced video frame organized for encoding/decoding as fields.
- FIG. 21 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 2 field MV macroblock of an interlaced P-frame.
- FIG. 22 is a diagram showing different motion vectors for each of four luminance blocks, and derived motion vectors for each of four chrominance sub-blocks, in a 4 frame MV macroblock of an interlaced P-frame.
- FIG. 23 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 4 field MV macroblock of an interlaced P-frame.
- FIGS. 24A-24B are diagrams showing candidate predictors for a current macroblock of an interlaced P-frame.
- FIG. 25 is a flow chart showing a technique for determining whether to skip coding of particular macroblocks in an interlaced predicted frame.
- FIG. 26 is a flow chart showing a technique for decoding jointly coded motion compensation type information and field/frame coding type information for a macroblock in an interlaced P-frame.
- FIG. 27 is a diagram showing an entry-point-layer bitstream syntax in a combined implementation.
- FIG. 28 is a diagram showing a frame-layer bitstream syntax for interlaced P-frames in a combined implementation.
- FIG. 29 is a diagram showing a frame-layer bitstream syntax for interlaced B-frames in a combined implementation.
- FIG. 30 is a diagram showing a frame-layer bitstream syntax for interlaced P-fields or B-fields in a combined implementation.
- FIG. 31 is a diagram showing a macroblock-layer bitstream syntax for macroblocks of interlaced P-frames in a combined implementation.
- FIG. 32 is a code listing showing pseudo-code for collecting candidate motion vectors for 1MV macroblocks in an interlaced P-frame in a combined implementation.
- FIGS. 33, 34, 35, and 36 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Frame MV macroblocks in an interlaced P-frame in a combined implementation.
- FIGS. 37 and 38 are code listings showing pseudo-code for collecting candidate motion vectors for 2 Field MV macroblocks in an interlaced P-frame in a combined implementation.
- FIGS. 39, 40, 41, and 42 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Field MV macroblocks in an interlaced P-frame in a combined implementation.
- FIG. 43 is a code listing showing pseudo-code for computing motion vector predictors for frame motion vectors in an interlaced P-frame in a combined implementation.
- FIG. 44 is a code listing showing pseudo-code for computing motion vector predictors for field motion vectors in an interlaced P-frame in a combined implementation.
- FIGS. 45A and 45B are code listings showing pseudo-code for decoding a motion vector differential for interlaced P-frames in a combined implementation.
- FIG. 46 is a code listing showing pseudo-code for deriving a chroma motion vector in an interlaced P-frame in a combined implementation.
- FIGS. 47A-47C are diagrams showing tiles for Norm-6 and Diff-6 bitplane coding modes in a combined implementation.
- a video encoder and decoder incorporate techniques for encoding and decoding interlaced video, and corresponding signaling techniques for use with a bit stream format or syntax comprising different layers or levels (e.g., sequence level, frame level, field level, macroblock level, and/or block level).
- the various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools. Some techniques and tools described herein can be used in a video encoder or decoder, or in some other system not specifically limited to video encoding or decoding.
- FIG. 16 illustrates a generalized example of a suitable computing environment 1600 in which several of the described embodiments may be implemented.
- the computing environment 1600 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments.
- the computing environment 1600 includes at least one processing unit 1610 and memory 1620 .
- the processing unit 1610 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power.
- the memory 1620 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two.
- the memory 1620 stores software 1680 implementing a video encoder or decoder with one or more of the described techniques and tools.
- a computing environment may have additional features.
- the computing environment 1600 includes storage 1640 , one or more input devices 1650 , one or more output devices 1660 , and one or more communication connections 1670 .
- An interconnection mechanism such as a bus, controller, or network interconnects the components of the computing environment 1600 .
- operating system software provides an operating environment for other software executing in the computing environment 1600 , and coordinates activities of the components of the computing environment 1600 .
- the storage 1640 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1600 .
- the storage 1640 stores instructions for the software 1680 implementing the video encoder or decoder.
- the input device(s) 1650 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1600 .
- the input device(s) 1650 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 1600 .
- the output device(s) 1660 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1600 .
- the communication connection(s) 1670 enable communication over a communication medium to another computing entity.
- the communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal.
- a modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
- communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
- Computer-readable media are any available media that can be accessed within a computing environment.
- Computer-readable media include memory 1620 , storage 1640 , communication media, and combinations of any of the above.
- program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- the functionality of the program modules may be combined or split between program modules as desired in various embodiments.
- Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
- FIG. 17 is a block diagram of a generalized video encoder 1700 in conjunction with which some described embodiments may be implemented.
- FIG. 18 is a block diagram of a generalized video decoder 1800 in conjunction with which some described embodiments may be implemented.
- FIGS. 17 and 18 usually do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, picture, macroblock, block, etc.
- Such side information is sent in the output bitstream, typically after entropy encoding of the side information.
- the format of the output bitstream can be a Windows Media Video version 9 format or other format.
- the encoder 1700 and decoder 1800 process video pictures, which may be video frames, video fields or combinations of frames and fields.
- the bitstream syntax and semantics at the picture and macroblock levels may depend on whether frames or fields are used. There may be changes to macroblock organization and overall timing as well.
- the encoder 1700 and decoder 1800 are block-based and use a 4:2:0 macroblock format for frames, with each macroblock including four 8×8 luminance blocks (at times treated as one 16×16 macroblock) and two 8×8 chrominance blocks. For fields, the same or a different macroblock organization and format may be used.
- the 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform and entropy encoding stages.
- Example video frame organizations are described in more detail below.
- the encoder 1700 and decoder 1800 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
- modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules.
- encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
- the encoder 1700 and decoder 1800 process video frames organized as follows.
- a frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame.
- a progressive video frame is divided into macroblocks such as the macroblock 1900 shown in FIG. 19 .
- the macroblock 1900 includes four 8×8 luminance blocks (Y 1 through Y 4 ) and two 8×8 chrominance blocks that are co-located with the four luminance blocks but half resolution horizontally and vertically, following the conventional 4:2:0 macroblock format.
- the 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform (e.g., 8×4, 4×8 or 4×4 DCTs) and entropy encoding stages.
- a progressive I-frame is an intra-coded progressive video frame.
- a progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
- Progressive P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.
- An interlaced video frame consists of two scans of a frame—one comprising the even lines of the frame (the top field) and the other comprising the odd lines of the frame (the bottom field).
- the two fields may represent two different time periods or they may be from the same time period.
- FIG. 20A shows part of an interlaced video frame 2000 , including the alternating lines of the top field and bottom field at the top left part of the interlaced video frame 2000 .
- FIG. 20B shows the interlaced video frame 2000 of FIG. 20A organized for encoding/decoding as a frame 2030 .
- the interlaced video frame 2000 has been partitioned into macroblocks such as the macroblocks 2031 and 2032 , which use a 4:2:0 format as shown in FIG. 19 .
- each macroblock 2031 , 2032 includes 8 lines from the top field alternating with 8 lines from the bottom field for 16 lines total, and each line is 16 pixels long.
- An interlaced I-frame is two intra-coded fields of an interlaced video frame, where a macroblock includes information for the two fields.
- An interlaced P-frame is two fields of an interlaced video frame coded using forward prediction, and an interlaced B-frame is two fields of an interlaced video frame coded using bidirectional prediction, where a macroblock includes information for the two fields.
- Interlaced P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks.
- Interlaced BI-frames are a hybrid of interlaced I-frames and interlaced B-frames; they are intra-coded, but are not used as anchors for other frames.
- FIG. 20C shows the interlaced video frame 2000 of FIG. 20A organized for encoding/decoding as fields 2060 .
- Each of the two fields of the interlaced video frame 2000 is partitioned into macroblocks.
- the top field is partitioned into macroblocks such as the macroblock 2061
- the bottom field is partitioned into macroblocks such as the macroblock 2062 .
- the macroblocks use a 4:2:0 format as shown in FIG. 19 , and the organization and placement of luminance blocks and chrominance blocks within the macroblocks are not shown.
- the macroblock 2061 includes 16 lines from the top field and the macroblock 2062 includes 16 lines from the bottom field, and each line is 16 pixels long.
- An interlaced I-field is a single, separately represented field of an interlaced video frame.
- An interlaced P-field is a single, separately represented field of an interlaced video frame coded using forward prediction
- an interlaced B-field is a single, separately represented field of an interlaced video frame coded using bidirectional prediction.
- Interlaced P- and B-fields may include intra-coded macroblocks as well as different types of predicted macroblocks.
- Interlaced BI-fields are a hybrid of interlaced I-fields and interlaced B-fields; they are intra-coded, but are not used as anchors for other fields.
- Interlaced video frames organized for encoding/decoding as fields can include various combinations of different field types.
- such a frame can have the same field type in both the top and bottom fields or different field types in each field.
- the possible combinations of field types include I/I, I/P, P/I, P/P, B/B, B/BI, BI/B, and BI/BI.
- picture generally refers to source, coded or reconstructed image data.
- a picture is a progressive video frame.
- a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on the context.
- the encoder 1700 and decoder 1800 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks.
- FIG. 17 is a block diagram of a generalized video encoder system 1700 .
- the encoder system 1700 receives a sequence of video pictures including a current picture 1705 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame), and produces compressed video information 1795 as output.
- Particular embodiments of video encoders typically use a variation or supplemented version of the generalized encoder 1700 .
- the encoder system 1700 compresses predicted pictures and key pictures. For the sake of presentation, FIG. 17 shows a path for key pictures through the encoder system 1700 and a path for predicted pictures. Many of the components of the encoder system 1700 are used for compressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being compressed.
- a predicted picture (e.g., progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame) is represented in terms of prediction (or difference) from one or more other pictures (which are typically referred to as reference pictures or anchors).
- a prediction residual is the difference between what was predicted and the original picture.
- a key picture (e.g., progressive I-frame, interlaced I-field, or interlaced I-frame) is compressed without reference to other pictures.
- a motion estimator 1710 estimates motion of macroblocks or other sets of pixels of the current picture 1705 with respect to one or more reference pictures, for example, the reconstructed previous picture 1725 buffered in the picture store 1720 . If the current picture 1705 is a bi-directionally-predicted picture, a motion estimator 1710 estimates motion in the current picture 1705 with respect to up to four reconstructed reference pictures (for an interlaced B-field, for example). Typically, a motion estimator estimates motion in a B-picture with respect to one or more temporally previous reference pictures and one or more temporally future reference pictures.
- the encoder system 1700 can use the separate stores 1720 and 1722 for multiple reference pictures.
- For additional detail about progressive B-frames and interlaced B-frames and B-fields, see U.S. patent application Ser. No. 10/622,378, entitled, "Advanced Bi-Directional Predictive Coding of Video Frames," filed Jul. 18, 2003, and U.S. patent application Ser. No. 10/882,135, entitled, "Advanced Bi-Directional Predictive Coding of Interlaced Video," filed Jun. 29, 2004, which are hereby incorporated herein by reference.
- the motion estimator 1710 can estimate motion by pixel, 1/2 pixel, 1/4 pixel, or other increments, and can switch the resolution of the motion estimation on a picture-by-picture basis or other basis.
- the motion estimator 1710 (and compensator 1730 ) also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis.
- the resolution of the motion estimation can be the same or different horizontally and vertically.
- the motion estimator 1710 outputs as side information motion information 1715 such as differential motion vector information.
- the encoder 1700 encodes the motion information 1715 by, for example, computing one or more predictors for motion vectors, computing differentials between the motion vectors and predictors, and entropy coding the differentials.
- a motion compensator 1730 combines a predictor with differential motion vector information.
- Various techniques for computing motion vector predictors, computing differential motion vectors, and reconstructing motion vectors for interlaced P-frames are described below.
- the motion compensator 1730 applies the reconstructed motion vector to the reconstructed picture(s) 1725 to form a motion-compensated current picture 1735 .
- the prediction is rarely perfect, however, and the difference between the motion-compensated current picture 1735 and the original current picture 1705 is the prediction residual 1745 .
- the prediction residual 1745 is added to the motion compensated current picture 1735 to obtain a reconstructed picture that is closer to the original current picture 1705 . In lossy compression, however, some information is still lost from the original current picture 1705 .
- a motion estimator and motion compensator apply another type of motion estimation/compensation.
- a frequency transformer 1760 converts the spatial domain video information into frequency domain (i.e., spectral) data.
- the frequency transformer 1760 applies a DCT, variant of DCT, or other block transform to blocks of the pixel data or prediction residual data, producing blocks of frequency transform coefficients.
- the frequency transformer 1760 applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis.
- the frequency transformer 1760 may apply an 8×8, 8×4, 4×8, 4×4 or other size frequency transform.
- a quantizer 1770 then quantizes the blocks of spectral data coefficients.
- the quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a picture-by-picture basis or other basis.
- the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations.
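- For illustration, a minimal sketch of uniform, scalar quantization with a per-picture step size follows; the function names and the round-to-nearest rule are assumptions for the sketch, not the exact rules of the quantizer 1770 and inverse quantizer 1776.
    #include <stdlib.h>

    /* Uniform scalar quantization of an 8x8 block of transform coefficients.
     * step is the quantizer step size (assumed >= 1), which may vary on a
     * picture-by-picture basis or other basis. */
    void quantize_block(const int coeff[64], int level[64], int step)
    {
        for (int i = 0; i < 64; i++) {
            int sign = coeff[i] < 0 ? -1 : 1;
            level[i] = sign * ((abs(coeff[i]) + step / 2) / step); /* round to nearest */
        }
    }

    /* Corresponding inverse quantization (reconstruction). */
    void dequantize_block(const int level[64], int coeff[64], int step)
    {
        for (int i = 0; i < 64; i++)
            coeff[i] = level[i] * step;
    }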
- the encoder 1700 can use frame dropping, adaptive filtering, or other techniques for rate control.
- the encoder 1700 may use special signaling for a skipped macroblock, which is a macroblock that has no information of certain types. Skipped macroblocks are described in further detail below.
- When a reconstructed current picture is needed for subsequent motion estimation/compensation, an inverse quantizer 1776 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 1766 then performs the inverse of the operations of the frequency transformer 1760 , producing a reconstructed prediction residual (for a predicted picture) or a reconstructed key picture. If the current picture 1705 was a key picture, the reconstructed key picture is taken as the reconstructed current picture (not shown). If the current picture 1705 was a predicted picture, the reconstructed prediction residual is added to the motion-compensated current picture 1735 to form the reconstructed current picture. One or both of the picture stores 1720 , 1722 buffers the reconstructed current picture for use in motion compensated prediction. In some embodiments, the encoder applies a de-blocking filter to the reconstructed frame to adaptively smooth discontinuities and other artifacts in the picture.
- the entropy coder 1780 compresses the output of the quantizer 1770 as well as certain side information (e.g., motion information 1715 , quantization step size).
- Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above.
- the entropy coder 1780 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
- the entropy coder 1780 provides compressed video information 1795 to the multiplexer [“MUX”] 1790 .
- the MUX 1790 may include a buffer, and a buffer level indicator may be fed back to bit rate adaptive modules for rate control.
- the compressed video information 1795 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 1795 .
- FIG. 18 is a block diagram of a general video decoder system 1800 .
- the decoder system 1800 receives information 1895 for a compressed sequence of video pictures and produces output including a reconstructed picture 1805 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame).
- Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder 1800 .
- the decoder system 1800 decompresses predicted pictures and key pictures.
- FIG. 18 shows a path for key pictures through the decoder system 1800 and a path for forward-predicted pictures.
- Many of the components of the decoder system 1800 are used for decompressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being decompressed.
- a DEMUX 1890 receives the information 1895 for the compressed video sequence and makes the received information available to the entropy decoder 1880 .
- the DEMUX 1890 may include a jitter buffer and other buffers as well. Before or after the DEMUX 1890 , the compressed video information can be channel decoded and processed for error detection and correction.
- the entropy decoder 1880 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 1815 , quantization step size), typically applying the inverse of the entropy encoding performed in the encoder.
- Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above.
- the entropy decoder 1880 typically uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
- the decoder 1800 decodes the motion information 1815 by, for example, computing one or more predictors for motion vectors, entropy decoding differential motion vectors, and combining decoded differential motion vectors with predictors to reconstruct motion vectors.
- a motion compensator 1830 applies motion information 1815 to one or more reference pictures 1825 to form a prediction 1835 of the picture 1805 being reconstructed.
- the motion compensator 1830 uses one or more macroblock motion vectors to find macroblock(s) in the reference picture(s) 1825 .
- One or more picture stores (e.g., picture stores 1820 , 1822 ) store previously reconstructed pictures for use as reference pictures.
- B-pictures have more than one reference picture (e.g., at least one temporally previous reference picture and at least one temporally future reference picture). Accordingly, the decoder system 1800 can use separate picture stores 1820 and 1822 for multiple reference pictures.
- the motion compensator 1830 can compensate for motion at pixel, 1/2 pixel, 1/4 pixel, or other increments, and can switch the resolution of the motion compensation on a picture-by-picture basis or other basis.
- the motion compensator 1830 also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis.
- the resolution of the motion compensation can be the same or different horizontally and vertically.
- a motion compensator applies another type of motion compensation.
- the prediction by the motion compensator is rarely perfect, so the decoder 1800 also reconstructs prediction residuals.
- An inverse quantizer 1870 inverse quantizes entropy-decoded data.
- the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a picture-by-picture basis or other basis.
- the inverse quantizer applies another type of inverse quantization to the data, for example, to reconstruct after a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
- An inverse frequency transformer 1860 converts the quantized, frequency domain data into spatial domain video information.
- the inverse frequency transformer 1860 applies an inverse DCT [“IDCT”], variant of IDCT, or other inverse block transform to blocks of the frequency transform coefficients, producing pixel data or prediction residual data for key pictures or predicted pictures, respectively.
- the inverse frequency transformer 1860 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or sub-band synthesis.
- the inverse frequency transformer 1860 may apply an 8×8, 8×4, 4×8, 4×4, or other size inverse frequency transform.
- For a predicted picture, the decoder 1800 combines the reconstructed prediction residual 1845 with the motion compensated prediction 1835 to form the reconstructed picture 1805 .
- When the decoder needs a reconstructed picture 1805 for subsequent motion compensation, one or both of the picture stores (e.g., picture store 1820 ) buffers the reconstructed picture 1805 for use in predicting the next picture.
- the decoder 1800 applies a de-blocking filter to the reconstructed picture to adaptively smooth discontinuities and other artifacts in the picture.
- a typical interlaced video frame consists of two fields (e.g., a top field and a bottom field) scanned at different times.
- Such a frame may be compressed using frame mode coding or field mode coding, depending on which is more efficient for the content.
- a forward-predicted interlaced video frame may be coded as two separate forward-predicted fields—interlaced P-fields. Coding fields separately for a forward-predicted interlaced video frame may be efficient, for example, when there is high motion throughout the interlaced video frame, and hence much difference between the fields.
- An interlaced P-field references one or more previously decoded fields.
- an interlaced P-field references either one or two previously decoded fields.
- For additional detail about interlaced P-fields, see U.S. Provisional Patent Application No. 60/501,081 entitled "Video Encoding and Decoding Tools and Techniques," filed Sep. 7, 2003, and U.S. patent application Ser. No. 10/857,473 entitled, "Predicting Motion Vectors for Fields of Forward-predicted Interlaced Video Frames," filed May 27, 2004, which are incorporated herein by reference.
- a forward-predicted interlaced video frame may be coded using a mixture of field coding and frame coding, as an interlaced P-frame.
- the macroblock includes lines of pixels for the top and bottom fields, and the lines may be coded collectively in a frame-coding mode or separately in a field-coding mode.
- macroblocks in interlaced P-frames can be one of five types: 1MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra.
- In a 1MV macroblock, the displacement of the four luminance blocks in the macroblock is represented by a single motion vector.
- a corresponding chroma motion vector can be derived from the luma motion vector to represent the displacements of each of the two 8×8 chroma blocks for the motion vector.
- a 1MV macroblock 1900 includes four 8×8 luminance blocks and two 8×8 chrominance blocks.
- the displacement of the luminance blocks (Y 1 through Y 4 ) is represented by a single motion vector, and a corresponding chroma motion vector can be derived from the luma motion vector to represent the displacements of each of the two chroma blocks (U and V).
- FIG. 21 shows that a top field motion vector describes the displacement of the even lines of the luminance component and that a bottom field motion vector describes the displacement of the odd lines of the luminance component.
- an encoder can derive a corresponding top field chroma motion vector that describes the displacement of the even lines of the chroma blocks.
- an encoder can derive a bottom field chroma motion vector that describes the displacements of the odd lines of the chroma blocks.
- each chroma block can be motion compensated by using four derived chroma motion vectors (MV 1 ′, MV 2 ′, MV 3 ′ and MV 4 ′) that describe the displacement of four 4×4 chroma sub-blocks.
- a motion vector for each 4×4 chroma sub-block can be derived from the motion vector for the spatially corresponding luminance block.
- In a 4 Field MV macroblock, each field in the 16×16 luminance component is described by two different motion vectors.
- the lines of the luminance component are subdivided vertically to form two 8×16 regions, each comprised of an 8×8 region of even lines interleaved with an 8×8 region of odd lines.
- For the top field, the displacement of the left 8×8 region is described by the top left field block motion vector and the displacement of the right 8×8 region is described by the top right field block motion vector.
- For the bottom field, the displacement of the left 8×8 region is described by the bottom left field block motion vector and the displacement of the right 8×8 region is described by the bottom right field block motion vector.
- Each chroma block also can be partitioned into four regions and each chroma block region can be motion compensated using a derived motion vector.
- the process of computing the motion vector predictor(s) for a current macroblock in an interlaced P-frame consists of two steps.
- First, three candidate motion vectors for the current macroblock are gathered from its neighboring macroblocks.
- candidate motion vectors are gathered based on the arrangement shown in FIGS. 24A-24B (and various special cases for top row macroblocks, etc.).
- candidate motion vectors can be gathered in some other order or arrangement.
- the motion vector predictor(s) for the current macroblock is computed from the set of candidate motion vectors.
- the predictor can be computed using median-of-3 prediction, or by some other method.
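- As one concrete possibility for this second step, the predictor can be formed as the componentwise median of the three candidates; the sketch below shows such a median-of-3 computation (the type and function names are illustrative, not from the described implementations).
    typedef struct { int x, y; } MV;

    /* Median of three integers. */
    static int median3(int a, int b, int c)
    {
        if (a > b) { int t = a; a = b; b = t; }  /* ensure a <= b      */
        if (b > c) { b = c; }                    /* b = min(b, c)      */
        return a > b ? a : b;                    /* median = max(a, b) */
    }

    /* Componentwise median-of-3 motion vector prediction from candidates A, B, C. */
    MV predict_mv_median3(MV a, MV b, MV c)
    {
        MV p;
        p.x = median3(a.x, b.x, c.x);
        p.y = median3(a.y, b.y, c.y);
        return p;
    }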
- Described embodiments include techniques and tools for signaling macroblock information for interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, etc.).
- described techniques and tools include techniques and tools for signaling macroblock information for interlaced P-frames, and techniques and tools for using and signaling skipped macroblocks in interlaced P-frames and other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.).
- Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:
- an encoder signals skipped macroblocks. For example, an encoder signals a skipped macroblock in an interlaced frame when a macroblock is coded with one motion vector, has a zero motion vector differential, and has no coded blocks (i.e., no residuals for any block).
- the skip information can be coded as a compressed bitplane (e.g., at frame level) or can be signaled on a one bit per macroblock basis (e.g., at macroblock level).
- the signaling of the skip condition for the macroblock is separate from the signaling of a macroblock mode for the macroblock.
- a decoder performs corresponding decoding.
- This definition of a skipped macroblock takes advantage of the observation that when more than one motion vector is used to encode a macroblock, the macroblock is rarely skipped because it is unlikely that all of the motion vector differentials will be zero and that all of the blocks will not be coded. Thus, when a macroblock is signaled as being skipped, the macroblock mode (1MV) is implied from the skip condition and need not be sent for the macroblock. In interlaced P-frames, a 1MV macroblock is motion compensated with one frame motion vector.
- FIG. 25 shows a technique 2500 for determining whether to skip coding of particular macroblocks in an interlaced predicted frame (e.g., an interlaced P-frame, an interlaced B-frame, or a frame comprising interlaced P-fields and/or interlaced B-fields).
- At 2510 , the encoder checks whether the macroblock is a 1MV macroblock.
- If it is not, the encoder does not skip the macroblock. Otherwise, at 2530 , the encoder checks whether the one motion vector for the macroblock is equal to its causally predicted motion vector (e.g., whether the differential motion vector for the macroblock is equal to zero).
- If so, the encoder determines whether there is any residual to be encoded for the blocks of the macroblock.
- If there is a residual, the encoder does not skip the macroblock.
- If there is no residual, the encoder skips the macroblock.
- the encoder can continue to encode or skip macroblocks until encoding is done.
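- A minimal sketch of the skip decision described above, assuming simple helper data for the three tests (the structure and field names are illustrative, not from the combined implementation):
    #include <stdbool.h>

    typedef struct { int x, y; } MV;

    typedef struct {
        bool is_1mv;        /* macroblock uses a single frame motion vector */
        MV   mv;            /* the macroblock's motion vector               */
        MV   predicted_mv;  /* causally predicted motion vector             */
        bool has_residual;  /* any non-zero residual in any block           */
    } MacroblockInfo;

    /* Returns true if the macroblock may be signaled as skipped:
     * one motion vector, zero motion vector differential, no coded blocks. */
    bool can_skip_macroblock(const MacroblockInfo *mb)
    {
        if (!mb->is_1mv)
            return false;                          /* more than one MV, or intra */
        if (mb->mv.x != mb->predicted_mv.x || mb->mv.y != mb->predicted_mv.y)
            return false;                          /* non-zero MV differential   */
        return !mb->has_residual;                  /* skip only if no residual   */
    }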
- the macroblock-level SKIPMBBIT field (which can also be labeled SKIPMB, etc.) indicates the skip condition for a macroblock. If the SKIPMBBIT field is 1, then the current macroblock is skipped and no other information is sent after the SKIPMBBIT field. On the other hand, if the SKIPMBBIT field is not 1, the MBMODE field is decoded to indicate the type of macroblock and other information regarding the current macroblock, such as information described below in Section IV.B.
- the SKIPMB field indicates skip information for macroblocks in the frame.
- the skip information can be encoded in one of several modes.
- In raw mode, the SKIPMB field indicates the presence of SKIPMBBIT at macroblock level; otherwise, the SKIPMB field stores skip information in a compressed bit plane.
- Available bitplane coding modes include normal-2 mode, differential-2 mode, normal-6 mode, differential-6 mode, rowskip mode, and columnskip mode. Bitplane coding modes are described in further detail in Section V.C, below.
- the decoded SKIPMB bitplane contains one bit per macroblock and indicates the skip condition for each respective macroblock.
- skipped macroblocks are signaled in some other way or at some other level in the bitstream.
- a compressed bitplane is sent at field level.
- the skip condition can be defined to imply information about a skipped macroblock other than and/or in addition to the information described above.
- an encoder jointly encodes motion compensation type and potentially other information about a macroblock with field/frame coding type information for the macroblock. For example, an encoder jointly encodes one of five motion compensation types (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, and intra) with a field transform/frame transform/no coded blocks event using one or more variable length coding tables. A decoder performs corresponding decoding.
- Jointly coding motion compensation type and field/frame coding type information for a macroblock takes advantage of the observation that certain field/frame coding types are more likely to occur in certain contexts for a macroblock of a given motion compensation type. Variable length coding can then be used to assign shorter codes to the more likely combinations of motion compensation type and field/frame coding type. For even more flexibility, multiple variable length coding tables can be used, and an encoder can switch between the tables depending on the situation. Thus, jointly coding motion compensation type and field/frame coding type information for a macroblock can provide savings in coding overhead that would otherwise be used to signal field/frame coding type separately for each macroblock.
- an encoder selects a motion compensation type (e.g., 1MV, 4 Frame MV, 2 Field MV, or 4 Field MV) and a field/frame coding type (e.g., field, frame, or no coded blocks) for a macroblock.
- the encoder jointly encodes the motion compensation type and the field/frame coding type for the macroblock.
- the encoder also can encode other information jointly with the motion compensation type and field/frame coding type.
- the encoder can jointly encode information indicating the presence or absence of a differential motion vector for the macroblock (e.g., for a macroblock having one motion vector).
- FIG. 26 shows a technique 2600 for decoding jointly coded motion compensation type information and field/frame coding type information for a macroblock in an interlaced P-frame in some implementations.
- a decoder receives macroblock information which includes a joint code (e.g., a variable length code from a variable coding table) representing motion compensation type and field/frame coding type for a macroblock.
- the decoder decodes the joint code (e.g., by looking up the joint code in a variable length coding table) to obtain motion compensation type information and field/frame coding type information for the macroblock.
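- For illustration, the lookup at this step might be organized as a table of (macroblock type, field/frame coding type, MV-present flag) entries indexed by the decoded VLC symbol, with the active table chosen at frame level; the enum and structure names below are assumptions.
    typedef enum { MB_1MV, MB_2FIELDMV, MB_4FRAMEMV, MB_4FIELDMV, MB_INTRA } MbType;
    typedef enum { TX_FRAME, TX_FIELD, TX_NO_CODED_BLOCKS } FieldFrameTx;

    typedef struct {
        MbType       mb_type;    /* motion compensation type                */
        FieldFrameTx tx_type;    /* field/frame coding type                 */
        int          mv_present; /* 1 if a differential MV follows (1MV)    */
    } MbModeEntry;

    /* Decode MBMODE: the entropy decoder maps the variable length code to an
     * index, and the index selects a row of the active MBMODE code table
     * (the table itself is selected elsewhere, e.g., by MBMODETAB). */
    MbModeEntry decode_mbmode(const MbModeEntry *table, int vlc_index)
    {
        return table[vlc_index];
    }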
- the macroblock-level bitstream element MBMODE jointly specifies the type of macroblock (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, or intra), field/frame coding types for inter-coded macroblock (field, frame, or no coded blocks), and whether there is a differential motion vector for a 1MV macroblock.
- MBMODE can take one of 15 possible values.
- MBMODE signals the following information jointly:
- the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks.
- When <Field/frame Transform> in MBMODE indicates field or frame transform, CBPCY is decoded.
- When more than one motion vector is present, an additional field is sent to indicate which of the differential motion vectors is non-zero.
- For 2 Field MV macroblocks, the 2MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors.
- For 4 Frame MV and 4 Field MV macroblocks, the 4MVBP field is sent to indicate which of the four motion vectors contain nonzero differential motion vectors.
- the Field/Frame coding types and zero coded blocks are coded in separate fields.
- an encoder/decoder uses joint coding with different combinations of motion compensation types and field/frame coding types.
- an encoder/decoder jointly encodes/decodes additional information other than the presence of motion vector differentials.
- an encoder/decoder uses one of several variable length code tables to encode MBMODE and can adaptively switch between code tables.
- the frame-level syntax element MBMODETAB is a 2-bit field that indicates the table used to decode the MBMODE for macroblocks in the frame.
- the tables are grouped into sets of four tables, and the set of tables used depends on whether four-motion-vector coding is enabled for the frame.
- Exemplary MBMODE variable length code tables (e.g., Tables 0-3 for each set—Mixed MV or 1MV) are provided below in Tables 1-8:
    TABLE 1  Interlace P-Frame Mixed MV MB Mode Table 0
    MB Type      MV Present   Transform   VLC Codeword   VLC Size   VLC (binary)
    1 MV         1            Frame       22             5          10110
    1 MV         1            Field       17             5          10001
    1 MV         1            No CBP      0              2          00
    1 MV         0            Frame       47             6          101111
    1 MV         0            Field       32             6          100000
    2 Field MV   N/A          Frame       10             4          1010
    2 Field MV   N/A          Field       1              2          01
    2 Field MV   N/A          No CBP      3              2          11
    4 Frame MV   N/A          Frame       67             7          1000011
    4 Frame MV   N/A          Field       133            8          10000101
    4 Frame MV   N/A          No CBP      132            8          10000100
    4 Field MV   N/A          Frame       92             7          1011100
    4 Field MV   N/A          Field       19             5          10011
    4 Field MV   N/A          No CBP      93             7          1011101
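- As an illustration of how a decoder might match such codewords against the bitstream, the sketch below hard-codes a few rows of Table 1 above and matches by prefix; the use of binary strings and the helper names are assumptions for the sketch.
    #include <string.h>

    typedef struct {
        const char *bits;       /* codeword as a binary string (from Table 1) */
        const char *mb_type;    /* macroblock type                            */
        const char *transform;  /* field/frame transform or "No CBP"          */
        int mv_present;         /* 1MV only: 1 if a differential MV follows   */
    } MbModeCode;

    /* A few rows of Interlace P-Frame Mixed MV MB Mode Table 0 (see above). */
    static const MbModeCode kTable0[] = {
        { "00",    "1 MV",       "No CBP", 1 },
        { "01",    "2 Field MV", "Field",  0 },
        { "11",    "2 Field MV", "No CBP", 0 },
        { "1010",  "2 Field MV", "Frame",  0 },
        { "10110", "1 MV",       "Frame",  1 },
        { "10001", "1 MV",       "Field",  1 },
    };

    /* Match the bit string bs against the known codewords. Because the table
     * is a prefix code, at most one codeword can be a prefix of bs. */
    const MbModeCode *match_mbmode(const char *bs)
    {
        for (size_t i = 0; i < sizeof(kTable0) / sizeof(kTable0[0]); i++) {
            size_t len = strlen(kTable0[i].bits);
            if (strncmp(bs, kTable0[i].bits, len) == 0)
                return &kTable0[i];
        }
        return NULL;  /* codeword not in this partial table */
    }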
- data for interlaced pictures is presented in the form of a bitstream having plural layers (e.g., sequence, entry point, frame, field, macroblock, block and/or sub-block layers).
- arrow paths show the possible flows of syntax elements.
- Syntax elements shown with square-edged boundaries indicate fixed-length syntax elements; those with rounded boundaries indicate variable-length syntax elements and those with a rounded boundary within an outer rounded boundary indicate a syntax element (e.g., a bitplane) made up of simpler syntax elements.
- a fixed-length syntax element is defined to be a syntax element for which the length of the syntax element is not dependent on data in the syntax element itself; the length of a fixed-length syntax element is either constant or determined by prior data in the syntax flow.
- a lower layer in a layer diagram (e.g., a macroblock layer in a frame-layer diagram) is indicated by a rectangle within a rectangle.
- Entry-point-level bitstream elements are shown in FIG. 27 .
- an entry point marks a position in a bitstream (e.g., an I-frame or other key frame) at which a decoder can begin decoding. In other words, no pictures before the entry point in the bitstream are needed to decode pictures after the entry point.
- An entry point header can be used to signal changes in coding control parameters (e.g., enabling or disabling compression tools (e.g., in-loop deblocking filtering) for frames following an entry point).
- For interlaced P-frames and B-frames, frame-level bitstream elements are shown in FIGS. 28 and 29 , respectively.
- Data for each frame consists of a frame header followed by data for the macroblock layer (whether for intra or various inter type macroblocks).
- the bitstream elements that make up the macroblock layer for interlaced P-frames (whether for intra or various inter type macroblocks) are shown in FIG. 31 .
- Bitstream elements in the macroblock layer for interlaced P-frames may be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.)
- For frames containing interlaced P-fields and/or B-fields, frame-level bitstream elements are shown in FIG. 30 .
- Data for each frame consists of a frame header followed by data for the field layers (shown as the repeated “FieldPicLayer” element per field) and data for the macroblock layers (whether for intra, 1MV, or 4MV macroblocks).
- The following sections describe selected bitstream elements in the frame and macroblock layers that are related to signaling for interlaced pictures. Although the selected bitstream elements are described in the context of a particular layer, some bitstream elements can be used in more than one layer.
- EXTENDED_MV is a 1-bit syntax element that indicates whether extended motion vector capability is turned on (value 1) or off (value 0). EXTENDED_MV indicates the possibility of extended motion vectors (signaled at frame level with the syntax element MVRANGE) in P-frames and B-frames.
- EXTENDED_DMV Extended Differential Motion Vector Range
- DMVRANGE extended differential motion vector range
- VSTRANSFORM (1 Bit)
- FIGS. 28 and 29 are diagrams showing frame-level bitstream syntaxes for interlaced P-frames and interlaced B-frames, respectively.
- FIG. 30 is a diagram showing a frame-layer bitstream syntax for frames containing interlaced P-fields, and/or B-fields (or potentially other kinds of interlaced fields). Specific bitstream elements are described below.
- FCM Frame Coding Mode
- FCM is a variable length codeword [“VLC”] used to indicate the picture coding type.
- FCM takes on values for frame coding modes as shown in Table 9 below:
    TABLE 9  Frame Coding Mode VLC
    FCM value   Frame Coding Mode
    0           Progressive
    10          Frame-Interlace
    11          Field-Interlace
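- Because Table 9 is a simple prefix code, FCM can be decoded with at most two bit reads, as in this sketch (the bit-reader callback is an assumed interface, not part of the described bitstream).
    typedef enum { FCM_PROGRESSIVE, FCM_FRAME_INTERLACE, FCM_FIELD_INTERLACE } FrameCodingMode;

    /* read_bit() returns the next bit of the bitstream (0 or 1); its
     * implementation is assumed to exist elsewhere. */
    FrameCodingMode decode_fcm(int (*read_bit)(void *ctx), void *ctx)
    {
        if (read_bit(ctx) == 0)
            return FCM_PROGRESSIVE;      /* codeword 0  */
        if (read_bit(ctx) == 0)
            return FCM_FRAME_INTERLACE;  /* codeword 10 */
        return FCM_FIELD_INTERLACE;      /* codeword 11 */
    }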
- Field Picture Type (FPTYPE) (3 Bits)
- FPTYPE is a three-bit syntax element present in the frame header for a frame including interlaced P-fields and/or interlaced B-fields, and potentially other kinds of fields. FPTYPE takes on values for different combinations of field types in the interlaced video frame, according to Table 10 below.
    TABLE 10  Field Picture Type FLC
    FPTYPE FLC   First Field Type   Second Field Type
    000          I                  I
    001          I                  P
    010          P                  I
    011          P                  P
    100          B                  B
    101          B                  BI
    110          BI                 B
    111          BI                 BI
- Picture Type (PTYPE) (Variable Size)
- PTYPE is a variable size syntax element present in the frame header for interlaced P-frames and interlaced B-frames (or other kinds of interlaced frames such as interlaced I-frames).
- PTYPE takes on values for different frame types according to Table 11 below.
    TABLE 11  Picture Type VLC
    PTYPE VLC   Picture Type
    110         I
    0           P
    10          B
    1110        BI
    1111        Skipped
- If PTYPE indicates that the frame is skipped then the frame is treated as a P-frame which is identical to its reference frame. The reconstruction of the skipped frame is equivalent conceptually to copying the reference frame. A skipped frame means that no further data is transmitted for this frame.
- UV Sampling Format (1 Bit)
- MVRANGE Extended MV Range
- MVRANGE is a variable-sized syntax element present when the entry-point-layer EXTENDED_MV bit is set to 1.
- the MVRANGE VLC represents a motion vector range.
- DMVRANGE Extended Differential MV Range
- the DMVRANGE VLC represents a motion vector differential range.
- the 4MVSWITCH syntax element is a 1-bit flag. If 4MVSWITCH is set to zero, the macroblocks in the picture have only one motion vector or two motion vectors, depending on whether the macroblock has been frame-coded or field-coded, respectively. If 4MVSWITCH is set to 1, there may be 1, 2 or 4 motion vectors per macroblock.
- Skipped Macroblock Decoding (SKIPMB) (Variable Size)
- the SKIPMB syntax element is a compressed bitplane containing information that indicates the skipped/not-skipped status of each macroblock in the picture.
- the decoded bitplane represents the skipped/not-skipped status for each macroblock with 1-bit values. A value of 0 indicates that the macroblock is not skipped. A value of 1 indicates that the macroblock is coded as skipped.
- a skipped status for a macroblock in interlaced P-frames means that the decoder treats the macroblock as 1MV with a motion vector differential of zero and a coded block pattern of zero. No other information is expected to follow for a skipped macroblock.
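- A minimal sketch of how a decoder might apply that implication when a macroblock is marked as skipped (the structure and field names are illustrative, not taken from the combined implementation):
    typedef struct { int x, y; } MV;

    typedef struct {
        int is_intra;
        int num_mvs;   /* number of motion vectors (1 for 1MV)          */
        MV  mv;        /* reconstructed frame motion vector             */
        int cbpcy;     /* coded block pattern; 0 means no coded blocks  */
    } DecodedMb;

    /* Apply the skip condition: the macroblock is 1MV, its motion vector
     * differential is zero (so the MV equals its predictor), and the coded
     * block pattern is zero. No other data follows for the macroblock. */
    void apply_skip(DecodedMb *mb, MV predicted_mv)
    {
        mb->is_intra = 0;
        mb->num_mvs  = 1;
        mb->mv       = predicted_mv;  /* zero differential: MV = predictor */
        mb->cbpcy    = 0;             /* no residual data follows          */
    }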
- MBMODETAB Macroblock Mode Table (2 or 3 Bits)
- MBMODETAB syntax element is a fixed-length field.
- MBMODETAB is a 2-bit value that indicates which one of four code tables is used to decode the macroblock mode syntax element (MBMODE) in the macroblock layer. There are two sets of four code tables and the set that is being used depends on whether 4MV is used or not, as indicated by the 4MVSWITCH flag.
- MVTAB Motion Vector Table
- the MVTAB syntax element is a fixed length field.
- MVTAB is a 2-bit syntax element that indicates which of the four progressive (or, one-reference) motion vector code tables are used to code the MVDATA syntax element in the macroblock layer.
- the 2MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 2MV block pattern (2MVBP) syntax element in 2MV field macroblocks.
- the 4MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 4MV block pattern (4MVBP) syntax element in 4MV macroblocks. For interlaced P-frames, it is present if the 4MVSWITCH syntax element is set to 1.
- TTMBF Macroblock-Level Transform Type Flag
- TTFRM Frame-Level Transform Type
- TTFRM signals the transform type used to transform the 8×8 pixel error signal in predicted blocks.
- the 8×8 error blocks may be transformed using an 8×8 transform, two 8×4 transforms, two 4×8 transforms or four 4×4 transforms.
- FIG. 31 is a diagram showing a macroblock-level bitstream syntax for macroblocks in interlaced P-frames in the combined implementation. Specific bitstream elements are described below. Data for a macroblock consists of a macroblock header followed by block layer data. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., SKIPMBBIT) may potentially be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, etc.)
- MBMODE Macroblock Mode
- MBMODE is a variable-size syntax element that jointly specifies macroblock type (e.g., 1MV, 2 Field MV, 4 Field MV, 4 Frame MV or Intra), field/frame coding type (e.g., field, frame, or no coded blocks), and the presence of differential motion vector data for 1MV macroblocks.
- 2MVBP is a variable-sized syntax element present in interlaced P-frame and interlaced B-frame macroblocks.
- 2MVBP is present if MBMODE indicates that the macroblock has two field motion vectors.
- 2MVBP indicates which of the 2 luma blocks contain non-zero motion vector differentials.
- 4MVBP is a variable-sized syntax element present in interlaced P-field, interlaced B-field, interlaced P-frame and interlaced B-frame macroblocks.
- interlaced P-frame 4MVBP is present if MBMODE indicates that the macroblock has four motion vectors.
- 4MVBP indicates which of the four luma blocks contain non-zero motion vector differentials.
- CBPPRESENT is a 1-bit syntax element present in intra-coded macroblocks in interlaced P-frames and interlaced B-frames. If CBPPRESENT is 1, the CBPCY syntax element is present for that macroblock and is decoded. If CBPPRESENT is 0, the CBPCY syntax element is not present and shall be set to zero.
- CBPCY Coded Block Pattern
- CBPCY is a variable-length syntax element that indicates the transform coefficient status for each block in the macroblock.
- CBPCY decodes to a 6-bit field which indicates whether coefficients are present for the corresponding block.
- For intra-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero AC coefficients.
- a value of 1 indicates that at least one non-zero AC coefficient is present.
- the DC coefficient is still present for each block in all cases.
- For inter-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero coefficients.
- a value of 1 indicates that at least one non-zero coefficient is present. For cases where the bit is 0, no data is encoded for that block.
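- For illustration, the decoded 6-bit CBPCY value can be tested per block as follows; the block ordering (four luma blocks then the two chroma blocks) and the bit numbering are assumptions of the sketch, not specified by the text above.
    /* Returns nonzero if block blk (0..5: assumed order Y1..Y4, Cb, Cr) has
     * coefficient data to decode. cbpcy is the decoded 6-bit coded block
     * pattern; the bit for block 0 is assumed to be the most significant
     * of the six bits. */
    int block_is_coded(int cbpcy, int blk)
    {
        return (cbpcy >> (5 - blk)) & 1;
    }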
- MVDATA Motion Vector Data
- MVDATA is a variable sized syntax element that encodes differentials for the motion vector(s) for the macroblock, the decoding of which is described in detail in below.
- TTMB MB-Level Transform Type
- TTMB specifies a transform type, transform type signal level, and subblock pattern.
- each macroblock may be motion compensated in frame mode using one or four motion vectors or in field mode using two or four motion vectors.
- a macroblock that is inter-coded does not contain any intra blocks.
- the residual after motion compensation may be coded in frame transform mode or field transform mode. More specifically, the luma component of the residual is re-arranged according to fields if it is coded in field transform mode but remains unchanged in frame transform mode, while the chroma component remains the same.
- a macroblock may also be coded as intra.
- Motion compensation may be restricted to not include four (both field/frame) motion vectors, and this is signaled through 4MVSWITCH.
- the type of motion compensation and residual coding is jointly indicated for each macroblock through MBMODE and SKIPMB.
- MBMODE employs a different set of tables according to 4MVSWITCH.
- Macroblocks in interlaced P-frames are classified into five types: 1MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra. These five types are described in further detail above in Section III.
- the first four types of macroblock are inter-coded while the last type indicates that the macroblock is intra-coded.
- the macroblock type is signaled by the MBMODE syntax element in the macroblock layer along with the skip bit. (A skip condition for the macroblock also can be signaled at frame level in a compressed bit plane.)
- MBMODE jointly encodes macroblock types along with various pieces of information regarding the macroblock for different types of macroblock.
- the macroblock-level SKIPMBBIT field indicates the skip condition for a macroblock.
- the SKIPMB field indicates the presence of SKIPMBBIT at macroblock level (in raw mode) or stores skip information in a compressed bit plane.
- the decoded bitplane contains one bit per macroblock and indicates the skip condition for each respective macroblock.
- For a skipped macroblock, the residual is assumed to be frame-coded for loop filtering purposes.
- If the macroblock is not skipped, the MBMODE field is decoded to indicate the type of macroblock and other information regarding the current macroblock, such as information described in the following section.
- MBMODE jointly specifies the type of macroblock (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, or intra), types of transform for inter-coded macroblock (i.e. field or frame or no coded blocks), and whether there is a differential motion vector for a 1MV macroblock.
- MBMODE can take one of 15 possible values:
- Let <MVP> denote the signaling of whether a nonzero 1MV differential motion vector is present or absent.
- MBMODE signals the following information jointly:
- the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks.
- Otherwise, CBPCY is decoded.
- the decoded <Field/frame Transform> is used to set the flag FIELDTX. If it indicates that the macroblock is field transform coded, FIELDTX is set to 1. If it indicates that the macroblock is frame transform coded, FIELDTX is set to 0.
- FIELDTX is set to the same type as the motion vector, i.e., FIELDTX is set to 1 if it is a field motion vector and to 0 if it is a frame motion vector.
- an additional field is sent to indicate which of the differential motion vectors is non-zero.
- the 2MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors.
- the 4MVBP field is sent to indicate which of the four motion vectors contain nonzero differential motion vectors.
- the Field/Frame transform and zero coded blocks are coded in separate fields.
- the process of computing the motion vector predictor(s) for the current macroblock consists of two steps. First, three candidate motion vectors for the current macroblock are gathered from its neighboring macroblocks. Second, the motion vector predictor(s) for the current macroblock is computed from the set of candidate motion vectors. FIGS. 24A-24B show neighboring macroblocks from which the candidate motion vectors are gathered. The order of the collection of candidate motion vectors is important. In this combined implementation, the order of collection always starts at A, proceeds to B, and ends at C. A predictor candidate is considered to be non-existent if the corresponding block is outside the frame boundary or if the corresponding block is part of a different slice. Thus, motion vector prediction is not performed across slice boundaries.
- the pseudo-code 3200 in FIG. 32 is used to collect the up to three candidate motion vectors for the motion vector.
- the candidate motion vectors from the neighboring blocks are collected.
- the pseudo-code 3300 in FIG. 33 is used to collect the up to three candidate motion vectors for the top left frame block motion vector.
- the pseudo-code 3400 in FIG. 34 is used to collect the up to three candidate motion vectors for the top right frame block motion vector.
- the pseudo-code 3500 in FIG. 35 is used to collect the up to three candidate motion vectors for the bottom left frame block motion vector.
- the pseudo-code 3600 in FIG. 36 is used to collect the up to three candidate motion vectors for the bottom right frame block motion vector.
- the candidate motion vectors from the neighboring blocks are collected.
- the pseudo-code 3700 in FIG. 37 is used to collect the up to three candidate motion vectors for the top field motion vector.
- the pseudo-code 3800 in FIG. 38 is used to collect the up to three candidate motion vectors for the bottom field motion vector.
- the candidate motion vectors from the neighboring blocks are collected.
- the pseudo-code 3900 in FIG. 39 is used to collect the up to three candidate motion vectors for the top left field block motion vector.
- the pseudo-code 4000 in FIG. 40 is used to collect the up to three candidate motion vectors for the top right field block motion vector.
- the pseudo-code 4100 in FIG. 41 is used to collect the up to three candidate motion vectors for the bottom left field block motion vector.
- the pseudo-code 4200 in FIG. 42 is used to collect the up to three candidate motion vectors for the bottom right field block motion vector.
- MVX_A = (MVX_1 + MVX_2 + 1) >> 1
- MVY_A = (MVY_1 + MVY_2 + 1) >> 1
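- A small helper makes the rounded averaging in these formulas explicit; the function name is illustrative, and the sketch assumes an arithmetic right shift for negative values (which the C standard leaves implementation-defined).
    typedef struct { int x, y; } MV;

    /* Rounded average of two motion vectors, matching
     *   MVX_A = (MVX_1 + MVX_2 + 1) >> 1
     *   MVY_A = (MVY_1 + MVY_2 + 1) >> 1
     * The right shift floors the result, so the +1 biases the average
     * upward by half a unit before the shift. */
    MV average_mv(MV mv1, MV mv2)
    {
        MV a;
        a.x = (mv1.x + mv2.x + 1) >> 1;
        a.y = (mv1.y + mv2.y + 1) >> 1;
        return a;
    }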
- This section describes how motion vector predictors are calculated for frame motion vectors given a set of candidate motion vectors.
- the operation is the same for computing the predictor for 1MV or for each one of the four frame block motion vectors in 4 Frame MV macroblocks.
- the pseudo-code 4300 in FIG. 43 describes how the motion vector predictor (PMV x , PMV y ) is computed for frame motion vectors.
- the ValidMV array denotes the motion vector in the set of candidate motion vectors.
- This section describes how motion vector predictors are computed for field motion vectors given the set of candidate motion vectors.
- the operation is the same for computing the predictor for each of the two field motion vectors in 2 Field MV macroblocks or for each of the four field block motion vectors in 4 Field MV macroblocks.
- the candidate motion vectors are separated into two sets, where one set contains only candidate motion vectors that point to the same field as the current field and the other set contains candidate motion vectors that point to the opposite field.
- the candidate motion vectors are represented in quarter pixel units
- the following check on its y-component verifies whether a candidate motion vector points to the same field:
    if (ValidMV_y & 4) {
        /* ValidMV points to the opposite field */
    } else {
        /* ValidMV points to the same field */
    }
- the pseudo-code 4400 in FIG. 44 describes how the motion vector predictor (PMV x , PMV y ) is computed for field motion vectors.
- SameFieldMV and OppFieldMV denote the two sets of candidate motion vectors and NumSameFieldMV and NumOppFieldMV denote the number of candidate motion vectors that belong to each set.
- the order of candidate motion vectors in each set starts with candidate A if it exists, followed by candidate B if it exists, and then candidate C if it exists. For example, if the set SameFieldMV contains only candidate B and candidate C, then SameFieldMV[0] is candidate B.
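- A minimal sketch of separating up to three candidates (already ordered A, B, C) into the SameFieldMV and OppFieldMV sets using the y-component check described above; the function and parameter names are illustrative.
    typedef struct { int x, y; } MV;   /* quarter-pixel units */

    /* Separate candidate motion vectors into the SameFieldMV and OppFieldMV
     * sets: (y & 4) != 0 means the candidate points to the opposite field.
     * The input order (A, then B, then C) is preserved within each set. */
    void split_field_candidates(const MV *cand, int num_cand,
                                MV *same_field, int *num_same,
                                MV *opp_field, int *num_opp)
    {
        *num_same = 0;
        *num_opp  = 0;
        for (int i = 0; i < num_cand; i++) {
            if (cand[i].y & 4)
                opp_field[(*num_opp)++] = cand[i];
            else
                same_field[(*num_same)++] = cand[i];
        }
    }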
- the MVDATA syntax elements contain motion vector differential information for the macroblock. Depending on the type of motion compensation and motion vector block pattern signaled at each macroblock, there may be from zero to four MVDATA syntax elements per macroblock. More specifically,
- the motion vector differential is decoded in the same way as a one reference field motion vector differential for interlaced P-fields, without a half-pel mode.
- the pseudo-code 4500 in FIG. 45A illustrates how the motion vector differential is decoded for a one-reference field.
- the pseudo-code 4510 in FIG. 45B illustrates how the motion vector differential is decoded for a one-reference field in an alternative combined implementation.
- Pseudo-code 4510 decodes motion vector differentials in a different way. For example, pseudo-code 4510 omits handling of extended motion vector differential ranges.
- the smod operation ensures that the reconstructed vectors are valid.
- (A smod b) lies within -b and b-1.
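- For illustration, one common definition of smod with exactly that property is sketched below; whether this matches the combined implementation's exact definition is an assumption of the sketch.
    /* A candidate definition of "A smod b": the result lies in [-b, b-1]. */
    int smod(int a, int b)
    {
        int m = (a + b) % (2 * b);
        if (m < 0)
            m += 2 * b;   /* C's % can be negative; fold into [0, 2b) */
        return m - b;
    }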
- range_x and range_y depend on MVRANGE.
- a corresponding chroma frame or field motion vector is derived to compensate a portion (or potentially all) of the chroma (C b /C r ) block.
- the FASTUVMC syntax element is ignored in interlaced P-frames and interlaced B-frames.
- the pseudo-code 4600 in FIG. 46 describes how a chroma motion vector CMV is derived from a luma motion vector LMV in interlace P-frames.
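- The derivation in pseudo-code 4600 is not reproduced in this text; the sketch below shows only the generic idea for 4:2:0 video (roughly halving the luma motion vector, since the chroma planes are half resolution). The rounding rule, and any field-specific adjustment of the vertical component, are assumptions for illustration and not the combined implementation's exact rule.
    typedef struct { int x, y; } MV;   /* quarter-pixel units */

    /* Generic 4:2:0 chroma motion vector derivation sketch: halve the luma
     * motion vector with an assumed round-up bias. NOT the exact rule of
     * pseudo-code 4600. */
    MV derive_chroma_mv(MV lmv)
    {
        MV cmv;
        cmv.x = (lmv.x + 1) >> 1;
        cmv.y = (lmv.y + 1) >> 1;
        return cmv;
    }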
- Macroblock-specific binary information such as skip bits may be encoded in one binary symbol per macroblock. For example, whether or not a macroblock is skipped may be signaled with one bit. In these cases, the status for all macroblocks in a field or frame may be coded as a bitplane and transmitted in the field or frame header.
- The exception is when the bitplane coding mode is set to Raw Mode, in which case the status for each macroblock is coded as one bit per symbol and transmitted along with other macroblock level syntax elements at the macroblock level.
- Field/frame-level bitplane coding is used to encode two-dimensional binary arrays.
- the size of each array is rowMB×colMB, where rowMB and colMB are the number of macroblock rows and columns, respectively, in the field or frame in question.
- each array is coded as a set of consecutive bits.
- One of seven modes is used to encode each array. The seven modes are: raw, normal-2, differential-2, normal-6, differential-6, rowskip, and columnskip.
- the INVERT syntax element is a 1-bit value, which if set indicates that the bitplane has more set bits than zero bits. Depending on INVERT and the mode, the decoder shall invert the interpreted bitplane to recreate the original. Note that the value of this bit shall be ignored when the raw mode is used. Description of how the INVERT value is used in decoding the bitplane is provided below.
- the IMODE syntax element is a variable length value that indicates the coding mode used to encode the bitplane.
- Table 12 shows the code table used to encode the IMODE syntax element. Description of how the IMODE value is used in decoding the bitplane is provided below.
    TABLE 12  IMODE VLC Codetable
    IMODE VLC   Coding mode
    10          Norm-2
    11          Norm-6
    010         Rowskip
    011         Colskip
    001         Diff-2
    0001        Diff-6
    0000        Raw
- Bitplane Coding Bits (DATABITS)
- the DATABITS syntax element is a variable-sized syntax element that encodes the stream of symbols for the bitplane.
- the method used to encode the bitplane is determined by the value of IMODE.
- the seven coding modes are described in the following sections.
- In raw mode, the bitplane is encoded as one bit per symbol scanned in the raster-scan order of macroblocks, and sent as part of the macroblock layer.
- Alternatively, the information is coded in raw mode at the field or frame level and DATABITS is rowMB×colMB bits in length.
- In Diff-2 mode, the Normal-2 method is used to produce the bitplane as described above, and then the Diff⁻¹ operation is applied to the bitplane as described below.
- In Norm-6 mode, the bitplane is encoded in groups of six pixels. These pixels are grouped into either 2×3 or 3×2 tiles. The bitplane is tiled maximally using a set of rules, and the remaining pixels are encoded using a variant of row-skip and column-skip modes. 2×3 "vertical" tiles are used if and only if rowMB is a multiple of 3 and colMB is not. Otherwise, 3×2 "horizontal" tiles are used.
- FIG. 47A shows a simplified example of 2×3 "vertical" tiles.
- FIGS. 47B and 47C show simplified examples of 3×2 "horizontal" tiles for which the elongated dark rectangles are 1 pixel wide and encoded using row-skip and column-skip coding.
- the coding order of the tiles follows the following pattern.
- the 6-element tiles are encoded first, followed by the column-skip and row-skip encoded linear tiles. If the array size is a multiple of 2×3 or of 3×2, the latter linear tiles do not exist and the bitplane is perfectly tiled.
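- The orientation rule quoted above is simple enough to state directly in code; the function name is illustrative.
    #include <stdbool.h>

    /* Norm-6 tile orientation: 2x3 "vertical" tiles are used if and only if
     * rowMB is a multiple of 3 and colMB is not; otherwise 3x2 "horizontal"
     * tiles are used. */
    bool use_vertical_2x3_tiles(int rowMB, int colMB)
    {
        return (rowMB % 3 == 0) && (colMB % 3 != 0);
    }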
- the 6-element rectangular tiles are encoded using an incomplete Huffman code, i.e., a Huffman code which does not use all end nodes for encoding.
- Let N be the number of set bits in the tile, i.e., 0 ≤ N ≤ 6.
- Depending on N, a VLC is used to encode the tile.
- the rectangular tile contains 6 bits of information.
- A combination of VLCs and escape codes plus fixed length codes is used to signal k, the 6-bit value of the tile.
- In Diff-6 mode, the Normal-6 method is first used to produce the bitplane as described above, and then the Diff⁻¹ operation is applied to the bitplane as described below.
- In row-skip mode, a one-bit ROWSKIP element indicates whether the row is skipped; if the row is skipped, the ROWSKIP bit for the next row is next; otherwise (the row is not skipped), ROWBITS bits (a bit for each macroblock in the row) are next.
- If an entire row is zero, ROWSKIP is set to 0 and the row is skipped; otherwise, ROWSKIP is set to 1 and the entire row is sent raw (ROWBITS). Rows are scanned from the top to the bottom of the field or frame.
- Columnskip is the transpose of rowskip. Columns are scanned from the left to the right of the field or frame.
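- A minimal sketch of row-skip decoding, assuming (as described above) that ROWSKIP = 0 signals an all-zero, skipped row and ROWSKIP = 1 is followed by ROWBITS; get_bit() is a hypothetical bitstream reader, and column-skip decoding would be the transpose of this loop:

    #include <string.h>

    extern int get_bit(void);   /* hypothetical bitstream reader */

    /* plane is a rowMB x colMB array of per-macroblock bits in raster order. */
    void decode_rowskip(unsigned char *plane, int rowMB, int colMB)
    {
        for (int r = 0; r < rowMB; r++) {
            unsigned char *row = plane + r * colMB;
            if (get_bit() == 0) {
                memset(row, 0, (size_t)colMB);        /* skipped row: all zero */
            } else {
                for (int c = 0; c < colMB; c++)       /* ROWBITS, one bit per macroblock */
                    row[c] = (unsigned char)get_bit();
            }
        }
    }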
- In Diff-2 or Diff-6 mode, a bitplane of “differential bits” is first decoded using the corresponding normal mode (Norm-2 or Norm-6, respectively). The differential bits are used to regenerate the original bitplane.
- the regeneration process is a 2-D DPCM on a binary alphabet.
- In the differential coding modes, the bitwise inversion process based on INVERT is not performed.
- Instead, the INVERT flag is used in a different capacity to indicate the value of the symbol A for the derivation of the predictor shown above. More specifically, A equals 0 if INVERT equals 0, and A equals 1 if INVERT equals 1.
- the actual value of the bitplane is obtained by xor'ing the predictor with the decoded differential bit value.
- b(i,j) is the bit at the i,jth position after final decoding (i.e. after doing Norm-2/Norm-6, followed by differential xor with its predictor).
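- The regeneration can be sketched as a 2-D DPCM over a binary alphabet. The exact predictor derivation referenced above is not reproduced in this excerpt; the cascade used below (the corner position uses A; the first row and first column use the one available neighbor; elsewhere the left neighbor is used unless the left and top neighbors disagree, in which case A is used) is an assumption for illustration only. A is 0 or 1 according to INVERT, as described above.

    /* b[] holds the decoded differential bits (Norm-2/Norm-6 output) on entry and
       the regenerated bitplane on return, in raster order (rowMB x colMB).
       The predictor cascade below is an illustrative assumption, not the spec. */
    void diff_inverse(unsigned char *b, int rowMB, int colMB, int A)
    {
        for (int i = 0; i < rowMB; i++) {
            for (int j = 0; j < colMB; j++) {
                int pred;
                if (i == 0 && j == 0) {
                    pred = A;
                } else if (i == 0) {
                    pred = b[j - 1];                     /* left neighbor */
                } else if (j == 0) {
                    pred = b[(i - 1) * colMB];           /* top neighbor */
                } else {
                    int left = b[i * colMB + j - 1];
                    int top  = b[(i - 1) * colMB + j];
                    pred = (left != top) ? A : left;
                }
                b[i * colMB + j] ^= (unsigned char)pred; /* XOR with predictor */
            }
        }
    }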
Description
- This application claims the benefit of U.S. Provisional Patent Application No. 60/501,081, entitled “Video Encoding and Decoding Tools and Techniques,” filed Sep. 7, 2003, which is hereby incorporated by reference.
- The following co-pending U.S. patent applications relate to the present application and are hereby incorporated by reference: 1) U.S. patent application Ser. No. xx/yyy,zzz, entitled, “Motion Vector Coding and Decoding in Interlaced Frame Coded Pictures,” filed concurrently herewith; and 2) U.S. patent application Ser. No. xx/yyy,zzz, entitled, “Chroma Motion Vector Derivation,” filed concurrently herewith.
- A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by any one of the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
- Techniques and tools for interlaced video coding and decoding are described. For example, an encoder signals macroblock mode information for macroblocks in an interlaced frame coded picture. A decoder performs corresponding decoding.
- Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 pictures per second. Each picture can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits or more. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
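- As a rough worked example (with illustrative numbers, not taken from the text above), a 320×240 picture at 24 bits per pixel and 30 pictures per second corresponds to 320 × 240 × 24 × 30 ≈ 55 million bits/second of raw data, well above the 5 million bits/second floor mentioned above.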
- Most computers and computer networks lack the resources to process raw digital video. For this reason, engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video. Or, compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
- In general, video compression techniques include “intra” compression and “inter” or predictive compression. Intra compression techniques compress individual pictures, typically called I-frames or key frames. Inter compression techniques compress frames with reference to preceding and/or following frames, and inter-compressed frames are typically called predicted frames, P-frames, or B-frames.
- I. Inter Compression in Windows Media Video, Versions 8 and 9
- Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder. The WMV8 encoder uses intra and inter compression, and the WMV8 decoder uses intra and inter decompression. Windows Media Video, Version 9 [“WMV9”] uses a similar architecture for many operations.
- Inter compression in the WMV8 encoder uses block-based motion compensated prediction coding followed by transform coding of the residual error.
- FIGS. 1 and 2 illustrate the block-based inter compression for a predicted frame in the WMV8 encoder. In particular, FIG. 1 illustrates motion estimation for a predicted frame 110 and FIG. 2 illustrates compression of a prediction residual for a motion-compensated block of a predicted frame.
- For example, in FIG. 1, the WMV8 encoder computes a motion vector for a macroblock 115 in the predicted frame 110. To compute the motion vector, the encoder searches in a search area 135 of a reference frame 130. Within the search area 135, the encoder compares the macroblock 115 from the predicted frame 110 to various candidate macroblocks in order to find a candidate macroblock that is a good match. The encoder outputs information specifying the motion vector (entropy coded) for the matching macroblock.
- Since a motion vector value is often correlated with the values of spatially surrounding motion vectors, compression of the data used to transmit the motion vector information can be achieved by selecting a motion vector predictor based upon motion vectors of neighboring macroblocks and predicting the motion vector for the current macroblock using the motion vector predictor. The encoder can encode the differential between the motion vector and the predictor. After reconstructing the motion vector by adding the differential to the predictor, a decoder uses the motion vector to compute a prediction macroblock for the macroblock 115 using information from the reference frame 130, which is a previously reconstructed frame available at the encoder and the decoder. The prediction is rarely perfect, so the encoder usually encodes blocks of pixel differences (also called the error or residual blocks) between the prediction macroblock and the macroblock 115 itself.
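- As a minimal sketch of this predictor-plus-differential idea (the type and function names here are illustrative, not the WMV8 bitstream syntax):

    typedef struct { int x, y; } MV;

    /* Encoder side: the differential (motion vector minus predictor) is what
       gets entropy coded and transmitted. */
    MV mv_differential(MV mv, MV predictor)
    {
        MV d = { mv.x - predictor.x, mv.y - predictor.y };
        return d;
    }

    /* Decoder side: the motion vector is reconstructed by adding the decoded
       differential back to the same predictor. */
    MV mv_reconstruct(MV differential, MV predictor)
    {
        MV mv = { predictor.x + differential.x, predictor.y + differential.y };
        return mv;
    }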
- FIG. 2 illustrates an example of computation and encoding of an error block 235 in the WMV8 encoder. The error block 235 is the difference between the predicted block 215 and the original current block 225. The encoder applies a discrete cosine transform [“DCT”] 240 to the error block 235, resulting in an 8×8 block 245 of coefficients. The encoder then quantizes 250 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 255. The encoder scans 260 the 8×8 block 255 into a one-dimensional array 265 such that coefficients are generally ordered from lowest frequency to highest frequency. The encoder entropy encodes the scanned coefficients using a variation of run length coding 270. The encoder selects an entropy code from one or more run/level/last tables 275 and outputs the entropy code.
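- The run/level/last step can be sketched as follows (illustrative only; the actual encoder maps each triple to a variable length code from its run/level/last tables):

    #include <stdio.h>

    /* Walks a scanned 1-D array of 64 quantized coefficients and emits
       (run of zeros, level, last) triples, stopping at the last nonzero value. */
    void run_level_last(const int scanned[64])
    {
        int last_nonzero = -1;
        for (int i = 0; i < 64; i++)
            if (scanned[i] != 0)
                last_nonzero = i;

        int run = 0;
        for (int i = 0; i <= last_nonzero; i++) {
            if (scanned[i] == 0) {
                run++;
            } else {
                printf("run=%d level=%d last=%d\n", run, scanned[i], i == last_nonzero);
                run = 0;
            }
        }
    }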
- FIG. 3 shows an example of a corresponding decoding process 300 for an inter-coded block. In summary of FIG. 3, a decoder decodes (310, 320) entropy-coded information representing a prediction residual using variable length decoding 310 with one or more run/level/last tables 315 and run length decoding 320. The decoder inverse scans 330 a one-dimensional array 325 storing the entropy-decoded information into a two-dimensional block 335. The decoder inverse quantizes and inverse discrete cosine transforms (together, 340) the data, resulting in a reconstructed error block 345. In a separate motion compensation path, the decoder computes a predicted block 365 using motion vector information 355 for displacement from a reference frame. The decoder combines 370 the predicted block 365 with the reconstructed error block 345 to form the reconstructed block 375.
- The amount of change between the original and reconstructed frames is the distortion, and the number of bits required to code the frame indicates the rate for the frame. The amount of distortion is roughly inversely proportional to the rate.
- II. Interlaced Video and Progressive Video
- A video frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame. A progressive I-frame is an intra-coded progressive video frame. A progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction.
- A typical interlaced video frame consists of two fields scanned starting at different times. For example, referring to FIG. 4, an interlaced video frame 400 includes top field 410 and bottom field 420. Typically, the even-numbered lines (top field) are scanned starting at one time (e.g., time t) and the odd-numbered lines (bottom field) are scanned starting at a different (typically later) time (e.g., time t+1). This timing can create jagged tooth-like features in regions of an interlaced video frame where motion is present because the two fields are scanned starting at different times. For this reason, interlaced video frames can be rearranged according to a field structure, with the odd lines grouped together in one field, and the even lines grouped together in another field. This arrangement, known as field coding, is useful in high-motion pictures for reduction of such jagged edge artifacts. On the other hand, in stationary regions, image detail in the interlaced video frame may be more efficiently preserved without such a rearrangement. Accordingly, frame coding is often used in stationary or low-motion interlaced video frames, in which the original alternating field line arrangement is preserved.
- A typical progressive video frame consists of one frame of content with non-alternating lines. In contrast to interlaced video, progressive video does not divide video frames into separate fields, and an entire frame is scanned left to right, top to bottom starting at a single time.
- III. P-Frame Coding and Decoding in a Previous WMV Encoder and Decoder
- The encoder and decoder use progressive and interlace coding and decoding in P-frames. In interlaced and progressive P-frames, a motion vector is encoded in the encoder by computing a differential between the motion vector and a motion vector predictor, which is computed based on neighboring motion vectors. And, in the decoder, the motion vector is reconstructed by adding the motion vector differential to the motion vector predictor, which is again computed (this time in the decoder) based on neighboring motion vectors. Thus, a motion vector predictor for the current macroblock or field of the current macroblock is selected based on the candidates, and a motion vector differential is calculated based on the motion vector predictor. The motion vector can be reconstructed by adding the motion vector differential to the selected motion vector predictor at either the encoder or the decoder side. Typically, luminance motion vectors are reconstructed from the encoded motion information, and chrominance motion vectors are derived from the reconstructed luminance motion vectors.
- A. Progressive P-Frame Coding and Decoding
- For example, in the encoder and decoder, progressive P-frames can contain macroblocks encoded in one motion vector (1MV) mode or in four motion vector (4MV) mode, or skipped macroblocks, with a decision generally made on a macroblock-by-macroblock basis. P-frames with only 1MV macroblocks (and, potentially, skipped macroblocks) are referred to as 1MV P-frames, and P-frames with both 1MV and 4MV macroblocks (and, potentially, skipped macroblocks) are referred to as Mixed-MV P-frames. One luma motion vector is associated with each 1MV macroblock, and up to four luma motion vectors are associated with each 4MV macroblock (one for each block).
-
FIGS. 5A and 5B are diagrams showing the locations of macroblocks considered for candidate motion vector predictors for a macroblock in a 1MV progressive P-frame. The candidate predictors are taken from the left, top and top-right macroblocks, except in the case where the macroblock is the last macroblock in the row. In this case, Predictor B is taken from the top-left macroblock instead of the top-right. For the special case where the frame is one macroblock wide, the predictor is always Predictor A (the top predictor). When Predictor A is out of bounds because the macroblock is in the top row, the predictor is Predictor C. Various other rules address other special cases such as intra-coded predictors. -
FIGS. 6A-10 show the locations of the blocks or macroblocks considered for the up-to-three candidate motion vectors for a motion vector for a 1MV or 4MV macroblock in a Mixed-MV frame. In the following figures, the larger squares are macroblock boundaries and the smaller squares are block boundaries. For the special case where the frame is one macroblock wide, the predictor is always Predictor A (the top predictor). Various other rules address other special cases such as top row blocks for top row 4MV macroblocks, top row 1MV macroblocks, and intra-coded predictors. -
FIGS. 6A and 6B are diagrams showing locations of blocks considered for candidate motion vector predictors for a 1MV current macroblock in a Mixed-MV frame. The neighboring macroblocks may be 1MV or 4MV macroblocks. FIGS. 6A and 6B show the locations for the candidate motion vectors assuming the neighbors are 4MV (i.e., predictor A is the motion vector for block 2 in the macroblock above the current macroblock, and predictor C is the motion vector for block 1 in the macroblock immediately to the left of the current macroblock). If any of the neighbors is a 1MV macroblock, then the motion vector predictor shown in FIGS. 5A and 5B is taken to be the motion vector predictor for the entire macroblock. As FIG. 6B shows, if the macroblock is the last macroblock in the row, then Predictor B is from block 3 of the top-left macroblock instead of from block 2 in the top-right macroblock as is the case otherwise.
- FIGS. 7A-10 show the locations of blocks considered for candidate motion vector predictors for each of the 4 luminance blocks in a 4MV macroblock. FIGS. 7A and 7B are diagrams showing the locations of blocks considered for candidate motion vector predictors for a block at position 0; FIGS. 8A and 8B are diagrams showing the locations of blocks considered for candidate motion vector predictors for a block at position 1; FIG. 9 is a diagram showing the locations of blocks considered for candidate motion vector predictors for a block at position 2; and FIG. 10 is a diagram showing the locations of blocks considered for candidate motion vector predictors for a block at position 3. Again, if a neighbor is a 1MV macroblock, the motion vector predictor for the macroblock is used for the blocks of the macroblock.
- For the case where the macroblock is the first macroblock in the row, Predictor B for block 0 is handled differently than block 0 for the remaining macroblocks in the row (see FIGS. 7A and 7B). In this case, Predictor B is taken from block 3 in the macroblock immediately above the current macroblock instead of from block 3 in the macroblock above and to the left of the current macroblock, as is the case otherwise. Similarly, for the case where the macroblock is the last macroblock in the row, Predictor B for block 1 is handled differently (FIGS. 8A and 8B). In this case, the predictor is taken from block 2 in the macroblock immediately above the current macroblock instead of from block 2 in the macroblock above and to the right of the current macroblock, as is the case otherwise. In general, if the macroblock is in the first macroblock column, then Predictor C for blocks 0 and 2 is set equal to 0.
- The encoder and decoder use a 4:1:1 macroblock format for interlaced P-frames, which can contain macroblocks encoded in field mode or in frame mode, or skipped macroblocks, with a decision generally made on a macroblock-by-macroblock basis. Two motion vectors are associated with each field-coded macroblock (one motion vector per field), and one motion vector is associated with each frame-coded macroblock. An encoder jointly encodes motion information, including horizontal and vertical motion vector differential components, potentially along with other signaling information.
-
FIGS. 11, 12 and 13 show examples of candidate predictors for motion vector prediction for frame-coded 4:1:1 macroblocks and field-coded 4:1:1 macroblocks, respectively, in interlaced P-frames in the encoder and decoder. FIG. 11 shows candidate predictors A, B and C for a current frame-coded 4:1:1 macroblock in an interior position in an interlaced P-frame (not the first or last macroblock in a macroblock row, not in the top row). Predictors can be obtained from different candidate directions other than those labeled A, B, and C (e.g., in special cases such as when the current macroblock is the first macroblock or last macroblock in a row, or in the top row, since certain predictors are unavailable for such cases). For a current frame-coded macroblock, predictor candidates are calculated differently depending on whether the neighboring macroblocks are field-coded or frame-coded. For a neighboring frame-coded macroblock, the motion vector is simply taken as the predictor candidate. For a neighboring field-coded macroblock, the candidate motion vector is determined by averaging the top and bottom field motion vectors.
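- A minimal sketch of forming that candidate from a neighboring field-coded macroblock (the rounding here is an assumption; the text above only states that the two field motion vectors are averaged):

    typedef struct { int x, y; } MV;

    /* Candidate predictor from a field-coded neighbor: average of its
       top field and bottom field motion vectors (rounding is illustrative). */
    MV field_pair_candidate(MV top_field_mv, MV bottom_field_mv)
    {
        MV cand;
        cand.x = (top_field_mv.x + bottom_field_mv.x) / 2;
        cand.y = (top_field_mv.y + bottom_field_mv.y) / 2;
        return cand;
    }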
- FIGS. 12 and 13 show candidate predictors A, B and C for a current field in a field-coded 4:1:1 macroblock in an interior position in the field. In FIG. 12, the current field is a bottom field, and the bottom field motion vectors in the neighboring macroblocks are used as candidate predictors. In FIG. 13, the current field is a top field, and the top field motion vectors in the neighboring macroblocks are used as candidate predictors. Thus, for each field in a current field-coded macroblock, the number of motion vector predictor candidates for each field is at most three, with each candidate coming from the same field type (e.g., top or bottom) as the current field. Again, various special cases (not shown) apply when the current macroblock is the first macroblock or last macroblock in a row, or in the top row, since certain predictors are unavailable for such cases.
- To select a predictor from a set of predictor candidates, the encoder and decoder use different selection algorithms, such as a median-of-three algorithm. A procedure for median-of-three prediction is described in pseudo-code 1400 in FIG. 14.
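- FIG. 14 is not reproduced in this excerpt; the following is one common formulation of a component-wise median of three, given only as a sketch:

    /* Median of three integers, applied per motion vector component
       (e.g., median3(A.x, B.x, C.x) and median3(A.y, B.y, C.y)). */
    int median3(int a, int b, int c)
    {
        if (a > b) { int t = a; a = b; b = t; }   /* now a <= b */
        if (b > c)
            b = c;                                /* clamp b down to c */
        return (a > b) ? a : b;                   /* larger of a and the clamped b */
    }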
- IV. B-Frame Coding and Decoding in a Previous WMV Encoder and Decoder
- The encoder and decoder use progressive and interlaced B-frames. B-frames use two frames from the source video as reference (or anchor) frames rather than the one anchor used in P-frames. Among anchor frames for a typical B-frame, one anchor frame is from the temporal past and one anchor frame is from the temporal future. Referring to
FIG. 15, a B-frame 1510 in a video sequence has a temporally previous reference frame 1520 and a temporally future reference frame 1530. Encoded bit streams with B-frames typically use fewer bits than encoded bit streams with no B-frames, while providing similar visual quality. A decoder also can accommodate space and time restrictions by opting not to decode or display B-frames, since B-frames are not generally used as reference frames.
- While macroblocks in forward-predicted frames (e.g., P-frames) have only one directional mode of prediction (forward, from previous I- or P-frames), macroblocks in B-frames can be predicted using five different prediction modes: forward, backward, direct, interpolated and intra. The encoder selects and signals different prediction modes in the bit stream. Forward mode is similar to conventional P-frame prediction. In forward mode, a macroblock is derived from a temporally previous anchor. In backward mode, a macroblock is derived from a temporally subsequent anchor. Macroblocks predicted in direct or interpolated modes use both forward and backward anchors for prediction.
- V. Signaling Macroblock Information in a Previous WMV Encoder and Decoder
- In the encoder and decoder, macroblocks in interlaced P-frames can be one of three possible types: frame-coded, field-coded and skipped. The macroblock type is indicated by a multi-element combination of frame-level and macroblock-level syntax elements.
- For interlaced P-frames, the frame-level element INTRLCF indicates the mode used to code the macroblocks in that frame. If INTRLCF=0, all macroblocks in the frame are frame-coded. If INTRLCF=1, the macroblocks may be field-coded or frame-coded. The INTRLCMB element is present in the frame layer when INTRLCF=1. INTRLCMB is a bitplane-coded array that indicates the field/frame coding status for each macroblock in the picture. The decoded bitplane represents the interlaced status for each macroblock as an array of 1-bit values. A value of 0 for a particular bit indicates that a corresponding macroblock is coded in frame mode. A value of 1 indicates that the corresponding macroblock is coded in field mode.
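- A small sketch of how those two elements combine per macroblock, assuming the INTRLCMB bitplane has already been decoded into one bit per macroblock in raster order:

    typedef enum { MB_FRAME_CODED = 0, MB_FIELD_CODED = 1 } MbFieldFrameMode;

    /* intrlcmb points to the decoded bitplane (one value per macroblock);
       it is only consulted when INTRLCF = 1. */
    MbFieldFrameMode mb_field_frame_mode(int intrlcf, const unsigned char *intrlcmb,
                                         int mb_index)
    {
        if (intrlcf == 0)
            return MB_FRAME_CODED;                 /* all macroblocks frame-coded */
        return intrlcmb[mb_index] ? MB_FIELD_CODED : MB_FRAME_CODED;
    }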
- For frame-coded macroblocks, the macroblock-level MVDATA element is associated with all blocks in the macroblock. MVDATA signals whether the blocks in the macroblocks are intra-coded or inter-coded. If they are inter-coded, MVDATA also indicates the motion vector differential.
- For field-coded macroblocks, a TOPMVDATA element is associated with the top field blocks in the macroblock and a BOTMVDATA element is associated with the bottom field blocks in the macroblock. TOPMVDATA and BOTMVDATA are sent at the first block of each field. TOPMVDATA indicates whether the top field blocks are intra-coded or inter-coded. Likewise, BOTMVDATA indicates whether the bottom field blocks are intra-coded or inter-coded. For inter-coded blocks, TOPMVDATA and BOTMVDATA also indicate motion vector differential information.
- The CBPCY element indicates coded block pattern (CBP) information for luminance and chrominance components in a macroblock. The CBPCY element also indicates which fields have motion vector data elements present in the bitstream. CBPCY and the motion vector data elements are used to specify whether blocks have AC coefficients. CBPCY is present for a frame-coded macroblock of an interlaced P-frame if the “last” value decoded from MVDATA indicates that there are data following the motion vector to decode. If CBPCY is present, it decodes to a 6-bit field, one bit for each of the four Y blocks, one bit for both U blocks (top field and bottom field), and one bit for both V blocks (top field and bottom field).
- CBPCY is always present for a field-coded macroblock. CBPCY and the two field motion vector data elements are used to determine the presence of AC coefficients in the blocks of the macroblock. The meaning of CBPCY is the same as for frame-coded macroblocks for
bits 1, 3, 4 and 5. The meaning of bit positions 0 and 2 is slightly different. A 0 in bit position 0 indicates that TOPMVDATA is not present and the motion vector predictor is used as the motion vector for the top field blocks. It also indicates that the left top field block does not contain any nonzero coefficients. A 1 in bit position 0 indicates that TOPMVDATA is present. TOPMVDATA indicates whether the top field blocks are inter or intra and, if they are inter, also indicates the motion vector differential. If the “last” value decoded from TOPMVDATA decodes to 1, then no AC coefficients are present for the left top field block; otherwise, there are nonzero AC coefficients for the left top field block. Similarly, the above rules apply to bit position 2 for BOTMVDATA and the left bottom field block. - VI. Skipped Macroblocks in a Previous WMV Encoder and Decoder
- The encoder and decoder use skipped macroblocks to reduce bitrate. For example, the encoder signals skipped macroblocks in the bitstream. When the decoder receives information (e.g., a skipped macroblock flag) in the bitstream indicating that a macroblock is skipped, the decoder skips decoding residual block information for the macroblock. Instead, the decoder uses corresponding pixel data from a co-located or motion compensated (with a motion vector predictor) macroblock in a reference frame to reconstruct the macroblock. The encoder and decoder select between multiple coding/decoding modes for encoding and decoding the skipped macroblock information. For example, skipped macroblock information is signaled at frame level of the bitstream (e.g., in a compressed bitplane) or at macroblock level (e.g., with one “skip” bit per macroblock). For bitplane coding, the encoder and decoder select between different bitplane coding modes.
- One previous encoder and decoder define a skipped macroblock as a predicted macroblock whose motion is equal to its causally predicted motion and which has zero residual error. Another previous encoder and decoder define a skipped macroblock as a predicted macroblock with zero motion and zero residual error.
- For more information on skipped macroblocks and bitplane coding, see U.S. patent application Ser. No. 10/321,415, entitled “Skip Macroblock Coding,” filed Dec. 16, 2002.
- VII. Standards for Video Compression and Decompression
- Several international standards relate to video compression and decompression. These standards include the Motion Picture Experts Group [“MPEG”] 1, 2, and 4 standards and the H.261, H.262 (another title for MPEG-2), H.263 and H.264 (also called JVT/AVC) standards from the International Telecommunication Union [“ITU”]. These standards specify aspects of video decoders and formats for compressed video information. Directly or by implication, they also specify certain encoder details, but other encoder details are not specified. These standards use (or support the use of) different combinations of intraframe and interframe decompression and compression.
- A. Signaling Field- or Frame-Coded Macroblocks in the Standards
- Some international standards describe signaling of field/frame coding type (e.g., field-coding or frame-coding) for macroblocks in interlaced pictures.
- Draft JVT-d157 of the JVT/AVC standard describes the mb_field_decoding_flag syntax element, which is used to signal whether a macroblock pair is decoded in frame mode or field mode in interlaced P-frames. Section 7.3.4 describes a bitstream syntax where mb_field_decoding_flag is sent as an element of slice data in cases where a sequence parameter (mb_frame_field_adaptive_flag) indicates switching between frame and field decoding in macroblocks and a slice header element (pic structure) identifies the picture structure as a progressive picture or an interlaced frame picture.
- The May 28, 1998 committee draft of MPEG-4 describes the dct_type syntax element, which is used to signal whether a macroblock is frame DCT coded or field DCT coded. According to Sections 6.2.7.3 and 6.3.7.3, dct_type is a macroblock-layer element that is only present in the MPEG-4 bitstream in interlaced content where the macroblock has a non-zero coded block pattern or is intra-coded.
- In MPEG-2, the dct_type element indicates whether a macroblock is frame DCT coded or field DCT coded. MPEG-2 also describes a picture coding extension element frame_pred_frame_dct. When frame_pred_frame_dct is set to ‘1’, only frame DCT coding is used in interlaced frames. The condition dct_type=0 is “derived” when frame_pred_frame_dct=1 and the dct_type element is not present in the bitstream.
- B. Skipped Macroblocks in the Standards
- Some international standards use skipped macroblocks. For example, draft JVT-d157 of the JVT/AVC standard defines a skipped macroblock as “a macroblock for which no data is coded other than an indication that the macroblock is to be decoded as ‘skipped.’” Similarly, the committee draft of MPEG-4 states, “A skipped macroblock is one for which no information is transmitted.”
- C. Limitations of the Standards
- These international standards are limited in several important ways. For example, although the standards provide for signaling of macroblock types, field/frame coding type information is signaled separately from motion compensation types (e.g., field prediction or frame prediction, one motion vector or multiple motion vectors, etc.). As another example, although some international standards allow for bitrate savings by skipping certain macroblocks, the skipped macroblock condition in these standards only indicates that no further information for the macroblock is encoded, and fails to provide other potentially valuable information about the macroblock.
- Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.
- In summary, the detailed description is directed to various techniques and tools for encoding and decoding interlaced video frames. Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:
- In one aspect, a decoder decodes one or more skipped macroblocks among plural macroblocks of an interlaced frame (e.g., an interlaced P-frame, interlaced B-frame, or a frame having interlaced P-fields and/or interlaced B-fields). Each of the one or more skipped macroblocks (1) is indicated by a skipped macroblock signal in a bitstream, (2) uses exactly one predicted motion vector (e.g., a frame motion vector) and has no motion vector differential information, and (3) lacks residual information. The skipped macroblock signal for each of the one or more skipped macroblocks indicates one-motion-vector motion-compensated decoding for the respective skipped macroblock. The skipped macroblock signal can be part of a compressed bitplane sent at frame layer in a bitstream having plural layers. Or, the skipped macroblock signal can be an individual bit sent at macroblock layer.
- In another aspect, a coding mode from a group of plural available coding modes is selected, and a bitplane is processed in an encoder or decoder according to the selected coding mode. The bitplane includes binary information signifying whether macroblocks in an interlaced frame are skipped or not skipped. A macroblock in the interlaced frame is skipped if the macroblock has only one motion vector, the only one motion vector is equal to a predicted motion vector for the macroblock, and the macroblock has no residual error. A macroblock is not skipped if it has plural motion vectors.
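- A sketch of that skip condition as a predicate (the structure and field names are illustrative, not bitstream syntax):

    typedef struct { int x, y; } MV;

    typedef struct {
        int num_mv;         /* number of motion vectors used by the macroblock */
        MV  mv;             /* the single motion vector, when num_mv == 1 */
        MV  predicted_mv;   /* predicted motion vector for the macroblock */
        int has_residual;   /* nonzero if any residual error is coded */
    } MacroblockInfo;

    /* Skipped: exactly one motion vector, equal to its prediction, and no residual. */
    int is_skipped(const MacroblockInfo *mb)
    {
        return mb->num_mv == 1 &&
               mb->mv.x == mb->predicted_mv.x &&
               mb->mv.y == mb->predicted_mv.y &&
               !mb->has_residual;
    }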
- In another aspect, an encoder selects a motion compensation type (e.g., 1MV, 4 Frame MV, 2 Field MV, or 4 Field MV) for a macroblock in an interlaced P-frame and selects a field/frame coding type (e.g., field-coded, frame-coded, or no coded blocks) for the macroblock. The encoder jointly encodes the motion compensation type and the field/frame coding type for the macroblock. The encoder also can jointly encode other information for the macroblock with the motion compensation type and the field/frame coding type (e.g., an indicator of the presence of a differential motion vector, such as for a one-motion-vector macroblock).
- In another aspect, a decoder receives macroblock information for a macroblock in an interlaced P-frame, including a joint code representing motion compensation type and field/frame coding type for the macroblock. The decoder decodes the joint code (e.g., a variable length code in a variable length coding table) to obtain both motion compensation type information and field/frame coding type information for the macroblock.
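- A sketch of that joint decoding step; the table contents below are placeholders standing in for the actual variable length coding table:

    typedef enum { MC_1MV, MC_4_FRAME_MV, MC_2_FIELD_MV, MC_4_FIELD_MV } McType;
    typedef enum { FF_FRAME_CODED, FF_FIELD_CODED, FF_NO_CODED_BLOCKS } FfType;

    typedef struct { McType mc; FfType ff; } MbMode;

    /* One decoded VLC index yields both the motion compensation type and the
       field/frame coding type. Entries here are placeholders for illustration. */
    MbMode decode_joint_mb_mode(int vlc_index)
    {
        static const MbMode table[] = {
            { MC_1MV,        FF_FRAME_CODED     },
            { MC_1MV,        FF_FIELD_CODED     },
            { MC_1MV,        FF_NO_CODED_BLOCKS },
            { MC_2_FIELD_MV, FF_FRAME_CODED     },
            { MC_4_FRAME_MV, FF_FIELD_CODED     },
            { MC_4_FIELD_MV, FF_FIELD_CODED     },
        };
        return table[vlc_index];
    }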
- The various techniques and tools can be used in combination or independently.
- Additional features and advantages will be made apparent from the following detailed description of different embodiments that proceeds with reference to the accompanying drawings.
-
FIG. 1 is a diagram showing motion estimation in a video encoder according to the prior art. -
FIG. 2 is a diagram showing block-based compression for an 8×8 block of prediction residuals in a video encoder according to the prior art. -
FIG. 3 is a diagram showing block-based decompression for an 8×8 block of prediction residuals in a video encoder according to the prior art. -
FIG. 4 is a diagram showing an interlaced frame according to the prior art. -
FIGS. 5A and 5B are diagrams showing locations of macroblocks for candidate motion vector predictors for a 1MV macroblock in a progressive P-frame according to the prior art. -
FIGS. 6A and 6B are diagrams showing locations of blocks for candidate motion vector predictors for a 1MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art. -
FIGS. 7A, 7B , 8A, 8B, 9, and 10 are diagrams showing the locations of blocks for candidate motion vector predictors for a block at various positions in a 4MV macroblock in a mixed 1MV/4MV progressive P-frame according to the prior art. -
FIG. 11 is a diagram showing candidate motion vector predictors for a current frame-coded macroblock in an interlaced P-frame according to the prior art. -
FIGS. 12 and 13 are diagrams showing candidate motion vector predictors for a current field-coded macroblock in an interlaced P-frame according to the prior art. -
FIG. 14 is a code diagram showing pseudo-code for performing a median-of-3 calculation according to the prior art. -
FIG. 15 is a diagram showing a B-frame with past and future reference frames according to the prior art. -
FIG. 16 is a block diagram of a suitable computing environment in conjunction with which several described embodiments may be implemented. -
FIG. 17 is a block diagram of a generalized video encoder system in conjunction with which several described embodiments may be implemented. -
FIG. 18 is a block diagram of a generalized video decoder system in conjunction with which several described embodiments may be implemented. -
FIG. 19 is a diagram of a macroblock format used in several described embodiments. -
FIG. 20A is a diagram of part of an interlaced video frame, showing alternating lines of a top field and a bottom field.FIG. 20B is a diagram of the interlaced video frame organized for encoding/decoding as a frame, andFIG. 20C is a diagram of the interlaced video frame organized for encoding/decoding as fields. -
FIG. 21 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 2 field MV macroblock of an interlaced P-frame. -
FIG. 22 is a diagram showing different motion vectors for each of four luminance blocks, and derived motion vectors for each of four chrominance sub-blocks, in a 4 frame MV macroblock of an interlaced P-frame. -
FIG. 23 is a diagram showing motion vectors for luminance blocks and derived motion vectors for chrominance blocks in a 4 field MV macroblock of an interlaced P-frame. -
FIGS. 24A-24B are diagrams showing candidate predictors for a current macroblock of an interlaced P-frame. -
FIG. 25 is a flow chart showing a technique for determining whether to skip coding of particular macroblocks in an interlaced predicted frame. -
FIG. 26 is a flow chart showing a technique for decoding jointly coded motion compensation type information and field/frame coding type information for a macroblock in an interlaced P-frame. -
FIG. 27 is a diagram showing an entry-point-layer bitstream syntax in a combined implementation. -
FIG. 28 is a diagram showing a frame-layer bitstream syntax for interlaced P-frames in a combined implementation. -
FIG. 29 is a diagram showing a frame-layer bitstream syntax for interlaced B-frames in a combined implementation. -
FIG. 30 is a diagram showing a frame-layer bitstream syntax for interlaced P-fields or B-fields in a combined implementation. -
FIG. 31 is a diagram showing a macroblock-layer bitstream syntax for macroblocks of interlaced P-frames in a combined implementation. -
FIG. 32 is a code listing showing pseudo-code for collecting candidate motion vectors for 1MV macroblocks in an interlaced P-frame in a combined implementation. -
FIGS. 33, 34 , 35, and 36 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Frame MV macroblocks in an interlaced P-frame in a combined implementation. -
FIGS. 37 and 38 are code listings showing pseudo-code for collecting candidate motion vectors for 2 Field MV macroblocks in an interlaced P-frame in a combined implementation. -
FIGS. 39, 40 , 41, and 42 are code listings showing pseudo-code for collecting candidate motion vectors for 4 Field MV macroblocks in an interlaced P-frame in a combined implementation. -
FIG. 43 is a code listing showing pseudo-code for computing motion vector predictors for frame motion vectors in an interlaced P-frame in a combined implementation. -
FIG. 44 is a code listing showing pseudo-code for computing motion vector predictors for field motion vectors in an interlaced P-frame in a combined implementation. -
FIGS. 45A and 45B are code listings showing pseudo-code for decoding a motion vector differential for interlaced P-frames in a combined implementation. -
FIG. 46 is a code listing showing pseudo-code for deriving a chroma motion vector in an interlaced P-frame in a combined implementation. -
FIGS. 47A-47C are diagrams showing tiles for Norm-6 and Diff-6 bitplane coding modes in a combined implementation. - The present application relates to techniques and tools for efficient compression and decompression of interlaced video. In various described embodiments, a video encoder and decoder incorporate techniques for encoding and decoding interlaced video, and corresponding signaling techniques for use with a bit stream format or syntax comprising different layers or levels (e.g., sequence level, frame level, field level, macroblock level, and/or block level).
- Various alternatives to the implementations described herein are possible. For example, techniques described with reference to flowchart diagrams can be altered by changing the ordering of stages shown in the flowcharts, by repeating or omitting certain stages, etc. As another example, although some implementations are described with reference to specific macroblock formats, other formats also can be used. Further, techniques and tools described with reference to forward prediction may also be applicable to other types of prediction.
- The various techniques and tools can be used in combination or independently. Different embodiments implement one or more of the described techniques and tools. Some techniques and tools described herein can be used in a video encoder or decoder, or in some other system not specifically limited to video encoding or decoding.
- I. Computing Environment
-
FIG. 16 illustrates a generalized example of a suitable computing environment 1600 in which several of the described embodiments may be implemented. The computing environment 1600 is not intended to suggest any limitation as to scope of use or functionality, as the techniques and tools may be implemented in diverse general-purpose or special-purpose computing environments. - With reference to
FIG. 16 , the computing environment 1600 includes at least oneprocessing unit 1610 andmemory 1620. InFIG. 16 , this mostbasic configuration 1630 is included within a dashed line. Theprocessing unit 1610 executes computer-executable instructions and may be a real or a virtual processor. In a multi-processing system, multiple processing units execute computer-executable instructions to increase processing power. Thememory 1620 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory, etc.), or some combination of the two. Thememory 1620stores software 1680 implementing a video encoder or decoder with one or more of the described techniques and tools. - A computing environment may have additional features. For example, the computing environment 1600 includes
storage 1640, one ormore input devices 1650, one ormore output devices 1660, and one or more communication connections 1670. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 1600. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 1600, and coordinates activities of the components of the computing environment 1600. - The
storage 1640 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 1600. Thestorage 1640 stores instructions for thesoftware 1680 implementing the video encoder or decoder. - The input device(s) 1650 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 1600. For audio or video encoding, the input device(s) 1650 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 1600. The output device(s) 1660 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 1600.
- The communication connection(s) 1670 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
- The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 1600, computer-readable media include
memory 1620,storage 1640, communication media, and combinations of any of the above. - The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
- For the sake of presentation, the detailed description uses terms like “estimate,” “compensate,” “predict,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
- II. Generalized Video Encoder and Decoder
-
FIG. 17 is a block diagram of ageneralized video encoder 1700 in conjunction with which some described embodiments may be implemented.FIG. 18 is a block diagram of ageneralized video decoder 1800 in conjunction with which some described embodiments may be implemented. - The relationships shown between modules within the
encoder 1700 anddecoder 1800 indicate general flows of information in the encoder and decoder; other relationships are not shown for the sake of simplicity. In particular,FIGS. 17 and 18 usually do not show side information indicating the encoder settings, modes, tables, etc. used for a video sequence, picture, macroblock, block, etc. Such side information is sent in the output bitstream, typically after entropy encoding of the side information. The format of the output bitstream can be a WindowsMedia Video version 9 format or other format. - The
encoder 1700 anddecoder 1800 process video pictures, which may be video frames, video fields or combinations of frames and fields. The bitstream syntax and semantics at the picture and macroblock levels may depend on whether frames or fields are used. There may be changes to macroblock organization and overall timing as well. Theencoder 1700 anddecoder 1800 are block-based and use a 4:2:0 macroblock format for frames, with each macroblock including four 8×8 luminance blocks (at times treated as one 16×16 macroblock) and two 8×8 chrominance blocks. For fields, the same or a different macroblock organization and format may be used. The 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform and entropy encoding stages. Example video frame organizations are described in more detail below. Alternatively, theencoder 1700 anddecoder 1800 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks. - Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
- A. Video Frame Organizations
- In some implementations, the
encoder 1700 anddecoder 1800 process video frames organized as follows. A frame contains lines of spatial information of a video signal. For progressive video, these lines contain samples starting from one time instant and continuing through successive lines to the bottom of the frame. A progressive video frame is divided into macroblocks such as themacroblock 1900 shown inFIG. 19 . Themacroblock 1900 includes four 8×8 luminance blocks (Y1 through Y4) and two 8×8 chrominance blocks that are co-located with the four luminance blocks but half resolution horizontally and vertically, following the conventional 4:2:0 macroblock format. The 8×8 blocks may be further sub-divided at different stages, e.g., at the frequency transform (e.g., 8×4, 4×8 or 4×4 DCTs) and entropy encoding stages. A progressive I-frame is an intra-coded progressive video frame. A progressive P-frame is a progressive video frame coded using forward prediction, and a progressive B-frame is a progressive video frame coded using bi-directional prediction. Progressive P- and B-frames may include intra-coded macroblocks as well as different types of predicted macroblocks. - An interlaced video frame consists of two scans of a frame—one comprising the even lines of the frame (the top field) and the other comprising the odd lines of the frame (the bottom field). The two fields may represent two different time periods or they may be from the same time period.
FIG. 20A shows part of an interlacedvideo frame 2000, including the alternating lines of the top field and bottom field at the top left part of the interlacedvideo frame 2000. -
FIG. 20B shows the interlacedvideo frame 2000 ofFIG. 20A organized for encoding/decoding as aframe 2030. The interlacedvideo frame 2000 has been partitioned into macroblocks such as themacroblocks FIG. 19 . In the luminance plane, eachmacroblock macroblocks -
FIG. 20C shows the interlacedvideo frame 2000 ofFIG. 20A organized for encoding/decoding asfields 2060. Each of the two fields of the interlacedvideo frame 2000 is partitioned into macroblocks. The top field is partitioned into macroblocks such as themacroblock 2061, and the bottom field is partitioned into macroblocks such as themacroblock 2062. (Again, the macroblocks use a 4:2:0 format as shown inFIG. 19 , and the organization and placement of luminance blocks and chrominance blocks within the macroblocks are not shown.) In the luminance plane, themacroblock 2061 includes 16 lines from the top field and themacroblock 2062 includes 16 lines from the bottom field, and each line is 16 pixels long. An interlaced I-field is a single, separately represented field of an interlaced video frame. An interlaced P-field is a single, separately represented field of an interlaced video frame coded using forward prediction, and an interlaced B-field is a single, separately represented field of an interlaced video frame coded using bidirectional prediction. Interlaced P- and B-fields may include intra-coded macroblocks as well as different types of predicted macroblocks. Interlaced BI-fields are a hybrid of interlaced I-fields and interlaced B-fields; they are intra-coded, but are not used as anchors for other fields. - Interlaced video frames organized for encoding/decoding as fields can include various combinations of different field types. For example, such a frame can have the same field type in both the top and bottom fields or different field types in each field. In one implementation, the possible combinations of field types include I/I, I/P, P/I, P/P, B/B, B/BI, BI/B, and BI/BI.
- The term picture generally refers to source, coded or reconstructed image data. For progressive video, a picture is a progressive video frame. For interlaced video, a picture may refer to an interlaced video frame, the top field of the frame, or the bottom field of the frame, depending on the context.
- Alternatively, the
encoder 1700 anddecoder 1800 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration than 8×8 blocks and 16×16 macroblocks. - B. Video Encoder
-
FIG. 17 is a block diagram of a generalizedvideo encoder system 1700. Theencoder system 1700 receives a sequence of video pictures including a current picture 1705 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame), and produces compressedvideo information 1795 as output. Particular embodiments of video encoders typically use a variation or supplemented version of thegeneralized encoder 1700. - The
encoder system 1700 compresses predicted pictures and key pictures. For the sake of presentation,FIG. 17 shows a path for key pictures through theencoder system 1700 and a path for predicted pictures. Many of the components of theencoder system 1700 are used for compressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being compressed. - A predicted picture (e.g., progressive P-frame or B-frame, interlaced P-field or B-field, or interlaced P-frame or B-frame) is represented in terms of prediction (or difference) from one or more other pictures (which are typically referred to as reference pictures or anchors). A prediction residual is the difference between what was predicted and the original picture. In contrast, a key picture (e.g., progressive I-frame, interlaced I-field, or interlaced I-frame) is compressed without reference to other pictures.
- If the
current picture 1705 is a forward-predicted picture, amotion estimator 1710 estimates motion of macroblocks or other sets of pixels of thecurrent picture 1705 with respect to one or more reference pictures, for example, the reconstructedprevious picture 1725 buffered in thepicture store 1720. If thecurrent picture 1705 is a bi-directionally-predicted picture, amotion estimator 1710 estimates motion in thecurrent picture 1705 with respect to up to four reconstructed reference pictures (for an interlaced B-field, for example). Typically, a motion estimator estimates motion in a B-picture with respect to one or more temporally previous reference pictures and one or more temporally future reference pictures. Accordingly, theencoder system 1700 can use theseparate stores - The
motion estimator 1710 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion estimation on a picture-by-picture basis or other basis. The motion estimator 1710 (and compensator 1730) also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis. The resolution of the motion estimation can be the same or different horizontally and vertically. The motion estimator 1710 outputs as side information motion information 1715 such as differential motion vector information. The encoder 1700 encodes the motion information 1715 by, for example, computing one or more predictors for motion vectors, computing differentials between the motion vectors and predictors, and entropy coding the differentials. To reconstruct a motion vector, a motion compensator 1730 combines a predictor with differential motion vector information. Various techniques for computing motion vector predictors, computing differential motion vectors, and reconstructing motion vectors for interlaced P-frames are described below.
motion compensator 1730 applies the reconstructed motion vector to the reconstructed picture(s) 1725 to form a motion-compensatedcurrent picture 1735. The prediction is rarely perfect, however, and the difference between the motion-compensatedcurrent picture 1735 and the originalcurrent picture 1705 is the prediction residual 1745. During later reconstruction of the picture, the prediction residual 1745 is added to the motion compensatedcurrent picture 1735 to obtain a reconstructed picture that is closer to the originalcurrent picture 1705. In lossy compression, however, some information is still lost from the originalcurrent picture 1705. Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation. - A
frequency transformer 1760 converts the spatial domain video information into frequency domain (i.e., spectral) data. For block-based video pictures, thefrequency transformer 1760 applies a DCT, variant of DCT, or other block transform to blocks of the pixel data or prediction residual data, producing blocks of frequency transform coefficients. Alternatively, thefrequency transformer 1760 applies another conventional frequency transform such as a Fourier transform or uses wavelet or sub-band analysis. Thefrequency transformer 1760 may apply an 8×8, 8×4, 4×8, 4×4 or other size frequency transform. - A
quantizer 1770 then quantizes the blocks of spectral data coefficients. The quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a picture-by-picture basis or other basis. Alternatively, the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations. In addition to adaptive quantization, theencoder 1700 can use frame dropping, adaptive filtering, or other techniques for rate control. - The
encoder 1700 may use special signaling for a skipped macroblock, which is a macroblock that has no information of certain types. Skipped macroblocks are described in further detail below. - When a reconstructed current picture is needed for subsequent motion estimation/compensation, an
inverse quantizer 1776 performs inverse quantization on the quantized spectral data coefficients. Aninverse frequency transformer 1766 then performs the inverse of the operations of thefrequency transformer 1760, producing a reconstructed prediction residual (for a predicted picture) or a reconstructed key picture. If thecurrent picture 1705 was a key picture, the reconstructed key picture is taken as the reconstructed current picture (not shown). If thecurrent picture 1705 was a predicted picture, the reconstructed prediction residual is added to the motion-compensatedcurrent picture 1735 to form the reconstructed current picture. One or both of thepicture stores - The
entropy coder 1780 compresses the output of the quantizer 1770 as well as certain side information (e.g., motion information 1715, quantization step size). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder 1780 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique. - The
entropy coder 1780 provides compressed video information 1795 to the multiplexer [“MUX”] 1790. The MUX 1790 may include a buffer, and a buffer level indicator may be fed back to bit rate adaptive modules for rate control. Before or after the MUX 1790, the compressed video information 1795 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 1795. - C. Video Decoder
-
FIG. 18 is a block diagram of a general video decoder system 1800. The decoder system 1800 receives information 1895 for a compressed sequence of video pictures and produces output including a reconstructed picture 1805 (e.g., progressive video frame, interlaced video frame, or field of an interlaced video frame). Particular embodiments of video decoders typically use a variation or supplemented version of the generalized decoder 1800. - The
decoder system 1800 decompresses predicted pictures and key pictures. For the sake of presentation, FIG. 18 shows a path for key pictures through the decoder system 1800 and a path for forward-predicted pictures. Many of the components of the decoder system 1800 are used for decompressing both key pictures and predicted pictures. The exact operations performed by those components can vary depending on the type of information being decompressed. - A
DEMUX 1890 receives the information 1895 for the compressed video sequence and makes the received information available to the entropy decoder 1880. The DEMUX 1890 may include a jitter buffer and other buffers as well. Before or after the DEMUX 1890, the compressed video information can be channel decoded and processed for error detection and correction. - The
entropy decoder 1880 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 1815, quantization step size), typically applying the inverse of the entropy encoding performed in the encoder. Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above. The entropy decoder 1880 typically uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique. - The
decoder 1800 decodes the motion information 1815 by, for example, computing one or more predictors for motion vectors, entropy decoding differential motion vectors, and combining decoded differential motion vectors with predictors to reconstruct motion vectors. - A
motion compensator 1830 applies motion information 1815 to one or more reference pictures 1825 to form a prediction 1835 of the picture 1805 being reconstructed. For example, the motion compensator 1830 uses one or more macroblock motion vectors to find macroblock(s) in the reference picture(s) 1825. One or more picture stores (e.g., picture stores 1820, 1822) store previous reconstructed pictures for use as reference pictures. Typically, B-pictures have more than one reference picture (e.g., at least one temporally previous reference picture and at least one temporally future reference picture). Accordingly, the decoder system 1800 can use separate picture stores for the multiple reference pictures. The motion compensator 1830 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion compensation on a picture-by-picture basis or other basis. The motion compensator 1830 also can switch between types of reference picture pixel interpolation (e.g., between bicubic and bilinear) on a per-frame or other basis. The resolution of the motion compensation can be the same or different horizontally and vertically. Alternatively, a motion compensator applies another type of motion compensation. The prediction by the motion compensator is rarely perfect, so the decoder 1800 also reconstructs prediction residuals. - An
inverse quantizer 1870 inverse quantizes entropy-decoded data. In general, the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a picture-by-picture basis or other basis. Alternatively, the inverse quantizer applies another type of inverse quantization to the data, for example, to reconstruct after a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations. - An
inverse frequency transformer 1860 converts the quantized, frequency domain data into spatial domain video information. For block-based video pictures, the inverse frequency transformer 1860 applies an inverse DCT [“IDCT”], variant of IDCT, or other inverse block transform to blocks of the frequency transform coefficients, producing pixel data or prediction residual data for key pictures or predicted pictures, respectively. Alternatively, the inverse frequency transformer 1860 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or sub-band synthesis. The inverse frequency transformer 1860 may apply an 8×8, 8×4, 4×8, 4×4, or other size inverse frequency transform. - For a predicted picture, the
decoder 1800 combines the reconstructed prediction residual 1845 with the motion compensated prediction 1835 to form the reconstructed picture 1805. When the decoder needs a reconstructed picture 1805 for subsequent motion compensation, one or both of the picture stores (e.g., picture store 1820) buffers the reconstructed picture 1805 for use in predicting the next picture. In some embodiments, the decoder 1800 applies a de-blocking filter to the reconstructed picture to adaptively smooth discontinuities and other artifacts in the picture. - III. Interlaced P-Frames
- A typical interlaced video frame consists of two fields (e.g., a top field and a bottom field) scanned at different times. In general, it is more efficient to encode stationary regions of an interlaced video frame by coding fields together (“frame mode” coding). On the other hand, it is often more efficient to code moving regions of an interlaced video frame by coding fields separately (“field mode” coding), because the two fields tend to have different motion. A forward-predicted interlaced video frame may be coded as two separate forward-predicted fields—interlaced P-fields. Coding fields separately for a forward-predicted interlaced video frame may be efficient, for example, when there is high motion throughout the interlaced video frame, and hence much difference between the fields. An interlaced P-field references one or more previously decoded fields. For example, in some implementations, an interlaced P-field references either one or two previously decoded fields. For more information on interlaced P-fields, see U.S. Provisional Patent Application No. 60/501,081, entitled “Video Encoding and Decoding Tools and Techniques,” filed Sep. 7, 2003, and U.S. Patent Application Ser. No. 10/857,473, entitled, “Predicting Motion Vectors for Fields of Forward-predicted Interlaced Video Frames,” filed May 27, 2004, which is incorporated herein by reference.
- Or, a forward-predicted interlaced video frame may be coded using a mixture of field coding and frame coding, as an interlaced P-frame. For a macroblock of an interlaced P-frame, the macroblock includes lines of pixels for the top and bottom fields, and the lines may be coded collectively in a frame-coding mode or separately in a field-coding mode.
- A. Macroblock Types in Interlaced P-Frames
- In some implementations, macroblocks in interlaced P-frames can be one of five types: 1MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra.
- In a 1MV macroblock, the displacement of the four luminance blocks in the macroblock is represented by a single motion vector. A corresponding chroma motion vector can be derived from the luma motion vector to represent the displacements of each of the two 8×8 chroma blocks for the motion vector. For example, referring again to the macroblock arrangement shown in
FIG. 19 , a 1MV macroblock 1900 includes four 8×8 luminance blocks and two 8×8 chrominance blocks. The displacement of the luminance blocks (Y1 through Y4) is represented by a single motion vector, and a corresponding chroma motion vector can be derived from the luma motion vector to represent the displacements of each of the two chroma blocks (U and V). - In a 2 Field MV macroblock, the displacement of each field for the 16×16 luminance component in the macroblock is described by a different motion vector. For example,
FIG. 21 shows that a top field motion vector describes the displacement of the even lines of the luminance component and that a bottom field motion vector describes the displacement of the odd lines of the luminance component. Using the top field motion vector, an encoder can derive a corresponding top field chroma motion vector that describes the displacement of the even lines of the chroma blocks. Similarly, an encoder can derive a bottom field chroma motion vector that describes the displacements of the odd lines of the chroma blocks. - Referring to
FIG. 22 , in a 4 Frame MV macroblock, the displacement of each of the four luminance blocks is described by a different motion vector (MV1, MV2, MV3 and MV4). Each chroma block can be motion compensated by using four derived chroma motion vectors (MV1′, MV2′, MV3′ and MV4′) that describe the displacement of four 4×4 chroma sub-blocks. A motion vector for each 4×4 chroma sub-block can be derived from the motion vector for the spatially corresponding luminance block. - Referring to
FIG. 23 , in a 4 Field MV macroblock, the displacement of each field in the 16×16 luminance component is described by two different motion vectors. The lines of the luminance component are subdivided vertically to form two 8×16 regions each comprised of an 8×8 region of even lines interleaved with an 8×8 region of odd lines. For the even lines, the displacement of the left 8×8 region is described by the top left field block motion vector and the displacement of the right 8×8 region is described by the top right field block motion vector. For the odd lines, the displacement of the left 8×8 region is described by the bottom left field block motion vector and the displacement of the right 8×8 region is described by the bottom right field block motion vector. Each chroma block also can be partitioned into four regions and each chroma block region can be motion compensated using a derived motion vector. - For Intra macroblocks, motion is assumed to be zero.
- B. Computing Motion Vector Predictors in Interlaced P-Frames
- In general, the process of computing the motion vector predictor(s) for a current macroblock in an interlaced P-frame consists of two steps. First, three candidate motion vectors for the current macroblock are gathered from its neighboring macroblocks. For example, in one implementation, candidate motion vectors are gathered based on the arrangement shown in
FIGS. 24A-24B (and various special cases for top row macroblocks, etc.). Alternatively, candidate motion vectors can be gathered in some other order or arrangement. Second, the motion vector predictor(s) for the current macroblock is computed from the set of candidate motion vectors. For example, the predictor can be computed using median-of-3 prediction, or by some other method. - IV. Innovations in Macroblock Information Signaling for Interlaced Frame Coded Pictures
- Described embodiments include techniques and tools for signaling macroblock information for interlaced frame coded pictures (e.g., interlaced P-frames, interlaced B-frames, etc.). For example, described techniques and tools include techniques and tools for signaling macroblock information for interlaced P-frames, and techniques and tools for using and signaling skipped macroblocks in interlaced P-frames and other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.). Described embodiments implement one or more of the described techniques and tools including, but not limited to, the following:
-
- 1. Jointly coding motion compensation type (e.g., 1 Frame MV, 4 Frame MV, 2 Field MV, 4 Field MV, etc.), and potentially other information, with field/frame coding type information (e.g., using the macroblock-level syntax element MBMODE) for interlaced P-frames.
- 2. Signaling a macroblock skip condition. The signaling can be performed separately from other syntax elements such as MBMODE. The skip condition indicates that the macroblock is a 1MV macroblock, has a zero differential motion vector, and has no coded blocks. The skip information can be coded in a compressed bitplane.
The described techniques and tools can be used in combination with one another or with other techniques and tools, or can be used independently.
- A. Skipped Macroblock Signaling
- In some implementations, an encoder signals skipped macroblocks. For example, an encoder signals a skipped macroblock in an interlaced frame when a macroblock is coded with one motion vector, has a zero motion vector differential, and has no coded blocks (i.e., no residuals for any block). The skip information can be coded as a compressed bitplane (e.g., at frame level) or can be signaled on a one bit per macroblock basis (e.g., at macroblock level). The signaling of the skip condition for the macroblock is separate from the signaling of a macroblock mode for the macroblock. A decoder performs corresponding decoding.
- This definition of a skipped macroblock takes advantage of the observation that when more than one motion vector is used to encode a macroblock, the macroblock is rarely skipped because it is unlikely that all of the motion vector differentials will be zero and that all of the blocks will not be coded. Thus, when a macroblock is signaled as being skipped, the macroblock mode (1MV) is implied from the skip condition and need not be sent for the macroblock. In interlaced P-frames, a 1MV macroblock is motion compensated with one frame motion vector.
-
FIG. 25 shows atechnique 2500 for determining whether to skip coding of particular macroblocks in an interlaced predicted frame (e.g., an interlaced P-frame, an interlaced B-frame, or a frame comprising interlaced P-fields and/or interlaced B-fields). For a given macroblock, the encoder checks whether the macroblock is a 1MV macroblock at 2510. At 2520, if the macroblock is not a 1MV macroblock, the encoder does not skip the macroblock. Otherwise, at 2530, the encoder checks whether the one motion vector for the macroblock is equal to its causally predicted motion vector (e.g., whether the differential motion vector for the macroblock is equal to zero). At 2540, if the motion for a macroblock does not equal the causally predicted motion, the encoder does not skip the macroblock. Otherwise, at 2550, the encoder checks whether there is any residual to be encoded for the blocks of the macroblock. At 2560, if there is a residual to be coded, the encoder does not skip the macroblock. At 2570, if there is no residual for the blocks of the macroblock, the encoder skips the macroblock. At 2580, the encoder can continue to encode or skip macroblocks until encoding is done. - In one implementation, the macroblock-level SKIPMBBIT field (which can also be labeled SKIPMB, etc.) indicates the skip condition for a macroblock. If the SKIPMBBIT field is 1, then the current macroblock is skipped and no other information is sent after the SKIPMBBIT field. On the other hand, if the SKIPMBBIT field is not 1, the MBMODE field is decoded to indicate the type of macroblock and other information regarding the current macroblock, such as information described below in Section IV.B.
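- The skip decision in technique 2500 above can be summarized in a few lines of code. The sketch below is a minimal illustration under stated assumptions; the MacroblockInfo fields and the helper name are hypothetical and are not part of any described bitstream or encoder API.

    #include <stdbool.h>

    /* Illustrative macroblock summary for the skip check; field names are hypothetical. */
    typedef struct {
        bool is_1mv;         /* coded with a single frame motion vector          */
        int  dmv_x, dmv_y;   /* differential MV relative to the causal predictor */
        bool has_residual;   /* any non-zero coefficients in any block           */
    } MacroblockInfo;

    /* Returns true if the macroblock qualifies as skipped per technique 2500. */
    static bool macroblock_is_skipped(const MacroblockInfo *mb)
    {
        if (!mb->is_1mv)                       return false;   /* checks 2510/2520 */
        if (mb->dmv_x != 0 || mb->dmv_y != 0)  return false;   /* checks 2530/2540 */
        if (mb->has_residual)                  return false;   /* checks 2550/2560 */
        return true;                                           /* 2570: skip       */
    }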
- At frame level, the SKIPMB field indicates skip information for macroblocks in the frame. In one implementation, the skip information can be encoded in one of several modes. For example, in raw coding mode, the SKIPMB field indicates the presence of SKIPMBBIT at macroblock level. In a bitplane coding mode, the SKIPMB field stores skip information in a compressed bit plane. Available bitplane coding modes include normal-2 mode, differential-2 mode, normal-6 mode, differential-6 mode, rowskip mode, and columnskip mode. Bitplane coding modes are described in further detail in Section V.C, below. The decoded SKIPMB bitplane contains one bit per macroblock and indicates the skip condition for each respective macroblock.
- Alternatively, skipped macroblocks are signaled in some other way or at some other level in the bitstream. For example, a compressed bitplane is sent at field level. As another alternative, the skip condition can be defined to imply information about a skipped macroblock other than and/or in addition to the information described above.
- B. Macroblock Mode Signaling
- In some implementations, an encoder jointly encodes motion compensation type and potentially other information about a macroblock with field/frame coding type information for the macroblock. For example, an encoder jointly encodes one of five motion compensation types (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, and intra) with a field transform/frame transform/no coded blocks event using one or more variable length coding tables. A decoder performs corresponding decoding.
- Jointly coding motion compensation type and field/frame coding type information for a macroblock takes advantage of the observation that certain field/frame coding types are more likely to occur in certain contexts for a macroblock of a given motion compensation type. Variable length coding can then be used to assign shorter codes to the more likely combinations of motion compensation type and field/frame coding type. For even more flexibility, multiple variable length coding tables can be used, and an encoder can switch between the tables depending on the situation. Thus, jointly coding motion compensation type and field/frame coding type information for a macroblock can provide savings in coding overhead that would otherwise be used to signal field/frame coding type separately for each macroblock.
- For example, in some implementations an encoder selects a motion compensation type (e.g., 1MV, 4 Frame MV, 2 Field MV, or 4 Field MV) and a field/frame coding type (e.g., field, frame, or no coded blocks) for a macroblock. The encoder jointly encodes the motion compensation type and the field/frame coding type for the macroblock. The encoder also can encode other information jointly with the motion compensation type and field/frame coding type. For example, the encoder can jointly encode information indicating the presence or absence of a differential motion vector for the macroblock (e.g., for a macroblock having one motion vector).
- A decoder performs corresponding decoding. For example,
FIG. 26 shows a technique 2600 for decoding jointly coded motion compensation type information and field/frame coding type information for a macroblock in an interlaced P-frame in some implementations. At 2610, a decoder receives macroblock information which includes a joint code (e.g., a variable length code from a variable length coding table) representing motion compensation type and field/frame coding type for a macroblock. At 2620, the decoder decodes the joint code (e.g., by looking up the joint code in a variable length coding table) to obtain motion compensation type information and field/frame coding type information for the macroblock.
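- As a sketch of step 2620, the decoded joint code can index a small table that yields the macroblock type, the field/frame coding type, and (for 1MV macroblocks) whether a differential motion vector is present. The entry ordering below is illustrative only; the actual variable length code tables used in some implementations are given in Tables 1-8 later in this section.

    typedef enum { MB_1MV, MB_2FIELDMV, MB_4FRAMEMV, MB_4FIELDMV, MB_INTRA } MbType;
    typedef enum { TX_FRAME, TX_FIELD, TX_NOCBP } FieldFrameTx;

    typedef struct {
        MbType       type;   /* motion compensation type                        */
        FieldFrameTx tx;     /* field/frame coding type (ignored for intra)     */
        int          mvp;    /* 1MV only: nonzero differential MV present (0/1) */
    } MbModeInfo;

    /* Illustrative mapping from a decoded MBMODE symbol (0..14) to the jointly
       coded information; the index assignment here is hypothetical. */
    static const MbModeInfo kMbModeInfo[15] = {
        { MB_1MV,      TX_FRAME, 1 }, { MB_1MV,      TX_FIELD, 1 }, { MB_1MV,      TX_NOCBP, 1 },
        { MB_1MV,      TX_FRAME, 0 }, { MB_1MV,      TX_FIELD, 0 },
        { MB_2FIELDMV, TX_FRAME, 0 }, { MB_2FIELDMV, TX_FIELD, 0 }, { MB_2FIELDMV, TX_NOCBP, 0 },
        { MB_4FRAMEMV, TX_FRAME, 0 }, { MB_4FRAMEMV, TX_FIELD, 0 }, { MB_4FRAMEMV, TX_NOCBP, 0 },
        { MB_4FIELDMV, TX_FRAME, 0 }, { MB_4FIELDMV, TX_FIELD, 0 }, { MB_4FIELDMV, TX_NOCBP, 0 },
        { MB_INTRA,    TX_FRAME, 0 },
    };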
-
- MBMODE={<1MV, MVP, Field/Frame transform>, <2 Field MV, Field/Frame transform>, <4 Frame MV, Field/Frame transform>, <4 Field MV, Field/Frame transform>, <INTRA>};
The case <1MV, MVP=0, CBP=0>, is not signaled by MBMODE, but is signaled by the skip condition. (Examples of signaling this skip condition are provided above in Section IV.A.)
- In this example, for inter-coded macroblocks, the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks. On the other hand, if <Field/frame Transform> in MBMODE indicates field or frame transform, then CBPCY is decoded. For non-1MV inter-coded macroblocks, an additional field is sent to indicate which of the differential motion vectors is non-zero. In the case of 2 Field MV macroblocks, the 2MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors. Similarly, the 4MVBP field is sent to indicate which of four motion vectors contain nonzero differential motion vectors. For intra-coded macroblocks, the Field/Frame coding types and zero coded blocks are coded in separate fields.
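- A sketch of that presence logic for an inter-coded macroblock is shown below, reusing the illustrative MbModeInfo type from the sketch above. The parsing helpers and the Bitstream type are hypothetical placeholders, and the exact ordering of syntax elements in a real bitstream follows the macroblock-layer syntax (FIG. 31), which may differ from the order written here.

    /* Hypothetical parsing helpers; each stands in for the corresponding VLC decode. */
    typedef struct Bitstream Bitstream;
    void parse_cbpcy(Bitstream *bs);
    void parse_2mvbp(Bitstream *bs);
    void parse_4mvbp(Bitstream *bs);
    void parse_mvdata(Bitstream *bs);

    /* Which elements follow MBMODE for an inter-coded macroblock (sketch). */
    static void parse_after_mbmode(Bitstream *bs, const MbModeInfo *mode)
    {
        if (mode->type == MB_2FIELDMV)
            parse_2mvbp(bs);                 /* which of the 2 field MVs have nonzero dmv */
        else if (mode->type == MB_4FRAMEMV || mode->type == MB_4FIELDMV)
            parse_4mvbp(bs);                 /* which of the 4 MVs have nonzero dmv       */
        else if (mode->type == MB_1MV && mode->mvp)
            parse_mvdata(bs);                /* the single differential motion vector     */

        if (mode->tx != TX_NOCBP)
            parse_cbpcy(bs);                 /* CBPCY only when there are coded blocks    */
    }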
- Alternatively, an encoder/decoder uses joint coding with different combinations of motion compensation types and field/frame coding types. As another alternative, an encoder/decoder jointly encodes/decodes additional information other than the presence of motion vector differentials.
- In some implementations, an encoder/decoder uses one of several variable length code tables to encode MBMODE and can adaptively switch between code tables. For example, in some implementations, the frame-level syntax element MBMODETAB is a 2-bit field that indicates the table used to decode the MBMODE for macroblocks in the frame. In this example, the tables are grouped into sets of four tables, and the set of tables used depends on whether four-motion-vector coding is enabled for the frame.
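- A sketch of the frame-level table selection, assuming the two sets of four MBMODE code tables are stored in arrays (the array names and the VlcTable type are illustrative):

    typedef struct VlcTable VlcTable;

    /* Hypothetical storage for the two sets of MBMODE code tables. */
    extern const VlcTable kMixedMvMbModeTables[4];  /* used when four-MV coding is enabled  */
    extern const VlcTable k1MvMbModeTables[4];      /* used when four-MV coding is disabled */

    /* Select the MBMODE code table for the frame from the 2-bit MBMODETAB value. */
    static const VlcTable *select_mbmode_table(int four_mv_enabled, int mbmodetab)
    {
        return four_mv_enabled ? &kMixedMvMbModeTables[mbmodetab & 3]
                               : &k1MvMbModeTables[mbmodetab & 3];
    }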
- Exemplary MBMODE variable length code tables (e.g., Tables 0-3 for each set—Mixed MV or 1MV) are provided below in Tables 1-8:
TABLE 1. Interlace P-Frame Mixed MV MB Mode Table 0 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 22 | 5 | 10110
1 MV | 1 | Field | 17 | 5 | 10001
1 MV | 1 | No CBP | 0 | 2 | 00
1 MV | 0 | Frame | 47 | 6 | 101111
1 MV | 0 | Field | 32 | 6 | 100000
2 Field MV | N/A | Frame | 10 | 4 | 1010
2 Field MV | N/A | Field | 1 | 2 | 01
2 Field MV | N/A | No CBP | 3 | 2 | 11
4 Frame MV | N/A | Frame | 67 | 7 | 1000011
4 Frame MV | N/A | Field | 133 | 8 | 10000101
4 Frame MV | N/A | No CBP | 132 | 8 | 10000100
4 Field MV | N/A | Frame | 92 | 7 | 1011100
4 Field MV | N/A | Field | 19 | 5 | 10011
4 Field MV | N/A | No CBP | 93 | 7 | 1011101
INTRA | N/A | N/A | 18 | 5 | 10010
TABLE 2. Interlace Frame Mixed MV MB Mode Table 1 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 3 | 3 | 011
1 MV | 1 | Field | 45 | 6 | 101101
1 MV | 1 | No CBP | 0 | 3 | 000
1 MV | 0 | Frame | 7 | 3 | 111
1 MV | 0 | Field | 23 | 5 | 10111
2 Field MV | N/A | Frame | 6 | 3 | 110
2 Field MV | N/A | Field | 1 | 3 | 001
2 Field MV | N/A | No CBP | 2 | 3 | 010
4 Frame MV | N/A | Frame | 10 | 4 | 1010
4 Frame MV | N/A | Field | 39 | 6 | 100111
4 Frame MV | N/A | No CBP | 44 | 6 | 101100
4 Field MV | N/A | Frame | 8 | 4 | 1000
4 Field MV | N/A | Field | 18 | 5 | 10010
4 Field MV | N/A | No CBP | 77 | 7 | 1001101
INTRA | N/A | N/A | 76 | 7 | 1001100
TABLE 3. Interlace Frame Mixed MV MB Mode Table 2 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 15 | 4 | 1111
1 MV | 1 | Field | 6 | 3 | 110
1 MV | 1 | No CBP | 28 | 5 | 11100
1 MV | 0 | Frame | 9 | 5 | 01001
1 MV | 0 | Field | 41 | 7 | 0101001
2 Field MV | N/A | Frame | 6 | 4 | 0110
2 Field MV | N/A | Field | 2 | 2 | 10
2 Field MV | N/A | No CBP | 15 | 5 | 01111
4 Frame MV | N/A | Frame | 14 | 5 | 01110
4 Frame MV | N/A | Field | 8 | 5 | 01000
4 Frame MV | N/A | No CBP | 40 | 7 | 0101000
4 Field MV | N/A | Frame | 29 | 5 | 11101
4 Field MV | N/A | Field | 0 | 2 | 00
4 Field MV | N/A | No CBP | 21 | 6 | 010101
INTRA | N/A | N/A | 11 | 5 | 01011
TABLE 4. Interlace Frame Mixed MV MB Mode Table 3 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 7 | 4 | 0111
1 MV | 1 | Field | 198 | 9 | 011000110
1 MV | 1 | No CBP | 1 | 1 | 1
1 MV | 0 | Frame | 2 | 3 | 010
1 MV | 0 | Field | 193 | 9 | 011000001
2 Field MV | N/A | Frame | 13 | 5 | 01101
2 Field MV | N/A | Field | 25 | 6 | 011001
2 Field MV | N/A | No CBP | 0 | 2 | 00
4 Frame MV | N/A | Frame | 97 | 8 | 01100001
4 Frame MV | N/A | Field | 1599 | 12 | 011000111111
4 Frame MV | N/A | No CBP | 98 | 8 | 01100010
4 Field MV | N/A | Frame | 398 | 10 | 0110001100
4 Field MV | N/A | Field | 798 | 11 | 01100011110
4 Field MV | N/A | No CBP | 192 | 9 | 011000000
INTRA | N/A | N/A | 1598 | 12 | 011000111110
TABLE 5. Interlace Frame 1MV MB Mode Table 0 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 9 | 4 | 1001
1 MV | 1 | Field | 22 | 5 | 10110
1 MV | 1 | No CBP | 0 | 2 | 00
1 MV | 0 | Frame | 17 | 5 | 10001
1 MV | 0 | Field | 16 | 5 | 10000
2 Field MV | N/A | Frame | 10 | 4 | 1010
2 Field MV | N/A | Field | 1 | 2 | 01
2 Field MV | N/A | No CBP | 3 | 2 | 11
INTRA | N/A | N/A | 23 | 5 | 10111
TABLE 6. Interlace Frame 1MV MB Mode Table 1 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 7 | 3 | 111
1 MV | 1 | Field | 0 | 4 | 0000
1 MV | 1 | No CBP | 5 | 6 | 000101
1 MV | 0 | Frame | 2 | 2 | 10
1 MV | 0 | Field | 1 | 3 | 001
2 Field MV | N/A | Frame | 1 | 2 | 01
2 Field MV | N/A | Field | 6 | 3 | 110
2 Field MV | N/A | No CBP | 3 | 5 | 00011
INTRA | N/A | N/A | 4 | 6 | 000100
TABLE 7. Interlace Frame 1MV MB Mode Table 2 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 1 | 2 | 01
1 MV | 1 | Field | 0 | 2 | 00
1 MV | 1 | No CBP | 10 | 4 | 1010
1 MV | 0 | Frame | 23 | 5 | 10111
1 MV | 0 | Field | 44 | 6 | 101100
2 Field MV | N/A | Frame | 8 | 4 | 1000
2 Field MV | N/A | Field | 3 | 2 | 11
2 Field MV | N/A | No CBP | 9 | 4 | 1001
INTRA | N/A | N/A | 45 | 6 | 101101
TABLE 8. Interlace Frame 1MV MB Mode Table 3 (MB Type | MV Present | Transform | VLC Codeword | VLC Size | VLC (binary))
1 MV | 1 | Frame | 7 | 4 | 0111
1 MV | 1 | Field | 97 | 8 | 01100001
1 MV | 1 | No CBP | 1 | 1 | 1
1 MV | 0 | Frame | 2 | 3 | 010
1 MV | 0 | Field | 49 | 7 | 0110001
2 Field MV | N/A | Frame | 13 | 5 | 01101
2 Field MV | N/A | Field | 25 | 6 | 011001
2 Field MV | N/A | No CBP | 0 | 2 | 00
INTRA | N/A | N/A | 96 | 8 | 01100000
V. Combined Implementations - A detailed combined implementation for a bitstream syntax, semantics, and decoder are now described, in addition to an alternative combined implementation with minor differences from the main combined implementation.
- A. Bitstream Syntax
- In various combined implementations, data for interlaced pictures is presented in the form of a bitstream having plural layers (e.g., sequence, entry point, frame, field, macroblock, block and/or sub-block layers).
- In the syntax diagrams, arrow paths show the possible flows of syntax elements. Syntax elements shown with square-edged boundaries indicate fixed-length syntax elements; those with rounded boundaries indicate variable-length syntax elements and those with a rounded boundary within an outer rounded boundary indicate a syntax element (e.g., a bitplane) made up of simpler syntax elements. A fixed-length syntax element is defined to be a syntax element for which the length of the syntax element is not dependent on data in the syntax element itself; the length of a fixed-length syntax element is either constant or determined by prior data in the syntax flow. A lower layer in a layer diagram (e.g., a macroblock layer in a frame-layer diagram) is indicated by a rectangle within a rectangle.
- Entry-point-level bitstream elements are shown in
FIG. 27 . In general, an entry point marks a position in a bitstream (e.g., an I-frame or other key frame) at which a decoder can begin decoding. In other words, no pictures before the entry point in the bitstream are needed to decode pictures after the entry point. An entry point header can be used to signal changes in coding control parameters (e.g., enabling or disabling compression tools (e.g., in-loop deblocking filtering) for frames following an entry point). - For interlaced P-frames and B-frames, frame-level bitstream elements are shown in
FIGS. 28 and 29 , respectively. Data for each frame consists of a frame header followed by data for the macroblock layer (whether for intra or various inter type macroblocks). The bitstream elements that make up the macroblock layer for interlaced P-frames (whether for intra or various inter type macroblocks) are shown inFIG. 31 . Bitstream elements in the macroblock layer for interlaced P-frames may be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, interlaced P-fields, interlaced B-fields, etc.) - For interlaced video frames with interlaced P-fields and/or B-fields, frame-level bitstream elements are shown in
FIG. 30 . Data for each frame consists of a frame header followed by data for the field layers (shown as the repeated “FieldPicLayer” element per field) and data for the macroblock layers (whether for intra, 1MV, or 4MV macroblocks). - The following sections describe selected bitstream elements in the frame and macroblock layers that are related to signaling for interlaced pictures. Although the selected bitstream elements are described in the context of a particular layer, some bitstream elements can be used in more than one layer.
- 1. Selected Entry Point Layer Elements
- Loop Filter (LOOPFILTER) (1 Bit)
- LOOPFILTER is a Boolean flag that indicates whether loop filtering is enabled for the entry point segment. If LOOPFILTER=0, then loop filtering is not enabled. If LOOPFILTER=1, then loop filtering is enabled. In an alternative combined implementation, LOOPFILTER is a sequence level element.
- Extended Motion Vectors (EXTENDED_MV) (1 Bit)
- EXTENDED_MV is a 1-bit syntax element that indicates whether extended motion vectors is turned on (value 1) or off (value 0). EXTENDED_MV indicates the possibility of extended motion vectors (signaled at frame level with the syntax element MVRANGE) in P-frames and B-frames.
- Extended Differential Motion Vector Range (EXTENDED_DMV)(1 Bit)
- EXTENDED_DMV is a 1-bit syntax element that is present if EXTENDED_MV=1. If EXTENDED_DMV is 1, extended differential motion vector range (DMVRANGE) shall be signaled at frame layer for the P-frames and B-frames within the entry point segment. If EXTENDED_DMV is 0, DMVRANGE shall not be signaled.
- FAST UV Motion Comp (FASTUVMC) (1 Bit)
- FASTUVMC is a Boolean flag that controls the sub-pixel interpolation and rounding of chroma motion vectors. If FASTUVMC=1, the chroma motion vectors that are at quarter-pel offsets will be rounded to the nearest half or full-pel positions. If FASTUVMC=0, no special rounding or filtering is done for chroma. The FASTUVMC syntax element is ignored in interlaced P-frames and interlaced B-frames.
- Variable Sized Transform (VSTRANSFORM) (1 Bit)
- VSTRANSFORM is a Boolean flag that indicates whether variable-sized transform coding is enabled for the sequence. If VSTRANSFORM=0, then variable-sized transform coding is not enabled. If VSTRANSFORM=1, then variable-sized transform coding is enabled.
- 2. Selected Frame Layer Elements
-
FIGS. 28 and 29 are diagrams showing frame-level bitstream syntaxes for interlaced P-frames and interlaced B-frames, respectively.FIG. 30 is a diagram showing a frame-layer bitstream syntax for frames containing interlaced P-fields, and/or B-fields (or potentially other kinds of interlaced fields). Specific bitstream elements are described below. - Frame Coding Mode (FCM) (Variable Size)
- FCM is a variable length codeword [“VLC”] used to indicate the picture coding type. FCM takes on values for frame coding modes as shown in Table 9 below:
TABLE 9. Frame Coding Mode VLC (FCM value | Frame Coding Mode)
0 | Progressive
10 | Frame-Interlace
11 | Field-Interlace
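- Because FCM is a short variable length code, decoding it amounts to reading one bit and, when that bit is 1, a second bit. The sketch below follows Table 9 directly; the bit-reader interface is a hypothetical placeholder.

    typedef struct Bitstream Bitstream;
    int read_bit(Bitstream *bs);   /* hypothetical: returns the next bit (0 or 1) */

    typedef enum { FCM_PROGRESSIVE, FCM_FRAME_INTERLACE, FCM_FIELD_INTERLACE } FrameCodingMode;

    /* Decode FCM per Table 9: 0 = progressive, 10 = frame-interlace, 11 = field-interlace. */
    static FrameCodingMode decode_fcm(Bitstream *bs)
    {
        if (read_bit(bs) == 0)
            return FCM_PROGRESSIVE;
        return (read_bit(bs) == 0) ? FCM_FRAME_INTERLACE : FCM_FIELD_INTERLACE;
    }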
Field Picture Type (FPTYPE) (3 Bits) - FPTYPE is a three-bit syntax element present in the frame header for a frame including interlaced P-fields and/or interlaced B-fields, and potentially other kinds of fields. FPTYPE takes on values for different combinations of field types in the interlaced video frame, according to Table 10 below.
TABLE 10. Field Picture Type FLC (FPTYPE FLC | First Field Type | Second Field Type)
000 | I | I
001 | I | P
010 | P | I
011 | P | P
100 | B | B
101 | B | BI
110 | BI | B
111 | BI | BI
Picture Type (PTYPE ) (Variable Size) - PTYPE is a variable size syntax element present in the frame header for interlaced P-frames and interlaced B-frames (or other kinds of interlaced frames such as interlaced I-frames). PTYPE takes on values for different frame types according to Table 11 below.
TABLE 11. Picture Type VLC (PTYPE VLC | Picture Type)
110 | I
0 | P
10 | B
1110 | BI
1111 | Skipped
If PTYPE indicates that the frame is skipped then the frame is treated as a P-frame which is identical to its reference frame. The reconstruction of the skipped frame is equivalent conceptually to copying the reference frame. A skipped frame means that no further data is transmitted for this frame.
UV Sampling Format (UVSAMP) (1 Bit) - UVSAMP is a 1-bit syntax element that is present when the sequence-level field INTERLACE=1. UVSAMP indicates the type of chroma subsampling used for the current frame. If UVSAMP=1, then progressive subsampling of the chroma is used, otherwise, interlace subsampling of the chroma is used. This syntax element does not affect decoding of the bitstream.
- Extended MV Range (MVRANGE) (Variable Size)
- MVRANGE is a variable-sized syntax element present when the entry-point-layer EXTENDED_MV bit is set to 1. The MVRANGE VLC represents a motion vector range.
- Extended Differential MV Range (DMVRANGE) (Variable Size)
- DMVRANGE is a variable-sized syntax element present if the entry-point-layer syntax element EXTENDED_DMV=1. The DMVRANGE VLC represents a motion vector differential range.
- 4 Motion Vector Switch (4MVSWITCH) (Variable Size or 1 Bit)
- For interlaced P-frames, the 4MVSWITCH syntax element is a 1-bit flag. If 4MVSWITCH is set to zero, the macroblocks in the picture have only one motion vector or two motion vectors, depending on whether the macroblock has been frame-coded or field-coded, respectively. If 4MVSWITCH is set to 1, there may be 1, 2 or 4 motion vectors per macroblock.
- Skipped Macroblock Decoding (SKIPMB) (Variable Size)
- For interlaced P-frames, the SKIPMB syntax element is a compressed bitplane containing information that indicates the skipped/not-skipped status of each macroblock in the picture. The decoded bitplane represents the skipped/not-skipped status for each macroblock with 1-bit values. A value of 0 indicates that the macroblock is not skipped. A value of 1 indicates that the macroblock is coded as skipped. A skipped status for a macroblock in interlaced P-frames means that the decoder treats the macroblock as 1MV with a motion vector differential of zero and a coded block pattern of zero. No other information is expected to follow for a skipped macroblock.
- Macroblock Mode Table (MBMODETAB) (2 or 3 Bits)
- The MBMODETAB syntax element is a fixed-length field. For interlaced P-frames, MBMODETAB is a 2-bit value that indicates which one of four code tables is used to decode the macroblock mode syntax element (MBMODE) in the macroblock layer. There are two sets of four code tables and the set that is being used depends on whether 4MV is used or not, as indicated by the 4MVSWITCH flag.
- Motion Vector Table (MVTAB) (2 or 3 Bits)
- The MVTAB syntax element is a fixed length field. For interlaced P-frames, MVTAB is a 2-bit syntax element that indicates which of the four progressive (or, one-reference) motion vector code tables are used to code the MVDATA syntax element in the macroblock layer.
- 2MV Block Pattern Table (2MVBPTAB) (2 Bits)
- The 2MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 2MV block pattern (2MVBP) syntax element in 2MV field macroblocks.
- 4MV Block Pattern Table (4MVBPTAB) (2 Bits)
- The 4MVBPTAB syntax element is a 2-bit value that signals which of four code tables is used to decode the 4MV block pattern (4MVBP) syntax element in 4MV macroblocks. For interlaced P-frames, it is present if the 4MVSWITCH syntax element is set to 1.
- Macroblock-Level Transform Type Flag (TTMBF) (1 Bit)
- This syntax element is present in P-frames and B-frames if the sequence-level syntax element VSTRANSFORM=1. TTMBF is a one-bit syntax element that signals whether transform type coding is enabled at the frame or macroblock level. If TTMBF=1, the same transform type is used for all blocks in the frame. In this case, the transform type is signaled in the Frame-level Transform Type (TTFRM) syntax element that follows. If TTMBF=0, the transform type may vary throughout the frame and is signaled at the macroblock or block levels.
- Frame-Level Transform Type (TTFRM) (2 Bits)
- This syntax element is present in P-frames and B-frames if VSTRANSFORM=1 and TTMBF=1. TTFRM signals the transform type used to transform the 8×8 pixel error signal in predicted blocks. The 8×8 error blocks may be transformed using an 8×8 transform, two 8×4 transforms, two 4×8 transforms or four 4×4 transforms.
- 3. Selected Macroblock Layer Elements
-
FIG. 31 is a diagram showing a macroblock-level bitstream syntax for macroblocks interlaced P-frames in the combined implementation. Specific bitstream elements are described below. Data for a macroblock consists of a macroblock header followed by block layer data. Bitstream elements in the macroblock layer for interlaced P-frames (e.g., SKIPMBBIT) may potentially be present for macroblocks in other interlaced pictures (e.g., interlaced B-frames, etc.) - Skip MB Bit (SKIPMBBIT) (1 Bit)
- SKIPMBBIT is a 1-bit syntax element present in interlaced P-frame macroblocks and interlaced B-frame macroblocks if the frame-level syntax element SKIPMB indicates that raw mode is used. If SKIPMBBIT=1, the macroblock is skipped. SKIPMBBIT also may be labeled as SKIPMB at the macroblock level.
- Macroblock Mode (MBMODE) (Variable Size)
- MBMODE is a variable-size syntax element that jointly specifies macroblock type (e.g., 1MV, 2 Field MV, 4 Field MV, 4 Frame MV or Intra), field/frame coding type (e.g., field, frame, or no coded blocks), and the presence of differential motion vector data for 1MV macroblocks. MBMODE is explained in detail below and in Section IV above.
- 2MV Block Pattern (2MVBP) (Variable Size)
- 2MVBP is a variable-sized syntax element present in interlaced P-frame and interlaced B-frame macroblocks. In interlaced P-frame macroblocks, 2MVBP is present if MBMODE indicates that the macroblock has two field motion vectors. In this case, 2MVBP indicates which of the 2 luma blocks contain non-zero motion vector differentials.
- 4MV Block Pattern (4MVBP) (Variable Size)
- 4MVBP is a variable-sized syntax element present in interlaced P-field, interlaced B-field, interlaced P-frame and interlaced B-frame macroblocks. In interlaced P-frame, 4MVBP is present if MBMODE indicates that the macroblock has four motion vectors. In this case, 4MVBP indicates which of the four luma blocks contain non-zero motion vector differentials.
- Field Transform Flag (FIELDTX) (1 Bit)
- FIELDTX is a 1-bit syntax present in interlaced B-frame intra-coded macroblocks. This syntax element indicates whether a macroblock is frame or field coded (basically, the internal organization of the macroblock). FIELDTX=1 indicates that the macroblock is field-coded. Otherwise, the macroblock is frame-coded. In inter-coded macroblocks, this syntax element can be inferred from MBMODE as explained in detail below and in Section IV above.
- CBP Present Flag (CBPPRESENT) (1 Bit)
- CBPPRESENT is a 1-bit syntax present in intra-coded macroblocks in interlaced P-frames and interlaced B-frames. If CBPPRESENT is 1, the CBPCY syntax element is present for that macroblock and is decoded. If CBPPRESENT is 0, the CBPCY syntax element is not present and shall be set to zero.
- Coded Block Pattern (CBPCY) (Variable Size)
- CBPCY is a variable-length syntax element indicates the transform coefficient status for each block in the macroblock. CBPCY decodes to a 6-bit field which indicates whether coefficients are present for the corresponding block. For intra-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero AC coefficients. A value of 1 indicates that at least one non-zero AC coefficient is present. The DC coefficient is still present for each block in all cases. For inter-coded macroblocks, a value of 0 in a particular bit position indicates that the corresponding block does not contain any non-zero coefficients. A value of 1 indicates that at least one non-zero coefficient is present. For cases where the bit is 0, no data is encoded for that block.
- Motion Vector Data (MVDATA) (Variable Size)
- MVDATA is a variable sized syntax element that encodes differentials for the motion vector(s) for the macroblock, the decoding of which is described in detail in below.
- MB-Level Transform Type (TTMB) (Variable Size)
- TTMB is a variable-size syntax element in P-picture and B-picture macroblocks when the picture layer syntax element TTMBF=0. TTMB specifies a transform type, transform type signal level, and subblock pattern.
- B. Decoding Interlaced P-Frames
- A process for decoding interlaced P-frames in a combined implementation is described below.
- 1. Macroblock Layer Decoding of Interlaced P-Frames
- In an interlaced P-frame, each macroblock may be motion compensated in frame mode using one or four motion vectors or in field mode using two or four motion vectors. A macroblock that is inter-coded does not contain any intra blocks. In addition, the residual after motion compensation may be coded in frame transform mode or field transform mode. More specifically, the luma component of the residual is re-arranged according to fields if it is coded in field transform mode but remains unchanged in frame transform mode, while the chroma component remains the same. A macroblock may also be coded as intra.
- Motion compensation may be restricted to not include four (both field/frame) motion vectors, and this is signaled through 4MVSWITCH. The type of motion compensation and residual coding is jointly indicated for each macroblock through MBMODE and SKIPMB. MBMODE employs a different set of tables according to 4MVSWITCH.
- Macroblocks in interlaced P-frames are classified into five types: 1MV, 2 Field MV, 4 Frame MV, 4 Field MV, and Intra. These five types are described in further detail in above in Section III. The first four types of macroblock are inter-coded while the last type indicates that the macroblock is intra-coded. The macroblock type is signaled by the MBMODE syntax element in the macroblock layer along with the skip bit. (A skip condition for the macroblock also can be signaled at frame level in a compressed bit plane.) MBMODE jointly encodes macroblock types along with various pieces of information regarding the macroblock for different types of macroblock.
- Skipped Macroblock Signaling
- The macroblock-level SKIPMBBIT field indicates the skip condition for a macroblock. (Additional detail regarding skip conditions and corresponding signaling is provided in Section IV, above.) If the SKIPMBBIT field is 1, then the current macroblock is said to be skipped and there is no other information sent after the SKIPMBBIT field. (At frame level, the SKIPMB field indicates the presence of SKIPMBBIT at macroblock level (in raw mode) or stores skip information in a compressed bit plane. The decoded bitplane contains one bit per macroblock and indicates the skip condition for each respective macroblock.) The skip condition implies that the current macroblock is 1MV with zero differential motion vector (i.e. the macroblock is motion compensated using its 1MV motion predictor) and there are no coded blocks (CBP=0). In an alternative combined implementation, the residual is assumed to be frame-coded for loop filtering purposes.
- On the other hand, if the SKIPMB field is not 1, the MBMODE field is decoded to indicate the type of macroblock and other information regarding the current macroblock, such as information described in the following section.
- Macroblock Mode Signaling
- MBMODE jointly specifies the type of macroblock (1MV, 4 Frame MV, 2 Field MV, 4 Field MV, or intra), types of transform for inter-coded macroblock (i.e. field or frame or no coded blocks), and whether there is a differential motion vector for a 1MV macroblock. (Additional detail regarding signaling of macroblock information is provided in Section IV, above.) MBMODE can take one of 15 possible values:
- Let <MVP> denote the signaling of whether a nonzero 1MV differential motion vector is present or absent. Let <Field/Frame transform> denote the signaling of whether the residual of the macroblock is (1) frame transform coded; (2) field transform coded; or (3) zero coded blocks (i.e. CBP=0). MBMODE signals the following information jointly:
-
- MBMODE={<1MV, MVP, Field/Frame transform>, <2 Field MV, Field/Frame transform>, <4 Frame MV, Field/Frame transform>, <4 Field MV, Field/Frame transform>, <INTRA>);
The case <1MV, MVP=0, CBP=0>, is not signaled by MBMODE, but is signaled by the skip condition.
- For inter-coded macroblocks, the CBPCY syntax element is not decoded when <Field/frame Transform> in MBMODE indicates no coded blocks. On the other hand, if <Field/frame Transform> in MBMODE indicates field or frame transform, then CBPCY is decoded. The decoded <Field/frame Transform> is used to set the flag FIELDTX. If it indicates that the macroblock is field transform coded, FIELDTX is set to 1. If it indicates that the macroblock is frame transform coded, FIELDTX is set to 0. If it indicates a zero-coded block, FIELDTX is set to the same type as the motion vector, i.e., FIELDTX is set to 1 if it is a field motion vector and to 0 if it is a frame motion vector.
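- The FIELDTX rule above can be written compactly as follows. This is a sketch only; the FieldFrameTx values mirror the illustrative enumeration used earlier, and mv_is_field is assumed to indicate whether the macroblock uses field motion vectors.

    typedef enum { TX_FRAME, TX_FIELD, TX_NOCBP } FieldFrameTx;

    /* Derive FIELDTX for an inter-coded macroblock from the decoded
       <Field/Frame transform> and the motion vector type. */
    static int derive_fieldtx(FieldFrameTx tx, int mv_is_field)
    {
        if (tx == TX_FIELD) return 1;          /* field transform coded        */
        if (tx == TX_FRAME) return 0;          /* frame transform coded        */
        return mv_is_field ? 1 : 0;            /* zero coded blocks: follow MV */
    }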
- For non-1MV inter-coded macroblocks, an additional field is sent to indicate which of the differential motion vectors is non-zero. In the case of 2 Field MV macroblocks, the 2MVBP field is sent to indicate which of the two motion vectors contain nonzero differential motion vectors. Similarly, the 4MVBP field is sent to indicate which of the four motion vectors contain nonzero differential motion vectors.
- For intra-coded macroblocks, the Field/Frame transform and zero coded blocks are coded in separate fields.
- 2. Motion Vector Decoding for Interlaced P-Frames
- Motion Vector Predictors for Interlaced P-Frames
- The process of computing the motion vector predictor(s) for the current macroblock consists of two steps. First, three candidate motion vectors for the current macroblock are gathered from its neighboring macroblocks. Second, the motion vector predictor(s) for the current macroblock is computed from the set of candidate motion vectors.
FIGS. 24A-24B show neighboring macroblocks from which the candidate motion vectors are gathered. The order of the collection of candidate motion vectors is important. In this combined implementation, the order of collection always starts at A, proceeds to B, and ends at C. A predictor candidate is considered to be non-existent if the corresponding block is outside the frame boundary or if the corresponding block is part of a different slice. Thus, motion vector prediction is not performed across slice boundaries. - The following sections describe how the candidate motion vectors are collected for different types of macroblocks and how the motion vector predictors are computed.
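- A sketch of the two-step process is shown below: candidates are collected in the fixed order A, B, C, dropping any neighbor that lies outside the frame or in a different slice, and a predictor is then formed from the valid candidates. The neighbor accessors are hypothetical, and the final combination shown here is plain component-wise median-of-3 for the full three-candidate case; the exact rules, including the cases with fewer candidates, follow the pseudo-code in FIG. 43 rather than this sketch.

    typedef struct { int x, y; } MotionVector;
    typedef struct { MotionVector mv; int exists; int slice_id; } Neighbor;   /* hypothetical */

    static int median3(int a, int b, int c)
    {
        int lo = a < b ? a : b;
        int hi = a < b ? b : a;
        return c < lo ? lo : (c > hi ? hi : c);
    }

    /* Step 1: gather up to three candidates in A, B, C order, applying the
       validity rules (inside the frame, same slice as the current macroblock). */
    static int gather_candidates(const Neighbor *a, const Neighbor *b, const Neighbor *c,
                                 int cur_slice_id, MotionVector valid_mv[3])
    {
        const Neighbor *order[3] = { a, b, c };
        int total_valid = 0;
        for (int i = 0; i < 3; i++) {
            if (order[i] && order[i]->exists && order[i]->slice_id == cur_slice_id)
                valid_mv[total_valid++] = order[i]->mv;
        }
        return total_valid;   /* corresponds to TotalValidMV in FIG. 43 */
    }

    /* Step 2 (three-candidate case only): component-wise median of the candidates. */
    static MotionVector predict_from_three(const MotionVector c[3])
    {
        MotionVector p;
        p.x = median3(c[0].x, c[1].x, c[2].x);
        p.y = median3(c[0].y, c[1].y, c[2].y);
        return p;
    }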
- 1MV Candidate Motion Vectors
- In this combined implementation, the
pseudo-code 3200 inFIG. 32 is used to collect the up to three candidate motion vectors for the motion vector. - 4 Frame MV Candidate Motion Vectors
- For 4 Frame MV macroblocks, for each of the four frame block motion vectors in the current macroblock, the candidate motion vectors from the neighboring blocks are collected. In this combined implementation, the
pseudo-code 3300 inFIG. 33 is used to collect the up to three candidate motion vectors for the top left frame block motion vector. Thepseudo-code 3400 inFIG. 34 is used to collect the up to three candidate motion vectors for the top right frame block motion vector. Thepseudo-code 3500 inFIG. 35 is used to collect the up to three candidate motion vectors for the bottom left frame block motion vector. Thepseudo-code 3600 inFIG. 36 is used to collect the up to three candidate motion vectors for the bottom right frame block motion vector. - 2 Field MV Candidate Motion Vectors
- For 2 Field MV macroblocks, for each of the two field motion vectors in the current macroblock, the candidate motion vectors from the neighboring blocks are collected. The
pseudo-code 3700 inFIG. 37 is used to collect the up to three candidate motion vectors for the top field motion vector. Thepseudo-code 3800 inFIG. 38 is used to collect the up to three candidate motion vectors for the bottom field motion vector. - 4 Field MV Candidate Motion Vectors
- For 4 Field MV macroblocks, for each of the four field blocks in the current macroblock, the candidate motion vectors from the neighboring blocks are collected. The
pseudo-code 3900 inFIG. 39 is used to collect the up to three candidate motion vectors for the top left field block motion vector. Thepseudo-code 4000 inFIG. 40 is used to collect the up to three candidate motion vectors for the top right field block motion vector. Thepseudo-code 4100 inFIG. 41 is used to collect the up to three candidate motion vectors for the bottom left field block motion vector. Thepseudo-code 4200 inFIG. 42 is used to collect the up to three candidate motion vectors for the bottom right field block motion vector. - Average Field Motion Vectors
- Given two field motion vectors (MVX1, MVY1) and (MVX2, MVY2), the average operation used to form a candidate motion vector (MVXA, MVYA) is:
MVXA=(MVX1+MVX2+1)>>1;
MVYA=(MVY1+MVY2+1)>>1;
Computing Frame MV Predictors from Candidate Motion Vectors - This section describes how motion vector predictors are calculated for frame motion vectors given a set of candidate motion vectors. In this combined implementation, the operation is the same for computing the predictor for 1MV or for each one of the four frame block motion vectors in 4 Frame MV macroblocks.
- The
pseudo-code 4300 inFIG. 43 describes how the motion vector predictor (PMVx, PMVy) is computed for frame motion vectors. In thepseudo-code 4300, TotalValidMV denotes the total number of motion vectors in the set of candidate motion vectors (TotalValidMV=0, 1, 2, or 3), and the ValidMV array denotes the motion vector in the set of candidate motion vectors. - Computing Field MV Predictors from Candidate Motion Vectors
- This section describes how motion vector predictors are computed for field motion vectors given the set of candidate motion vectors. In this combined implementation, the operation is the same for computing the predictor for each of the two field motion vectors in 2 Field MV macroblocks or for each of the four field block motion vectors in 4 Field MV macroblocks.
- First, the candidate motion vectors are separated into two sets, where one set contains only candidate motion vectors that point to the same field as the current field and the other set contains candidate motion vectors that point to the opposite field. Assuming that the candidate motion vectors are represented in quarter pixel units, the following check on its y-component verifies whether a candidate motion vector points to the same field:
if (ValidMVy & 4) { ValidMV points to the opposite field: } else { ValidMV points to the same field. } - The
pseudo-code 4400 inFIG. 44 describes how the motion vector predictor (PMVx, PMVy) is computed for field motion vectors. In thepseudo-code 4400, SameFieldMV and OppFieldMV denote the two sets of candidate motion vectors and NumSameFieldMV and NumOppFieldMV denote the number of candidate motion vectors that belong to each set. The order of candidate motion vectors in each set starts with candidate A if it exists, followed by candidate B if it exists, and then candidate C if it exists. For example, if the set SameFieldMV contains only candidate B and candidate C, then SameFieldMV[0] is candidate B. - Decoding Motion Vector Differentials
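- The separation into same-field and opposite-field candidate sets can be sketched as below, using the quarter-pel y-component test given above and preserving the A, B, C collection order. The MotionVector type matches the illustrative definition in the earlier candidate-gathering sketch.

    /* Split candidate MVs (quarter-pixel units) into same-field and opposite-field
       sets, preserving their A/B/C order. Bit 2 of the y component (value 4)
       distinguishes same-field from opposite-field polarity. */
    static void split_field_candidates(const MotionVector *cand, int total_valid,
                                       MotionVector same[3], int *num_same,
                                       MotionVector opp[3],  int *num_opp)
    {
        *num_same = 0;
        *num_opp  = 0;
        for (int i = 0; i < total_valid; i++) {
            if (cand[i].y & 4)
                opp[(*num_opp)++] = cand[i];      /* points to the opposite field */
            else
                same[(*num_same)++] = cand[i];    /* points to the same field     */
        }
    }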
- The MVDATA syntax elements contain motion vector differential information for the macroblock. Depending on the type of motion compensation and motion vector block pattern signaled at each macroblock, there may be from zero to four MVDATA syntax elements per macroblock. More specifically,
-
- * For 1MV macroblocks, there may be either zero or one MVDATA syntax element present depending on the MVP field in MBMODE.
- For 2 Field MV macroblocks, there may be either zero, one, or two MVDATA syntax element(s) present depending on 2MVBP.
- For 4 Frame/Field MV macroblocks, there may be either zero, one, two, three, or four MVDATA syntax element(s) present depending on 4MVBP.
- In this combined implementation, the motion vector differential is decoded in the same way as a one reference field motion vector differential for interlaced P-fields, without a half-pel mode. (The pseudo-code 4500 in
FIG. 45A illustrates how the motion vector differential is decoded for a one-reference field. Thepseudo-code 4510 inFIG. 45B illustrates how the motion vector differential is decoded for a one-reference field in an alternative combined implementation.Pseudo-code 4510 decodes motion vector differentials in a different way. For example,pseudo-code 4510 omits handling of extended motion vector differential ranges.) - Reconstructing Motion Vectors
- Given the motion vector differential dmv, the luma motion vector is reconstructed by adding the differential to the predictor as follows:
mv_x = (dmv_x + predictor_x) smod range_x
mv_y = (dmv_y + predictor_y) smod range_y
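- One straightforward way to implement the smod wrap used in this reconstruction is sketched below; this is an illustration of the arithmetic, not the normative definition, and it assumes range is the positive bound derived from MVRANGE.

    /* (a smod b) wraps a into the interval [-b, b-1]. */
    static int smod(int a, int b)
    {
        int m = (a + b) % (2 * b);
        if (m < 0) m += 2 * b;        /* keep the C remainder non-negative */
        return m - b;
    }

    /* Reconstruct one luma motion vector component from its differential and predictor. */
    static int reconstruct_mv_component(int dmv, int predictor, int range)
    {
        return smod(dmv + predictor, range);
    }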
- Given a luma frame or field motion vector, a corresponding chroma frame or field motion vector is derived to compensate a portion (or potentially all) of the chroma (Cb/Cr) block. The FASTUVMC syntax element is ignored in interlaced P-frames and interlaced B-frames. The
pseudo-code 4600 inFIG. 46 describes how a chroma motion vector CMV is derived from a luma motion vector LMV in interlace P-frames. - C. Bitplane Coding
- Macroblock-specific binary information such as skip bits may be encoded in one binary symbol per macroblock. For example, whether or not a macroblock is skipped may be signaled with one bit. In these cases, the status for all macroblocks in a field or frame may be coded as a bitplane and transmitted in the field or frame header. One exception for this rule is if the bitplane coding mode is set to Raw Mode, in which case the status for each macroblock is coded as one bit per symbol and transmitted along with other macroblock level syntax elements at the macroblock level.
- Field/frame-level bitplane coding is used to encode two-dimensional binary arrays. The size of each array is rowMB×colMB, where rowMB and colMB are the number of macroblock rows and columns, respectively, in the field or frame in question. Within the bitstream, each array is coded as a set of consecutive bits. One of seven modes is used to encode each array. The seven modes are:
-
- 1. raw mode—information coded as one bit per symbol and transmitted as part of MB level syntax;
- 2. normal-2 mode—two symbols coded jointly;
- 3. differential-2 mode—differential coding of the bitplane, followed by coding two residual symbols jointly;
- 4. normal-6 mode—six symbols coded jointly;
- 5. differential-6 mode—differential coding of the bitplane, followed by coding six residual symbols jointly;
- 6. rowskip mode—one bit skip to signal rows with no set bits; and
- 7. columnskip mode—one bit skip to signal columns with no set bits.
The syntax elements for a bitplane at the field or frame level are in the following sequence: INVERT, IMODE, and DATABITS.
Invert Flag (INVERT)
- The INVERT syntax element is a 1-bit value, which if set indicates that the bitplane has more set bits than zero bits. Depending on INVERT and the mode, the decoder shall invert the interpreted bitplane to recreate the original. Note that the value of this bit shall be ignored when the raw mode is used. Description of how the INVERT value is used in decoding the bitplane is provided below.
- Coding Mode (IMODE)
- The IMODE syntax element is a variable length value that indicates the coding mode used to encode the bitplane. Table 12 shows the code table used to encode the IMODE syntax element. Description of how the IMODE value is used in decoding the bitplane is provided below.
TABLE 12 IMODE VLC Codetable
IMODE VLC | Coding mode
---|---
10 | Norm-2
11 | Norm-6
010 | Rowskip
011 | Colskip
001 | Diff-2
0001 | Diff-6
0000 | Raw
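- Since the codewords in Table 12 form a prefix code, the coding mode can be recovered by reading bits until a codeword matches. The following Python sketch illustrates this; read_bit stands for an assumed bitstream primitive that returns the next bit (0 or 1) and is not part of the syntax itself:

```python
# IMODE codewords taken directly from Table 12.
IMODE_CODES = {
    "10": "Norm-2", "11": "Norm-6", "010": "Rowskip", "011": "Colskip",
    "001": "Diff-2", "0001": "Diff-6", "0000": "Raw",
}

def decode_imode(read_bit):
    code = ""
    while code not in IMODE_CODES:
        code += str(read_bit())
        if len(code) > 4:                      # longest codeword is 4 bits
            raise ValueError("invalid IMODE codeword")
    return IMODE_CODES[code]

# Example: the bits 0, 0, 1 select the Diff-2 coding mode.
bits = iter([0, 0, 1])
print(decode_imode(lambda: next(bits)))        # -> Diff-2
```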
Bitplane Coding Bits (DATABITS)
- The DATABITS syntax element is a variable-sized syntax element that encodes the stream of symbols for the bitplane. The method used to encode the bitplane is determined by the value of IMODE. The seven coding modes are described in the following sections.
- Raw Mode
- In this mode, the bitplane is encoded as one bit per symbol scanned in the raster-scan order of macroblocks, and sent as part of the macroblock layer. Alternatively, the information is coded in raw mode at the field or frame level and DATABITS is rowMB×colMB bits in length.
- Normal-2 Mode
- If rowMB×colMB is odd, the first symbol is encoded raw. Subsequent symbols are encoded pairwise, in natural scan order. The binary VLC table in Table 13 is used to encode symbol pairs.
TABLE 13 Norm-2/Diff-2 Code Table
Symbol 2n | Symbol 2n + 1 | Codeword
---|---|---
0 | 0 | 0
1 | 0 | 100
0 | 1 | 101
1 | 1 | 11
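- A short Python sketch of Norm-2 decoding follows directly from Table 13; read_bit is the same assumed bitstream primitive as above, and the function returns the symbols in natural scan order (before any INVERT or Diff−1 processing):

```python
def decode_norm2(read_bit, rowMB, colMB):
    n = rowMB * colMB
    symbols = []
    if n % 2 == 1:                      # odd-sized plane: first symbol coded raw
        symbols.append(read_bit())
    while len(symbols) < n:             # remaining symbols decoded in pairs
        if read_bit() == 0:             # codeword 0   -> (0, 0)
            symbols += [0, 0]
        elif read_bit() == 1:           # codeword 11  -> (1, 1)
            symbols += [1, 1]
        elif read_bit() == 0:           # codeword 100 -> (1, 0)
            symbols += [1, 0]
        else:                           # codeword 101 -> (0, 1)
            symbols += [0, 1]
    return symbols
```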
- Diff-2 Mode
- The Normal-2 method is used to produce the bitplane as described above, and then the Diff−1 operation is applied to the bitplane as described below.
- Normal-6 Mode
- In the Norm-6 and Diff-6 modes, the bitplane is encoded in groups of six pixels. These pixels are grouped into either 2×3 or 3×2 tiles. The bitplane is tiled maximally using a set of rules, and the remaining pixels are encoded using a variant of row-skip and column-skip modes. 2×3 “vertical” tiles are used if and only if rowMB is a multiple of 3 and colMB is not. Otherwise, 3×2 “horizontal” tiles are used.
FIG. 47A shows a simplified example of 2×3 “vertical” tiles. FIGS. 47B and 47C show simplified examples of 3×2 “horizontal” tiles for which the elongated dark rectangles are 1 pixel wide and encoded using row-skip and column-skip coding. For a plane tiled as shown in FIG. 47C, with linear tiles along the top and left edges of the picture, the coding order of the tiles follows the following pattern. The 6-element tiles are encoded first, followed by the column-skip and row-skip encoded linear tiles. If the array size is a multiple of 2×3 or of 3×2, the latter linear tiles do not exist and the bitplane is perfectly tiled.
- The 6-element rectangular tiles are encoded using an incomplete Huffman code, i.e., a Huffman code which does not use all end nodes for encoding. Let N be the number of set bits in the tile, i.e., 0 ≤ N ≤ 6. For N < 3, a VLC is used to encode the tile. For N = 3, a fixed-length escape is followed by a 5-bit fixed-length code, and for N > 3, a fixed-length escape is followed by the code of the complement of the tile. The rectangular tile contains 6 bits of information. Let k be the code associated with the tile, where k = Σ_i b_i·2^i and b_i is the binary value of the ith bit in natural scan order within the tile. Hence 0 ≤ k < 64. A combination of VLCs and escape codes plus fixed-length codes is used to signal k.
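- The tiling rule and the per-tile index k can be restated compactly. The Python sketch below (function names are illustrative) computes the tile orientation and k, and classifies how k would be signaled based on N; the actual VLC tables and escape codes used to signal k are not reproduced here:

```python
def tile_orientation(rowMB, colMB):
    # 2x3 "vertical" tiles are used if and only if rowMB is a multiple
    # of 3 and colMB is not; otherwise 3x2 "horizontal" tiles are used.
    if rowMB % 3 == 0 and colMB % 3 != 0:
        return "2x3 vertical"
    return "3x2 horizontal"

def tile_index(tile_bits):
    # k = sum of b_i * 2**i over the six bits b_i, taken in natural
    # scan order within the tile, so 0 <= k < 64.
    assert len(tile_bits) == 6
    return sum(b << i for i, b in enumerate(tile_bits))

def tile_signaling(tile_bits):
    # N, the number of set bits, selects how k is signaled (see text above).
    n = sum(tile_bits)
    if n < 3:
        return "VLC"
    if n == 3:
        return "escape + 5-bit fixed-length code"
    return "escape + code of the complemented tile"

print(tile_orientation(9, 7))              # -> 2x3 vertical
print(tile_index([1, 0, 1, 0, 0, 1]))      # -> 1 + 4 + 32 = 37
print(tile_signaling([1, 0, 1, 0, 0, 1]))  # -> escape + 5-bit fixed-length code (N = 3)
```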
- Diff-6 Mode
- The Normal-6 method is used to produce the bitplane as described above, and then the Diff−1 operation is applied to the bitplane as described below.
- Rowskip Mode
- In the rowskip coding mode, all-zero rows are skipped with one bit overhead. The syntax is as follows: for each row, a single ROWSKIP bit indicates if the row is skipped; if the row is skipped, the ROWSKIP bit for the next row is next; otherwise (the row is not skipped), ROWBITS bits (a bit for each macroblock in the row) are next. Thus, if the entire row is zero, a zero bit is sent as the ROWSKIP symbol, and ROWBITS is skipped. If there is a set bit in the row, ROWSKIP is set to 1, and the entire row is sent raw (ROWBITS). Rows are scanned from the top to the bottom of the field or frame.
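- Rowskip decoding therefore costs a single bit for every all-zero row. A minimal Python sketch (read_bit again stands for an assumed bitstream primitive) is:

```python
def decode_rowskip(read_bit, rowMB, colMB):
    plane = []
    for _ in range(rowMB):                   # rows scanned top to bottom
        if read_bit() == 0:                  # ROWSKIP = 0: all-zero row, no ROWBITS
            plane.append([0] * colMB)
        else:                                # ROWSKIP = 1: row sent raw as ROWBITS
            plane.append([read_bit() for _ in range(colMB)])
    return plane
```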
- Columnskip Mode
- Columnskip is the transpose of rowskip. Columns are scanned from the left to the right of the field or frame.
- Diff−1: Inverse Differential Decoding
- If either differential mode (Diff-2 or Diff-6) is used, a bitplane of “differential bits” is first decoded using the corresponding normal modes (Norm-2 or Norm-6 respectively). The differential bits are used to regenerate the original bitplane. The regeneration process is a 2-D DPCM on a binary alphabet. In order to regenerate the bit at location (i, j), the predictor bp(i,j) is generated as follows (from bits b(i, j) at positions (i, j)):
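- The predictor equation itself is given by reference rather than reproduced in this text, so the following Python fragment is only an illustrative sketch of such a 2-D DPCM regeneration. It assumes one common formulation (the predictor is A at the origin or wherever the left and top neighbors disagree, and the left neighbor otherwise, with the available neighbor used along the first row and column); the exact rule of the combined implementation may differ:

```python
def diff_inverse(diff, A):
    # diff: 2-D array of decoded differential bits (after Norm-2/Norm-6).
    # A: 0 or 1, taken from INVERT as described below.  ASSUMED predictor rule.
    rows, cols = len(diff), len(diff[0])
    b = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 and j == 0:
                pred = A                      # origin: predictor is A
            elif i == 0:
                pred = b[0][j - 1]            # first row: left neighbor
            elif j == 0:
                pred = b[i - 1][0]            # first column: top neighbor
            elif b[i][j - 1] != b[i - 1][j]:
                pred = A                      # left and top disagree: use A
            else:
                pred = b[i][j - 1]            # otherwise: left neighbor
            b[i][j] = pred ^ diff[i][j]       # xor predictor with differential bit
    return b
```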
For the differential coding mode, the bitwise inversion process based on INVERT is not performed. However, the INVERT flag is used in a different capacity: to indicate the value of the symbol A for the derivation of the predictor shown above. More specifically, A equals 0 if INVERT equals 0, and A equals 1 if INVERT equals 1. The actual value of the bitplane is obtained by xor'ing the predictor with the decoded differential bit value. In the above equation, b(i,j) is the bit at the (i,j)th position after final decoding (i.e., after doing Norm-2/Norm-6, followed by differential xor with its predictor).
- Having described and illustrated the principles of our invention with reference to various embodiments, it will be recognized that the various embodiments can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of embodiments shown in software may be implemented in hardware and vice versa.
- In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.
Claims (30)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/934,929 US7606311B2 (en) | 2003-09-07 | 2004-09-02 | Macroblock information signaling for interlaced frames |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US50108103P | 2003-09-07 | 2003-09-07 | |
US10/934,929 US7606311B2 (en) | 2003-09-07 | 2004-09-02 | Macroblock information signaling for interlaced frames |
Publications (2)
Publication Number | Publication Date |
---|---|
US20050053145A1 true US20050053145A1 (en) | 2005-03-10 |
US7606311B2 US7606311B2 (en) | 2009-10-20 |
Family
ID=37064688
Family Applications (9)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/826,971 Active 2027-10-24 US7724827B2 (en) | 2003-09-07 | 2004-04-15 | Multi-layer run level encoding and decoding |
US10/931,695 Active 2026-07-19 US7412102B2 (en) | 2003-09-07 | 2004-08-31 | Interlace frame lapped transform |
US10/933,910 Active 2026-05-10 US7469011B2 (en) | 2003-09-07 | 2004-09-02 | Escape mode code resizing for fields and slices |
US10/933,908 Active 2026-07-13 US7352905B2 (en) | 2003-09-07 | 2004-09-02 | Chroma motion vector derivation |
US10/933,883 Expired - Lifetime US7099515B2 (en) | 2003-09-07 | 2004-09-02 | Bitplane coding and decoding for AC prediction status information |
US10/933,882 Active 2027-12-29 US7924920B2 (en) | 2003-09-07 | 2004-09-02 | Motion vector coding and decoding in interlaced frame coded pictures |
US10/934,929 Active 2028-06-19 US7606311B2 (en) | 2003-09-07 | 2004-09-02 | Macroblock information signaling for interlaced frames |
US10/934,116 Active 2027-12-03 US8687709B2 (en) | 2003-09-07 | 2004-09-04 | In-loop deblocking for interlaced video |
US10/934,117 Active 2026-10-19 US8116380B2 (en) | 2003-09-07 | 2004-09-04 | Signaling for field ordering and field/frame display repetition |
Family Applications Before (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/826,971 Active 2027-10-24 US7724827B2 (en) | 2003-09-07 | 2004-04-15 | Multi-layer run level encoding and decoding |
US10/931,695 Active 2026-07-19 US7412102B2 (en) | 2003-09-07 | 2004-08-31 | Interlace frame lapped transform |
US10/933,910 Active 2026-05-10 US7469011B2 (en) | 2003-09-07 | 2004-09-02 | Escape mode code resizing for fields and slices |
US10/933,908 Active 2026-07-13 US7352905B2 (en) | 2003-09-07 | 2004-09-02 | Chroma motion vector derivation |
US10/933,883 Expired - Lifetime US7099515B2 (en) | 2003-09-07 | 2004-09-02 | Bitplane coding and decoding for AC prediction status information |
US10/933,882 Active 2027-12-29 US7924920B2 (en) | 2003-09-07 | 2004-09-02 | Motion vector coding and decoding in interlaced frame coded pictures |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/934,116 Active 2027-12-03 US8687709B2 (en) | 2003-09-07 | 2004-09-04 | In-loop deblocking for interlaced video |
US10/934,117 Active 2026-10-19 US8116380B2 (en) | 2003-09-07 | 2004-09-04 | Signaling for field ordering and field/frame display repetition |
Country Status (3)
Country | Link |
---|---|
US (9) | US7724827B2 (en) |
EP (2) | EP2285113B1 (en) |
CN (5) | CN100586183C (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050259960A1 (en) * | 2004-05-18 | 2005-11-24 | Wan Wade K | Index table generation in PVR applications for AVC video streams |
US20090232217A1 (en) * | 2008-03-17 | 2009-09-17 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding image |
US20090238479A1 (en) * | 2008-03-20 | 2009-09-24 | Pawan Jaggi | Flexible frame based energy efficient multimedia processor architecture and method |
US20090238263A1 (en) * | 2008-03-20 | 2009-09-24 | Pawan Jaggi | Flexible field based energy efficient multimedia processor architecture and method |
US20110110427A1 (en) * | 2005-10-18 | 2011-05-12 | Chia-Yuan Teng | Selective deblock filtering techniques for video coding |
US20110129015A1 (en) * | 2007-09-04 | 2011-06-02 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
US20110135000A1 (en) * | 2009-12-09 | 2011-06-09 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US20130202041A1 (en) * | 2006-06-27 | 2013-08-08 | Yi-Jen Chiu | Chroma motion vector processing apparatus, system, and method |
CN103297784A (en) * | 2008-10-31 | 2013-09-11 | Sk电信有限公司 | Apparatus for encoding image |
US20160057415A1 (en) * | 2011-11-07 | 2016-02-25 | Canon Kabushiki Kaisha | Image encoding method, image encoding apparatus, and related encoding medium, image decoding method, image decoding apparatus, and related decoding medium |
US9621909B2 (en) | 2012-07-02 | 2017-04-11 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US9860551B2 (en) | 2011-02-09 | 2018-01-02 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US10057592B2 (en) | 2011-03-09 | 2018-08-21 | Canon Kabushiki Kaisha | Video encoding and decoding |
USRE47243E1 (en) * | 2009-12-09 | 2019-02-12 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US10469874B2 (en) * | 2013-10-07 | 2019-11-05 | Lg Electronics Inc. | Method for encoding and decoding a media signal and apparatus using the same |
WO2020186060A1 (en) * | 2019-03-12 | 2020-09-17 | Futurewei Technologies, Inc. | Patch data unit coding and decoding for point-cloud data |
Families Citing this family (369)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6563953B2 (en) | 1998-11-30 | 2003-05-13 | Microsoft Corporation | Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock |
US9130810B2 (en) | 2000-09-13 | 2015-09-08 | Qualcomm Incorporated | OFDM communications methods and apparatus |
US7295509B2 (en) | 2000-09-13 | 2007-11-13 | Qualcomm, Incorporated | Signaling method in an OFDM multiple access system |
BR0206629A (en) * | 2001-11-22 | 2004-02-25 | Matsushita Electric Ind Co Ltd | Variable Length Encoding Method and Variable Length Decoding Method |
CN101448162B (en) | 2001-12-17 | 2013-01-02 | 微软公司 | Method for processing video image |
JP4610195B2 (en) | 2001-12-17 | 2011-01-12 | マイクロソフト コーポレーション | Skip macroblock coding |
US7016547B1 (en) * | 2002-06-28 | 2006-03-21 | Microsoft Corporation | Adaptive entropy encoding/decoding for screen capture content |
US7433824B2 (en) * | 2002-09-04 | 2008-10-07 | Microsoft Corporation | Entropy coding by adapting coding between level and run-length/level modes |
DE20321883U1 (en) * | 2002-09-04 | 2012-01-20 | Microsoft Corp. | Computer apparatus and system for entropy decoding quantized transform coefficients of a block |
CN100493199C (en) * | 2003-06-16 | 2009-05-27 | 松下电器产业株式会社 | Coding apparatus, coding method, and codebook |
US10554985B2 (en) | 2003-07-18 | 2020-02-04 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
US7738554B2 (en) | 2003-07-18 | 2010-06-15 | Microsoft Corporation | DC coefficient signaling at small quantization step sizes |
US7602851B2 (en) * | 2003-07-18 | 2009-10-13 | Microsoft Corporation | Intelligent differential quantization of video coding |
US7426308B2 (en) * | 2003-07-18 | 2008-09-16 | Microsoft Corporation | Intraframe and interframe interlace coding and decoding |
US7580584B2 (en) * | 2003-07-18 | 2009-08-25 | Microsoft Corporation | Adaptive multiple quantization |
US7609763B2 (en) * | 2003-07-18 | 2009-10-27 | Microsoft Corporation | Advanced bi-directional predictive coding of video frames |
US8218624B2 (en) | 2003-07-18 | 2012-07-10 | Microsoft Corporation | Fractional quantization step sizes for high bit rates |
US7961786B2 (en) | 2003-09-07 | 2011-06-14 | Microsoft Corporation | Signaling field type information |
US8107531B2 (en) * | 2003-09-07 | 2012-01-31 | Microsoft Corporation | Signaling and repeat padding for skip frames |
US7092576B2 (en) * | 2003-09-07 | 2006-08-15 | Microsoft Corporation | Bitplane coding for macroblock field/frame coding type information |
US8064520B2 (en) * | 2003-09-07 | 2011-11-22 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US20050058203A1 (en) * | 2003-09-17 | 2005-03-17 | Fernandes Felix C. | Transcoders and methods |
WO2005034092A2 (en) * | 2003-09-29 | 2005-04-14 | Handheld Entertainment, Inc. | Method and apparatus for coding information |
US8077778B2 (en) * | 2003-10-31 | 2011-12-13 | Broadcom Corporation | Video display and decode utilizing off-chip processor and DRAM |
JP4118232B2 (en) * | 2003-12-19 | 2008-07-16 | 三菱電機株式会社 | Video data processing method and video data processing apparatus |
US8427494B2 (en) * | 2004-01-30 | 2013-04-23 | Nvidia Corporation | Variable-length coding data transfer interface |
US7801383B2 (en) | 2004-05-15 | 2010-09-21 | Microsoft Corporation | Embedded scalar quantizers with arbitrary dead-zone ratios |
US20060029135A1 (en) * | 2004-06-22 | 2006-02-09 | Minhua Zhou | In-loop deblocking filter |
EP1766783B1 (en) | 2004-07-14 | 2011-11-02 | Slipstream Data Inc. | Method, system and computer program product for optimization of data compression |
US7570827B2 (en) | 2004-07-14 | 2009-08-04 | Slipstream Data Inc. | Method, system and computer program product for optimization of data compression with cost function |
US9137822B2 (en) | 2004-07-21 | 2015-09-15 | Qualcomm Incorporated | Efficient signaling over access channel |
US9148256B2 (en) | 2004-07-21 | 2015-09-29 | Qualcomm Incorporated | Performance based rank prediction for MIMO design |
JP3919115B2 (en) * | 2004-08-18 | 2007-05-23 | ソニー株式会社 | DECODING DEVICE, DECODING METHOD, DECODING PROGRAM, RECORDING MEDIUM CONTAINING DECODING PROGRAM, AND REVERSE REPRODUCTION DEVICE, REVERSE REPRODUCTION METHOD, REVERSE REPRODUCTION PROGRAM, AND RECORDING MEDIUM CONTAINING REVERSE REPRODUCTION PROGRAM |
BRPI0515943B1 (en) * | 2004-09-29 | 2018-10-16 | Thomson Res Funding Corporation | method and apparatus for reduced resolution update video encoding and decoding |
JP4533081B2 (en) * | 2004-10-12 | 2010-08-25 | キヤノン株式会社 | Image encoding apparatus and method |
US7574060B2 (en) * | 2004-11-22 | 2009-08-11 | Broadcom Corporation | Deblocker for postprocess deblocking |
US8565307B2 (en) * | 2005-02-01 | 2013-10-22 | Panasonic Corporation | Picture encoding method and picture encoding device |
US9246560B2 (en) | 2005-03-10 | 2016-01-26 | Qualcomm Incorporated | Systems and methods for beamforming and rate control in a multi-input multi-output communication systems |
US9154211B2 (en) | 2005-03-11 | 2015-10-06 | Qualcomm Incorporated | Systems and methods for beamforming feedback in multi antenna communication systems |
US8446892B2 (en) | 2005-03-16 | 2013-05-21 | Qualcomm Incorporated | Channel structures for a quasi-orthogonal multiple-access communication system |
US9520972B2 (en) | 2005-03-17 | 2016-12-13 | Qualcomm Incorporated | Pilot signal transmission for an orthogonal frequency division wireless communication system |
US9461859B2 (en) | 2005-03-17 | 2016-10-04 | Qualcomm Incorporated | Pilot signal transmission for an orthogonal frequency division wireless communication system |
US9143305B2 (en) | 2005-03-17 | 2015-09-22 | Qualcomm Incorporated | Pilot signal transmission for an orthogonal frequency division wireless communication system |
US9184870B2 (en) | 2005-04-01 | 2015-11-10 | Qualcomm Incorporated | Systems and methods for control channel signaling |
US8149926B2 (en) * | 2005-04-11 | 2012-04-03 | Intel Corporation | Generating edge masks for a deblocking filter |
US9408220B2 (en) | 2005-04-19 | 2016-08-02 | Qualcomm Incorporated | Channel quality reporting for adaptive sectorization |
US9036538B2 (en) | 2005-04-19 | 2015-05-19 | Qualcomm Incorporated | Frequency hopping design for single carrier FDMA systems |
US20060248163A1 (en) * | 2005-04-28 | 2006-11-02 | Macinnis Alexander | Systems, methods, and apparatus for video frame repeat indication & processing |
US7768538B2 (en) * | 2005-05-09 | 2010-08-03 | Hewlett-Packard Development Company, L.P. | Hybrid data planes |
CN101185338B (en) * | 2005-05-25 | 2010-11-24 | Nxp股份有限公司 | Multiple instance video decoder for macroblocks coded in a progressive and an interlaced way |
US8422546B2 (en) * | 2005-05-25 | 2013-04-16 | Microsoft Corporation | Adaptive video encoding using a perceptual model |
US8879511B2 (en) | 2005-10-27 | 2014-11-04 | Qualcomm Incorporated | Assignment acknowledgement for a wireless communication system |
US8565194B2 (en) | 2005-10-27 | 2013-10-22 | Qualcomm Incorporated | Puncturing signaling channel for a wireless communication system |
US8611284B2 (en) | 2005-05-31 | 2013-12-17 | Qualcomm Incorporated | Use of supplemental assignments to decrement resources |
US8462859B2 (en) | 2005-06-01 | 2013-06-11 | Qualcomm Incorporated | Sphere decoding apparatus |
WO2006129280A2 (en) * | 2005-06-03 | 2006-12-07 | Nxp B.V. | Video decoder with hybrid reference texture |
US8599945B2 (en) | 2005-06-16 | 2013-12-03 | Qualcomm Incorporated | Robust rank prediction for a MIMO system |
US9179319B2 (en) | 2005-06-16 | 2015-11-03 | Qualcomm Incorporated | Adaptive sectorization in cellular systems |
KR100667806B1 (en) * | 2005-07-07 | 2007-01-12 | 삼성전자주식회사 | Method and apparatus for video encoding and decoding |
US7684981B2 (en) * | 2005-07-15 | 2010-03-23 | Microsoft Corporation | Prediction of spectral coefficients in waveform coding and decoding |
US7693709B2 (en) | 2005-07-15 | 2010-04-06 | Microsoft Corporation | Reordering coefficients for waveform coding or decoding |
US7599840B2 (en) * | 2005-07-15 | 2009-10-06 | Microsoft Corporation | Selectively using multiple entropy models in adaptive coding and decoding |
US20070053425A1 (en) * | 2005-07-21 | 2007-03-08 | Nokia Corporation | Variable length codes for scalable video coding |
US8625914B2 (en) * | 2013-02-04 | 2014-01-07 | Sony Corporation | Image processing system, image processing method and program |
US8885628B2 (en) | 2005-08-08 | 2014-11-11 | Qualcomm Incorporated | Code division multiplexing in a single-carrier frequency division multiple access system |
US7933337B2 (en) | 2005-08-12 | 2011-04-26 | Microsoft Corporation | Prediction of transform coefficients for image compression |
US7565018B2 (en) * | 2005-08-12 | 2009-07-21 | Microsoft Corporation | Adaptive coding and decoding of wide-range coefficients |
US8599925B2 (en) * | 2005-08-12 | 2013-12-03 | Microsoft Corporation | Efficient coding and decoding of transform blocks |
US9077960B2 (en) | 2005-08-12 | 2015-07-07 | Microsoft Corporation | Non-zero coefficient block pattern coding |
US8036274B2 (en) * | 2005-08-12 | 2011-10-11 | Microsoft Corporation | SIMD lapped transform-based digital media encoding/decoding |
US9209956B2 (en) | 2005-08-22 | 2015-12-08 | Qualcomm Incorporated | Segment sensitive scheduling |
US20070041457A1 (en) | 2005-08-22 | 2007-02-22 | Tamer Kadous | Method and apparatus for providing antenna diversity in a wireless communication system |
US8644292B2 (en) | 2005-08-24 | 2014-02-04 | Qualcomm Incorporated | Varied transmission time intervals for wireless communication system |
EP1938493B1 (en) * | 2005-08-24 | 2014-10-08 | Qualcomm Incorporated | Varied transmission time intervals for wireless communication system |
US9136974B2 (en) | 2005-08-30 | 2015-09-15 | Qualcomm Incorporated | Precoding and SDMA support |
WO2007027418A2 (en) * | 2005-08-31 | 2007-03-08 | Micronas Usa, Inc. | Systems and methods for video transformation and in loop filtering |
KR100668346B1 (en) * | 2005-10-04 | 2007-01-12 | 삼성전자주식회사 | Filtering apparatus and method for a multi-codec |
US20070094035A1 (en) * | 2005-10-21 | 2007-04-26 | Nokia Corporation | Audio coding |
US7505069B2 (en) * | 2005-10-26 | 2009-03-17 | Hewlett-Packard Development Company, L.P. | Method and apparatus for maintaining consistent white balance in successive digital images |
US9088384B2 (en) | 2005-10-27 | 2015-07-21 | Qualcomm Incorporated | Pilot symbol transmission in wireless communication systems |
US8045512B2 (en) | 2005-10-27 | 2011-10-25 | Qualcomm Incorporated | Scalable frequency band operation in wireless communication systems |
US8693405B2 (en) | 2005-10-27 | 2014-04-08 | Qualcomm Incorporated | SDMA resource management |
US9225488B2 (en) | 2005-10-27 | 2015-12-29 | Qualcomm Incorporated | Shared signaling channel |
US9144060B2 (en) | 2005-10-27 | 2015-09-22 | Qualcomm Incorporated | Resource allocation for shared signaling channels |
US9172453B2 (en) | 2005-10-27 | 2015-10-27 | Qualcomm Incorporated | Method and apparatus for pre-coding frequency division duplexing system |
US8477684B2 (en) | 2005-10-27 | 2013-07-02 | Qualcomm Incorporated | Acknowledgement of control messages in a wireless communication system |
US8582509B2 (en) | 2005-10-27 | 2013-11-12 | Qualcomm Incorporated | Scalable frequency band operation in wireless communication systems |
US9210651B2 (en) | 2005-10-27 | 2015-12-08 | Qualcomm Incorporated | Method and apparatus for bootstraping information in a communication system |
US9225416B2 (en) | 2005-10-27 | 2015-12-29 | Qualcomm Incorporated | Varied signaling channels for a reverse link in a wireless communication system |
KR100873636B1 (en) | 2005-11-14 | 2008-12-12 | 삼성전자주식회사 | Method and apparatus for encoding/decoding image using single coding mode |
US8582548B2 (en) | 2005-11-18 | 2013-11-12 | Qualcomm Incorporated | Frequency division multiple access schemes for wireless communication |
JP2007180723A (en) * | 2005-12-27 | 2007-07-12 | Toshiba Corp | Image processor and image processing method |
US8494042B2 (en) * | 2006-01-09 | 2013-07-23 | Lg Electronics Inc. | Inter-layer prediction method for video signal |
KR100791295B1 (en) * | 2006-01-12 | 2008-01-04 | 삼성전자주식회사 | Flag encoding method, flag decoding method, and apparatus thereof |
JP2007195117A (en) * | 2006-01-23 | 2007-08-02 | Toshiba Corp | Moving image decoding device |
KR100775104B1 (en) * | 2006-02-27 | 2007-11-08 | 삼성전자주식회사 | Image stabilizer and system having the same and method thereof |
US8116371B2 (en) * | 2006-03-08 | 2012-02-14 | Texas Instruments Incorporated | VLC technique for layered video coding using distinct element grouping |
KR101330630B1 (en) * | 2006-03-13 | 2013-11-22 | 삼성전자주식회사 | Method and apparatus for encoding moving picture, method and apparatus for decoding moving picture, applying adaptively an optimal prediction mode |
US8503536B2 (en) * | 2006-04-07 | 2013-08-06 | Microsoft Corporation | Quantization adjustments for DC shift artifacts |
US7974340B2 (en) * | 2006-04-07 | 2011-07-05 | Microsoft Corporation | Adaptive B-picture quantization control |
US8059721B2 (en) | 2006-04-07 | 2011-11-15 | Microsoft Corporation | Estimating sample-domain distortion in the transform domain with rounding compensation |
US7995649B2 (en) | 2006-04-07 | 2011-08-09 | Microsoft Corporation | Quantization adjustment based on texture level |
US8130828B2 (en) | 2006-04-07 | 2012-03-06 | Microsoft Corporation | Adjusting quantization to preserve non-zero AC coefficients |
US8711925B2 (en) | 2006-05-05 | 2014-04-29 | Microsoft Corporation | Flexible quantization |
WO2008042023A2 (en) * | 2006-05-18 | 2008-04-10 | Florida Atlantic University | Methods for encrypting and compressing video |
US7529416B2 (en) * | 2006-08-18 | 2009-05-05 | Terayon Communication Systems, Inc. | Method and apparatus for transferring digital data between circuits |
JP2008048240A (en) * | 2006-08-18 | 2008-02-28 | Nec Electronics Corp | Bit plane decoding device and its method |
US7760960B2 (en) * | 2006-09-15 | 2010-07-20 | Freescale Semiconductor, Inc. | Localized content adaptive filter for low power scalable image processing |
US7327289B1 (en) * | 2006-09-20 | 2008-02-05 | Intel Corporation | Data-modifying run length encoder to avoid data expansion |
US20080084932A1 (en) * | 2006-10-06 | 2008-04-10 | Microsoft Corporation | Controlling loop filtering for interlaced video frames |
KR101078038B1 (en) * | 2006-10-10 | 2011-10-28 | 니폰덴신뎅와 가부시키가이샤 | Video encoding method and decoding method, their device, their program, and storage medium containing the program |
KR100819289B1 (en) * | 2006-10-20 | 2008-04-02 | 삼성전자주식회사 | Deblocking filtering method and deblocking filter for video data |
JP2008109389A (en) * | 2006-10-25 | 2008-05-08 | Canon Inc | Image processing device and control method of image processing device |
US7756348B2 (en) * | 2006-10-30 | 2010-07-13 | Hewlett-Packard Development Company, L.P. | Method for decomposing a video sequence frame |
US8443398B2 (en) * | 2006-11-01 | 2013-05-14 | Skyfire Labs, Inc. | Architecture for delivery of video content responsive to remote interaction |
US8375304B2 (en) * | 2006-11-01 | 2013-02-12 | Skyfire Labs, Inc. | Maintaining state of a web page |
US9247260B1 (en) | 2006-11-01 | 2016-01-26 | Opera Software Ireland Limited | Hybrid bitmap-mode encoding |
US8711929B2 (en) * | 2006-11-01 | 2014-04-29 | Skyfire Labs, Inc. | Network-based dynamic encoding |
US7460725B2 (en) * | 2006-11-09 | 2008-12-02 | Calista Technologies, Inc. | System and method for effectively encoding and decoding electronic information |
US20080159637A1 (en) * | 2006-12-27 | 2008-07-03 | Ricardo Citro | Deblocking filter hardware accelerator with interlace frame support |
US20080159407A1 (en) * | 2006-12-28 | 2008-07-03 | Yang Nick Y | Mechanism for a parallel processing in-loop deblock filter |
US7907789B2 (en) * | 2007-01-05 | 2011-03-15 | Freescale Semiconductor, Inc. | Reduction of block effects in spatially re-sampled image information for block-based image coding |
US20080184128A1 (en) * | 2007-01-25 | 2008-07-31 | Swenson Erik R | Mobile device user interface for remote interaction |
US8238424B2 (en) * | 2007-02-09 | 2012-08-07 | Microsoft Corporation | Complexity-based adaptive preprocessing for multiple-pass video compression |
US8184710B2 (en) * | 2007-02-21 | 2012-05-22 | Microsoft Corporation | Adaptive truncation of transform coefficient data in a transform-based digital media codec |
US20080225947A1 (en) * | 2007-03-13 | 2008-09-18 | Matthias Narroschke | Quantization for hybrid video coding |
US8111750B2 (en) * | 2007-03-20 | 2012-02-07 | Himax Technologies Limited | System and method for 3-D recursive search motion estimation |
US8498335B2 (en) * | 2007-03-26 | 2013-07-30 | Microsoft Corporation | Adaptive deadzone size adjustment in quantization |
US8243797B2 (en) * | 2007-03-30 | 2012-08-14 | Microsoft Corporation | Regions of interest for quality adjustments |
JP5686594B2 (en) | 2007-04-12 | 2015-03-18 | トムソン ライセンシングThomson Licensing | Method and apparatus for video usability information (VUI) for scalable video coding |
US8442337B2 (en) * | 2007-04-18 | 2013-05-14 | Microsoft Corporation | Encoding adjustments for animation content |
US8331438B2 (en) | 2007-06-05 | 2012-12-11 | Microsoft Corporation | Adaptive selection of picture-level quantization parameters for predicted video pictures |
US8725504B1 (en) | 2007-06-06 | 2014-05-13 | Nvidia Corporation | Inverse quantization in audio decoding |
US8726125B1 (en) | 2007-06-06 | 2014-05-13 | Nvidia Corporation | Reducing interpolation error |
US7774205B2 (en) * | 2007-06-15 | 2010-08-10 | Microsoft Corporation | Coding of sparse digital media spectral data |
US8477852B2 (en) * | 2007-06-20 | 2013-07-02 | Nvidia Corporation | Uniform video decoding and display |
US8254455B2 (en) * | 2007-06-30 | 2012-08-28 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
TWI375470B (en) * | 2007-08-03 | 2012-10-21 | Via Tech Inc | Method for determining boundary strength |
CN101796813A (en) * | 2007-09-10 | 2010-08-04 | Nxp股份有限公司 | Method and apparatus for motion estimation in video image data |
US8849051B2 (en) * | 2007-09-17 | 2014-09-30 | Nvidia Corporation | Decoding variable length codes in JPEG applications |
US8502709B2 (en) * | 2007-09-17 | 2013-08-06 | Nvidia Corporation | Decoding variable length codes in media applications |
JP5414684B2 (en) | 2007-11-12 | 2014-02-12 | ザ ニールセン カンパニー (ユー エス) エルエルシー | Method and apparatus for performing audio watermarking, watermark detection, and watermark extraction |
CN101179720B (en) * | 2007-11-16 | 2010-09-01 | 海信集团有限公司 | Video decoding method |
CN101453651B (en) * | 2007-11-30 | 2012-02-01 | 华为技术有限公司 | A deblocking filtering method and apparatus |
US8934539B2 (en) | 2007-12-03 | 2015-01-13 | Nvidia Corporation | Vector processor acceleration for media quantization |
US8704834B2 (en) | 2007-12-03 | 2014-04-22 | Nvidia Corporation | Synchronization of video input data streams and video output data streams |
US8687875B2 (en) | 2007-12-03 | 2014-04-01 | Nvidia Corporation | Comparator based acceleration for media quantization |
US8743972B2 (en) * | 2007-12-20 | 2014-06-03 | Vixs Systems, Inc. | Coding adaptive deblocking filter and method for use therewith |
US20090161757A1 (en) * | 2007-12-21 | 2009-06-25 | General Instrument Corporation | Method and Apparatus for Selecting a Coding Mode for a Block |
US8457951B2 (en) | 2008-01-29 | 2013-06-04 | The Nielsen Company (Us), Llc | Methods and apparatus for performing variable black length watermarking of media |
JP5109707B2 (en) * | 2008-02-19 | 2012-12-26 | コニカミノルタビジネステクノロジーズ株式会社 | Fixing apparatus and image forming apparatus |
US8145794B2 (en) | 2008-03-14 | 2012-03-27 | Microsoft Corporation | Encoding/decoding while allowing varying message formats per message |
ES2812473T3 (en) | 2008-03-19 | 2021-03-17 | Nokia Technologies Oy | Combined motion vector and benchmark prediction for video encoding |
TWI370690B (en) | 2008-03-21 | 2012-08-11 | Novatek Microelectronics Corp | Method and apparatus for generating coded block pattern for highpass coeffecients |
US8189933B2 (en) * | 2008-03-31 | 2012-05-29 | Microsoft Corporation | Classifying and controlling encoding quality for textured, dark smooth and smooth video content |
CN101552918B (en) * | 2008-03-31 | 2011-05-11 | 联咏科技股份有限公司 | Generation method of block type information with high-pass coefficient and generation circuit thereof |
US8179974B2 (en) * | 2008-05-02 | 2012-05-15 | Microsoft Corporation | Multi-level representation of reordered transform coefficients |
US8369638B2 (en) | 2008-05-27 | 2013-02-05 | Microsoft Corporation | Reducing DC leakage in HD photo transform |
US8447591B2 (en) * | 2008-05-30 | 2013-05-21 | Microsoft Corporation | Factorization of overlapping tranforms into two block transforms |
US8897359B2 (en) | 2008-06-03 | 2014-11-25 | Microsoft Corporation | Adaptive quantization for enhancement layer video coding |
US20090304086A1 (en) * | 2008-06-06 | 2009-12-10 | Apple Inc. | Method and system for video coder and decoder joint optimization |
KR101379187B1 (en) * | 2008-06-23 | 2014-04-15 | 에스케이 텔레콤주식회사 | Image Encoding/Decoding Method and Apparatus Using Block Transformation |
US8406307B2 (en) | 2008-08-22 | 2013-03-26 | Microsoft Corporation | Entropy coding/decoding of hierarchically organized data |
US8326075B2 (en) | 2008-09-11 | 2012-12-04 | Google Inc. | System and method for video encoding using adaptive loop filter |
US8180166B2 (en) * | 2008-09-23 | 2012-05-15 | Mediatek Inc. | Transcoding method |
CA2679509C (en) | 2008-09-25 | 2014-08-05 | Research In Motion Limited | A method and apparatus for configuring compressed mode |
US8275209B2 (en) * | 2008-10-10 | 2012-09-25 | Microsoft Corporation | Reduced DC gain mismatch and DC leakage in overlap transform processing |
US9307267B2 (en) | 2008-12-11 | 2016-04-05 | Nvidia Corporation | Techniques for scalable dynamic data encoding and decoding |
FR2940736B1 (en) * | 2008-12-30 | 2011-04-08 | Sagem Comm | SYSTEM AND METHOD FOR VIDEO CODING |
US8189666B2 (en) | 2009-02-02 | 2012-05-29 | Microsoft Corporation | Local picture identifier and computation of co-located information |
US20110026593A1 (en) * | 2009-02-10 | 2011-02-03 | New Wei Lee | Image processing apparatus, image processing method, program and integrated circuit |
KR20100095992A (en) * | 2009-02-23 | 2010-09-01 | 한국과학기술원 | Method for encoding partitioned block in video encoding, method for decoding partitioned block in video decoding and recording medium implementing the same |
JP5115498B2 (en) * | 2009-03-05 | 2013-01-09 | 富士通株式会社 | Image coding apparatus, image coding control method, and program |
JP5800396B2 (en) * | 2009-04-14 | 2015-10-28 | トムソン ライセンシングThomson Licensing | Method and apparatus for determining and selecting filter parameters in response to variable transformation in sparsity-based artifact removal filtering |
US9076239B2 (en) * | 2009-04-30 | 2015-07-07 | Stmicroelectronics S.R.L. | Method and systems for thumbnail generation, and corresponding computer program product |
TWI343192B (en) * | 2009-06-12 | 2011-06-01 | Ind Tech Res Inst | Decoding method |
CN102474601B (en) * | 2009-06-29 | 2017-06-23 | 汤姆森特许公司 | The method and apparatus that the adaptive probability of uncoded grammer updates |
US9161057B2 (en) * | 2009-07-09 | 2015-10-13 | Qualcomm Incorporated | Non-zero rounding and prediction mode selection techniques in video encoding |
CN105141958B (en) * | 2009-08-12 | 2019-03-01 | 汤姆森特许公司 | For chroma coder in improved frame and decoded method and device |
KR101452859B1 (en) * | 2009-08-13 | 2014-10-23 | 삼성전자주식회사 | Method and apparatus for encoding and decoding motion vector |
US8654838B2 (en) * | 2009-08-31 | 2014-02-18 | Nxp B.V. | System and method for video and graphic compression using multiple different compression techniques and compression error feedback |
JP5234368B2 (en) * | 2009-09-30 | 2013-07-10 | ソニー株式会社 | Image processing apparatus and method |
EP2514210A4 (en) | 2009-12-17 | 2014-03-19 | Ericsson Telefon Ab L M | Method and arrangement for video coding |
KR101703327B1 (en) * | 2010-01-14 | 2017-02-06 | 삼성전자 주식회사 | Method and apparatus for video encoding using pattern information of hierarchical data unit, and method and apparatus for video decoding using pattern information of hierarchical data unit |
KR101675118B1 (en) | 2010-01-14 | 2016-11-10 | 삼성전자 주식회사 | Method and apparatus for video encoding considering order of skip and split, and method and apparatus for video decoding considering order of skip and split |
US20110176611A1 (en) * | 2010-01-15 | 2011-07-21 | Yu-Wen Huang | Methods for decoder-side motion vector derivation |
JP5793511B2 (en) * | 2010-02-05 | 2015-10-14 | テレフオンアクチーボラゲット エル エム エリクソン(パブル) | Deblocking filtering control |
JP5020391B2 (en) * | 2010-02-22 | 2012-09-05 | パナソニック株式会社 | Decoding device and decoding method |
US8527649B2 (en) | 2010-03-09 | 2013-09-03 | Mobixell Networks Ltd. | Multi-stream bit rate adaptation |
HUE045579T2 (en) | 2010-04-13 | 2020-01-28 | Ge Video Compression Llc | Inter-plane prediction |
EP2559005B1 (en) | 2010-04-13 | 2015-11-04 | GE Video Compression, LLC | Inheritance in sample array multitree subdivision |
CN102939754B (en) | 2010-04-13 | 2016-09-07 | Ge视频压缩有限责任公司 | Sample areas folding |
ES2953668T3 (en) | 2010-04-13 | 2023-11-15 | Ge Video Compression Llc | Video encoding using multitree subdivisions of images |
US20110261070A1 (en) * | 2010-04-23 | 2011-10-27 | Peter Francis Chevalley De Rivaz | Method and system for reducing remote display latency |
JP5584757B2 (en) | 2010-05-06 | 2014-09-03 | 日本電信電話株式会社 | Video encoding control method and apparatus |
EP2568705B1 (en) * | 2010-05-07 | 2018-09-26 | Nippon Telegraph And Telephone Corporation | Moving image encoding control method, moving image encoding apparatus and moving image encoding program |
CA2798354C (en) * | 2010-05-12 | 2016-01-26 | Nippon Telegraph And Telephone Corporation | A video encoding bit rate control technique using a quantization statistic threshold to determine whether re-encoding of an encoding-order picture group is required |
JP5625512B2 (en) * | 2010-06-09 | 2014-11-19 | ソニー株式会社 | Encoding device, encoding method, program, and recording medium |
CN101883286B (en) * | 2010-06-25 | 2012-12-05 | 无锡中星微电子有限公司 | Calibration method and device, and motion estimation method and device in motion estimation |
US8832709B2 (en) | 2010-07-19 | 2014-09-09 | Flash Networks Ltd. | Network optimization |
MX2013003606A (en) | 2010-09-30 | 2013-04-24 | Samsung Electronics Co Ltd | Method and device for interpolating images by using a smoothing interpolation filter. |
CN103222265B (en) * | 2010-09-30 | 2017-02-08 | 三菱电机株式会社 | Dynamic image encoding device, dynamic image decoding device, dynamic image encoding method, and dynamic image decoding method |
US8885704B2 (en) * | 2010-10-01 | 2014-11-11 | Qualcomm Incorporated | Coding prediction modes in video coding |
US8787443B2 (en) | 2010-10-05 | 2014-07-22 | Microsoft Corporation | Content adaptive deblocking during video encoding and decoding |
ES2859635T3 (en) | 2010-10-08 | 2021-10-04 | Ge Video Compression Llc | Image encoding that supports block partitioning and block merging |
SI3595303T1 (en) | 2010-11-25 | 2022-01-31 | Lg Electronics Inc. | Method for decoding image information, decoding apparatus, method for encoding image information, encoding apparatus and storage medium |
US11284081B2 (en) | 2010-11-25 | 2022-03-22 | Lg Electronics Inc. | Method for signaling image information, and method for decoding image information using same |
US9137544B2 (en) * | 2010-11-29 | 2015-09-15 | Mediatek Inc. | Method and apparatus for derivation of mv/mvp candidate for inter/skip/merge modes |
US10244239B2 (en) | 2010-12-28 | 2019-03-26 | Dolby Laboratories Licensing Corporation | Parameter set for picture segmentation |
US8914534B2 (en) | 2011-01-05 | 2014-12-16 | Sonic Ip, Inc. | Systems and methods for adaptive bitrate streaming of media stored in matroska container files using hypertext transfer protocol |
US9635383B2 (en) * | 2011-01-07 | 2017-04-25 | Texas Instruments Incorporated | Method, system and computer program product for computing a motion vector |
KR101824241B1 (en) * | 2011-01-11 | 2018-03-14 | 에스케이 텔레콤주식회사 | Intra Additional Information Encoding/Decoding Apparatus and Method |
WO2012096164A1 (en) * | 2011-01-12 | 2012-07-19 | パナソニック株式会社 | Image encoding method, image decoding method, image encoding device, and image decoding device |
JP5478740B2 (en) * | 2011-01-12 | 2014-04-23 | 三菱電機株式会社 | Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, and moving picture decoding method |
BR122019025406B1 (en) * | 2011-01-13 | 2023-03-21 | Canon Kabushiki Kaisha | IMAGE CODING APPARATUS, IMAGE CODING METHOD, IMAGE DECODING APPARATUS, IMAGE DECODING METHOD AND STORAGE MEDIA |
JP6056122B2 (en) * | 2011-01-24 | 2017-01-11 | ソニー株式会社 | Image encoding apparatus, image decoding apparatus, method and program thereof |
US9380319B2 (en) | 2011-02-04 | 2016-06-28 | Google Technology Holdings LLC | Implicit transform unit representation |
US8688074B2 (en) | 2011-02-28 | 2014-04-01 | Moisixell Networks Ltd. | Service classification of web traffic |
JP5982734B2 (en) * | 2011-03-11 | 2016-08-31 | ソニー株式会社 | Image processing apparatus and method |
JP5842357B2 (en) * | 2011-03-25 | 2016-01-13 | 富士ゼロックス株式会社 | Image processing apparatus and image processing program |
US9042458B2 (en) | 2011-04-01 | 2015-05-26 | Microsoft Technology Licensing, Llc | Multi-threaded implementations of deblock filtering |
US8780971B1 (en) | 2011-04-07 | 2014-07-15 | Google, Inc. | System and method of encoding using selectable loop filters |
US8780996B2 (en) | 2011-04-07 | 2014-07-15 | Google, Inc. | System and method for encoding and decoding video data |
US8781004B1 (en) | 2011-04-07 | 2014-07-15 | Google Inc. | System and method for encoding video using variable loop filter |
HUE043181T2 (en) * | 2011-04-21 | 2019-08-28 | Hfi Innovation Inc | Method and apparatus for improved in-loop filtering |
US9058223B2 (en) * | 2011-04-22 | 2015-06-16 | Microsoft Technology Licensing Llc | Parallel entropy encoding on GPU |
US20130322523A1 (en) | 2011-05-10 | 2013-12-05 | Mediatek Inc. | Method and apparatus for reduction of in-loop filter buffer |
PL3879834T3 (en) * | 2011-05-31 | 2024-07-29 | Jvckenwood Corporation | Moving image encoding device, moving image encoding method and moving image encoding program, as well as moving image decoding device, moving image decoding method and moving image decoding program |
KR102649023B1 (en) | 2011-06-15 | 2024-03-18 | 파나소닉 인텔렉츄얼 프로퍼티 코포레이션 오브 아메리카 | Decoding method and device, and encoding method and device |
KR20140035408A (en) | 2011-06-17 | 2014-03-21 | 파나소닉 주식회사 | Video decoding device and video decoding method |
MY165357A (en) | 2011-06-23 | 2018-03-21 | Sun Patent Trust | Image decoding method and apparatus based on a signal type of the control parameter of the current block |
USRE47366E1 (en) | 2011-06-23 | 2019-04-23 | Sun Patent Trust | Image decoding method and apparatus based on a signal type of the control parameter of the current block |
US9049462B2 (en) | 2011-06-24 | 2015-06-02 | Panasonic Intellectual Property Corporation Of America | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
KR102067683B1 (en) | 2011-06-24 | 2020-01-17 | 선 페이턴트 트러스트 | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device |
CA2842646C (en) * | 2011-06-27 | 2018-09-04 | Panasonic Corporation | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
MY165469A (en) | 2011-06-28 | 2018-03-23 | Sun Patent Trust | Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus |
WO2013001767A1 (en) | 2011-06-29 | 2013-01-03 | パナソニック株式会社 | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device |
KR102060619B1 (en) | 2011-06-30 | 2019-12-30 | 선 페이턴트 트러스트 | Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device |
KR101955374B1 (en) * | 2011-06-30 | 2019-05-31 | 에스케이 텔레콤주식회사 | Method and Apparatus for Image Encoding/Decoding By Fast Coding Unit Mode Decision |
EP2728869B1 (en) | 2011-06-30 | 2021-11-10 | Sun Patent Trust | Image decoding method |
US10536701B2 (en) | 2011-07-01 | 2020-01-14 | Qualcomm Incorporated | Video coding using adaptive motion vector resolution |
EP2733941B1 (en) | 2011-07-11 | 2020-02-19 | Sun Patent Trust | Image decoding method, image decoding apparatus |
US8767824B2 (en) | 2011-07-11 | 2014-07-01 | Sharp Kabushiki Kaisha | Video decoder parallelization for tiles |
GB2493755B (en) * | 2011-08-17 | 2016-10-19 | Canon Kk | Method and device for encoding a sequence of images and method and device for decoding a sequence of images |
US9467708B2 (en) | 2011-08-30 | 2016-10-11 | Sonic Ip, Inc. | Selection of resolutions for seamless resolution switching of multimedia content |
KR102020764B1 (en) | 2011-08-30 | 2019-09-11 | 디브이엑스, 엘엘씨 | Systems and methods for encoding and streaming video encoded using a plurality of maximum bitrate levels |
MX2014002529A (en) * | 2011-09-09 | 2014-05-13 | Panasonic Corp | Low complex deblocking filter decisions. |
US8885706B2 (en) | 2011-09-16 | 2014-11-11 | Google Inc. | Apparatus and methodology for a video codec system with noise reduction capability |
RU2646308C1 (en) * | 2011-10-17 | 2018-03-02 | Кт Корпорейшен | Method of video signal decoding |
CN107360421B (en) | 2011-10-17 | 2020-03-06 | 株式会社Kt | Method for decoding video signal using decoding apparatus |
US8891630B2 (en) * | 2011-10-24 | 2014-11-18 | Blackberry Limited | Significance map encoding and decoding using partition set based context assignment |
US20140307790A1 (en) * | 2011-11-02 | 2014-10-16 | Nec Corporation | Video encoding device, video decoding device, video encoding method, video decoding method, and program |
KR20130050149A (en) * | 2011-11-07 | 2013-05-15 | 오수미 | Method for generating prediction block in inter prediction mode |
TWI580264B (en) * | 2011-11-10 | 2017-04-21 | Sony Corp | Image processing apparatus and method |
RU2628130C2 (en) | 2011-12-28 | 2017-08-15 | Шарп Кабусики Кайся | Arithmetic decoding device, image decoding device and arithmetic coding device |
MY195620A (en) * | 2012-01-17 | 2023-02-02 | Infobridge Pte Ltd | Method Of Applying Edge Offset |
CN104170385B (en) * | 2012-02-06 | 2019-02-19 | 诺基亚技术有限公司 | Method and apparatus for coding |
US9013760B1 (en) | 2012-02-15 | 2015-04-21 | Marvell International Ltd. | Method and apparatus for using data compression techniques to increase a speed at which documents are scanned through a scanning device |
CN102595164A (en) * | 2012-02-27 | 2012-07-18 | 中兴通讯股份有限公司 | Method, device and system for sending video image |
US9131073B1 (en) | 2012-03-02 | 2015-09-08 | Google Inc. | Motion estimation aided noise reduction |
EP2642755B1 (en) * | 2012-03-20 | 2018-01-03 | Dolby Laboratories Licensing Corporation | Complexity scalable multilayer video coding |
US9432666B2 (en) * | 2012-03-29 | 2016-08-30 | Intel Corporation | CAVLC decoder with multi-symbol run before parallel decode |
GB2502047B (en) * | 2012-04-04 | 2019-06-05 | Snell Advanced Media Ltd | Video sequence processing |
US9124872B2 (en) | 2012-04-16 | 2015-09-01 | Qualcomm Incorporated | Coefficient groups and coefficient coding for coefficient scans |
GB2501535A (en) | 2012-04-26 | 2013-10-30 | Sony Corp | Chrominance Processing in High Efficiency Video Codecs |
CN104350753B (en) * | 2012-06-01 | 2019-07-09 | 威勒斯媒体国际有限公司 | Arithmetic decoding device, picture decoding apparatus, arithmetic coding device and picture coding device |
GB2503875B (en) * | 2012-06-29 | 2015-06-10 | Canon Kk | Method and device for encoding or decoding an image |
HUE039986T2 (en) * | 2012-07-02 | 2019-02-28 | Samsung Electronics Co Ltd | METHOD FOR ENTROPY DECODING of a VIDEO |
US9344729B1 (en) | 2012-07-11 | 2016-05-17 | Google Inc. | Selective prediction signal filtering |
CN103634606B (en) * | 2012-08-21 | 2015-04-08 | 腾讯科技(深圳)有限公司 | Video encoding method and apparatus |
CN113518228B (en) * | 2012-09-28 | 2024-06-11 | 交互数字麦迪逊专利控股公司 | Method for video encoding, method for video decoding, and apparatus therefor |
EP2887663B1 (en) | 2012-09-29 | 2017-02-22 | Huawei Technologies Co., Ltd. | Method, apparatus and system for encoding and decoding video |
US20140092992A1 (en) | 2012-09-30 | 2014-04-03 | Microsoft Corporation | Supplemental enhancement information including confidence level and mixed content information |
US9979960B2 (en) * | 2012-10-01 | 2018-05-22 | Microsoft Technology Licensing, Llc | Frame packing and unpacking between frames of chroma sampling formats with different chroma resolutions |
WO2014053099A1 (en) * | 2012-10-03 | 2014-04-10 | Mediatek Inc. | Method and apparatus for motion information inheritance in three-dimensional video coding |
CN103841425B (en) * | 2012-10-08 | 2017-04-05 | 华为技术有限公司 | For the method for the motion vector list foundation of motion-vector prediction, device |
CN102883163B (en) | 2012-10-08 | 2014-05-28 | 华为技术有限公司 | Method and device for building motion vector lists for prediction of motion vectors |
CN102946504B (en) * | 2012-11-22 | 2015-02-18 | 四川虹微技术有限公司 | Self-adaptive moving detection method based on edge detection |
US9560361B2 (en) * | 2012-12-05 | 2017-01-31 | Vixs Systems Inc. | Adaptive single-field/dual-field video encoding |
US9191457B2 (en) | 2012-12-31 | 2015-11-17 | Sonic Ip, Inc. | Systems, methods, and media for controlling delivery of content |
US9008363B1 (en) | 2013-01-02 | 2015-04-14 | Google Inc. | System and method for computing optical flow |
EP2946553B1 (en) * | 2013-01-16 | 2019-01-02 | BlackBerry Limited | Transform coefficient coding for context-adaptive binary entropy coding of video |
US9219915B1 (en) * | 2013-01-17 | 2015-12-22 | Google Inc. | Selection of transform size in video coding |
CN103051857B (en) * | 2013-01-25 | 2015-07-15 | 西安电子科技大学 | Motion compensation-based 1/4 pixel precision video image deinterlacing method |
US9967559B1 (en) | 2013-02-11 | 2018-05-08 | Google Llc | Motion vector dependent spatial transformation in video coding |
US9544597B1 (en) | 2013-02-11 | 2017-01-10 | Google Inc. | Hybrid transform in video encoding and decoding |
WO2014146079A1 (en) | 2013-03-15 | 2014-09-18 | Zenkich Raymond | System and method for non-uniform video coding |
US9749627B2 (en) | 2013-04-08 | 2017-08-29 | Microsoft Technology Licensing, Llc | Control data for motion-constrained tile set |
US9674530B1 (en) | 2013-04-30 | 2017-06-06 | Google Inc. | Hybrid transforms in video coding |
JP6003803B2 (en) * | 2013-05-22 | 2016-10-05 | 株式会社Jvcケンウッド | Moving picture coding apparatus, moving picture coding method, and moving picture coding program |
WO2014190468A1 (en) | 2013-05-27 | 2014-12-04 | Microsoft Corporation | Video encoder for images |
BR112015030508B1 (en) * | 2013-06-12 | 2023-11-07 | Mitsubishi Electric Corporation | IMAGE CODING AND IMAGE DECODING DEVICES AND METHODS |
US9813737B2 (en) | 2013-09-19 | 2017-11-07 | Blackberry Limited | Transposing a block of transform coefficients, based upon an intra-prediction mode |
US9215464B2 (en) | 2013-09-19 | 2015-12-15 | Blackberry Limited | Coding position data for the last non-zero transform coefficient in a coefficient group |
FR3011429A1 (en) * | 2013-09-27 | 2015-04-03 | Orange | VIDEO CODING AND DECODING BY HERITAGE OF A FIELD OF MOTION VECTORS |
US9473778B2 (en) | 2013-09-27 | 2016-10-18 | Apple Inc. | Skip thresholding in pipelined video encoders |
CA2925183C (en) | 2013-10-14 | 2020-03-10 | Microsoft Technology Licensing, Llc | Features of base color index map mode for video and image coding and decoding |
CN105765974B (en) | 2013-10-14 | 2019-07-02 | 微软技术许可有限责任公司 | Feature for the intra block of video and image coding and decoding duplication prediction mode |
WO2015054813A1 (en) | 2013-10-14 | 2015-04-23 | Microsoft Technology Licensing, Llc | Encoder-side options for intra block copy prediction mode for video and image coding |
US9330171B1 (en) * | 2013-10-17 | 2016-05-03 | Google Inc. | Video annotation using deep network architectures |
JP6396452B2 (en) | 2013-10-21 | 2018-09-26 | ドルビー・インターナショナル・アーベー | Audio encoder and decoder |
US10397607B2 (en) * | 2013-11-01 | 2019-08-27 | Qualcomm Incorporated | Color residual prediction for video coding |
US10390034B2 (en) | 2014-01-03 | 2019-08-20 | Microsoft Technology Licensing, Llc | Innovations in block vector prediction and estimation of reconstructed sample values within an overlap area |
WO2015100726A1 (en) | 2014-01-03 | 2015-07-09 | Microsoft Corporation | Block vector prediction in video and image coding/decoding |
US9826232B2 (en) * | 2014-01-08 | 2017-11-21 | Qualcomm Incorporated | Support of non-HEVC base layer in HEVC multi-layer extensions |
US11284103B2 (en) | 2014-01-17 | 2022-03-22 | Microsoft Technology Licensing, Llc | Intra block copy prediction with asymmetric partitions and encoder-side search patterns, search ranges and approaches to partitioning |
US10542274B2 (en) | 2014-02-21 | 2020-01-21 | Microsoft Technology Licensing, Llc | Dictionary encoding and decoding of screen content |
EP3114842A4 (en) | 2014-03-04 | 2017-02-22 | Microsoft Technology Licensing, LLC | Block flipping and skip mode in intra block copy prediction |
KR102324004B1 (en) | 2014-03-14 | 2021-11-09 | 브이아이디 스케일, 인크. | Palette coding for screen content coding |
US10136140B2 (en) | 2014-03-17 | 2018-11-20 | Microsoft Technology Licensing, Llc | Encoder-side decisions for screen content encoding |
US9877048B2 (en) * | 2014-06-09 | 2018-01-23 | Qualcomm Incorporated | Entropy coding techniques for display stream compression (DSC) |
KR20230130178A (en) | 2014-06-19 | 2023-09-11 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | Unified intra block copy and inter prediction modes |
US9807410B2 (en) | 2014-07-02 | 2017-10-31 | Apple Inc. | Late-stage mode conversions in pipelined video encoders |
US10102613B2 (en) | 2014-09-25 | 2018-10-16 | Google Llc | Frequency-domain denoising |
KR102330740B1 (en) | 2014-09-30 | 2021-11-23 | 마이크로소프트 테크놀로지 라이센싱, 엘엘씨 | Rules for intra-picture prediction modes when wavefront parallel processing is enabled |
US9591330B2 (en) | 2014-10-28 | 2017-03-07 | Sony Corporation | Image processing system with binary adaptive Golomb coding and method of operation thereof |
US10063889B2 (en) | 2014-10-28 | 2018-08-28 | Sony Corporation | Image processing system with conditional coding and method of operation thereof |
US10356410B2 (en) | 2014-10-28 | 2019-07-16 | Sony Corporation | Image processing system with joint encoding and method of operation thereof |
US9674554B2 (en) | 2014-10-28 | 2017-06-06 | Sony Corporation | Image processing system with coding mode and method of operation thereof |
US9294782B1 (en) | 2014-10-28 | 2016-03-22 | Sony Corporation | Image processing system with artifact reduction mechanism and method of operation thereof |
US9357232B2 (en) | 2014-10-28 | 2016-05-31 | Sony Corporation | Image processing system with binary decomposition and method of operation thereof |
US9854201B2 (en) | 2015-01-16 | 2017-12-26 | Microsoft Technology Licensing, Llc | Dynamically updating quality to higher chroma sampling rate |
US9749646B2 (en) | 2015-01-16 | 2017-08-29 | Microsoft Technology Licensing, Llc | Encoding/decoding of high chroma resolution details |
US9591325B2 (en) | 2015-01-27 | 2017-03-07 | Microsoft Technology Licensing, Llc | Special case handling for merged chroma blocks in intra block copy prediction mode |
EP3254463A4 (en) | 2015-02-06 | 2018-02-21 | Microsoft Technology Licensing, LLC | Skipping evaluation stages during media encoding |
WO2016133504A1 (en) * | 2015-02-18 | 2016-08-25 | Hewlett Packard Enterprise Development Lp | Continuous viewing media |
US11330284B2 (en) | 2015-03-27 | 2022-05-10 | Qualcomm Incorporated | Deriving motion information for sub-blocks in video coding |
WO2016197314A1 (en) | 2015-06-09 | 2016-12-15 | Microsoft Technology Licensing, Llc | Robust encoding/decoding of escape-coded pixels in palette mode |
US10038917B2 (en) | 2015-06-12 | 2018-07-31 | Microsoft Technology Licensing, Llc | Search strategies for intra-picture prediction modes |
US10136132B2 (en) * | 2015-07-21 | 2018-11-20 | Microsoft Technology Licensing, Llc | Adaptive skip or zero block detection combined with transform size decision |
US9769499B2 (en) | 2015-08-11 | 2017-09-19 | Google Inc. | Super-transform video coding |
US10277905B2 (en) | 2015-09-14 | 2019-04-30 | Google Llc | Transform selection for non-baseband signal coding |
US9807423B1 (en) | 2015-11-24 | 2017-10-31 | Google Inc. | Hybrid transform scheme for video coding |
US10756755B2 (en) * | 2016-05-10 | 2020-08-25 | Immersion Networks, Inc. | Adaptive audio codec system, method and article |
US10368080B2 (en) | 2016-10-21 | 2019-07-30 | Microsoft Technology Licensing, Llc | Selective upsampling or refresh of chroma sample values |
US10235763B2 (en) | 2016-12-01 | 2019-03-19 | Google Llc | Determining optical flow |
EP3349451A1 (en) | 2017-01-11 | 2018-07-18 | Thomson Licensing | Method and apparatus for selecting a coding mode used for encoding/decoding a residual block |
CA3059740A1 (en) * | 2017-04-21 | 2018-10-25 | Zenimax Media Inc. | Systems and methods for game-generated motion vectors |
EP3649782A4 (en) * | 2017-07-05 | 2021-04-14 | Telefonaktiebolaget LM Ericsson (PUBL) | Decoding a block of video samples |
US10986349B2 (en) | 2017-12-29 | 2021-04-20 | Microsoft Technology Licensing, Llc | Constraints on locations of reference blocks for intra block copy prediction |
US11012715B2 (en) | 2018-02-08 | 2021-05-18 | Qualcomm Incorporated | Intra block copy for video coding |
US10735025B2 (en) * | 2018-03-02 | 2020-08-04 | Microsoft Technology Licensing, Llc | Use of data prefixes to increase compression ratios |
CN116320411A (en) * | 2018-03-29 | 2023-06-23 | 日本放送协会 | Image encoding device, image decoding device, and program |
CN110324627B (en) * | 2018-03-30 | 2022-04-05 | 杭州海康威视数字技术股份有限公司 | Chroma intra-frame prediction method and device |
US10469869B1 (en) * | 2018-06-01 | 2019-11-05 | Tencent America LLC | Method and apparatus for video coding |
WO2019235849A1 (en) * | 2018-06-06 | 2019-12-12 | 엘지전자 주식회사 | Method for processing overlay media in 360 video system, and device therefor |
JP7096373B2 (en) * | 2018-06-07 | 2022-07-05 | 北京字節跳動網絡技術有限公司 | Partial cost calculation |
US11025946B2 (en) * | 2018-06-14 | 2021-06-01 | Tencent America LLC | Method and apparatus for video coding |
TWI719519B (en) | 2018-07-02 | 2021-02-21 | 大陸商北京字節跳動網絡技術有限公司 | Block size restrictions for dmvr |
EP3843403A4 (en) * | 2018-08-24 | 2022-06-08 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding, and method and apparatus for image decoding |
US11477476B2 (en) * | 2018-10-04 | 2022-10-18 | Qualcomm Incorporated | Affine restrictions for the worst-case bandwidth reduction in video coding |
US11140403B2 (en) * | 2018-12-20 | 2021-10-05 | Tencent America LLC | Identifying tile from network abstraction unit header |
CN113597760B (en) * | 2019-01-02 | 2024-08-16 | 北京字节跳动网络技术有限公司 | Video processing method |
US11019359B2 (en) | 2019-01-15 | 2021-05-25 | Tencent America LLC | Chroma deblock filters for intra picture block compensation |
US11051035B2 (en) * | 2019-02-08 | 2021-06-29 | Qualcomm Incorporated | Processing of illegal motion vectors for intra block copy mode in video coding |
US10687062B1 (en) * | 2019-02-22 | 2020-06-16 | Google Llc | Compression across multiple images |
US11632563B2 (en) | 2019-02-22 | 2023-04-18 | Qualcomm Incorporated | Motion vector derivation in video coding |
CN110175185B (en) * | 2019-04-17 | 2023-04-07 | 上海天数智芯半导体有限公司 | Self-adaptive lossless compression method based on time sequence data distribution characteristics |
US11122297B2 (en) | 2019-05-03 | 2021-09-14 | Google Llc | Using border-aligned block functions for image compression |
WO2021003447A1 (en) * | 2019-07-03 | 2021-01-07 | Futurewei Technologies, Inc. | Types of reference pictures in reference picture lists |
EP4000267A4 (en) | 2019-08-23 | 2023-02-22 | Beijing Bytedance Network Technology Co., Ltd. | Clipping in reference picture resampling |
US11380343B2 (en) | 2019-09-12 | 2022-07-05 | Immersion Networks, Inc. | Systems and methods for processing high frequency audio signal |
WO2021078178A1 (en) | 2019-10-23 | 2021-04-29 | Beijing Bytedance Network Technology Co., Ltd. | Calculation for multiple coding tools |
EP4035356A4 (en) | 2019-10-23 | 2022-11-30 | Beijing Bytedance Network Technology Co., Ltd. | Signaling for reference picture resampling |
US11418792B2 (en) * | 2020-03-27 | 2022-08-16 | Tencent America LLC | Estimating attributes for the classification of adaptive loop filtering based on projection-slice theorem |
WO2022182651A1 (en) * | 2021-02-25 | 2022-09-01 | Qualcomm Incorporated | Machine learning based flow determination for video coding |
US12003734B2 (en) | 2021-02-25 | 2024-06-04 | Qualcomm Incorporated | Machine learning based flow determination for video coding |
US12015801B2 (en) * | 2021-09-13 | 2024-06-18 | Apple Inc. | Systems and methods for streaming extensions for video encoding |
CN115348456B (en) * | 2022-08-11 | 2023-06-06 | 上海久尺网络科技有限公司 | Video image processing method, device, equipment and storage medium |
Citations (93)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4454546A (en) * | 1980-03-13 | 1984-06-12 | Fuji Photo Film Co., Ltd. | Band compression device for shaded image |
US4691329A (en) * | 1985-07-02 | 1987-09-01 | Matsushita Electric Industrial Co., Ltd. | Block encoder |
US4796087A (en) * | 1986-05-29 | 1989-01-03 | Jacques Guichard | Process for coding by transformation for the transmission of picture signals |
US4800432A (en) * | 1986-10-24 | 1989-01-24 | The Grass Valley Group, Inc. | Video Difference key generator |
US4849812A (en) * | 1987-03-10 | 1989-07-18 | U.S. Philips Corporation | Television system in which digitized picture signals subjected to a transform coding are transmitted from an encoding station to a decoding station |
US4999705A (en) * | 1990-05-03 | 1991-03-12 | At&T Bell Laboratories | Three dimensional motion compensated video coding |
US5021879A (en) * | 1987-05-06 | 1991-06-04 | U.S. Philips Corporation | System for transmitting video pictures |
US5089887A (en) * | 1988-09-23 | 1992-02-18 | Thomson Consumer Electronics | Method and device for the estimation of motion in a sequence of moving images |
US5091782A (en) * | 1990-04-09 | 1992-02-25 | General Instrument Corporation | Apparatus and method for adaptively compressing successive blocks of digital video |
US5111292A (en) * | 1991-02-27 | 1992-05-05 | General Electric Company | Priority selection apparatus as for a video signal processor |
US5117287A (en) * | 1990-03-02 | 1992-05-26 | Kokusai Denshin Denwa Co., Ltd. | Hybrid coding system for moving image |
US5155594A (en) * | 1990-05-11 | 1992-10-13 | Picturetel Corporation | Hierarchical encoding method and apparatus employing background references for efficiently communicating image sequences |
US5157490A (en) * | 1990-03-14 | 1992-10-20 | Kabushiki Kaisha Toshiba | Television signal scanning line converting apparatus |
US5193004A (en) * | 1990-12-03 | 1993-03-09 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
US5227878A (en) * | 1991-11-15 | 1993-07-13 | At&T Bell Laboratories | Adaptive coding and decoding of frames and fields of video |
US5287420A (en) * | 1992-04-08 | 1994-02-15 | Supermac Technology | Method for image compression on a personal computer |
US5317397A (en) * | 1991-05-31 | 1994-05-31 | Kabushiki Kaisha Toshiba | Predictive coding using spatial-temporal filtering and plural motion vectors |
US5319463A (en) * | 1991-03-19 | 1994-06-07 | Nec Corporation | Arrangement and method of preprocessing binary picture data prior to run-length encoding |
US5343248A (en) * | 1991-07-26 | 1994-08-30 | Sony Corporation | Moving image compressing and recording medium and moving image data encoder and decoder |
US5347308A (en) * | 1991-10-11 | 1994-09-13 | Matsushita Electric Industrial Co., Ltd. | Adaptive coding method for interlaced scan digital video sequences |
US5379351A (en) * | 1992-02-19 | 1995-01-03 | Integrated Information Technology, Inc. | Video compression/decompression processing and processors |
US5400075A (en) * | 1993-01-13 | 1995-03-21 | Thomson Consumer Electronics, Inc. | Adaptive variable length encoder/decoder |
US5412435A (en) * | 1992-07-03 | 1995-05-02 | Kokusai Denshin Denwa Kabushiki Kaisha | Interlaced video signal motion compensation prediction system |
US5422676A (en) * | 1991-04-25 | 1995-06-06 | Deutsche Thomson-Brandt Gmbh | System for coding an image representative signal |
US5426464A (en) * | 1993-01-14 | 1995-06-20 | Rca Thomson Licensing Corporation | Field elimination apparatus for a video compression/decompression system |
US5448297A (en) * | 1993-06-16 | 1995-09-05 | Intel Corporation | Method and system for encoding images using skip blocks |
US5453799A (en) * | 1993-11-05 | 1995-09-26 | Comsat Corporation | Unified motion estimation architecture |
US5461421A (en) * | 1992-11-30 | 1995-10-24 | Samsung Electronics Co., Ltd. | Encoding and decoding method and apparatus thereof |
US5510840A (en) * | 1991-12-27 | 1996-04-23 | Sony Corporation | Methods and devices for encoding and decoding frame signals and recording medium therefor |
US5517327A (en) * | 1993-06-30 | 1996-05-14 | Minolta Camera Kabushiki Kaisha | Data processor for image data using orthogonal transformation |
US5539466A (en) * | 1991-07-30 | 1996-07-23 | Sony Corporation | Efficient coding apparatus for picture signal and decoding apparatus therefor |
US5544286A (en) * | 1993-01-29 | 1996-08-06 | Microsoft Corporation | Digital video data compression technique |
US5546129A (en) * | 1995-04-29 | 1996-08-13 | Daewoo Electronics Co., Ltd. | Method for encoding a video signal using feature point based motion estimation |
US5550541A (en) * | 1994-04-01 | 1996-08-27 | Dolby Laboratories Licensing Corporation | Compact source coding tables for encoder/decoder system |
US5552832A (en) * | 1994-10-26 | 1996-09-03 | Intel Corporation | Run-length encoding sequence for video signals |
US5594504A (en) * | 1994-07-06 | 1997-01-14 | Lucent Technologies Inc. | Predictive video coding using a motion vector updating routine |
US5598215A (en) * | 1993-05-21 | 1997-01-28 | Nippon Telegraph And Telephone Corporation | Moving image encoder and decoder using contour extraction |
US5598216A (en) * | 1995-03-20 | 1997-01-28 | Daewoo Electronics Co., Ltd | Method and apparatus for encoding/decoding a video signal |
US5617144A (en) * | 1995-03-20 | 1997-04-01 | Daewoo Electronics Co., Ltd. | Image processing system using pixel-by-pixel motion estimation and frame decimation |
US5619281A (en) * | 1994-12-30 | 1997-04-08 | Daewoo Electronics Co., Ltd | Method and apparatus for detecting motion vectors in a frame decimating video encoder |
US5648819A (en) * | 1994-03-30 | 1997-07-15 | U.S. Philips Corporation | Motion estimation using half-pixel refinement of frame and field vectors |
US5666461A (en) * | 1992-06-29 | 1997-09-09 | Sony Corporation | High efficiency encoding and decoding of picture signals and recording medium containing same |
US5668608A (en) * | 1995-07-26 | 1997-09-16 | Daewoo Electronics Co., Ltd. | Motion vector estimation method and apparatus for use in an image signal encoding system |
US5745789A (en) * | 1992-01-23 | 1998-04-28 | Hitachi, Ltd. | Disc system for holding data in a form of a plurality of data blocks dispersed in a plurality of disc units connected by a common data bus |
US5764814A (en) * | 1996-03-22 | 1998-06-09 | Microsoft Corporation | Representation and encoding of general arbitrary shapes |
US5767898A (en) * | 1994-06-23 | 1998-06-16 | Sanyo Electric Co., Ltd. | Three-dimensional image coding by merger of left and right images |
US5784175A (en) * | 1995-10-05 | 1998-07-21 | Microsoft Corporation | Pixel block correlation process |
US5796438A (en) * | 1994-07-05 | 1998-08-18 | Sony Corporation | Methods and apparatus for interpolating picture information |
US5946043A (en) * | 1997-12-31 | 1999-08-31 | Microsoft Corporation | Video coding using adaptive coding of block parameters for coded/uncoded blocks |
US5946042A (en) * | 1993-03-24 | 1999-08-31 | Sony Corporation | Macroblock coding including difference between motion vectors |
US5974184A (en) * | 1997-03-07 | 1999-10-26 | General Instrument Corporation | Intra-macroblock DC and AC coefficient prediction for interlaced digital video |
US5973743A (en) * | 1997-12-02 | 1999-10-26 | Daewoo Electronics Co., Ltd. | Mode coding method and apparatus for use in an interlaced shape coder |
US6026195A (en) * | 1997-03-07 | 2000-02-15 | General Instrument Corporation | Motion estimation and compensation of video object planes for interlaced digital video |
US6035070A (en) * | 1996-09-24 | 2000-03-07 | Moon; Joo-Hee | Encoder/decoder for coding/decoding gray scale shape data and method thereof |
US6052150A (en) * | 1995-03-10 | 2000-04-18 | Kabushiki Kaisha Toshiba | Video data signal including a code string having a plurality of components which are arranged in a descending order of importance |
US6094225A (en) * | 1997-12-02 | 2000-07-25 | Daewoo Electronics, Co., Ltd. | Method and apparatus for encoding mode signals for use in a binary shape coder |
US6122318A (en) * | 1996-10-31 | 2000-09-19 | Kabushiki Kaisha Toshiba | Video encoding apparatus and video decoding apparatus |
US6192081B1 (en) * | 1995-10-26 | 2001-02-20 | Sarnoff Corporation | Apparatus and method for selecting a coding mode in a block-based coding system |
US6208761B1 (en) * | 1995-07-11 | 2001-03-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Video coding |
US6215905B1 (en) * | 1996-09-30 | 2001-04-10 | Hyundai Electronics Ind. Co., Ltd. | Video predictive coding apparatus and method |
US6236806B1 (en) * | 1996-11-06 | 2001-05-22 | Sony Corporation | Field detection apparatus and method, image coding apparatus and method, recording medium, recording method and transmission method |
US6243418B1 (en) * | 1998-03-30 | 2001-06-05 | Daewoo Electronics Co., Ltd. | Method and apparatus for encoding a motion vector of a binary shape signal |
US6271885B2 (en) * | 1998-06-24 | 2001-08-07 | Victor Company Of Japan, Ltd. | Apparatus and method of motion-compensated predictive coding |
US6275531B1 (en) * | 1998-07-23 | 2001-08-14 | Optivision, Inc. | Scalable video coding method and apparatus |
US6275528B1 (en) * | 1997-12-12 | 2001-08-14 | Sony Corporation | Picture encoding method and picture encoding apparatus |
US6292585B1 (en) * | 1995-09-29 | 2001-09-18 | Kabushiki Kaisha Toshiba | Video coding and video decoding apparatus |
US6351563B1 (en) * | 1997-07-09 | 2002-02-26 | Hyundai Electronics Ind. Co., Ltd. | Apparatus and method for coding/decoding scalable shape binary image using mode of lower and current layers |
US6408029B1 (en) * | 1998-04-02 | 2002-06-18 | Intel Corporation | Method and apparatus for simplifying real-time data encoding |
US20020110196A1 (en) * | 1998-06-29 | 2002-08-15 | Xerox Corporation | HVQ compression for image boundaries |
US20020114388A1 (en) * | 2000-04-14 | 2002-08-22 | Mamoru Ueda | Decoder and decoding method, recorded medium, and program |
US6563953B2 (en) * | 1998-11-30 | 2003-05-13 | Microsoft Corporation | Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock |
US20030099292A1 (en) * | 2001-11-27 | 2003-05-29 | Limin Wang | Macroblock level adaptive frame/field coding for digital video content |
US6573905B1 (en) * | 1999-11-09 | 2003-06-03 | Broadcom Corporation | Video and graphics system with parallel processing of graphics windows |
US20030113026A1 (en) * | 2001-12-17 | 2003-06-19 | Microsoft Corporation | Skip macroblock coding |
US20030138150A1 (en) * | 2001-12-17 | 2003-07-24 | Microsoft Corporation | Spatial extrapolation of pixel values in intraframe video coding and decoding |
US20030142748A1 (en) * | 2002-01-25 | 2003-07-31 | Alexandros Tourapis | Video coding methods and apparatuses |
US20030156643A1 (en) * | 2002-02-19 | 2003-08-21 | Samsung Electronics Co., Ltd. | Method and apparatus to encode a moving image with fixed computational complexity |
US6614442B1 (en) * | 2000-06-26 | 2003-09-02 | S3 Graphics Co., Ltd. | Macroblock tiling format for motion compensation |
US20030179826A1 (en) * | 2002-03-18 | 2003-09-25 | Lg Electronics Inc. | B picture mode determining method and apparatus in video coding system |
US6683987B1 (en) * | 1999-03-25 | 2004-01-27 | Victor Company Of Japan, Ltd. | Method and apparatus for altering the picture updating frequency of a compressed video data stream |
US6704360B2 (en) * | 1997-03-27 | 2004-03-09 | At&T Corp. | Bidirectionally predicted pictures or video object planes for efficient and flexible video coding |
US20040136457A1 (en) * | 2002-10-23 | 2004-07-15 | John Funnell | Method and system for supercompression of compressed digital video |
US6765963B2 (en) * | 2001-01-03 | 2004-07-20 | Nokia Corporation | Video decoder architecture and method for using same |
US20040141651A1 (en) * | 2002-10-25 | 2004-07-22 | Junichi Hara | Modifying wavelet division level before transmitting data stream |
US6778606B2 (en) * | 2000-02-21 | 2004-08-17 | Hyundai Curitel, Inc. | Selective motion estimation method and apparatus |
US6785331B1 (en) * | 1997-02-14 | 2004-08-31 | Nippon Telegraph And Telephone Corporation | Predictive encoding and decoding methods of video data |
US20040179601A1 (en) * | 2001-11-16 | 2004-09-16 | Mitsuru Kobayashi | Image encoding method, image decoding method, image encoder, image decode, program, computer data signal, and image transmission system |
US6795584B2 (en) * | 2002-10-03 | 2004-09-21 | Nokia Corporation | Context-based adaptive variable length coding for adaptive block transforms |
US6862402B2 (en) * | 1997-12-20 | 2005-03-01 | Samsung Electronics Co., Ltd. | Digital recording and playback apparatus having MPEG CODEC and method therefor |
US20050053141A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Joint coding and decoding of a reference field selection and differential motion vector information |
US20050135484A1 (en) * | 2003-12-18 | 2005-06-23 | Daeyang Foundation (Sejong University) | Method of encoding mode determination, method of motion estimation and encoding apparatus |
US20050152457A1 (en) * | 2003-09-07 | 2005-07-14 | Microsoft Corporation | Signaling and repeat padding for skip frames |
US6920175B2 (en) * | 2001-01-03 | 2005-07-19 | Nokia Corporation | Video coding architecture and methods for using same |
Family Cites Families (542)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US519451A (en) * | | 1894-05-08 | | Paper box
US4420771A (en) | 1981-02-09 | 1983-12-13 | Bell Telephone Laboratories, Incorporated | Technique for encoding multi-level signals |
JPS60158786A (en) | 1984-01-30 | 1985-08-20 | Kokusai Denshin Denwa Co Ltd <Kdd> | Detection system of picture moving quantity |
JPS61205086A (en) | 1985-03-08 | 1986-09-11 | Mitsubishi Electric Corp | Picture encoding and decoding device |
US4754492A (en) | 1985-06-03 | 1988-06-28 | Picturetel Corporation | Method and system for adapting a digitized signal processing system for block processing with minimal blocking artifacts |
US4661849A (en) | 1985-06-03 | 1987-04-28 | Pictel Corporation | Method and apparatus for providing motion estimation signals for communicating image sequences |
JPH0669145B2 (en) | 1985-08-05 | 1994-08-31 | 日本電信電話株式会社 | Predictive coding |
US4661853A (en) | 1985-11-01 | 1987-04-28 | Rca Corporation | Interfield image motion detector for video signals |
ATE108587T1 (en) | 1986-09-13 | 1994-07-15 | Philips Nv | METHOD AND CIRCUIT ARRANGEMENT FOR BIT RATE REDUCTION. |
US4730348A (en) * | 1986-09-19 | 1988-03-08 | Adaptive Computer Technologies | Adaptive data compression system |
US4698672A (en) | 1986-10-27 | 1987-10-06 | Compression Labs, Inc. | Coding system for reducing redundancy |
US4706260A (en) | 1986-11-07 | 1987-11-10 | Rca Corporation | DPCM system with rate-of-fill control of buffer occupancy |
DE3704777C1 (en) | 1987-02-16 | 1988-04-07 | Ant Nachrichtentech | Method of transmitting and playing back television picture sequences |
DE3854171T2 (en) | 1987-06-09 | 1995-12-21 | Sony Corp | Evaluation of motion vectors in television pictures. |
EP0294958B1 (en) | 1987-06-09 | 1995-08-23 | Sony Corporation | Motion compensated interpolation of digital television images |
US4968135A (en) | 1987-08-17 | 1990-11-06 | Digital Equipment Corporation | System for producing pixel image data from CCITT encoded pixel data |
JP2577745B2 (en) | 1987-08-19 | 1997-02-05 | 三菱電機株式会社 | Receiver |
US4792981A (en) | 1987-09-21 | 1988-12-20 | Am International, Inc. | Manipulation of run-length encoded images |
US4813056A (en) | 1987-12-08 | 1989-03-14 | General Electric Company | Modified statistical coding of digital signals |
EP0339589A3 (en) | 1988-04-28 | 1992-01-02 | Sharp Kabushiki Kaisha | Orthogonal transform coding system for image data |
DE68925011T2 (en) | 1988-09-16 | 1996-06-27 | Philips Electronics Nv | High definition television system. |
US5043919A (en) | 1988-12-19 | 1991-08-27 | International Business Machines Corporation | Method of and system for updating a display unit |
US4985768A (en) | 1989-01-20 | 1991-01-15 | Victor Company Of Japan, Ltd. | Inter-frame predictive encoding system with encoded and transmitted prediction error |
US5297236A (en) | 1989-01-27 | 1994-03-22 | Dolby Laboratories Licensing Corporation | Low computational-complexity digital filter bank for encoder, decoder, and encoder/decoder |
CA2000156C (en) | 1989-02-14 | 1995-05-02 | Kohtaro Asai | Picture signal encoding and decoding apparatus |
DE3912605B4 (en) | 1989-04-17 | 2008-09-04 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Digital coding method |
JPH07109990B2 (en) | 1989-04-27 | 1995-11-22 | 日本ビクター株式会社 | Adaptive interframe predictive coding method and decoding method |
JPH0832047B2 (en) * | 1989-04-28 | 1996-03-27 | 日本ビクター株式会社 | Predictive coding device |
USRE35910E (en) | 1989-05-11 | 1998-09-29 | Matsushita Electric Industrial Co., Ltd. | Moving image signal encoding apparatus and decoding apparatus |
FR2646978B1 (en) | 1989-05-11 | 1991-08-23 | France Etat | METHOD AND INSTALLATION FOR ENCODING SOUND SIGNALS |
AU612543B2 (en) | 1989-05-11 | 1991-07-11 | Panasonic Corporation | Moving image signal encoding apparatus and decoding apparatus |
JP2562499B2 (en) | 1989-05-29 | 1996-12-11 | 日本電信電話株式会社 | High-efficiency image encoding device and its decoding device |
US5128758A (en) | 1989-06-02 | 1992-07-07 | North American Philips Corporation | Method and apparatus for digitally processing a high definition television augmentation signal |
US5179442A (en) | 1989-06-02 | 1993-01-12 | North American Philips Corporation | Method and apparatus for digitally processing a high definition television augmentation signal |
JPH0832039B2 (en) | 1989-08-19 | 1996-03-27 | 日本ビクター株式会社 | Variable length coding method and apparatus thereof |
JPH03117991A (en) | 1989-09-29 | 1991-05-20 | Victor Co Of Japan Ltd | Encoding and decoder device for movement vector |
US5144426A (en) | 1989-10-13 | 1992-09-01 | Matsushita Electric Industrial Co., Ltd. | Motion compensated prediction interframe coding system |
EP0713340B1 (en) | 1989-10-14 | 2001-08-22 | Sony Corporation | Video signal coding/decoding method and apparatus |
US5040217A (en) | 1989-10-18 | 1991-08-13 | At&T Bell Laboratories | Perceptual coding of audio signals |
JP2787599B2 (en) | 1989-11-06 | 1998-08-20 | 富士通株式会社 | Image signal coding control method |
NL9000424A (en) | 1990-02-22 | 1991-09-16 | Philips Nv | TRANSFER SYSTEM FOR DIGITALIZED TELEVISION IMAGES. |
US5270832A (en) | 1990-03-14 | 1993-12-14 | C-Cube Microsystems | System for compression and decompression of video data using discrete cosine transform and coding techniques |
US5103306A (en) | 1990-03-28 | 1992-04-07 | Transitions Research Corporation | Digital image compression employing a resolution gradient |
JP2969782B2 (en) | 1990-05-09 | 1999-11-02 | ソニー株式会社 | Encoded data editing method and encoded data editing device |
CA2043670C (en) | 1990-06-05 | 2002-01-08 | Wiebe De Haan | Method of transmitting a picture sequence of a full-motion video scene, and a medium for said transmission |
GB9012538D0 (en) | 1990-06-05 | 1990-07-25 | Philips Nv | Coding of video signals |
US5068724A (en) | 1990-06-15 | 1991-11-26 | General Instrument Corporation | Adaptive motion compensation for digital television |
US5146324A (en) | 1990-07-31 | 1992-09-08 | Ampex Corporation | Data compression using a feedforward quantization estimator |
JP3037383B2 (en) | 1990-09-03 | 2000-04-24 | キヤノン株式会社 | Image processing system and method |
DE69131257T2 (en) | 1990-10-31 | 1999-09-23 | Victor Company Of Japan, Ltd. | Method for compression of moving image signals using the interlaced method |
JPH04199981A (en) | 1990-11-29 | 1992-07-21 | Nec Corp | Prompt processing type one-dimensional coder |
JP3191935B2 (en) | 1990-11-30 | 2001-07-23 | 株式会社日立製作所 | Image encoding method, image encoding device, image decoding method |
JP3303869B2 (en) | 1990-11-30 | 2002-07-22 | 株式会社日立製作所 | Image encoding method, image encoding device, image decoding method |
USRE35093E (en) | 1990-12-03 | 1995-11-21 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
US5266941A (en) | 1991-02-15 | 1993-11-30 | Silicon Graphics, Inc. | Apparatus and method for controlling storage of display information in a computer system |
GB2253318B (en) * | 1991-02-27 | 1994-07-20 | Stc Plc | Image processing |
JPH04297179A (en) | 1991-03-15 | 1992-10-21 | Mitsubishi Electric Corp | Data communication system |
JP3119888B2 (en) | 1991-04-18 | 2000-12-25 | 松下電器産業株式会社 | Signal processing method and recording / reproducing device |
US5212549A (en) | 1991-04-29 | 1993-05-18 | Rca Thomson Licensing Corporation | Error concealment apparatus for a compressed video signal processing system |
JPH04334188A (en) | 1991-05-08 | 1992-11-20 | Nec Corp | Coding system for moving picture signal |
EP0514663A3 (en) | 1991-05-24 | 1993-07-14 | International Business Machines Corporation | An apparatus and method for motion video encoding employing an adaptive quantizer |
EP0540714B1 (en) * | 1991-05-24 | 1998-01-07 | British Broadcasting Corporation | Video image processing |
US5467136A (en) | 1991-05-31 | 1995-11-14 | Kabushiki Kaisha Toshiba | Video decoder for determining a motion vector from a scaled vector and a difference vector |
US5784107A (en) | 1991-06-17 | 1998-07-21 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for picture coding and method and apparatus for picture decoding |
JP2684941B2 (en) | 1992-11-25 | 1997-12-03 | 松下電器産業株式会社 | Image encoding method and image encoding device |
JP2699703B2 (en) | 1991-07-31 | 1998-01-19 | 松下電器産業株式会社 | Motion compensation prediction method and image signal encoding method using the same |
US5428396A (en) | 1991-08-03 | 1995-06-27 | Sony Corporation | Variable length coding/decoding method for motion vectors |
JPH0541862A (en) | 1991-08-03 | 1993-02-19 | Sony Corp | Variable length coding system for motion vector |
JP3001688B2 (en) | 1991-08-05 | 2000-01-24 | 株式会社大一商会 | Pachinko ball circulation controller |
US5291486A (en) | 1991-08-19 | 1994-03-01 | Sony Corporation | Data multiplexing apparatus and multiplexed data demultiplexing apparatus |
ATE148607T1 (en) * | 1991-09-30 | 1997-02-15 | Philips Electronics Nv | MOTION VECTOR ESTIMATION, MOTION IMAGE CODING AND STORAGE |
JP2586260B2 (en) | 1991-10-22 | 1997-02-26 | 三菱電機株式会社 | Adaptive blocking image coding device |
JP3134424B2 (en) | 1991-10-31 | 2001-02-13 | ソニー株式会社 | Variable length encoding method and apparatus |
JP2962012B2 (en) | 1991-11-08 | 1999-10-12 | 日本ビクター株式会社 | Video encoding device and decoding device therefor |
JPH05137131A (en) | 1991-11-13 | 1993-06-01 | Sony Corp | Inter-frame motion predicting method |
JP2549479B2 (en) | 1991-12-06 | 1996-10-30 | 日本電信電話株式会社 | Motion compensation inter-frame band division coding processing method |
DE69228983T2 (en) | 1991-12-18 | 1999-10-28 | Koninklijke Philips Electronics N.V., Eindhoven | System for transmitting and / or storing signals from textured images |
JP2524044B2 (en) | 1992-01-22 | 1996-08-14 | 松下電器産業株式会社 | Image coding method and image coding apparatus |
US6160503A (en) | 1992-02-19 | 2000-12-12 | 8×8, Inc. | Deblocking filter for encoder/decoder arrangement and method with divergence reduction |
US6441842B1 (en) | 1992-02-19 | 2002-08-27 | 8×8, Inc. | Video compression/decompression processing and processors |
US5594813A (en) | 1992-02-19 | 1997-01-14 | Integrated Information Technology, Inc. | Programmable architecture and methods for motion estimation |
JP2882161B2 (en) | 1992-02-20 | 1999-04-12 | 松下電器産業株式会社 | Video signal recording / reproducing device, video signal transmitting device, video signal encoding device, and video signal reproducing device |
US5227788A (en) | 1992-03-02 | 1993-07-13 | At&T Bell Laboratories | Method and apparatus for two-component signal compression |
US5293229A (en) | 1992-03-27 | 1994-03-08 | Matsushita Electric Corporation Of America | Apparatus and method for processing groups of fields in a video data compression system |
US5367385A (en) | 1992-05-07 | 1994-11-22 | Picturetel Corporation | Method and apparatus for processing block coded image data to reduce boundary artifacts between adjacent image blocks |
KR0148130B1 (en) | 1992-05-18 | 1998-09-15 | 강진구 | Apparatus and method for encoding/decoding due to restrain blocking artifact |
KR0166716B1 (en) | 1992-06-18 | 1999-03-20 | 강진구 | Encoding and decoding method and apparatus by using block dpcm |
JP3443867B2 (en) | 1992-06-26 | 2003-09-08 | ソニー株式会社 | Image signal encoding / decoding method and image signal recording medium |
JP2899478B2 (en) | 1992-06-25 | 1999-06-02 | 松下電器産業株式会社 | Image encoding method and image encoding device |
TW241416B (en) * | 1992-06-29 | 1995-02-21 | Sony Co Ltd | |
US6226327B1 (en) | 1992-06-29 | 2001-05-01 | Sony Corporation | Video coding method and apparatus which select between frame-based and field-based predictive modes |
JPH0621830A (en) * | 1992-06-30 | 1994-01-28 | Sony Corp | Two-dimension huffman coding method |
JP3201079B2 (en) | 1992-07-03 | 2001-08-20 | ケイディーディーアイ株式会社 | Motion compensated prediction method, coding method and apparatus for interlaced video signal |
KR950010913B1 (en) | 1992-07-23 | 1995-09-25 | 삼성전자주식회사 | Vlc & vld system |
JPH06153180A (en) | 1992-09-16 | 1994-05-31 | Fujitsu Ltd | Picture data coding method and device |
US5461420A (en) | 1992-09-18 | 1995-10-24 | Sony Corporation | Apparatus for coding and decoding a digital video signal derived from a motion picture film source |
JP3348310B2 (en) | 1992-09-28 | 2002-11-20 | ソニー株式会社 | Moving picture coding method and moving picture coding apparatus |
JPH06113287A (en) | 1992-09-30 | 1994-04-22 | Matsushita Electric Ind Co Ltd | Picture coder and picture decoder |
AU668762B2 (en) | 1992-10-07 | 1996-05-16 | Nec Personal Computers, Ltd | Synchronous compression and reconstruction system |
US5982437A (en) | 1992-10-26 | 1999-11-09 | Sony Corporation | Coding method and system, and decoding method and system |
JP2959916B2 (en) * | 1992-10-28 | 1999-10-06 | 松下電器産業株式会社 | Versatile escape run level coder for digital video coder |
US5365552A (en) | 1992-11-16 | 1994-11-15 | Intel Corporation | Buffer fullness indicator |
JP3358835B2 (en) | 1992-12-14 | 2002-12-24 | ソニー株式会社 | Image coding method and apparatus |
US5467134A (en) | 1992-12-22 | 1995-11-14 | Microsoft Corporation | Method and system for compressing video data |
US5535305A (en) | 1992-12-31 | 1996-07-09 | Apple Computer, Inc. | Sub-partitioned vector quantization of probability density functions |
TW224553B (en) | 1993-03-01 | 1994-06-01 | Sony Co Ltd | Method and apparatus for inverse discrete consine transform and coding/decoding of moving picture |
US5592228A (en) | 1993-03-04 | 1997-01-07 | Kabushiki Kaisha Toshiba | Video encoder using global motion estimation and polygonal patch motion estimation |
US5376968A (en) | 1993-03-11 | 1994-12-27 | General Instrument Corporation | Adaptive compression of digital video data using different modes such as PCM and DPCM |
JP3500634B2 (en) | 1993-04-08 | 2004-02-23 | ソニー株式会社 | Motion vector detection device |
US5815646A (en) * | 1993-04-13 | 1998-09-29 | C-Cube Microsystems | Decompression processor for video applications |
US5442400A (en) | 1993-04-29 | 1995-08-15 | Rca Thomson Licensing Corporation | Error concealment apparatus for MPEG-like video data |
ES2165389T3 (en) | 1993-05-31 | 2002-03-16 | Sony Corp | APPARATUS AND METHOD FOR CODING OR DECODING SIGNS, AND RECORDING MEDIA. |
JPH06343172A (en) | 1993-06-01 | 1994-12-13 | Matsushita Electric Ind Co Ltd | Motion vector detection method and motion vector encoding method |
JPH0730896A (en) | 1993-06-25 | 1995-01-31 | Matsushita Electric Ind Co Ltd | Moving vector coding and decoding method |
US5477272A (en) | 1993-07-22 | 1995-12-19 | Gte Laboratories Incorporated | Variable-block size multi-resolution motion estimation scheme for pyramid coding |
US5719958A (en) * | 1993-11-30 | 1998-02-17 | Polaroid Corporation | System and method for image edge detection using discrete cosine transforms |
JP3050736B2 (en) | 1993-12-13 | 2000-06-12 | シャープ株式会社 | Video encoding device |
KR0155784B1 (en) | 1993-12-16 | 1998-12-15 | 김광호 | Adaptable variable coder/decoder method of image data |
US5473384A (en) | 1993-12-16 | 1995-12-05 | At&T Corp. | Method of and system for enhancing distorted graphical information |
US5465118A (en) | 1993-12-17 | 1995-11-07 | International Business Machines Corporation | Luminance transition coding method for software motion video compression/decompression |
KR100205503B1 (en) | 1993-12-29 | 1999-07-01 | 니시무로 타이죠 | Video data encoding/decoding apparatus |
US5566208A (en) | 1994-03-17 | 1996-10-15 | Philips Electronics North America Corp. | Encoder buffer having an effective size which varies automatically with the channel bit-rate |
TW283289B (en) | 1994-04-11 | 1996-08-11 | Gen Instrument Corp | |
US5541852A (en) | 1994-04-14 | 1996-07-30 | Motorola, Inc. | Device, method and system for variable bit-rate packet video communications |
US5650829A (en) | 1994-04-21 | 1997-07-22 | Sanyo Electric Co., Ltd. | Motion video coding systems with motion vector detection |
US5933451A (en) | 1994-04-22 | 1999-08-03 | Thomson Consumer Electronics, Inc. | Complexity determining apparatus |
US5504591A (en) | 1994-04-25 | 1996-04-02 | Microsoft Corporation | System and method for compressing graphic images |
US5457495A (en) | 1994-05-25 | 1995-10-10 | At&T Ipm Corp. | Adaptive video coder with dynamic bit allocation |
JP3237089B2 (en) | 1994-07-28 | 2001-12-10 | 株式会社日立製作所 | Acoustic signal encoding / decoding method |
KR0126871B1 (en) | 1994-07-30 | 1997-12-29 | 심상철 | HIGH SPEED BMA FOR Bi-DIRECTIONAL MOVING VECTOR ESTIMATION |
US5684538A (en) | 1994-08-18 | 1997-11-04 | Hitachi, Ltd. | System and method for performing video coding/decoding using motion compensation |
US6356663B1 (en) * | 1994-09-09 | 2002-03-12 | Intel Corporation | Processing image signals using spatial decomposition |
US6141446A (en) * | 1994-09-21 | 2000-10-31 | Ricoh Company, Ltd. | Compression and decompression system with reversible wavelets and lossy reconstruction |
US5568167A (en) | 1994-09-23 | 1996-10-22 | C-Cube Microsystems, Inc. | System for providing antialiased video overlays |
FR2725577B1 (en) * | 1994-10-10 | 1996-11-29 | Thomson Consumer Electronics | CODING OR DECODING METHOD OF MOTION VECTORS AND CODING OR DECODING DEVICE USING THE SAME |
US5550847A (en) | 1994-10-11 | 1996-08-27 | Motorola, Inc. | Device and method of signal loss recovery for realtime and/or interactive communications |
JP3474005B2 (en) | 1994-10-13 | 2003-12-08 | 沖電気工業株式会社 | Video coding method and video decoding method |
US5757982A (en) * | 1994-10-18 | 1998-05-26 | Hewlett-Packard Company | Quadrantal scaling of dot matrix data |
US5590064A (en) | 1994-10-26 | 1996-12-31 | Intel Corporation | Post-filtering for decoded video signals |
EP0710033A3 (en) | 1994-10-28 | 1999-06-09 | Matsushita Electric Industrial Co., Ltd. | MPEG video decoder having a high bandwidth memory |
US5623311A (en) | 1994-10-28 | 1997-04-22 | Matsushita Electric Corporation Of America | MPEG video decoder having a high bandwidth memory |
CZ294349B6 (en) | 1994-11-04 | 2004-12-15 | Koninklijke Philips Electronics N.V. | Apparatus for encoding and decoding wideband digital information signal, method for encoding and decoding, coded signal and record carrier |
KR0141875B1 (en) | 1994-11-30 | 1998-06-15 | 배순훈 | Run length decoder |
US5737455A (en) * | 1994-12-12 | 1998-04-07 | Xerox Corporation | Antialiasing with grey masking techniques |
KR100254402B1 (en) * | 1994-12-19 | 2000-05-01 | 전주범 | A method and a device for encoding picture signals by run-length coding |
JP2951861B2 (en) | 1994-12-28 | 1999-09-20 | シャープ株式会社 | Image encoding device and image decoding device |
JP3371590B2 (en) | 1994-12-28 | 2003-01-27 | ソニー株式会社 | High efficiency coding method and high efficiency decoding method |
US5691771A (en) | 1994-12-29 | 1997-11-25 | Sony Corporation | Processing of redundant fields in a moving picture to achieve synchronized system operation |
EP0721287A1 (en) | 1995-01-09 | 1996-07-10 | Daewoo Electronics Co., Ltd | Method and apparatus for encoding a video signal |
JP3351645B2 (en) * | 1995-01-31 | 2002-12-03 | 松下電器産業株式会社 | Video coding method |
JP3674072B2 (en) | 1995-02-16 | 2005-07-20 | 富士ゼロックス株式会社 | Facsimile communication method and facsimile apparatus |
US5574449A (en) | 1995-02-24 | 1996-11-12 | Intel Corporation | Signal processing with hybrid variable-length and entropy encoding
US6104754A (en) * | 1995-03-15 | 2000-08-15 | Kabushiki Kaisha Toshiba | Moving picture coding and/or decoding systems, and variable-length coding and/or decoding system |
US5991451A (en) | 1995-03-23 | 1999-11-23 | Intel Corporation | Variable-length encoding using code swapping |
KR100209410B1 (en) * | 1995-03-28 | 1999-07-15 | 전주범 | Apparatus for encoding an image signal |
US5884269A (en) | 1995-04-17 | 1999-03-16 | Merging Technologies | Lossless compression/decompression of digital audio data |
JP3803122B2 (en) | 1995-05-02 | 2006-08-02 | 松下電器産業株式会社 | Image memory device and motion vector detection circuit |
US5654771A (en) | 1995-05-23 | 1997-08-05 | The University Of Rochester | Video compression system using a dense motion vector field and a triangular patch mesh overlay model |
US5982459A (en) | 1995-05-31 | 1999-11-09 | 8×8, Inc. | Integrated multimedia communications processor and codec |
GB2301972B (en) | 1995-06-06 | 1999-10-20 | Sony Uk Ltd | Video compression |
GB2301971B (en) | 1995-06-06 | 1999-10-06 | Sony Uk Ltd | Video compression |
US5835149A (en) | 1995-06-06 | 1998-11-10 | Intel Corporation | Bit allocation in a coded video sequence |
US5731850A (en) | 1995-06-07 | 1998-03-24 | Maturi; Gregory V. | Hybrid hierarchial/full-search MPEG encoder motion estimation |
US5864711A (en) | 1995-07-05 | 1999-01-26 | Microsoft Corporation | System for determining more accurate translation between first and second translator, and providing translated data to second computer if first translator is more accurate |
US5687097A (en) | 1995-07-13 | 1997-11-11 | Zapex Technologies, Inc. | Method and apparatus for efficiently determining a frame motion vector in a video encoder |
FR2737931B1 (en) | 1995-08-17 | 1998-10-02 | Siemens Ag | METHOD FOR PROCESSING DECODED IMAGE BLOCKS OF A BLOCK-BASED IMAGE CODING METHOD |
US5825830A (en) | 1995-08-17 | 1998-10-20 | Kopf; David A. | Method and apparatus for the compression of audio, video or other data |
GB2305797B (en) * | 1995-09-27 | 2000-03-01 | Sony Uk Ltd | Video data compression |
US5883678A (en) | 1995-09-29 | 1999-03-16 | Kabushiki Kaisha Toshiba | Video coding and video decoding apparatus for reducing an alpha-map signal at a controlled reduction ratio |
US5819215A (en) | 1995-10-13 | 1998-10-06 | Dobson; Kurt | Method and apparatus for wavelet based data compression having adaptive bit rate control for compression of digital audio or other sensory data |
US5929940A (en) * | 1995-10-25 | 1999-07-27 | U.S. Philips Corporation | Method and device for estimating motion between images, system for encoding segmented images |
US6571019B1 (en) * | 1995-10-26 | 2003-05-27 | Hyundai Curitel, Inc | Apparatus and method of encoding/decoding a coded block pattern |
KR100211917B1 (en) | 1995-10-26 | 1999-08-02 | 김영환 | Object shape information coding method |
US6064776A (en) | 1995-10-27 | 2000-05-16 | Kabushiki Kaisha Toshiba | Image processing apparatus |
US5991463A (en) | 1995-11-08 | 1999-11-23 | Genesis Microchip Inc. | Source data interpolation method and apparatus |
US5889891A (en) | 1995-11-21 | 1999-03-30 | Regents Of The University Of California | Universal codebook vector quantization with constrained storage |
US5956674A (en) | 1995-12-01 | 1999-09-21 | Digital Theater Systems, Inc. | Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels |
US5850294A (en) | 1995-12-18 | 1998-12-15 | Lucent Technologies Inc. | Method and apparatus for post-processing images |
US5963673A (en) | 1995-12-20 | 1999-10-05 | Sanyo Electric Co., Ltd. | Method and apparatus for adaptively selecting a coding mode for video encoding |
JP2798035B2 (en) | 1996-01-17 | 1998-09-17 | 日本電気株式会社 | Motion compensated inter-frame prediction method using adaptive motion vector interpolation |
US5787203A (en) | 1996-01-19 | 1998-07-28 | Microsoft Corporation | Method and system for filtering compressed video images |
US5692063A (en) | 1996-01-19 | 1997-11-25 | Microsoft Corporation | Method and system for unrestricted motion estimation for video |
US5799113A (en) | 1996-01-19 | 1998-08-25 | Microsoft Corporation | Method for expanding contracted video images |
US5831559A (en) * | 1996-01-24 | 1998-11-03 | Intel Corporation | Encoding/decoding video signals using multiple run-val mapping tables |
US5737019A (en) * | 1996-01-29 | 1998-04-07 | Matsushita Electric Corporation Of America | Method and apparatus for changing resolution by direct DCT mapping |
US6957350B1 (en) | 1996-01-30 | 2005-10-18 | Dolby Laboratories Licensing Corporation | Encrypted and watermarked temporal and resolution layering in advanced television |
JP3130464B2 (en) | 1996-02-02 | 2001-01-31 | ローム株式会社 | Data decryption device |
EP0793389B1 (en) | 1996-02-27 | 2001-08-16 | STMicroelectronics S.r.l. | Memory reduction in the MPEG-2 main profile main level decoder |
US5682152A (en) | 1996-03-19 | 1997-10-28 | Johnson-Grace Company | Data compression using adaptive bit allocation and hybrid lossless entropy encoding |
US5982438A (en) | 1996-03-22 | 1999-11-09 | Microsoft Corporation | Overlapped motion compensation for object coding |
JPH09261266A (en) | 1996-03-26 | 1997-10-03 | Matsushita Electric Ind Co Ltd | Service information communication system |
US6571016B1 (en) * | 1997-05-05 | 2003-05-27 | Microsoft Corporation | Intra compression of pixel blocks using predicted mean |
US6215910B1 (en) | 1996-03-28 | 2001-04-10 | Microsoft Corporation | Table-based compression with embedded coding |
US5805739A (en) | 1996-04-02 | 1998-09-08 | Picturetel Corporation | Lapped orthogonal vector quantization |
EP1835761A3 (en) | 1996-05-28 | 2007-10-03 | Matsushita Electric Industrial Co., Ltd. | Decoding apparatus and method with intra prediction and alternative block scanning |
JPH1070717A (en) * | 1996-06-19 | 1998-03-10 | Matsushita Electric Ind Co Ltd | Image encoding device and image decoding device |
US5847776A (en) | 1996-06-24 | 1998-12-08 | Vdonet Corporation Ltd. | Method for entropy constrained motion estimation and coding of motion vectors with increased search range |
US5771318A (en) | 1996-06-27 | 1998-06-23 | Siemens Corporate Research, Inc. | Adaptive edge-preserving smoothing filter |
JP3628810B2 (en) * | 1996-06-28 | 2005-03-16 | 三菱電機株式会社 | Image encoding device |
US6389177B1 (en) | 1996-07-02 | 2002-05-14 | Apple Computer, Inc. | System and method using edge processing to remove blocking artifacts from decompressed images |
DE19628293C1 (en) | 1996-07-12 | 1997-12-11 | Fraunhofer Ges Forschung | Encoding and decoding audio signals using intensity stereo and prediction |
DE19628292B4 (en) | 1996-07-12 | 2007-08-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Method for coding and decoding stereo audio spectral values |
US5796875A (en) * | 1996-08-13 | 1998-08-18 | Sony Electronics, Inc. | Selective de-blocking filter for DCT compressed images |
US5828426A (en) | 1996-08-20 | 1998-10-27 | Samsung Electronics Co., Ltd. | Apparatus for decoding variable length coded data of both MPEG-1 and MPEG-2 standards |
JPH10191360A (en) | 1996-08-22 | 1998-07-21 | Cirrus Logic Inc | Method for obtaining motion estimate vector and method for compressing moving image data by using the motion estimate vector |
JP2907146B2 (en) * | 1996-09-11 | 1999-06-21 | 日本電気株式会社 | Method and apparatus for searching for specific part of memory LSI |
DE19637522A1 (en) | 1996-09-13 | 1998-03-19 | Bosch Gmbh Robert | Process for reducing data in video signals |
US6233017B1 (en) | 1996-09-16 | 2001-05-15 | Microsoft Corporation | Multimedia compression system with adaptive block sizes |
US5835618A (en) | 1996-09-27 | 1998-11-10 | Siemens Corporate Research, Inc. | Uniform and non-uniform dynamic range remapping for optimum image display |
US5952943A (en) | 1996-10-11 | 1999-09-14 | Intel Corporation | Encoding image data for decode rate control |
US5748789A (en) | 1996-10-31 | 1998-05-05 | Microsoft Corporation | Transparent block skipping in object-based video coding systems |
EP1100274B1 (en) | 1996-11-06 | 2006-04-12 | Matsushita Electric Industrial Co., Ltd. | Image decoding method using variable length codes |
ID20168A (en) | 1996-11-07 | 1998-10-15 | Philips Electronics Nv | DATA PROCESSING AT A BIT FLOW SIGNAL |
DE69723959T2 (en) | 1996-11-11 | 2004-06-17 | Koninklijke Philips Electronics N.V. | DATA COMPRESSION AND DECOMPRESSION BY RICE ENCODERS / DECODERS |
US6130963A (en) | 1996-11-22 | 2000-10-10 | C-Cube Semiconductor Ii, Inc. | Memory efficient decoding of video frame chroma |
US5905542A (en) | 1996-12-04 | 1999-05-18 | C-Cube Microsystems, Inc. | Simplified dual prime video motion estimation |
DE69735437T2 (en) * | 1996-12-12 | 2006-08-10 | Matsushita Electric Industrial Co., Ltd., Kadoma | IMAGE CODERS AND IMAGE DECODERS |
US6377628B1 (en) * | 1996-12-18 | 2002-04-23 | Thomson Licensing S.A. | System for maintaining datastream continuity in the presence of disrupted source data |
US6167090A (en) | 1996-12-26 | 2000-12-26 | Nippon Steel Corporation | Motion vector detecting apparatus |
US6038256A (en) * | 1996-12-31 | 2000-03-14 | C-Cube Microsystems Inc. | Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics |
US6141053A (en) | 1997-01-03 | 2000-10-31 | Saukkonen; Jukka I. | Method of optimizing bandwidth for transmitting compressed video data streams |
JP3484310B2 (en) | 1997-01-17 | 2004-01-06 | 松下電器産業株式会社 | Variable length encoder |
NL1005084C2 (en) | 1997-01-24 | 1998-07-27 | Oce Tech Bv | A method for performing an image editing operation on run-length encoded bitmaps. |
EP0786907A3 (en) | 1997-01-24 | 2001-06-13 | Texas Instruments Incorporated | Video encoder |
ES2162411T3 (en) | 1997-01-30 | 2001-12-16 | Matsushita Electric Ind Co Ltd | DIGITAL IMAGE FILLING PROCEDURE, IMAGE PROCESSING DEVICE AND DATA RECORDING MEDIA. |
US6038536A (en) | 1997-01-31 | 2000-03-14 | Texas Instruments Incorporated | Data compression using bit change statistics |
US6188799B1 (en) * | 1997-02-07 | 2001-02-13 | Matsushita Electric Industrial Co., Ltd. | Method and apparatus for removing noise in still and moving pictures |
US6272175B1 (en) | 1997-02-13 | 2001-08-07 | Conexant Systems, Inc. | Video signal coding systems and processes using adaptive quantization |
US6201927B1 (en) * | 1997-02-18 | 2001-03-13 | Mary Lafuze Comer | Trick play reproduction of MPEG encoded signals |
US5991447A (en) | 1997-03-07 | 1999-11-23 | General Instrument Corporation | Prediction and coding of bi-directionally predicted video object planes for interlaced digital video |
JP3095140B2 (en) | 1997-03-10 | 2000-10-03 | 三星電子株式会社 | One-dimensional signal adaptive filter and filtering method for reducing blocking effect |
FI106071B (en) * | 1997-03-13 | 2000-11-15 | Nokia Mobile Phones Ltd | Adaptive filter |
FI114248B (en) | 1997-03-14 | 2004-09-15 | Nokia Corp | Method and apparatus for audio coding and audio decoding |
US6728775B1 (en) | 1997-03-17 | 2004-04-27 | Microsoft Corporation | Multiple multicasting of multimedia streams |
US5844613A (en) | 1997-03-17 | 1998-12-01 | Microsoft Corporation | Global motion estimator for motion video signal encoding |
US6263065B1 (en) | 1997-03-18 | 2001-07-17 | At&T Corp. | Method and apparatus for simulating central queue for distributing call in distributed arrangement of automatic call distributors |
JP3217987B2 (en) | 1997-03-31 | 2001-10-15 | 松下電器産業株式会社 | Video signal decoding method and encoding method |
WO1998044479A1 (en) | 1997-03-31 | 1998-10-08 | Matsushita Electric Industrial Co., Ltd. | Dynamic image display method and device therefor |
US5973755A (en) | 1997-04-04 | 1999-10-26 | Microsoft Corporation | Video encoder and decoder using bilinear motion compensation and lapped orthogonal transforms |
SG65064A1 (en) | 1997-04-09 | 1999-05-25 | Matsushita Electric Ind Co Ltd | Image predictive decoding method image predictive decoding apparatus image predictive coding method image predictive coding apparatus and data storage media |
US6259810B1 (en) | 1997-04-15 | 2001-07-10 | Microsoft Corporation | Method and system of decoding compressed image data |
US5883633A (en) | 1997-04-15 | 1999-03-16 | Microsoft Corporation | Method and system of variable run length image encoding using sub-palette |
US6441813B1 (en) | 1997-05-16 | 2002-08-27 | Kabushiki Kaisha Toshiba | Computer system, and video decoder used in the system |
US6101195A (en) | 1997-05-28 | 2000-08-08 | Sarnoff Corporation | Timing correction method and apparatus |
US6580834B2 (en) | 1997-05-30 | 2003-06-17 | Competitive Technologies Of Pa, Inc. | Method and apparatus for encoding and decoding signals |
JP3164031B2 (en) * | 1997-05-30 | 2001-05-08 | 日本ビクター株式会社 | Moving image encoding / decoding device, moving image encoding / decoding method, and moving image encoded recording medium |
JP2002507339A (en) | 1997-05-30 | 2002-03-05 | サーノフ コーポレイション | Hierarchical motion estimation execution method and apparatus using nonlinear pyramid |
US6067322A (en) * | 1997-06-04 | 2000-05-23 | Microsoft Corporation | Half pixel motion estimation in motion video signal encoding |
US6057884A (en) | 1997-06-05 | 2000-05-02 | General Instrument Corporation | Temporal and spatial scaleable coding for video object planes |
AU8055798A (en) * | 1997-06-05 | 1998-12-21 | Wisconsin Alumni Research Foundation | Image compression system using block transforms and tree-type coefficient truncation |
ES2545109T3 (en) | 1997-06-09 | 2015-09-08 | Hitachi, Ltd. | Image decoding procedure |
US6574371B2 (en) | 1997-06-09 | 2003-06-03 | Hitachi, Ltd. | Image decoding method |
SE512719C2 (en) | 1997-06-10 | 2000-05-02 | Lars Gustaf Liljeryd | A method and apparatus for reducing data flow based on harmonic bandwidth expansion |
JPH1169345A (en) | 1997-06-11 | 1999-03-09 | Fujitsu Ltd | Inter-frame predictive dynamic image encoding device and decoding device, inter-frame predictive dynamic image encoding method and decoding method |
FI103003B (en) * | 1997-06-13 | 1999-03-31 | Nokia Corp | Filtering procedure, filter and mobile terminal |
GB9712651D0 (en) | 1997-06-18 | 1997-08-20 | Nds Ltd | Improvements in or relating to encoding digital signals |
US6064771A (en) | 1997-06-23 | 2000-05-16 | Real-Time Geometry Corp. | System and method for asynchronous, adaptive moving picture compression, and decompression |
DE19730129C2 (en) | 1997-07-14 | 2002-03-07 | Fraunhofer Ges Forschung | Method for signaling noise substitution when encoding an audio signal |
US6421738B1 (en) | 1997-07-15 | 2002-07-16 | Microsoft Corporation | Method and system for capturing and encoding full-screen video graphics |
JP2897763B2 (en) | 1997-07-28 | 1999-05-31 | 日本ビクター株式会社 | Motion compensation coding device, decoding device, coding method and decoding method |
KR100244291B1 (en) | 1997-07-30 | 2000-02-01 | 구본준 | Method for motion vector coding of moving picture |
KR100281099B1 (en) * | 1997-07-30 | 2001-04-02 | 구자홍 | Method for removing block phenomenon presented by cording of moving picture |
US6266091B1 (en) | 1997-07-31 | 2001-07-24 | Lsi Logic Corporation | System and method for low delay mode operation video decoding |
US6310918B1 (en) | 1997-07-31 | 2001-10-30 | Lsi Logic Corporation | System and method for motion vector extraction and computation meeting 2-frame store and letterboxing requirements |
FR2766946B1 (en) | 1997-08-04 | 2000-08-11 | Thomson Multimedia Sa | PRETREATMENT METHOD AND DEVICE FOR MOTION ESTIMATION |
US6281942B1 (en) | 1997-08-11 | 2001-08-28 | Microsoft Corporation | Spatial and temporal filtering mechanism for digital motion video signals |
KR100252342B1 (en) | 1997-08-12 | 2000-04-15 | 전주범 | Motion vector coding method and apparatus |
AR016812A1 (en) * | 1997-08-14 | 2001-08-01 | Samsung Electronics Co Ltd | METHOD FOR TRANSMITTING COMPRESSED VIDEO INFORMATION, COMPRESSION AND VIDEO RECORDING PROVISIONS AND VIDEO PLAYBACK |
US5859788A (en) | 1997-08-15 | 1999-01-12 | The Aerospace Corporation | Modulated lapped transform method |
KR100244290B1 (en) * | 1997-09-09 | 2000-02-01 | 구자홍 | Method for deblocking filtering for low bit rate video |
DE69838869T2 (en) | 1997-10-03 | 2008-12-04 | Sony Corp. | Device and method for splicing coded data streams and device and method for generating coded data streams |
KR100262500B1 (en) * | 1997-10-16 | 2000-08-01 | 이형도 | Adaptive block effect reduction decoder |
US6493385B1 (en) | 1997-10-23 | 2002-12-10 | Mitsubishi Denki Kabushiki Kaisha | Image encoding method, image encoder, image decoding method, and image decoder |
SG116400A1 (en) | 1997-10-24 | 2005-11-28 | Matsushita Electric Ind Co Ltd | A method for computational graceful degradation in an audiovisual compression system.
US6060997A (en) | 1997-10-27 | 2000-05-09 | Motorola, Inc. | Selective call device and method for providing a stream of information |
US6148033A (en) | 1997-11-20 | 2000-11-14 | Hitachi America, Ltd. | Methods and apparatus for improving picture quality in reduced resolution video decoders |
JPH11161782A (en) | 1997-11-27 | 1999-06-18 | Seiko Epson Corp | Method and device for encoding color picture, and method and device for decoding color picture |
CN1220388C (en) | 1997-12-01 | 2005-09-21 | 三星电子株式会社 | Sports vector predicting method |
US6111914A (en) | 1997-12-01 | 2000-08-29 | Conexant Systems, Inc. | Adaptive entropy coding in adaptive quantization framework for video signal coding systems and processes |
KR100523908B1 (en) * | 1997-12-12 | 2006-01-27 | 주식회사 팬택앤큐리텔 | Apparatus and method for encoding video signal for progressive scan image |
US6178205B1 (en) * | 1997-12-12 | 2001-01-23 | Vtel Corporation | Video postfiltering with motion-compensated temporal filtering and/or spatial-adaptive filtering |
US6198773B1 (en) * | 1997-12-18 | 2001-03-06 | Zoran Corporation | Video memory management for MPEG video decode and display system |
US6775840B1 (en) | 1997-12-19 | 2004-08-10 | Cisco Technology, Inc. | Method and apparatus for using a spectrum analyzer for locating ingress noise gaps |
US6339656B1 (en) | 1997-12-25 | 2002-01-15 | Matsushita Electric Industrial Co., Ltd. | Moving picture encoding decoding processing apparatus |
KR100301826B1 (en) | 1997-12-29 | 2001-10-27 | 구자홍 | Video decoder |
US6393156B1 (en) | 1998-01-07 | 2002-05-21 | Truong Q. Nguyen | Enhanced transform compatibility for standardized data compression |
US6501798B1 (en) | 1998-01-22 | 2002-12-31 | International Business Machines Corporation | Device for generating multiple quality level bit-rates in a video encoder |
JPH11275592A (en) | 1998-01-22 | 1999-10-08 | Victor Co Of Japan Ltd | Moving image code stream converter and its method |
US6122017A (en) * | 1998-01-22 | 2000-09-19 | Hewlett-Packard Company | Method for providing motion-compensated multi-field enhancement of still images from video |
WO1999041697A1 (en) | 1998-02-13 | 1999-08-19 | Quvis, Inc. | Apparatus and method for optimized compression of interlaced motion images |
KR100328417B1 (en) * | 1998-03-05 | 2002-03-16 | 마츠시타 덴끼 산교 가부시키가이샤 | Image encoding/decoding apparatus, image encoding/decoding method, and data recording medium |
US6226407B1 (en) | 1998-03-18 | 2001-05-01 | Microsoft Corporation | Method and apparatus for analyzing computer screens |
DE69801209T2 (en) | 1998-03-20 | 2001-11-08 | Stmicroelectronics S.R.L., Agrate Brianza | Hierarchical recursive motion estimator for motion picture encoders |
US7016413B2 (en) * | 1998-03-20 | 2006-03-21 | International Business Machines Corporation | Adaptively encoding a picture of contrasted complexity having normal video and noisy video portions |
US6054943A (en) | 1998-03-25 | 2000-04-25 | Lawrence; John Clifton | Multilevel digital information compression based on lawrence algorithm |
JP2002510947A (en) | 1998-04-02 | 2002-04-09 | サーノフ コーポレイション | Burst data transmission of compressed video data |
US6393061B1 (en) | 1998-05-15 | 2002-05-21 | Hughes Electronics Corporation | Method for reducing blocking artifacts in digital images |
US6115689A (en) | 1998-05-27 | 2000-09-05 | Microsoft Corporation | Scalable audio coder and decoder |
US6029126A (en) | 1998-06-30 | 2000-02-22 | Microsoft Corporation | Scalable audio coder and decoder |
US6285801B1 (en) * | 1998-05-29 | 2001-09-04 | Stmicroelectronics, Inc. | Non-linear adaptive image filter for filtering noise such as blocking artifacts |
US6073153A (en) | 1998-06-03 | 2000-06-06 | Microsoft Corporation | Fast system and method for computing modulated lapped transforms |
US6154762A (en) | 1998-06-03 | 2000-11-28 | Microsoft Corporation | Fast system and method for computing modulated lapped transforms |
JP3097665B2 (en) | 1998-06-19 | 2000-10-10 | 日本電気株式会社 | Time-lapse recorder with anomaly detection function |
WO1999066449A1 (en) * | 1998-06-19 | 1999-12-23 | Equator Technologies, Inc. | Decoding an encoded image having a first resolution directly into a decoded image having a second resolution |
JP3413720B2 (en) | 1998-06-26 | 2003-06-09 | ソニー株式会社 | Image encoding method and apparatus, and image decoding method and apparatus |
US6253165B1 (en) | 1998-06-30 | 2001-06-26 | Microsoft Corporation | System and method for modeling probability distribution functions of transform coefficients of encoded signal |
US20020027954A1 (en) * | 1998-06-30 | 2002-03-07 | Kenneth S. Singh | Method and device for gathering block statistics during inverse quantization and iscan |
US6320905B1 (en) | 1998-07-08 | 2001-11-20 | Stream Machine Company | Postprocessing system for removing blocking artifacts in block-based codecs |
US6519287B1 (en) | 1998-07-13 | 2003-02-11 | Motorola, Inc. | Method and apparatus for encoding and decoding video signals by using storage and retrieval of motion vectors |
JP4026238B2 (en) * | 1998-07-23 | 2007-12-26 | ソニー株式会社 | Image decoding apparatus and image decoding method |
AU717480B2 (en) | 1998-08-01 | 2000-03-30 | Korea Advanced Institute Of Science And Technology | Loop-filtering method for image data and apparatus therefor |
CA2246532A1 (en) | 1998-09-04 | 2000-03-04 | Northern Telecom Limited | Perceptual audio coding |
DE19840835C2 (en) | 1998-09-07 | 2003-01-09 | Fraunhofer Ges Forschung | Apparatus and method for entropy coding information words and apparatus and method for decoding entropy coded information words |
US6380985B1 (en) * | 1998-09-14 | 2002-04-30 | Webtv Networks, Inc. | Resizing and anti-flicker filtering in reduced-size video images |
TW379509B (en) | 1998-09-15 | 2000-01-11 | Acer Inc | Adaptive post-filtering of compressed video images to remove artifacts |
US6219070B1 (en) * | 1998-09-30 | 2001-04-17 | Webtv Networks, Inc. | System and method for adjusting pixel parameters by subpixel positioning |
US6420980B1 (en) | 1998-10-06 | 2002-07-16 | Matsushita Electric Industrial Co., Ltd. | Lossless compression encoding method and device, and lossless compression decoding method and device |
US6466624B1 (en) | 1998-10-28 | 2002-10-15 | Pixonics, Llc | Video decoder with bit stream based enhancements |
GB2343579A (en) | 1998-11-07 | 2000-05-10 | Ibm | Hybrid-linear-bicubic interpolation method and apparatus |
US6768774B1 (en) * | 1998-11-09 | 2004-07-27 | Broadcom Corporation | Video and graphics system with video scaling |
US6081209A (en) * | 1998-11-12 | 2000-06-27 | Hewlett-Packard Company | Search system for use in compression |
US6629318B1 (en) | 1998-11-18 | 2003-09-30 | Koninklijke Philips Electronics N.V. | Decoder buffer for streaming video receiver and method of operation |
US6236764B1 (en) * | 1998-11-30 | 2001-05-22 | Equator Technologies, Inc. | Image processing circuit and method for reducing a difference between pixel values across an image boundary |
US6983018B1 (en) | 1998-11-30 | 2006-01-03 | Microsoft Corporation | Efficient motion vector coding for video compression |
US6418166B1 (en) * | 1998-11-30 | 2002-07-09 | Microsoft Corporation | Motion estimation and block matching pattern |
US6404931B1 (en) | 1998-12-14 | 2002-06-11 | Microsoft Corporation | Code book construction for variable to variable length entropy encoding |
US6300888B1 (en) | 1998-12-14 | 2001-10-09 | Microsoft Corporation | Entropy code mode switching for frequency-domain audio coding |
US6377930B1 (en) | 1998-12-14 | 2002-04-23 | Microsoft Corporation | Variable to variable length entropy encoding |
US6233226B1 (en) | 1998-12-14 | 2001-05-15 | Verizon Laboratories Inc. | System and method for analyzing and transmitting video over a switched network |
US6223162B1 (en) | 1998-12-14 | 2001-04-24 | Microsoft Corporation | Multi-level run length coding for frequency-domain audio coding |
US6421464B1 (en) * | 1998-12-16 | 2002-07-16 | Fastvdo Llc | Fast lapped image transforms using lifting steps |
JP3580777B2 (en) | 1998-12-28 | 2004-10-27 | フラウンホーファー−ゲゼルシャフト・ツール・フェルデルング・デル・アンゲヴァンテン・フォルシュング・アインゲトラーゲネル・フェライン | Method and apparatus for encoding or decoding an audio signal or bit stream |
US6100825A (en) | 1998-12-31 | 2000-08-08 | Microsoft Corporation | Cluster-based data compression system and method |
US6496608B1 (en) | 1999-01-15 | 2002-12-17 | Picsurf, Inc. | Image data interpolation system and method |
US6529638B1 (en) * | 1999-02-01 | 2003-03-04 | Sharp Laboratories Of America, Inc. | Block boundary artifact reduction for block-based image compression |
US6671323B1 (en) | 1999-02-05 | 2003-12-30 | Sony Corporation | Encoding device, encoding method, decoding device, decoding method, coding system and coding method |
US6259741B1 (en) * | 1999-02-18 | 2001-07-10 | General Instrument Corporation | Method of architecture for converting MPEG-2 4:2:2-profile bitstreams into main-profile bitstreams |
US6473409B1 (en) | 1999-02-26 | 2002-10-29 | Microsoft Corp. | Adaptive filtering system and method for adaptively canceling echoes and reducing noise in digital signals |
US6496795B1 (en) | 1999-05-05 | 2002-12-17 | Microsoft Corporation | Modulated complex lapped transform for integrated signal enhancement and coding |
US6487574B1 (en) | 1999-02-26 | 2002-11-26 | Microsoft Corp. | System and method for producing modulated complex lapped transforms |
US6499060B1 (en) | 1999-03-12 | 2002-12-24 | Microsoft Corporation | Media coding for loss recovery with remotely predicted data units |
JP3778721B2 (en) | 1999-03-18 | 2006-05-24 | 富士通株式会社 | Video coding method and apparatus |
US6678419B1 (en) | 1999-03-26 | 2004-01-13 | Microsoft Corporation | Reordering wavelet coefficients for improved encoding |
US6477280B1 (en) | 1999-03-26 | 2002-11-05 | Microsoft Corporation | Lossless adaptive encoding of finite alphabet data |
JP2000286865A (en) | 1999-03-31 | 2000-10-13 | Toshiba Corp | Continuous media data transmission system |
KR100319557B1 (en) * | 1999-04-16 | 2002-01-09 | 윤종용 | Method Of Removing Block Boundary Noise Components In Block-Coded Images |
US6320593B1 (en) | 1999-04-20 | 2001-11-20 | Agilent Technologies, Inc. | Method of fast bi-cubic interpolation of image information |
US6519005B2 (en) | 1999-04-30 | 2003-02-11 | Koninklijke Philips Electronics N.V. | Method of concurrent multiple-mode motion estimation for digital video |
EP1092321A1 (en) | 1999-04-30 | 2001-04-18 | Koninklijke Philips Electronics N.V. | Video encoding method with selection of b-frame encoding mode |
US6370502B1 (en) | 1999-05-27 | 2002-04-09 | America Online, Inc. | Method and system for reduction of quantization-induced block-discontinuities and general purpose audio codec |
US6968008B1 (en) | 1999-07-27 | 2005-11-22 | Sharp Laboratories Of America, Inc. | Methods for motion estimation with adaptive motion accuracy |
US6831948B1 (en) | 1999-07-30 | 2004-12-14 | Koninklijke Philips Electronics N.V. | System and method for motion compensation of image planes in color sequential displays |
US6735249B1 (en) * | 1999-08-11 | 2004-05-11 | Nokia Corporation | Apparatus, and associated method, for forming a compressed motion vector field utilizing predictive motion coding |
US6748113B1 (en) * | 1999-08-25 | 2004-06-08 | Matsushita Electric Industrial Co., Ltd. | Noise detecting method, noise detector and image decoding apparatus |
JP4283950B2 (en) | 1999-10-06 | 2009-06-24 | パナソニック株式会社 | Network management system |
US6771829B1 (en) * | 1999-10-23 | 2004-08-03 | Fastvdo Llc | Method for local zerotree image coding |
KR100636110B1 (en) | 1999-10-29 | 2006-10-18 | 삼성전자주식회사 | Terminal supporting signaling for MPEG-4 transceiving
KR20010101329A (en) | 1999-10-29 | 2001-11-14 | 요트.게.아. 롤페즈 | Video encoding-method |
GB9928022D0 (en) | 1999-11-26 | 2000-01-26 | British Telecomm | Video coding and decoding
JP3694888B2 (en) * | 1999-12-03 | 2005-09-14 | ソニー株式会社 | Decoding device and method, encoding device and method, information processing device and method, and recording medium |
US6573915B1 (en) | 1999-12-08 | 2003-06-03 | International Business Machines Corporation | Efficient capture of computer screens |
US6865229B1 (en) | 1999-12-14 | 2005-03-08 | Koninklijke Philips Electronics N.V. | Method and apparatus for reducing the “blocky picture” effect in MPEG decoded images |
US6493392B1 (en) * | 1999-12-27 | 2002-12-10 | Hyundai Electronics Industries Co., Ltd. | Method for coding digital interlaced moving video |
US6567781B1 (en) | 1999-12-30 | 2003-05-20 | Quikcat.Com, Inc. | Method and apparatus for compressing audio data using a dynamical system having a multi-state dynamical rule set and associated transform basis function |
GB9930788D0 (en) | 1999-12-30 | 2000-02-16 | Koninkl Philips Electronics Nv | Method and apparatus for converting data streams |
US6499010B1 (en) | 2000-01-04 | 2002-12-24 | Agere Systems Inc. | Perceptual audio coder bit allocation scheme providing improved perceptual quality consistency |
FI117533B (en) * | 2000-01-20 | 2006-11-15 | Nokia Corp | Procedure for filtering digital video images |
JP2001218172A (en) * | 2000-01-31 | 2001-08-10 | Nec Corp | Device and method for converting frame rate in moving picture decoder, its recording medium and integrated circuit device |
JP4378824B2 (en) * | 2000-02-22 | 2009-12-09 | ソニー株式会社 | Image processing apparatus and method |
KR100619377B1 (en) | 2000-02-22 | 2006-09-08 | 주식회사 팬택앤큐리텔 | Motion estimation method and device |
US6771828B1 (en) | 2000-03-03 | 2004-08-03 | Microsoft Corporation | System and method for progressively transform coding digital data |
TW526666B (en) | 2000-03-29 | 2003-04-01 | Matsushita Electric Ind Co Ltd | Reproducing method for compression coded data and device for the same |
US7634011B2 (en) * | 2000-04-21 | 2009-12-15 | Microsoft Corporation | Application program interface (API) facilitating decoder control of accelerator resources |
CN1322759C (en) * | 2000-04-27 | 2007-06-20 | 三菱电机株式会社 | Coding apparatus and coding method |
DE10022331A1 (en) * | 2000-05-10 | 2001-11-15 | Bosch Gmbh Robert | Method for transformation coding of moving image sequences e.g. for audio-visual objects, involves block-wise assessing movement vectors between reference- and actual- image signals of image sequence |
JP3573735B2 (en) * | 2000-05-23 | 2004-10-06 | 松下電器産業株式会社 | Variable length encoding method and variable length encoding device |
JP3662171B2 (en) | 2000-06-05 | 2005-06-22 | 三菱電機株式会社 | Encoding apparatus and encoding method |
US6449312B1 (en) | 2000-06-08 | 2002-09-10 | Motorola, Inc. | Method of estimating motion in interlaced video |
US6647061B1 (en) | 2000-06-09 | 2003-11-11 | General Instrument Corporation | Video size conversion and transcoding from MPEG-2 to MPEG-4 |
US6542863B1 (en) | 2000-06-14 | 2003-04-01 | Intervideo, Inc. | Fast codebook search method for MPEG audio encoding |
JP3846771B2 (en) | 2000-06-26 | 2006-11-15 | 三菱電機株式会社 | Decoder and playback device |
KR100353851B1 (en) | 2000-07-07 | 2002-09-28 | 한국전자통신연구원 | Water ring scan apparatus and method, video coding/decoding apparatus and method using that |
WO2002007438A1 (en) | 2000-07-17 | 2002-01-24 | Trustees Of Boston University | Generalized lapped biorthogonal transform embedded inverse discrete cosine transform and low bit rate video sequence coding artifact removal |
AU2001279008A1 (en) | 2000-07-25 | 2002-02-05 | Agilevision, L.L.C. | Splicing compressed, local video segments into fixed time slots in a network feed |
GB2365647A (en) | 2000-08-04 | 2002-02-20 | Snell & Wilcox Ltd | Deriving parameters for post-processing from an encoded signal |
CN1266649C (en) | 2000-09-12 | 2006-07-26 | 皇家菲利浦电子有限公司 | Video coding method |
EP1199812A1 (en) | 2000-10-20 | 2002-04-24 | Telefonaktiebolaget Lm Ericsson | Perceptually improved encoding of acoustic signals |
US6735339B1 (en) | 2000-10-27 | 2004-05-11 | Dolby Laboratories Licensing Corporation | Multi-stage encoding of signal components that are classified according to component value |
US7454222B2 (en) | 2000-11-22 | 2008-11-18 | Dragonwave, Inc. | Apparatus and method for controlling wireless communication signals |
KR100355831B1 (en) * | 2000-12-06 | 2002-10-19 | 엘지전자 주식회사 | Motion vector coding method based on 2-demension least bits prediction |
US7227895B1 (en) | 2000-12-12 | 2007-06-05 | Sony Corporation | System and method for generating decoded digital video image data |
US6757439B2 (en) | 2000-12-15 | 2004-06-29 | International Business Machines Corporation | JPEG packed block structure |
US20020168066A1 (en) | 2001-01-22 | 2002-11-14 | Weiping Li | Video encoding and decoding techniques and apparatus |
US6766063B2 (en) * | 2001-02-02 | 2004-07-20 | Avid Technology, Inc. | Generation adaptive filtering for subsampling component video as input to a nonlinear editing system |
KR100887524B1 (en) * | 2001-02-13 | 2009-03-09 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Motion information coding and decoding method |
US6778610B2 (en) | 2001-03-02 | 2004-08-17 | Redrock Semiconductor, Ltd. | Simultaneous search for different resync-marker patterns to recover from corrupted MPEG-4 bitstreams |
US20020150166A1 (en) | 2001-03-02 | 2002-10-17 | Johnson Andrew W. | Edge adaptive texture discriminating filtering |
US7110452B2 (en) * | 2001-03-05 | 2006-09-19 | Intervideo, Inc. | Systems and methods for detecting scene changes in a video data stream |
US7929610B2 (en) * | 2001-03-26 | 2011-04-19 | Sharp Kabushiki Kaisha | Methods and systems for reducing blocking artifacts with reduced complexity for spatially-scalable video coding |
US6931063B2 (en) * | 2001-03-26 | 2005-08-16 | Sharp Laboratories Of America, Inc. | Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding |
US7450641B2 (en) | 2001-09-14 | 2008-11-11 | Sharp Laboratories Of America, Inc. | Adaptive filtering based upon boundary strength |
US7675994B2 (en) | 2001-04-02 | 2010-03-09 | Koninklijke Philips Electronics N.V. | Packet identification mechanism at the transmitter and receiver for an enhanced ATSC 8-VSB system |
US6925126B2 (en) | 2001-04-18 | 2005-08-02 | Koninklijke Philips Electronics N.V. | Dynamic complexity prediction and regulation of MPEG2 decoding in a media processor |
EP1391065A4 (en) | 2001-05-02 | 2009-11-18 | Strix Systems Inc | Method and system for indicating link quality among neighboring wireless base stations |
US7206453B2 (en) | 2001-05-03 | 2007-04-17 | Microsoft Corporation | Dynamic filtering for lossy compression |
US6859235B2 (en) * | 2001-05-14 | 2005-02-22 | Webtv Networks Inc. | Adaptively deinterlacing video on a per pixel basis |
US6704718B2 (en) * | 2001-06-05 | 2004-03-09 | Microsoft Corporation | System and method for trainable nonlinear prediction of transform coefficients in data compression |
WO2002102086A2 (en) | 2001-06-12 | 2002-12-19 | Miranda Technologies Inc. | Apparatus and method for adaptive spatial segmentation-based noise reducing for encoded image signal |
JP4458714B2 (en) * | 2001-06-20 | 2010-04-28 | 富士通マイクロエレクトロニクス株式会社 | Image decoding apparatus, image decoding method, and program |
US6593392B2 (en) | 2001-06-22 | 2003-07-15 | Corning Incorporated | Curable halogenated compositions |
US6650784B2 (en) | 2001-07-02 | 2003-11-18 | Qualcomm, Incorporated | Lossless intraframe encoding using Golomb-Rice |
US7003174B2 (en) * | 2001-07-02 | 2006-02-21 | Corel Corporation | Removal of block encoding artifacts |
JP4145586B2 (en) * | 2001-07-24 | 2008-09-03 | セイコーエプソン株式会社 | Image processing apparatus, image processing program, and image processing method |
US20030033143A1 (en) * | 2001-08-13 | 2003-02-13 | Hagai Aronowitz | Decreasing noise sensitivity in speech processing under adverse conditions |
US7426315B2 (en) * | 2001-09-05 | 2008-09-16 | Zoran Microelectronics Ltd. | Method for reducing blocking artifacts |
US6950469B2 (en) * | 2001-09-17 | 2005-09-27 | Nokia Corporation | Method for sub-pixel value interpolation |
US6968091B2 (en) * | 2001-09-18 | 2005-11-22 | Emc Corporation | Insertion of noise for reduction in the number of bits for variable-length coding of (run, level) pairs |
US7646816B2 (en) | 2001-09-19 | 2010-01-12 | Microsoft Corporation | Generalized reference decoder for image or video processing |
US6983079B2 (en) * | 2001-09-20 | 2006-01-03 | Seiko Epson Corporation | Reducing blocking and ringing artifacts in low-bit-rate coding |
US9042445B2 (en) * | 2001-09-24 | 2015-05-26 | Broadcom Corporation | Method for deblocking field-frame video |
US7440504B2 (en) * | 2001-09-24 | 2008-10-21 | Broadcom Corporation | Method and apparatus for performing deblocking filtering with interlace capability |
JP3834495B2 (en) | 2001-09-27 | 2006-10-18 | 株式会社東芝 | Fine pattern inspection apparatus, CD-SEM apparatus management apparatus, fine pattern inspection method, CD-SEM apparatus management method, program, and computer-readable recording medium |
US20030095603A1 (en) * | 2001-11-16 | 2003-05-22 | Koninklijke Philips Electronics N.V. | Reduced-complexity video decoding using larger pixel-grid motion compensation |
US20030099294A1 (en) | 2001-11-27 | 2003-05-29 | Limin Wang | Picture level adaptive frame/field coding for digital video content |
EP2938072A1 (en) * | 2001-11-29 | 2015-10-28 | Godo Kaisha IP Bridge 1 | Coding distortion removal method |
US6825847B1 (en) | 2001-11-30 | 2004-11-30 | Nvidia Corporation | System and method for real-time compression of pixel colors |
US7165028B2 (en) * | 2001-12-12 | 2007-01-16 | Texas Instruments Incorporated | Method of speech recognition resistant to convolutive distortion and additive distortion |
US6934677B2 (en) | 2001-12-14 | 2005-08-23 | Microsoft Corporation | Quantization matrices based on critical band pattern information for digital audio wherein quantization bands differ from critical bands |
AU2002351417A1 (en) | 2001-12-21 | 2003-07-30 | Polycom, Inc. | Motion wake identification and control mechanism |
EP1335607A3 (en) * | 2001-12-28 | 2003-10-22 | Ricoh Company, Ltd. | Image smoothing apparatus and method |
US6763068B2 (en) | 2001-12-28 | 2004-07-13 | Nokia Corporation | Method and apparatus for selecting macroblock quantization parameters in a video encoder |
CA2574047C (en) | 2002-01-18 | 2008-01-08 | Kabushiki Kaisha Toshiba | Video encoding method and apparatus and video decoding method and apparatus |
EP1472882A1 (en) * | 2002-01-22 | 2004-11-03 | Koninklijke Philips Electronics N.V. | Reducing bit rate of already compressed multimedia |
US7236207B2 (en) * | 2002-01-22 | 2007-06-26 | Broadcom Corporation | System and method of transmission and reception of progressive content with isolated fields for conversion to interlaced display |
US6690307B2 (en) | 2002-01-22 | 2004-02-10 | Nokia Corporation | Adaptive variable length coding of digital video |
EP1333681A3 (en) | 2002-01-31 | 2004-12-08 | Samsung Electronics Co., Ltd. | Filtering method and apparatus for reducing block artifacts or ringing noise |
US6947886B2 (en) | 2002-02-21 | 2005-09-20 | The Regents Of The University Of California | Scalable compression of audio and other signals |
AU2003225751A1 (en) * | 2002-03-22 | 2003-10-13 | Realnetworks, Inc. | Video picture compression artifacts reduction via filtering and dithering |
US7099387B2 (en) | 2002-03-22 | 2006-08-29 | Realnetworks, Inc. | Context-adaptive VLC video transform coefficients encoding/decoding methods and apparatuses
US7006699B2 (en) | 2002-03-27 | 2006-02-28 | Microsoft Corporation | System and method for progressively transforming and coding digital data |
US7155065B1 (en) | 2002-03-27 | 2006-12-26 | Microsoft Corporation | System and method for progressively transforming and coding digital data |
US7034897B2 (en) | 2002-04-01 | 2006-04-25 | Broadcom Corporation | Method of operating a video decoding system |
US8284844B2 (en) | 2002-04-01 | 2012-10-09 | Broadcom Corporation | Video decoding system supporting multiple standards |
HUE044616T2 (en) | 2002-04-19 | 2019-11-28 | Panasonic Ip Corp America | Motion vector calculating method |
US7277587B2 (en) | 2002-04-26 | 2007-10-02 | Sharp Laboratories Of America, Inc. | System and method for lossless video coding |
EP1515446A4 (en) * | 2002-04-26 | 2009-12-23 | Ntt Docomo Inc | Signal encoding method, signal decoding method, signal encoding device, signal decoding device, signal encoding program, and signal decoding program |
US20030202590A1 (en) | 2002-04-30 | 2003-10-30 | Qunshan Gu | Video encoding using direct mode predicted frames |
US7242713B2 (en) | 2002-05-02 | 2007-07-10 | Microsoft Corporation | 2-D transforms for image and video coding |
US7010046B2 (en) | 2002-05-02 | 2006-03-07 | Lsi Logic Corporation | Method and/or architecture for implementing MPEG frame display using four frame stores |
JP2004048711A (en) | 2002-05-22 | 2004-02-12 | Matsushita Electric Ind Co Ltd | Method for coding and decoding moving picture and data recording medium |
US7474668B2 (en) | 2002-06-04 | 2009-01-06 | Alcatel-Lucent Usa Inc. | Flexible multilevel output traffic control |
US7302387B2 (en) | 2002-06-04 | 2007-11-27 | Texas Instruments Incorporated | Modification of fixed codebook search in G.729 Annex E audio coding |
US6950473B2 (en) | 2002-06-21 | 2005-09-27 | Seiko Epson Corporation | Hybrid technique for reducing blocking and ringing artifacts in low-bit-rate coding |
US20030235250A1 (en) * | 2002-06-24 | 2003-12-25 | Ankur Varma | Video deblocking |
US7016547B1 (en) | 2002-06-28 | 2006-03-21 | Microsoft Corporation | Adaptive entropy encoding/decoding for screen capture content |
US7136417B2 (en) * | 2002-07-15 | 2006-11-14 | Scientific-Atlanta, Inc. | Chroma conversion optimization |
US6728315B2 (en) | 2002-07-24 | 2004-04-27 | Apple Computer, Inc. | Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations |
US7020200B2 (en) | 2002-08-13 | 2006-03-28 | Lsi Logic Corporation | System and method for direct motion vector prediction in bi-predictive video frames and fields |
US7072394B2 (en) * | 2002-08-27 | 2006-07-04 | National Chiao Tung University | Architecture and method for fine granularity scalable video coding |
US7328150B2 (en) | 2002-09-04 | 2008-02-05 | Microsoft Corporation | Innovations in pure lossless audio compression |
US7433824B2 (en) | 2002-09-04 | 2008-10-07 | Microsoft Corporation | Entropy coding by adapting coding between level and run-length/level modes |
US7502743B2 (en) | 2002-09-04 | 2009-03-10 | Microsoft Corporation | Multi-channel audio encoding and decoding with multi-channel transform selection |
US7424434B2 (en) * | 2002-09-04 | 2008-09-09 | Microsoft Corporation | Unified lossy and lossless audio compression |
US7068722B2 (en) | 2002-09-25 | 2006-06-27 | Lsi Logic Corporation | Content adaptive video processor using motion compensation |
KR100506864B1 (en) | 2002-10-04 | 2005-08-05 | 엘지전자 주식회사 | Method of determining motion vector |
US6729316B1 (en) * | 2002-10-12 | 2004-05-04 | Vortex Automotive Corporation | Method and apparatus for treating crankcase emissions |
US7079703B2 (en) | 2002-10-21 | 2006-07-18 | Sharp Laboratories Of America, Inc. | JPEG artifact removal |
JP3878591B2 (en) | 2002-11-01 | 2007-02-07 | 松下電器産業株式会社 | Video encoding method and video decoding method |
US6957157B2 (en) * | 2002-11-12 | 2005-10-18 | Flow Metrix, Inc. | Tracking vibrations in a pipeline network |
US7227901B2 (en) | 2002-11-21 | 2007-06-05 | Ub Video Inc. | Low-complexity deblocking filter |
US6646578B1 (en) | 2002-11-22 | 2003-11-11 | Ub Video Inc. | Context adaptive variable length decoding system and method |
US7050088B2 (en) * | 2003-01-06 | 2006-05-23 | Silicon Integrated Systems Corp. | Method for 3:2 pull-down film source detection |
US7463688B2 (en) | 2003-01-16 | 2008-12-09 | Samsung Electronics Co., Ltd. | Methods and apparatus for removing blocking artifacts of MPEG signals in real-time video reception |
US8111753B2 (en) * | 2003-02-06 | 2012-02-07 | Samsung Electronics Co., Ltd. | Video encoding method and video encoder for improving performance |
US7167522B2 (en) | 2003-02-27 | 2007-01-23 | Texas Instruments Incorporated | Video deblocking filter |
US7995849B2 (en) | 2003-03-17 | 2011-08-09 | Qualcomm, Incorporated | Method and apparatus for improving video quality of low bit-rate video |
SG115540A1 (en) | 2003-05-17 | 2005-10-28 | St Microelectronics Asia | An edge enhancement process and system |
JP2005005844A (en) | 2003-06-10 | 2005-01-06 | Hitachi Ltd | Computation apparatus and coding processing program |
US7380028B2 (en) | 2003-06-13 | 2008-05-27 | Microsoft Corporation | Robust delivery of video data |
JP4207684B2 (en) | 2003-06-27 | 2009-01-14 | 富士電機デバイステクノロジー株式会社 | Magnetic recording medium manufacturing method and manufacturing apparatus |
US7471726B2 (en) | 2003-07-15 | 2008-12-30 | Microsoft Corporation | Spatial-domain lapped transform in digital media compression |
US7830963B2 (en) * | 2003-07-18 | 2010-11-09 | Microsoft Corporation | Decoding jointly coded transform type and subblock pattern information |
US7426308B2 (en) | 2003-07-18 | 2008-09-16 | Microsoft Corporation | Intraframe and interframe interlace coding and decoding |
US20050013498A1 (en) | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Coding of motion vector information |
US20050013494A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | In-loop deblocking filter |
US7609762B2 (en) | 2003-09-07 | 2009-10-27 | Microsoft Corporation | Signaling for entry point frames with predicted first field |
US7577200B2 (en) | 2003-09-07 | 2009-08-18 | Microsoft Corporation | Extended range variable length coding/decoding of differential motion vector information |
US7961786B2 (en) | 2003-09-07 | 2011-06-14 | Microsoft Corporation | Signaling field type information |
US8345754B2 (en) | 2003-09-07 | 2013-01-01 | Microsoft Corporation | Signaling buffer fullness |
US7822123B2 (en) * | 2004-10-06 | 2010-10-26 | Microsoft Corporation | Efficient repeat padding for hybrid video sequence with arbitrary video resolution |
US7616692B2 (en) * | 2003-09-07 | 2009-11-10 | Microsoft Corporation | Hybrid motion vector prediction for interlaced forward-predicted fields |
US7317839B2 (en) | 2003-09-07 | 2008-01-08 | Microsoft Corporation | Chroma motion vector derivation for interlaced forward-predicted fields |
US7623574B2 (en) * | 2003-09-07 | 2009-11-24 | Microsoft Corporation | Selecting between dominant and non-dominant motion vector predictor polarities |
US7567617B2 (en) * | 2003-09-07 | 2009-07-28 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US8064520B2 (en) | 2003-09-07 | 2011-11-22 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US7616829B1 (en) | 2003-10-29 | 2009-11-10 | Apple Inc. | Reducing undesirable block based image processing artifacts by DC image filtering |
US20050094003A1 (en) | 2003-11-05 | 2005-05-05 | Per Thorell | Methods of processing digital image and/or video data including luminance filtering based on chrominance data and related systems and computer program products |
US7295616B2 (en) | 2003-11-17 | 2007-11-13 | Eastman Kodak Company | Method and system for video filtering with joint motion and noise estimation |
US7551793B2 (en) * | 2004-01-14 | 2009-06-23 | Samsung Electronics Co., Ltd. | Methods and apparatuses for adaptive loop filtering for reducing blocking artifacts |
US7283176B2 (en) | 2004-03-12 | 2007-10-16 | Broadcom Corporation | Method and system for detecting field ID |
US8503542B2 (en) | 2004-03-18 | 2013-08-06 | Sony Corporation | Methods and apparatus to reduce blocking noise and contouring effect in motion compensated compressed video |
KR100586882B1 (en) | 2004-04-13 | 2006-06-08 | 삼성전자주식회사 | Method and Apparatus for supporting motion scalability |
US7397853B2 (en) | 2004-04-29 | 2008-07-08 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US7539248B2 (en) | 2004-04-29 | 2009-05-26 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US7460596B2 (en) | 2004-04-29 | 2008-12-02 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US7496141B2 (en) | 2004-04-29 | 2009-02-24 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US7397854B2 (en) | 2004-04-29 | 2008-07-08 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US7400679B2 (en) | 2004-04-29 | 2008-07-15 | Mediatek Incorporation | Adaptive de-blocking filtering apparatus and method for MPEG video decoder |
US20050243914A1 (en) | 2004-04-29 | 2005-11-03 | Do-Kyoung Kwon | Adaptive de-blocking filtering apparatus and method for mpeg video decoder |
US7430336B2 (en) * | 2004-05-06 | 2008-09-30 | Qualcomm Incorporated | Method and apparatus for image enhancement for low bit rate video compression |
FR2872973A1 (en) | 2004-07-06 | 2006-01-13 | Thomson Licensing Sa | METHOD OR DEVICE FOR CODING A SEQUENCE OF SOURCE IMAGES |
US8600217B2 (en) | 2004-07-14 | 2013-12-03 | Arturo A. Rodriguez | System and method for improving quality of displayed picture during trick modes |
WO2006010276A1 (en) | 2004-07-30 | 2006-02-02 | Algolith Inc | Apparatus and method for adaptive 3d artifact reducing for encoded image signal |
US7839933B2 (en) * | 2004-10-06 | 2010-11-23 | Microsoft Corporation | Adaptive vertical macroblock alignment for mixed frame video sequences |
US8116379B2 (en) * | 2004-10-08 | 2012-02-14 | Stmicroelectronics, Inc. | Method and apparatus for parallel processing of in-loop deblocking filter for H.264 video compression standard |
US7620261B2 (en) * | 2004-11-23 | 2009-11-17 | Stmicroelectronics Asia Pacific Pte. Ltd. | Edge adaptive filtering system for reducing artifacts and method |
US7961357B2 (en) | 2004-12-08 | 2011-06-14 | Electronics And Telecommunications Research Institute | Block artifact phenomenon eliminating device and eliminating method thereof |
US20060143678A1 (en) | 2004-12-10 | 2006-06-29 | Microsoft Corporation | System and process for controlling the coding bit rate of streaming media data employing a linear quadratic control technique and leaky bucket model |
US7305139B2 (en) | 2004-12-17 | 2007-12-04 | Microsoft Corporation | Reversible 2-dimensional pre-/post-filtering for lapped biorthogonal transform |
CN1293868C (en) | 2004-12-29 | 2007-01-10 | 朱旭祥 | Application of alpha cyclo-alanine in the process for preparing medicine to treat cerebrovascular and cardiovascular disease |
US20060215754A1 (en) * | 2005-03-24 | 2006-09-28 | Intel Corporation | Method and apparatus for performing video decoding in a multi-thread environment |
DE102005025629A1 (en) | 2005-06-03 | 2007-03-22 | Micronas Gmbh | Image processing method for reducing blocking artifacts |
US8190425B2 (en) | 2006-01-20 | 2012-05-29 | Microsoft Corporation | Complex cross-correlation parameters for multi-channel audio |
US7831434B2 (en) | 2006-01-20 | 2010-11-09 | Microsoft Corporation | Complex-transform channel coding with extended-band frequency coding |
US7911538B2 (en) | 2006-04-06 | 2011-03-22 | Samsung Electronics Co., Ltd. | Estimation of block artifact strength based on edge statistics |
US20070280552A1 (en) | 2006-06-06 | 2007-12-06 | Samsung Electronics Co., Ltd. | Method and device for measuring MPEG noise strength of compressed digital image |
US8243815B2 (en) | 2006-06-16 | 2012-08-14 | Via Technologies, Inc. | Systems and methods of video compression deblocking |
US20080084932A1 (en) * | 2006-10-06 | 2008-04-10 | Microsoft Corporation | Controlling loop filtering for interlaced video frames |
JP3129986U (en) | 2006-12-26 | 2007-03-08 | ライオン株式会社 | Plate cushioning material |
JP5270573B2 (en) | 2006-12-28 | 2013-08-21 | トムソン ライセンシング | Method and apparatus for detecting block artifacts |
US20080159407A1 (en) * | 2006-12-28 | 2008-07-03 | Yang Nick Y | Mechanism for a parallel processing in-loop deblock filter |
JP5467637B2 (en) | 2007-01-04 | 2014-04-09 | トムソン ライセンシング | Method and apparatus for reducing coding artifacts for illumination compensation and / or color compensation in multi-view coded video |
US8411734B2 (en) * | 2007-02-06 | 2013-04-02 | Microsoft Corporation | Scalable multi-thread video decoding |
JP5269877B2 (en) | 2007-04-09 | 2013-08-21 | テクトロニクス・インコーポレイテッド | Test video frame measurement method |
JP5345139B2 (en) | 2007-06-08 | 2013-11-20 | トムソン ライセンシング | Method and apparatus for in-loop de-artifact filtering based on multi-grid sparsity-based filtering |
US8254455B2 (en) | 2007-06-30 | 2012-08-28 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
US8200028B2 (en) | 2007-12-07 | 2012-06-12 | Csr Technology Inc. | System and method for detecting edges in a video signal |
US8285068B2 (en) | 2008-06-25 | 2012-10-09 | Cisco Technology, Inc. | Combined deblocking and denoising filter |
KR101590500B1 (en) | 2008-10-23 | 2016-02-01 | 에스케이텔레콤 주식회사 | Video encoding/decoding apparatus, deblocking filter and deblocking filtering method based on intra prediction direction, and recording medium therefor
US9596485B2 (en) | 2008-10-27 | 2017-03-14 | Sk Telecom Co., Ltd. | Motion picture encoding/decoding apparatus, adaptive deblocking filtering apparatus and filtering method for same, and recording medium |
CN102292990B (en) | 2008-11-25 | 2016-10-05 | 汤姆森特许公司 | Method and apparatus for sparsity-based de-artifact filtering for video encoding and decoding
US8787443B2 (en) | 2010-10-05 | 2014-07-22 | Microsoft Corporation | Content adaptive deblocking during video encoding and decoding |
2004
- 2004-04-15 US US10/826,971 patent/US7724827B2/en active Active
- 2004-08-31 US US10/931,695 patent/US7412102B2/en active Active
- 2004-09-02 US US10/933,910 patent/US7469011B2/en active Active
- 2004-09-02 US US10/933,908 patent/US7352905B2/en active Active
- 2004-09-02 US US10/933,883 patent/US7099515B2/en not_active Expired - Lifetime
- 2004-09-02 US US10/933,882 patent/US7924920B2/en active Active
- 2004-09-02 US US10/934,929 patent/US7606311B2/en active Active
- 2004-09-03 CN CN200710142211A patent/CN100586183C/en not_active Expired - Lifetime
- 2004-09-03 EP EP10014708.1A patent/EP2285113B1/en not_active Expired - Lifetime
- 2004-09-03 CN CN2004800255880A patent/CN100407224C/en not_active Expired - Lifetime
- 2004-09-03 CN CNB2004800254549A patent/CN100534164C/en not_active Expired - Lifetime
- 2004-09-03 CN CNB200480023141XA patent/CN100456833C/en not_active Expired - Lifetime
- 2004-09-03 CN CN2007100063566A patent/CN101001374B/en not_active Expired - Lifetime
- 2004-09-03 EP EP04783325.6A patent/EP1658726B1/en not_active Expired - Lifetime
- 2004-09-04 US US10/934,116 patent/US8687709B2/en active Active
- 2004-09-04 US US10/934,117 patent/US8116380B2/en active Active
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4454546A (en) * | 1980-03-13 | 1984-06-12 | Fuji Photo Film Co., Ltd. | Band compression device for shaded image |
US4691329A (en) * | 1985-07-02 | 1987-09-01 | Matsushita Electric Industrial Co., Ltd. | Block encoder |
US4796087A (en) * | 1986-05-29 | 1989-01-03 | Jacques Guichard | Process for coding by transformation for the transmission of picture signals |
US4800432A (en) * | 1986-10-24 | 1989-01-24 | The Grass Valley Group, Inc. | Video Difference key generator |
US4849812A (en) * | 1987-03-10 | 1989-07-18 | U.S. Philips Corporation | Television system in which digitized picture signals subjected to a transform coding are transmitted from an encoding station to a decoding station |
US5021879A (en) * | 1987-05-06 | 1991-06-04 | U.S. Philips Corporation | System for transmitting video pictures |
US5089887A (en) * | 1988-09-23 | 1992-02-18 | Thomson Consumer Electronics | Method and device for the estimation of motion in a sequence of moving images |
US5117287A (en) * | 1990-03-02 | 1992-05-26 | Kokusai Denshin Denwa Co., Ltd. | Hybrid coding system for moving image |
US5157490A (en) * | 1990-03-14 | 1992-10-20 | Kabushiki Kaisha Toshiba | Television signal scanning line converting apparatus |
US5091782A (en) * | 1990-04-09 | 1992-02-25 | General Instrument Corporation | Apparatus and method for adaptively compressing successive blocks of digital video |
US4999705A (en) * | 1990-05-03 | 1991-03-12 | At&T Bell Laboratories | Three dimensional motion compensated video coding |
US5155594A (en) * | 1990-05-11 | 1992-10-13 | Picturetel Corporation | Hierarchical encoding method and apparatus employing background references for efficiently communicating image sequences |
US5193004A (en) * | 1990-12-03 | 1993-03-09 | The Trustees Of Columbia University In The City Of New York | Systems and methods for coding even fields of interlaced video sequences |
US5111292A (en) * | 1991-02-27 | 1992-05-05 | General Electric Company | Priority selection apparatus as for a video signal processor |
US5319463A (en) * | 1991-03-19 | 1994-06-07 | Nec Corporation | Arrangement and method of preprocessing binary picture data prior to run-length encoding |
US5422676A (en) * | 1991-04-25 | 1995-06-06 | Deutsche Thomson-Brandt Gmbh | System for coding an image representative signal |
US5317397A (en) * | 1991-05-31 | 1994-05-31 | Kabushiki Kaisha Toshiba | Predictive coding using spatial-temporal filtering and plural motion vectors |
US5343248A (en) * | 1991-07-26 | 1994-08-30 | Sony Corporation | Moving image compressing and recording medium and moving image data encoder and decoder |
US5539466A (en) * | 1991-07-30 | 1996-07-23 | Sony Corporation | Efficient coding apparatus for picture signal and decoding apparatus therefor |
US5347308A (en) * | 1991-10-11 | 1994-09-13 | Matsushita Electric Industrial Co., Ltd. | Adaptive coding method for interlaced scan digital video sequences |
US5227878A (en) * | 1991-11-15 | 1993-07-13 | At&T Bell Laboratories | Adaptive coding and decoding of frames and fields of video |
US5510840A (en) * | 1991-12-27 | 1996-04-23 | Sony Corporation | Methods and devices for encoding and decoding frame signals and recording medium therefor |
US5745789A (en) * | 1992-01-23 | 1998-04-28 | Hitachi, Ltd. | Disc system for holding data in a form of a plurality of data blocks dispersed in a plurality of disc units connected by a common data bus |
US5379351A (en) * | 1992-02-19 | 1995-01-03 | Integrated Information Technology, Inc. | Video compression/decompression processing and processors |
US5287420A (en) * | 1992-04-08 | 1994-02-15 | Supermac Technology | Method for image compression on a personal computer |
US5666461A (en) * | 1992-06-29 | 1997-09-09 | Sony Corporation | High efficiency encoding and decoding of picture signals and recording medium containing same |
US5412435A (en) * | 1992-07-03 | 1995-05-02 | Kokusai Denshin Denwa Kabushiki Kaisha | Interlaced video signal motion compensation prediction system |
US5461421A (en) * | 1992-11-30 | 1995-10-24 | Samsung Electronics Co., Ltd. | Encoding and decoding method and apparatus thereof |
US5400075A (en) * | 1993-01-13 | 1995-03-21 | Thomson Consumer Electronics, Inc. | Adaptive variable length encoder/decoder |
US5426464A (en) * | 1993-01-14 | 1995-06-20 | Rca Thomson Licensing Corporation | Field elimination apparatus for a video compression/decompression system |
US5544286A (en) * | 1993-01-29 | 1996-08-06 | Microsoft Corporation | Digital video data compression technique |
US5668932A (en) * | 1993-01-29 | 1997-09-16 | Microsoft Corporation | Digital video data compression technique |
US5946042A (en) * | 1993-03-24 | 1999-08-31 | Sony Corporation | Macroblock coding including difference between motion vectors |
US6040863A (en) * | 1993-03-24 | 2000-03-21 | Sony Corporation | Method of coding and decoding motion vector and apparatus therefor, and method of coding and decoding picture signal and apparatus therefor |
US5598215A (en) * | 1993-05-21 | 1997-01-28 | Nippon Telegraph And Telephone Corporation | Moving image encoder and decoder using contour extraction |
US5448297A (en) * | 1993-06-16 | 1995-09-05 | Intel Corporation | Method and system for encoding images using skip blocks |
US5517327A (en) * | 1993-06-30 | 1996-05-14 | Minolta Camera Kabushiki Kaisha | Data processor for image data using orthogonal transformation |
US5453799A (en) * | 1993-11-05 | 1995-09-26 | Comsat Corporation | Unified motion estimation architecture |
US5648819A (en) * | 1994-03-30 | 1997-07-15 | U.S. Philips Corporation | Motion estimation using half-pixel refinement of frame and field vectors |
US5550541A (en) * | 1994-04-01 | 1996-08-27 | Dolby Laboratories Licensing Corporation | Compact source coding tables for encoder/decoder system |
US5767898A (en) * | 1994-06-23 | 1998-06-16 | Sanyo Electric Co., Ltd. | Three-dimensional image coding by merger of left and right images |
US5796438A (en) * | 1994-07-05 | 1998-08-18 | Sony Corporation | Methods and apparatus for interpolating picture information |
US5594504A (en) * | 1994-07-06 | 1997-01-14 | Lucent Technologies Inc. | Predictive video coding using a motion vector updating routine |
US5552832A (en) * | 1994-10-26 | 1996-09-03 | Intel Corporation | Run-length encoding sequence for video signals |
US5619281A (en) * | 1994-12-30 | 1997-04-08 | Daewoo Electronics Co., Ltd | Method and apparatus for detecting motion vectors in a frame decimating video encoder |
US6052150A (en) * | 1995-03-10 | 2000-04-18 | Kabushiki Kaisha Toshiba | Video data signal including a code string having a plurality of components which are arranged in a descending order of importance |
US5617144A (en) * | 1995-03-20 | 1997-04-01 | Daewoo Electronics Co., Ltd. | Image processing system using pixel-by-pixel motion estimation and frame decimation |
US5598216A (en) * | 1995-03-20 | 1997-01-28 | Daewoo Electronics Co., Ltd | Method and apparatus for encoding/decoding a video signal |
US5546129A (en) * | 1995-04-29 | 1996-08-13 | Daewoo Electronics Co., Ltd. | Method for encoding a video signal using feature point based motion estimation |
US6208761B1 (en) * | 1995-07-11 | 2001-03-27 | Telefonaktiebolaget Lm Ericsson (Publ) | Video coding |
US5668608A (en) * | 1995-07-26 | 1997-09-16 | Daewoo Electronics Co., Ltd. | Motion vector estimation method and apparatus for use in an image signal encoding system |
US6292585B1 (en) * | 1995-09-29 | 2001-09-18 | Kabushiki Kaisha Toshiba | Video coding and video decoding apparatus |
US5784175A (en) * | 1995-10-05 | 1998-07-21 | Microsoft Corporation | Pixel block correlation process |
US5970173A (en) * | 1995-10-05 | 1999-10-19 | Microsoft Corporation | Image compression and affine transformation for image motion compensation |
US5959673A (en) * | 1995-10-05 | 1999-09-28 | Microsoft Corporation | Transform coding of dense motion vector fields for frame and object based video coding applications |
US6192081B1 (en) * | 1995-10-26 | 2001-02-20 | Sarnoff Corporation | Apparatus and method for selecting a coding mode in a block-based coding system |
US5764814A (en) * | 1996-03-22 | 1998-06-09 | Microsoft Corporation | Representation and encoding of general arbitrary shapes |
US6035070A (en) * | 1996-09-24 | 2000-03-07 | Moon; Joo-Hee | Encoder/decoder for coding/decoding gray scale shape data and method thereof |
US6215905B1 (en) * | 1996-09-30 | 2001-04-10 | Hyundai Electronics Ind. Co., Ltd. | Video predictive coding apparatus and method |
US6122318A (en) * | 1996-10-31 | 2000-09-19 | Kabushiki Kaisha Toshiba | Video encoding apparatus and video decoding apparatus |
US6236806B1 (en) * | 1996-11-06 | 2001-05-22 | Sony Corporation | Field detection apparatus and method, image coding apparatus and method, recording medium, recording method and transmission method |
US6785331B1 (en) * | 1997-02-14 | 2004-08-31 | Nippon Telegraph And Telephone Corporation | Predictive encoding and decoding methods of video data |
US5974184A (en) * | 1997-03-07 | 1999-10-26 | General Instrument Corporation | Intra-macroblock DC and AC coefficient prediction for interlaced digital video |
US6026195A (en) * | 1997-03-07 | 2000-02-15 | General Instrument Corporation | Motion estimation and compensation of video object planes for interlaced digital video |
US6704360B2 (en) * | 1997-03-27 | 2004-03-09 | At&T Corp. | Bidirectionally predicted pictures or video object planes for efficient and flexible video coding |
US6351563B1 (en) * | 1997-07-09 | 2002-02-26 | Hyundai Electronics Ind. Co., Ltd. | Apparatus and method for coding/decoding scalable shape binary image using mode of lower and current layers |
US5973743A (en) * | 1997-12-02 | 1999-10-26 | Daewoo Electronics Co., Ltd. | Mode coding method and apparatus for use in an interlaced shape coder |
US6094225A (en) * | 1997-12-02 | 2000-07-25 | Daewoo Electronics, Co., Ltd. | Method and apparatus for encoding mode signals for use in a binary shape coder |
US6275528B1 (en) * | 1997-12-12 | 2001-08-14 | Sony Corporation | Picture encoding method and picture encoding apparatus |
US6862402B2 (en) * | 1997-12-20 | 2005-03-01 | Samsung Electronics Co., Ltd. | Digital recording and playback apparatus having MPEG CODEC and method therefor |
US5946043A (en) * | 1997-12-31 | 1999-08-31 | Microsoft Corporation | Video coding using adaptive coding of block parameters for coded/uncoded blocks |
US6243418B1 (en) * | 1998-03-30 | 2001-06-05 | Daewoo Electronics Co., Ltd. | Method and apparatus for encoding a motion vector of a binary shape signal |
US6408029B1 (en) * | 1998-04-02 | 2002-06-18 | Intel Corporation | Method and apparatus for simplifying real-time data encoding |
US6271885B2 (en) * | 1998-06-24 | 2001-08-07 | Victor Company Of Japan, Ltd. | Apparatus and method of motion-compensated predictive coding |
US20020110196A1 (en) * | 1998-06-29 | 2002-08-15 | Xerox Corporation | HVQ compression for image boundaries |
US6275531B1 (en) * | 1998-07-23 | 2001-08-14 | Optivision, Inc. | Scalable video coding method and apparatus |
US6563953B2 (en) * | 1998-11-30 | 2003-05-13 | Microsoft Corporation | Predictive image compression using a single variable length code for both the luminance and chrominance blocks for each macroblock |
US7054494B2 (en) * | 1998-11-30 | 2006-05-30 | Microsoft Corporation | Coded block pattern decoding with spatial prediction |
US6735345B2 (en) * | 1998-11-30 | 2004-05-11 | Microsoft Corporation | Efficient macroblock header coding for video compression |
US6683987B1 (en) * | 1999-03-25 | 2004-01-27 | Victor Company Of Japan, Ltd. | Method and apparatus for altering the picture updating frequency of a compressed video data stream |
US6573905B1 (en) * | 1999-11-09 | 2003-06-03 | Broadcom Corporation | Video and graphics system with parallel processing of graphics windows |
US6778606B2 (en) * | 2000-02-21 | 2004-08-17 | Hyundai Curitel, Inc. | Selective motion estimation method and apparatus |
US20020114388A1 (en) * | 2000-04-14 | 2002-08-22 | Mamoru Ueda | Decoder and decoding method, recorded medium, and program |
US6614442B1 (en) * | 2000-06-26 | 2003-09-02 | S3 Graphics Co., Ltd. | Macroblock tiling format for motion compensation |
US6920175B2 (en) * | 2001-01-03 | 2005-07-19 | Nokia Corporation | Video coding architecture and methods for using same |
US6765963B2 (en) * | 2001-01-03 | 2004-07-20 | Nokia Corporation | Video decoder architecture and method for using same |
US20040179601A1 (en) * | 2001-11-16 | 2004-09-16 | Mitsuru Kobayashi | Image encoding method, image decoding method, image encoder, image decode, program, computer data signal, and image transmission system |
US20030099292A1 (en) * | 2001-11-27 | 2003-05-29 | Limin Wang | Macroblock level adaptive frame/field coding for digital video content |
US20030138150A1 (en) * | 2001-12-17 | 2003-07-24 | Microsoft Corporation | Spatial extrapolation of pixel values in intraframe video coding and decoding |
US20030113026A1 (en) * | 2001-12-17 | 2003-06-19 | Microsoft Corporation | Skip macroblock coding |
US20030142748A1 (en) * | 2002-01-25 | 2003-07-31 | Alexandros Tourapis | Video coding methods and apparatuses |
US20030156643A1 (en) * | 2002-02-19 | 2003-08-21 | Samsung Electronics Co., Ltd. | Method and apparatus to encode a moving image with fixed computational complexity |
US20030179826A1 (en) * | 2002-03-18 | 2003-09-25 | Lg Electronics Inc. | B picture mode determining method and apparatus in video coding system |
US6795584B2 (en) * | 2002-10-03 | 2004-09-21 | Nokia Corporation | Context-based adaptive variable length coding for adaptive block transforms |
US20040136457A1 (en) * | 2002-10-23 | 2004-07-15 | John Funnell | Method and system for supercompression of compressed digital video |
US20040141651A1 (en) * | 2002-10-25 | 2004-07-22 | Junichi Hara | Modifying wavelet division level before transmitting data stream |
US20050053141A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Joint coding and decoding of a reference field selection and differential motion vector information |
US20050152457A1 (en) * | 2003-09-07 | 2005-07-14 | Microsoft Corporation | Signaling and repeat padding for skip frames |
US20050135484A1 (en) * | 2003-12-18 | 2005-06-23 | Daeyang Foundation (Sejong University) | Method of encoding mode determination, method of motion estimation and encoding apparatus |
Cited By (39)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9886982B2 (en) | 2004-05-18 | 2018-02-06 | Avago Technologies General Ip (Singapore) Pte. Ltd. | Index table generation in PVR applications for AVC video streams |
US20050259960A1 (en) * | 2004-05-18 | 2005-11-24 | Wan Wade K | Index table generation in PVR applications for AVC video streams |
US9208824B2 (en) * | 2004-05-18 | 2015-12-08 | Broadcom Corporation | Index table generation in PVR applications for AVC video streams |
US20110110427A1 (en) * | 2005-10-18 | 2011-05-12 | Chia-Yuan Teng | Selective deblock filtering techniques for video coding |
US8681867B2 (en) * | 2005-10-18 | 2014-03-25 | Qualcomm Incorporated | Selective deblock filtering techniques for video coding based on motion compensation resulting in a coded block pattern value |
US20130202041A1 (en) * | 2006-06-27 | 2013-08-08 | Yi-Jen Chiu | Chroma motion vector processing apparatus, system, and method |
US9313491B2 (en) * | 2006-06-27 | 2016-04-12 | Intel Corporation | Chroma motion vector processing apparatus, system, and method |
US20110129015A1 (en) * | 2007-09-04 | 2011-06-02 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
US8605786B2 (en) * | 2007-09-04 | 2013-12-10 | The Regents Of The University Of California | Hierarchical motion vector processing method, software and devices |
US20090232217A1 (en) * | 2008-03-17 | 2009-09-17 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding image |
US8625670B2 (en) * | 2008-03-17 | 2014-01-07 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding image |
US20090238479A1 (en) * | 2008-03-20 | 2009-09-24 | Pawan Jaggi | Flexible frame based energy efficient multimedia processor architecture and method |
US20090238263A1 (en) * | 2008-03-20 | 2009-09-24 | Pawan Jaggi | Flexible field based energy efficient multimedia processor architecture and method |
CN103297784A (en) * | 2008-10-31 | 2013-09-11 | Sk电信有限公司 | Apparatus for encoding image |
USRE47758E1 (en) * | 2009-12-09 | 2019-12-03 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
USRE47243E1 (en) * | 2009-12-09 | 2019-02-12 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
USRE47759E1 (en) * | 2009-12-09 | 2019-12-03 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
USRE47445E1 (en) * | 2009-12-09 | 2019-06-18 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US20110135000A1 (en) * | 2009-12-09 | 2011-06-09 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
USRE47254E1 (en) * | 2009-12-09 | 2019-02-19 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US8548052B2 (en) * | 2009-12-09 | 2013-10-01 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
US9860551B2 (en) | 2011-02-09 | 2018-01-02 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US11871027B2 (en) | 2011-02-09 | 2024-01-09 | Lg Electronics Inc. | Method for encoding image and non-transitory computer readable storage medium storing a bitstream generated by a method |
US9866861B2 (en) | 2011-02-09 | 2018-01-09 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US11463722B2 (en) | 2011-02-09 | 2022-10-04 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US11032564B2 (en) | 2011-02-09 | 2021-06-08 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US10516895B2 (en) | 2011-02-09 | 2019-12-24 | Lg Electronics Inc. | Method for encoding and decoding image and device using same |
US10057592B2 (en) | 2011-03-09 | 2018-08-21 | Canon Kabushiki Kaisha | Video encoding and decoding |
US20160057415A1 (en) * | 2011-11-07 | 2016-02-25 | Canon Kabushiki Kaisha | Image encoding method, image encoding apparatus, and related encoding medium, image decoding method, image decoding apparatus, and related decoding medium |
US10469862B2 (en) | 2012-07-02 | 2019-11-05 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US9621909B2 (en) | 2012-07-02 | 2017-04-11 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US10931960B2 (en) | 2012-07-02 | 2021-02-23 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US11252427B2 (en) | 2012-07-02 | 2022-02-15 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US9769487B2 (en) | 2012-07-02 | 2017-09-19 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US11653012B2 (en) | 2012-07-02 | 2023-05-16 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US10045039B2 (en) | 2012-07-02 | 2018-08-07 | Lg Electronics Inc. | Method for decoding image and apparatus using same |
US10469874B2 (en) * | 2013-10-07 | 2019-11-05 | Lg Electronics Inc. | Method for encoding and decoding a media signal and apparatus using the same |
WO2020186060A1 (en) * | 2019-03-12 | 2020-09-17 | Futurewei Technologies, Inc. | Patch data unit coding and decoding for point-cloud data |
US12002243B2 (en) | 2019-03-12 | 2024-06-04 | Huawei Technologies Co., Ltd. | Patch data unit coding and decoding for point-cloud coding |
Also Published As
Publication number | Publication date |
---|---|
EP2285113B1 (en) | 2020-05-06 |
CN101001374A (en) | 2007-07-18 |
EP1658726A2 (en) | 2006-05-24 |
EP1658726B1 (en) | 2020-09-16 |
US8687709B2 (en) | 2014-04-01 |
CN100586183C (en) | 2010-01-27 |
US20050053151A1 (en) | 2005-03-10 |
EP2285113A3 (en) | 2011-08-10 |
CN1950832A (en) | 2007-04-18 |
EP2285113A2 (en) | 2011-02-16 |
US7724827B2 (en) | 2010-05-25 |
US7099515B2 (en) | 2006-08-29 |
US7606311B2 (en) | 2009-10-20 |
CN100407224C (en) | 2008-07-30 |
US20050052294A1 (en) | 2005-03-10 |
CN100456833C (en) | 2009-01-28 |
US20050084012A1 (en) | 2005-04-21 |
US7412102B2 (en) | 2008-08-12 |
US7469011B2 (en) | 2008-12-23 |
CN101155306A (en) | 2008-04-02 |
US20050053293A1 (en) | 2005-03-10 |
US7924920B2 (en) | 2011-04-12 |
US20050053156A1 (en) | 2005-03-10 |
US20050083218A1 (en) | 2005-04-21 |
CN1846437A (en) | 2006-10-11 |
US8116380B2 (en) | 2012-02-14 |
US20050053294A1 (en) | 2005-03-10 |
CN100534164C (en) | 2009-08-26 |
CN1965321A (en) | 2007-05-16 |
CN101001374B (en) | 2011-08-10 |
US7352905B2 (en) | 2008-04-01 |
EP1658726A4 (en) | 2011-11-23 |
US20050053302A1 (en) | 2005-03-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7606311B2 (en) | Macroblock information signaling for interlaced frames | |
US7092576B2 (en) | Bitplane coding for macroblock field/frame coding type information | |
JP4921971B2 (en) | Innovations in encoding and decoding macroblocks and motion information for interlaced and progressive video | |
US7590179B2 (en) | Bitplane coding of prediction mode information in bi-directionally predicted interlaced pictures |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, POHSIANG;SRINIVASAN, SRIDHAR;LIN, CHIH-LUNG;AND OTHERS;REEL/FRAME:015770/0617 Effective date: 20040902 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
CC | Certificate of correction | ||
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0477 Effective date: 20141014 |
|
FPAY | Fee payment |
Year of fee payment: 8 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 12 |