
KR101673027B1 - Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof - Google Patents


Info

Publication number
KR101673027B1
Authority
KR
South Korea
Prior art keywords
prediction
block
frequency domain
frequency
Prior art date
Application number
KR1020100063359A
Other languages
Korean (ko)
Other versions
KR20120002712A (en)
Inventor
송진한
임정연
김용구
최윤식
최영호
정진우
Original Assignee
에스케이 텔레콤주식회사 (SK Telecom Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 에스케이 텔레콤주식회사 (SK Telecom Co., Ltd.)
Priority to KR1020100063359A
Priority to PCT/KR2011/004839
Publication of KR20120002712A
Application granted
Publication of KR101673027B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/48 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An embodiment of the present invention relates to a color space prediction method and apparatus, and to a method and apparatus for image encoding/decoding using the same.
One embodiment provides an encoder-side apparatus comprising: a prediction weight calculation unit that determines a weighted block from the already-coded neighboring blocks of a current block and calculates a prediction weight for each frequency domain from the weighted block; a prediction frequency selector that calculates a prediction gain for each frequency domain from the prediction weights and, using these gains, selects the frequency domains to be used for color space prediction; and a frequency domain predictor that receives the transformed residual block and performs color space prediction on it using the prediction weights of the selected frequency domains. A corresponding decoder-side apparatus comprises the same prediction weight calculation unit and prediction frequency selector, together with a frequency domain reconstruction unit that restores the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains. The present invention also provides a method and apparatus for encoding and decoding images using the above.

Description

TECHNICAL FIELD: The present invention relates to a color space prediction method and apparatus, and to a method and apparatus for encoding/decoding image data using the same.

An embodiment of the present invention relates to a color space prediction method and apparatus, and to a method and apparatus for image encoding/decoding using the same. More particularly, it relates to a method and apparatus for efficiently compressing video data without performing the conventional color conversion process, by removing redundant information between color components using the correlation between image components, and to an image encoding/decoding method and apparatus using the same.

Most commercial applications dealing with video signals perform compression encoding in the YCbCr color space. Although typical video acquisition devices operate in the RGB color space, encoding is performed in the YCbCr color space because RGB signals have a very high correlation between their color planes, which makes direct compression inefficient. For the Cb and Cr planes, which carry the color difference signals of a YCbCr signal, the resolution of the human visual system is significantly lower than for the luminance signal (Y), so additional sub-sampling of the chroma planes provides high compression efficiency without loss of subjective image quality. However, in video applications requiring very high image quality, such as digital cinema or medical imaging, quality degradation inevitably occurs due to the rounding error introduced by conversion between the RGB and YCbCr color spaces; moreover, at such quality levels the compression efficiency of the two color spaces differs little, or coding in the YCbCr color space even performs worse. Even when the YCoCg color space, designed to remove inter-plane correlation without rounding error, is used, encoding efficiency in the ultra-high-quality range is not significantly better than in the RGB color space. Increasing the efficiency of direct compression coding in the RGB color space, which is the signal space of image acquisition, is therefore very important for video applications that require very high image quality.
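The rounding error mentioned above can be demonstrated with a minimal sketch (not part of the patent): round-tripping 8-bit RGB samples through YCbCr with the standard full-range BT.601 coefficients and integer rounding is not lossless.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    # Full-range BT.601 forward matrix; rounding to integers is the lossy step.
    m = np.array([[ 0.299,     0.587,     0.114],
                  [-0.168736, -0.331264,  0.5],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb @ m.T + np.array([0.0, 128.0, 128.0])
    return np.clip(np.round(ycc), 0, 255)

def ycbcr_to_rgb(ycc):
    m = np.array([[1.0,  0.0,       1.402],
                  [1.0, -0.344136, -0.714136],
                  [1.0,  1.772,     0.0]])
    rgb = (ycc - np.array([0.0, 128.0, 128.0])) @ m.T
    return np.clip(np.round(rgb), 0, 255)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(1000, 3)).astype(float)
err = np.abs(ycbcr_to_rgb(rgb_to_ycbcr(rgb)) - rgb)
print(err.max() > 0)   # → True: the round trip is not lossless
```

This is why the patent targets direct coding in the RGB acquisition space for ultra-high-quality applications.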

Various studies on efficient compression encoding of RGB data have been carried out to date. H.264/AVC, the international standard for video compression coding, supports compression encoding of RGB color space video data: it provides a common mode, which processes each data plane of the RGB color space with the same encoding mode, and an independent mode, which processes the planes independently. These RGB encoding methods of the H.264/AVC standard have the advantage of providing high encoding performance without the extra computation of a color space conversion step, but because they do not directly exploit the correlation between color planes, they remain inefficient in the presence of the high redundancy between the RGB color planes.

To solve this problem, an embodiment of the present invention aims not only to compress video data directly, without the conventional color conversion process, but also to further improve compression efficiency by removing redundant information between color components using the correlation between image components.

According to an aspect of the present invention, there is provided an apparatus for encoding/decoding an image, comprising: an image encoder that generates a prediction block by predicting a current block for each color plane, subtracts the prediction block from the current block to generate a residual block, transforms the residual block, calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weights, selects the frequency domains to be used for color space prediction, generates a color space prediction block of the transformed residual block by performing color space prediction with the prediction weights of the selected frequency domains, and encodes the color space prediction block; and an image decoder that decodes the encoded data to reconstruct the color space prediction block, calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weights, selects the frequency domains to be used for color space prediction, restores the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains, inversely transforms the transformed residual block to restore the residual block, generates a prediction block by predicting the current block for each color plane, and restores the current block by adding the restored residual block and the prediction block.

According to another aspect of the present invention, there is provided an apparatus for encoding an image, comprising: a predictor that generates a prediction block by predicting a current block for each color plane; a subtractor that subtracts the prediction block from the current block to generate a residual block; a transformer that transforms the residual block; a color space predictor that calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weights, selects the frequency domains to be used for color space prediction, and generates a color space prediction block of the transformed residual block by performing color space prediction with the prediction weights of the selected frequency domains; and an encoder that encodes the color space prediction block.
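The encoder path above can be sketched for one 4×4 block per color plane. This is a hypothetical illustration, not the patent's exact pipeline: the flat intra prediction, block size, and orthonormal DCT-II are simplifying assumptions.

```python
import numpy as np

def dct_matrix(n=4):
    # Orthonormal DCT-II basis; rows are frequencies, columns are samples.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform(block, c=dct_matrix()):
    return c @ block @ c.T        # separable 2-D DCT of a 4x4 residual

rng = np.random.default_rng(5)
current = {p: rng.integers(0, 256, (4, 4)).astype(float) for p in "GBR"}
# Toy intra prediction: predict each plane by its own mean value.
predicted = {p: np.full((4, 4), current[p].mean()) for p in "GBR"}
residual = {p: current[p] - predicted[p] for p in "GBR"}
coef = {p: transform(residual[p]) for p in "GBR"}
# coef["B"] and coef["R"] would next be color-space-predicted from coef["G"].
print(np.allclose((coef["G"] ** 2).sum(), (residual["G"] ** 2).sum()))  # → True
```

The orthonormal transform preserves residual energy, so the color space predictor can reason about per-frequency energy directly in the coefficient domain.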

The transformer may quantize the transformed residual block after the transform.

The color space predictor may generate the color space prediction block and then quantize the color space prediction block.

The color space predictor may generate the color space prediction block by subtracting, from each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the transformed residual block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.
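A minimal sketch of this subtraction, under the assumption that G is the reference plane: each B-plane coefficient at a selected frequency position is predicted by the co-located G coefficient scaled by that frequency's weight, and only the prediction error is kept. The weights and selection mask here are toy values standing in for the outputs of the prediction weight calculation unit and the prediction frequency selector.

```python
import numpy as np

def predict_plane(coef_plane, coef_ref, weights, selected):
    # Subtract weighted reference-plane coefficients at selected frequencies.
    pred = coef_plane.copy()
    pred[selected] -= weights[selected] * coef_ref[selected]
    return pred

rng = np.random.default_rng(1)
coef_g = rng.normal(size=(4, 4))                        # reference-plane (G) coefficients
coef_b = 0.8 * coef_g + 0.1 * rng.normal(size=(4, 4))   # correlated B-plane coefficients
weights = np.full((4, 4), 0.8)                          # toy per-frequency weights
selected = np.ones((4, 4), dtype=bool)                  # toy selection: all frequencies

pred_b = predict_plane(coef_b, coef_g, weights, selected)
print(np.abs(pred_b).mean() < np.abs(coef_b).mean())    # → True: redundancy removed
```

When the weight matches the inter-plane correlation, most of the B-plane energy is removed before entropy coding.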

The color space predictor may comprise: a prediction weight calculation unit that determines a weighted block from the already-coded neighboring blocks of the current block and calculates a prediction weight for each frequency domain from the weighted block; a prediction frequency selector that calculates a prediction gain for each frequency domain from the prediction weights and, using these gains, selects the frequency domains to be used for color space prediction; and a frequency domain prediction unit that performs color space prediction of the transformed residual block using the prediction weights of the selected frequency domains.

The prediction frequency selection unit may select, as the frequency domains to be used for color space prediction, those frequency domains whose prediction gain is greater than the gain obtained without prediction.
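One plausible realization of this rule (an assumption, not the patent's derivation): for each 4×4 frequency position, compare the mean residual energy left after weighted prediction with the energy without prediction over a stack of already-coded neighboring blocks, and keep positions where the ratio exceeds 1.

```python
import numpy as np

def select_frequencies(ref_blocks, cur_blocks, weights):
    """ref_blocks, cur_blocks: (N, 4, 4) coefficient stacks from neighboring blocks."""
    e_without = np.mean(cur_blocks ** 2, axis=0)                       # no prediction
    e_with = np.mean((cur_blocks - weights * ref_blocks) ** 2, axis=0)  # after prediction
    gain = e_without / np.maximum(e_with, 1e-12)
    return gain > 1.0                                                  # boolean 4x4 mask

rng = np.random.default_rng(2)
ref = rng.normal(size=(64, 4, 4))                   # reference-plane coefficients
cur = 0.9 * ref + 0.2 * rng.normal(size=(64, 4, 4)) # strongly correlated plane
mask = select_frequencies(ref, cur, np.full((4, 4), 0.9))
print(mask.shape)   # (4, 4) mask of frequencies where prediction helps
```

Since both encoder and decoder derive the mask from already-coded neighbors, no selection information needs to be signaled in the bitstream.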

The frequency domain prediction unit may perform color space prediction after quantizing the prediction weights of the selected frequency domain.

According to another aspect of the present invention, there is provided an apparatus for decoding an image, comprising: a decoder that decodes encoded data to reconstruct a color space prediction block; a color space reconstructor that calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of a current block, calculates a prediction gain for each frequency domain from the prediction weights to select the frequency domains to be used for color space prediction, and restores the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains; an inverse transformer that inversely transforms the transformed residual block to restore the residual block; a predictor that generates a prediction block by predicting the current block; and an adder that adds the restored residual block and the prediction block to restore the current block.

The inverse transformer may inverse-quantize the transformed residual block before inversely transforming it.

The color space reconstructor may dequantize the color space prediction block, and then calculate the prediction weight for each frequency domain.

The color space reconstructor may restore the transformed residual block by adding, to each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the color space prediction block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.
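An illustrative counterpart (an assumption, not the patent's exact code) to the encoder-side subtraction: the decoder adds back the weighted reference-plane coefficients at the selected frequency positions, exactly undoing the prediction when the same weights and mask are derived on both sides and no quantization intervenes.

```python
import numpy as np

def reconstruct_plane(pred_plane, coef_ref, weights, selected):
    # Add back weighted reference-plane coefficients at selected frequencies.
    rec = pred_plane.copy()
    rec[selected] += weights[selected] * coef_ref[selected]
    return rec

rng = np.random.default_rng(3)
coef_g = rng.normal(size=(4, 4))          # reference-plane (G) coefficients
coef_b = rng.normal(size=(4, 4))          # original B-plane coefficients
weights = np.full((4, 4), 0.7)            # toy per-frequency weights
selected = np.zeros((4, 4), dtype=bool)
selected[:2, :2] = True                   # toy selection: low-frequency region only

pred_b = coef_b.copy()
pred_b[selected] -= weights[selected] * coef_g[selected]   # encoder side
rec_b = reconstruct_plane(pred_b, coef_g, weights, selected)
print(np.allclose(rec_b, coef_b))   # → True: lossless without quantization
```

In the full codec the reconstruction operates on dequantized coefficients, so the round trip is exact only up to quantization error.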

The color space reconstructor may comprise: a prediction weight calculation unit that determines a weighted block from the decoded neighboring blocks of the current block and calculates a prediction weight for each frequency domain from the weighted block; a prediction frequency selector that calculates a prediction gain for each frequency domain from the prediction weights and, using these gains, selects the frequency domains to be used for color space prediction; and a frequency domain reconstruction unit that restores the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains.

The frequency domain reconstruction unit may perform the color space prediction after quantizing the prediction weights of the selected frequency domain.

According to another aspect of the present invention, there is provided a color space prediction apparatus for a transformed residual block, comprising: a prediction weight calculation unit that determines a weighted block from the already-coded neighboring blocks of a current block and calculates a prediction weight for each frequency domain from the weighted block; a prediction frequency selector that calculates a prediction gain for each frequency domain from the prediction weights and, using these gains, selects the frequency domains to be used for color space prediction; and a frequency domain prediction unit that performs color space prediction of the transformed residual block using the prediction weights of the selected frequency domains.

The frequency domain prediction unit may generate a color space prediction block by subtracting, from each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the transformed residual block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.

The weighted block may be a block coded in the same mode as the prediction mode of the current block.

The prediction frequency selection unit may select, as the frequency domains to be used for color space prediction, those frequency domains whose prediction gain is greater than the gain obtained without prediction.

The frequency domain prediction unit may perform color space prediction after quantizing the prediction weights of the selected frequency domain.

The prediction weight for each frequency domain can be calculated from the degree of correlation between the reference color plane and the remaining color planes.
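One plausible way (an assumption on our part) to turn that correlation into a per-frequency weight is the least-squares weight over co-located coefficients of the already-coded neighboring blocks: for each frequency position, the weight that minimizes the prediction error energy.

```python
import numpy as np

def prediction_weights(ref_blocks, cur_blocks, eps=1e-12):
    """Least-squares weight per 4x4 frequency position.
    ref_blocks, cur_blocks: (N, 4, 4) neighboring-block coefficient stacks."""
    num = np.sum(ref_blocks * cur_blocks, axis=0)   # cross-correlation term
    den = np.sum(ref_blocks ** 2, axis=0)           # reference-plane energy
    return num / np.maximum(den, eps)

rng = np.random.default_rng(4)
ref = rng.normal(size=(32, 4, 4))
cur = 0.6 * ref + 0.05 * rng.normal(size=(32, 4, 4))  # true inter-plane gain 0.6
w = prediction_weights(ref, cur)
print(np.allclose(w, 0.6, atol=0.1))   # → True: the weights recover the gain
```

Computing the weights per sequence, frame, macroblock, or sub-block, as the next paragraph describes, is then just a matter of which blocks are stacked into the estimate.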

The prediction weight for each frequency domain can be calculated on a per-sequence, per-frame, per-macroblock, or per-subblock basis.

According to another aspect of the present invention, there is provided a color space reconstruction apparatus for a color space prediction block, comprising: a prediction weight calculation unit that determines a weighted block from the decoded neighboring blocks of a current block and calculates a prediction weight for each frequency domain from the weighted block; a prediction frequency selector that calculates a prediction gain for each frequency domain from the prediction weights and, using these gains, selects the frequency domains to be used for color space prediction; and a frequency domain reconstruction unit that restores the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains.

The frequency domain reconstruction unit may restore the transformed residual block by adding, to each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the color space prediction block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.

The frequency domain reconstruction unit may perform the color space prediction after quantizing the prediction weights of the selected frequency domains.

According to another aspect of the present invention, there is provided a method of encoding/decoding an image, comprising: generating a prediction block by predicting a current block for each color plane, subtracting the prediction block from the current block to generate a residual block, transforming the residual block, calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculating a prediction gain for each frequency domain from the prediction weights, selecting the frequency domains to be used for color space prediction, generating a color space prediction block of the transformed residual block by performing color space prediction with the prediction weights of the selected frequency domains, and encoding the color space prediction block; and decoding the encoded data to reconstruct the color space prediction block, calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculating a prediction gain for each frequency domain from the prediction weights, selecting the frequency domains to be used for color space prediction, restoring the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains, inversely transforming the transformed residual block to restore the residual block, generating a prediction block by predicting the current block for each color plane, and restoring the current block by adding the restored residual block and the prediction block.

According to another aspect of the present invention, there is provided a method of encoding an image, comprising: generating a prediction block by predicting a current block for each color plane; generating a residual block by subtracting the prediction block from the current block; a transforming step of transforming the residual block; a color space prediction step of calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculating a prediction gain for each frequency domain from the prediction weights, selecting the frequency domains to be used for color space prediction, and generating a color space prediction block of the transformed residual block by performing color space prediction with the prediction weights of the selected frequency domains; and encoding the color space prediction block.

The method may further comprise, after the transforming step, quantizing the transformed residual block.

The method may further comprise, after the color space prediction step, quantizing the color space prediction block.

The color space prediction step may generate the color space prediction block by subtracting, from each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the transformed residual block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.

The color space prediction step may comprise: a prediction weight calculation step of determining a weighted block from the already-coded neighboring blocks of the current block and calculating a prediction weight for each frequency domain from the weighted block; a prediction frequency selection step of calculating a prediction gain for each frequency domain from the prediction weights and, using these gains, selecting the frequency domains to be used for color space prediction; and a frequency domain prediction step of performing the color space prediction of the transformed residual block using the prediction weights of the selected frequency domains.

In the frequency domain prediction step, color space prediction may be performed after quantizing the prediction weights of the selected frequency domain.

According to another aspect of the present invention, there is provided a method of decoding an image, comprising: decoding encoded data to reconstruct a color space prediction block; a color space restoration step of calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of a current block, calculating a prediction gain for each frequency domain from the prediction weights to select the frequency domains to be used for color space prediction, and restoring the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains; an inverse transform step of inversely transforming the transformed residual block to restore the residual block; generating a prediction block by predicting the current block; and restoring the current block by adding the restored residual block and the prediction block.

The method may further comprise, before the inverse transform step, dequantizing the transformed residual block.

The method may further include dequantizing the color space prediction block before the color space restoration step.

The color space restoration step may restore the transformed residual block by adding, to each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the color space prediction block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.

The color space restoration step may comprise: a prediction weight calculation step of determining a weighted block from the decoded neighboring blocks of the current block and calculating a prediction weight for each frequency domain from the weighted block; a prediction frequency selection step of calculating a prediction gain for each frequency domain from the prediction weights and, using these gains, selecting the frequency domains to be used for color space prediction; and a frequency domain restoration step of restoring the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains.

In the frequency domain restoration step, color space prediction may be performed after quantizing the prediction weights of the selected frequency domain.

According to another aspect of the present invention, there is provided a color space prediction method for a transformed residual block, comprising: a prediction weight calculation step of determining a weighted block from the already-coded neighboring blocks of a current block and calculating a prediction weight for each frequency domain from the weighted block; a prediction frequency selection step of calculating a prediction gain for each frequency domain from the prediction weights and, using these gains, selecting the frequency domains to be used for color space prediction; and a frequency domain prediction step of performing the color space prediction of the transformed residual block using the prediction weights of the selected frequency domains.

The frequency domain prediction step may generate the color space prediction block by subtracting, from each frequency coefficient in the selected frequency domains of the color planes other than the reference color plane of the transformed residual block, the value obtained by applying the prediction weight of that frequency domain to the corresponding frequency coefficient of the reference color plane.

According to another aspect of the present invention, there is provided a color space reconstruction method for a color space prediction block, comprising: a prediction weight calculation step of determining a weighted block from the decoded neighboring blocks of a current block and calculating a prediction weight for each frequency domain from the weighted block; a prediction frequency selection step of calculating a prediction gain for each frequency domain from the prediction weights and, using these gains, selecting the frequency domains to be used for color space prediction; and a frequency domain restoration step of restoring the transformed residual block from the color space prediction block using the prediction weights of the selected frequency domains.

The weighted block may be a block coded in the same mode as the prediction mode of the current block.

In the prediction frequency selection step, the frequency domains whose prediction gain is greater than the gain obtained without prediction may be selected as the frequency domains to be used for color space prediction.

In the frequency domain prediction step, color space prediction may be performed after quantizing the prediction weights of the selected frequency domain.

The prediction weight for each frequency domain can be calculated from the degree of correlation between the reference color plane and the remaining color planes.

The prediction weight for each frequency domain can be calculated on a per-sequence, per-frame, per-macroblock, or per-subblock basis.

In the frequency domain restoration step, color space prediction may be performed after quantizing the prediction weights of the selected frequency domain.

As described above, according to an embodiment of the present invention, adaptive weighted prediction is selectively performed using the correlation between frequency bands of the residual signals of the color planes, so that the redundancy between the components of a color image is effectively removed, providing higher video coding efficiency than existing frequency-domain prediction schemes.

FIG. 1 is a block diagram showing a color space prediction apparatus 100 according to a first embodiment of the present invention.
FIG. 2 is a block diagram illustrating a prediction weight calculation unit.
FIG. 3 is a block diagram illustrating a prediction frequency selection unit 130.
FIG. 4 is a block diagram illustrating a frequency domain prediction unit.
FIG. 5 is a diagram illustrating how the color space prediction apparatus according to the first embodiment of the present invention performs color space prediction on an RGB input image.
FIG. 6 is a diagram illustrating the block format and color space prediction for an RGB image when a 4×4 DCT is performed on one macroblock and the reference color plane for prediction is the G plane.
FIG. 7 is a diagram showing the macroblock currently being coded and already-coded macroblocks.
FIG. 8 is a diagram illustrating a color space prediction apparatus 800 according to a second embodiment of the present invention.
FIG. 9 is a block diagram schematically illustrating an image encoding apparatus according to an embodiment of the present invention.
FIG. 10 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.
FIG. 11 is a flowchart illustrating a color space prediction method according to the first embodiment of the present invention.
FIG. 12 is a flowchart illustrating a color space prediction method according to the second embodiment of the present invention.
FIG. 13 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
FIG. 14 is a flowchart illustrating a video decoding method according to an embodiment of the present invention.

Hereinafter, some embodiments of the present invention will be described in detail with reference to exemplary drawings. It should be noted that, in adding reference numerals to the constituent elements of the drawings, the same constituent elements are denoted by the same reference numerals wherever possible, even if they are shown in different drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention unclear.

In describing the components of the present invention, terms such as first, second, A, B, (a), and (b) may be used. These terms are intended only to distinguish a constituent element from other constituent elements, and they do not limit the nature, sequence, or order of the constituent elements. When a component is described as being "connected", "coupled", or "linked" to another component, the component may be directly connected or coupled to that other component, but it should be understood that another component may also be "connected", "coupled", or "linked" between them.

The video encoding apparatus, video decoding apparatus, and color space prediction apparatus described below may be a user terminal such as a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), or a wireless communication terminal, or a server terminal such as an application server or a service server, and may refer to various apparatuses each including a communication device such as a communication modem for communicating with various devices or a wired/wireless communication network, a memory for storing data and the various programs for encoding or decoding an image or for performing inter or intra prediction for encoding or decoding, and a microprocessor for executing the programs to perform operations and control.

In addition, the image encoded by the video encoding apparatus can be transmitted, in real time or in non-real time, through a wired or wireless communication network such as the Internet, a local area wireless communication network, a wireless LAN network, a WiBro network, or a mobile communication network, or through various communication interfaces such as a serial bus, and can then be decoded, reconstructed into an image, and reproduced by a video decoding apparatus.

In general, a moving picture is composed of a series of pictures, and each picture can be divided into predetermined areas such as blocks. When an image area is divided into blocks, the divided blocks can be classified into intra blocks and inter blocks according to the coding method. An intra block refers to a block coded using intra prediction coding, in which a prediction block is generated by predicting the pixels of the current block using the pixels of blocks that have been previously encoded and decoded in the current picture, and the difference between the pixels of the prediction block and the pixels of the current block is encoded. An inter block refers to a block coded using inter prediction coding, in which a prediction block is generated by predicting the current block in the current picture with reference to one or more past or future pictures, and the difference between the prediction block and the current block is encoded. Here, a frame referred to in encoding or decoding the current picture is called a reference frame.

FIG. 1 is a block diagram showing a color space prediction apparatus 100 according to a first embodiment of the present invention.

The color space prediction apparatus 100 according to the first embodiment of the present invention includes a prediction weight calculation unit 120, a prediction frequency selection unit 130, and a frequency domain prediction unit 140, and may further include a transform unit 110.

The transforming unit 110 receives the intra or inter-predicted residual block and performs the transform.

The prediction weight calculation unit 120 determines a weighted block for the blocks of the color planes other than the prediction reference color plane from the previously encoded neighboring blocks of the current block, and calculates a prediction weight for each frequency domain for each color plane. The calculated prediction weights for each frequency domain can be transmitted to the prediction frequency selection unit 130 and the frequency domain prediction unit 140. Here, in the image encoding apparatus according to an embodiment of the present invention and the color space prediction apparatus according to the first embodiment of the present invention, the weighted block is determined from the previously encoded neighboring blocks. In the image decoding apparatus according to an embodiment of the present invention and the color space prediction apparatus according to the second embodiment of the present invention, since encoded data is received and decoded, the weighted block is determined from the previously decoded neighboring blocks. Therefore, the term "previously encoded neighboring block" used in the following equations should be read as "previously decoded neighboring block" in the description of the image decoding apparatus according to an embodiment of the present invention and the color space prediction apparatus according to the second embodiment of the present invention.

The prediction frequency selection unit 130 calculates a prediction gain for each frequency domain using the calculated prediction weights for each frequency domain and the frequency transform coefficients of the weighted block, and selects the frequency domains to be used for color space prediction using the calculated prediction gains.

The frequency domain prediction unit 140 receives the transformed residual signal and performs color space prediction on the transformed residual block using the prediction weights, calculated by the prediction weight calculation unit 120, of the frequency domains selected by the prediction frequency selection unit 130.

On the other hand, the term "color space prediction" may be used to mean either the generation of a color space prediction block or the generation of a frequency domain prediction residual signal.

The residual block input to the transform unit 110 may be generated through the inter/intra prediction of a conventional video encoder, that is, through motion-compensated prediction image generation and residual block generation, or through intra prediction image generation and residual block generation. The transform unit 110 may calculate the transform coefficients of the residual signal block by applying a transform such as the discrete cosine transform (DCT) to the input inter/intra residual block.

The prediction weight calculation unit 120 may calculate the prediction weight for each frequency domain based on the correlation between the frequency-space transform coefficients of the residual signal blocks of two or more color planes. Since the residual signal blocks of the color signals (R, G, and B) can have a different correlation at each frequency, an independent weight is set for each frequency to increase the prediction efficiency between the residual signal color planes. The prediction weight can be calculated using a mean square error or the like; this is one example of implementing the present invention, and the specific weight calculation method does not limit the idea of the present invention.

FIG. 2 is a block diagram illustrating the prediction weight calculation unit 120.

The prediction weight calculation unit 120 may include a weighted block determination unit 122 and a block prediction weight calculation unit 124, as shown in FIG. 2. The weighted block determination unit 122 determines one or more blocks whose frequency coefficients will be used to calculate the prediction weights from the previously encoded neighboring blocks. That is, since the prediction weights are calculated based on the transform coefficients of the residual signal blocks of the previously encoded neighboring blocks, and can vary greatly depending on the number and type of those blocks, the blocks are selected in advance in order to determine appropriate prediction weights. The block prediction weight calculation unit 124 calculates the prediction weight for each frequency domain based on the transform coefficients of the determined residual signal blocks. The detailed operation of the prediction weight calculation unit 120 will be described later.

FIG. 3 is a block diagram illustrating the prediction frequency selection unit 130.

The prediction frequency selection unit 130 can determine the frequency bands to be used for prediction using the prediction weights calculated by the prediction weight calculation unit 120. That is, the prediction frequency selection unit 130 selects frequency bands with a high degree of correlation, for which a coding gain from the frequency domain prediction is expected. The prediction frequency selection unit 130 may include a prediction gain calculation unit 132 and a prediction frequency domain calculation unit 134, as shown in FIG. 3.

The prediction gain calculation unit 132 calculates the prediction gain for each frequency domain by applying the prediction weight for each frequency domain calculated by the prediction weight calculation unit 120 to the prediction of the residual signal block. The predictive frequency domain calculator 134 selects a frequency domain that is expected to obtain a gain when a prediction is used based on a result of the prediction gain calculator 132, and determines a frequency domain to be used for prediction.

FIG. 4 is a block diagram illustrating the frequency domain prediction unit 140. FIG.

The frequency domain prediction unit 140 can perform color space prediction by subtracting, from the frequency coefficients of the selected frequency domains of the color planes other than the reference color plane in the transformed residual block, the values obtained by applying the prediction weights to the frequency coefficients of the selected frequency domains of the reference color plane.

The frequency domain prediction unit 140 performs frequency domain color space prediction using the prediction weights calculated by the prediction weight calculation unit 120 in the frequency domains determined by the prediction frequency selection unit 130. The frequency domain prediction unit 140 may include a weight quantization unit 142 and a prediction unit 144, as shown in FIG. 4. The weight quantization unit 142 appropriately quantizes the prediction weights generated by the prediction weight calculation unit 120 to limit the computation accuracy and the computational load of the encoder and decoder; when the decoder's arithmetic supports floating-point numbers, the weight quantization unit 142 can be omitted. The prediction unit 144 performs inter-color-plane frequency domain prediction using the quantized prediction weights in the frequency domains determined by the prediction frequency selection unit 130.

The data whose redundancy has been removed by the frequency domain prediction unit 140 is compressed by an entropy encoder such as Variable Length Coding (VLC) or Context-based Adaptive Binary Arithmetic Coding (CABAC) and added to the bitstream, and decoding can be performed by carrying out this encoding process in reverse.

The present embodiment assumes an RGB image, the common format of image capturing apparatuses, as the input signal and operates on a single macroblock, but the format and size of the input image are not limited by this description. That is, the prediction weight for each frequency domain can be calculated in any one of units such as a sequence of images, a frame, a macroblock, or a sub-block, and color space prediction can be performed accordingly.

FIG. 5 is a diagram illustrating that the color space prediction apparatus 100 according to the first embodiment of the present invention performs color space prediction on an RGB input image.

As shown in FIG. 5, a residual signal generator (not shown) performs intra or motion prediction on each of the R, G, and B color images to generate a prediction image, and then generates a residual image corresponding to the difference between the original image and the prediction image. The transform unit 110 performs the same frequency-space transform on each of the R, G, and B residual signal images. The DCT (Discrete Cosine Transform) or the H.264 integer transform is recommended, but the present invention is not limited thereto, and different frequency-space transforms may be used for the R, G, and B residual signals. In the frequency-space transform, various transform sizes such as 4x4, 8x8, or 16x16 can be used; when a 4x4 transform is used, one 16x16 macroblock is converted into sixteen 4x4 transform coefficient blocks, and when an 8x8 transform is used, into four 8x8 transform coefficient blocks.
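As a concrete illustration of this transform step, the sketch below applies an orthonormal 4x4 DCT-II to a residual sub-block with NumPy. This is only an assumption-level stand-in for the H.264 integer transform mentioned above (which approximates the DCT in integer arithmetic); the function name is illustrative, not part of the patent.

```python
import numpy as np

def dct4x4(block):
    """Apply an orthonormal 4x4 DCT-II: coeff = C @ block @ C.T.
    A floating-point stand-in for the transform of the transform unit 110."""
    n = 4
    # DCT-II basis matrix with orthonormal scaling factors
    C = np.array([[np.sqrt((1.0 if k == 0 else 2.0) / n) *
                   np.cos(np.pi * (2 * x + 1) * k / (2 * n))
                   for x in range(n)] for k in range(n)])
    return C @ np.asarray(block, dtype=float) @ C.T

# A flat residual block concentrates all of its energy in the DC coefficient.
coeffs = dct4x4(np.ones((4, 4)))
```

Because the transform is orthonormal, the total coefficient energy equals the input energy, which makes the per-frequency correlation comparison between color planes meaningful.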

The prediction weight calculation unit 120 calculates the optimum prediction weight for each frequency of the RGB residual signals. The prediction weight for each frequency is determined by the relationship between the transform coefficients of the color plane used as the prediction reference color plane and those of the predicted color planes. For example, the G-plane image can be used as the prediction reference color plane, and the residual signal transform coefficients of the R plane and the B plane can be predicted using the residual signal transform coefficients of the G plane. Here, the prediction reference color plane is not limited to the G plane; an R plane or a B plane may be used. Meanwhile, the prediction reference color plane may be predetermined, and information about it may be input from a reference color plane information generator (not shown). In this case, the prediction weight calculation unit 120 may receive the information about the prediction reference color plane to calculate the prediction weights, the information may also be input to the prediction frequency selection unit 130 and the frequency domain prediction unit 140, and after the encoding is completed, the information about the prediction reference color plane may be included in the encoded data and transmitted to the decoder.

FIG. 6 is a diagram illustrating a block format and color space prediction for an RGB image when a 4 × 4 DCT is performed on one macroblock in a case where the predicted reference color plane is a G plane.

In FIG. 6, one 16x16 macroblock is composed of sixteen 4x4 blocks, and the transform coefficients in the block at the same position in a color plane other than the prediction reference color plane are predicted. The prediction operation performed by the prediction unit 144, which carries out the actual prediction, can be expressed by Equation (1) below.

rB(i,j) = B(i,j) - F_B/G(G(i,j))
rR(i,j) = R(i,j) - F_R/G(G(i,j))        (Equation 1)

In Equation (1), rB(i,j) is the frequency domain prediction residual obtained by predicting the residual signal B(i,j) of the B plane from the residual signal G(i,j) of the G plane at the (i,j) frequency position, and rR(i,j) is the frequency domain prediction residual obtained by predicting the residual signal R(i,j) of the R plane from the residual signal of the G plane at the (i,j) frequency position. F_B/G(G(i,j)) is a prediction function for predicting the B plane using the G plane, F_R/G(G(i,j)) is a prediction function for predicting the R plane using the G plane, and each can be expressed as a linear function as shown in Equation (2).

F_B/G(G(i,j)) = w_B/G(i,j) · G(i,j)
F_R/G(G(i,j)) = w_R/G(i,j) · G(i,j)        (Equation 2)

In Equation (2), w_B/G(i,j) and w_R/G(i,j) are the prediction weights at the (i,j) frequency position of the transform coefficient blocks of the B plane and the R plane, derived from the neighboring blocks, and can be calculated by a mean square error technique as shown in Equation (3).

w_B/G(i,j) = E[B(i,j) · G(i,j)] / E[G(i,j)²]
w_R/G(i,j) = E[R(i,j) · G(i,j)] / E[G(i,j)²]        (Equation 3)

In Equation (3), the E () function is a function representing an expected value, and the predictive weight calculated by the block predictive weight calculator 124 can be adaptively calculated for all the blocks, . It is also recommended, but not limited to, that the prediction weights are determined adaptively for each macroblock. Also, the prediction function is not limited to a linear function as shown in Equation (2).

As described above, the prediction weight can be adaptively calculated for each frequency band in each macroblock. However, since the DCT coefficients of the current macroblock cannot be known in advance at the decoding end, it would be necessary to transmit information such as the frequency domains to be predicted and the weights for each macroblock, and a large number of bits would be required to encode that information. In the present invention, therefore, the frequency regions and weights to be used for prediction are calculated from the information of the already-coded neighboring blocks.

Therefore, the prediction weight calculation unit 120 includes the weighted block determination unit 122 for determining the neighboring blocks used to predict the weights of the current block, and the block prediction weight calculation unit 124 for determining the prediction weights of the current block using the determined blocks.

FIG. 7 is a diagram showing the macroblock currently being coded and a row of macroblocks that have already been coded.

As shown in FIG. 7, the prediction weights of the current macroblock located in the k-th macroblock row can be calculated, based on the prediction weight determination method described above, from the quantized DCT coefficients of the G/B/R residual signal planes of the macroblocks of the (k-1)-th macroblock row. This reflects the high similarity between the current macroblock and the previously coded neighboring macroblocks, and secures a minimum amount of data for statistical stability. In the present embodiment, the prediction weight calculation unit 120 predicts the weights of the current block from the macroblock row immediately preceding the current block; however, the present invention is not limited to this, and other previously coded blocks may be used. Also, in the expressions (k-1)-th and k-th macroblock rows, the (k-1)-th macroblock row may mean a macroblock row encoded temporally earlier than the k-th macroblock row, or the macroblock row immediately above the current macroblock row.

Here, since the DCT coefficient distribution may differ for each mode such as intra 4x4, intra 8x8, and intra 16x16, the frequency domains and prediction weights to be used for prediction can be calculated separately for each macroblock mode. That is, the macroblocks in the previous macroblock row are classified into the intra 4x4, intra 8x8, and intra 16x16 modes; then, for a current macroblock coded in the intra 4x4 mode, for example, only the neighboring macroblocks coded in the intra 4x4 mode are selected as the weighted blocks and used to calculate the prediction weights. For the intra 8x8 and intra 16x16 modes, the prediction weights can likewise be calculated from neighboring blocks in the same mode. Therefore, the weighted block referenced by the prediction weight calculation unit 120 may be a block coded in the same mode as the prediction mode of the current block.

The block prediction weight calculation unit 124 may calculate the prediction weights by applying Equation (3) to the DCT coefficients of the blocks determined by the weighted block determination unit 122.

After the prediction weights are calculated, the prediction frequency selection unit 130 selects the frequency domains in which a gain can be obtained from the prediction. In the prediction frequency selection unit 130, the prediction gain calculation unit 132 calculates, frequency by frequency, the gain obtained when prediction is performed using the already-calculated prediction weight for each frequency domain. This can be calculated by Equation (4).

Gain_B/G(i,j) = Σ_t |B_t(i,j) - w_B/G(i,j) · G_t(i,j)| + T1
Gain_R/G(i,j) = Σ_t |R_t(i,j) - w_R/G(i,j) · G_t(i,j)| + T2        (Equation 4)

In Equation (4), Gain_B/G(i,j) and Gain_R/G(i,j) are the prediction gains at the (i,j) frequency position in the B plane and the R plane, respectively. In addition, t denotes each of the weighted blocks selected by the weighted block determination unit 122 for calculating the prediction weights; that is, if there are three weighted blocks, the prediction gain is summed over all three weighted blocks. Also, T1 and T2 denote thresholds having arbitrary values for selecting the prediction frequencies.

After the gains are calculated, the prediction frequency domain calculation unit 134 compares, for each frequency domain, the case of performing the prediction with the case of not performing it, and selects only the frequency regions that yield a gain when prediction is performed. This can be expressed by Equation (5).

Gain_B/G(i,j) < Σ_t |B_t(i,j)|
Gain_R/G(i,j) < Σ_t |R_t(i,j)|        (Equation 5)

In Equation (5), B_t(i,j) and R_t(i,j) denote the frequency coefficients at the (i,j) position of the t-th weighted block in the B plane and the R plane, respectively. The prediction gain at each (i,j) position is computed over all weighted blocks t, and is then compared with the sum of the absolute values of the frequency coefficients at the (i,j) positions of all the weighted blocks. As a result of this comparison, only the frequency regions in which the prediction gains (Gain_B/G(i,j), Gain_R/G(i,j)) are smaller are used for prediction; that is, color space prediction is performed only for the (i,j) frequency components that satisfy Equation (5). In this example, Gain_B/G(i,j) and Gain_R/G(i,j) are expressed as prediction gains, but in fact the smaller the value of Gain_B/G(i,j) or Gain_R/G(i,j), the more likely it is that a gain is obtained when color space prediction is performed.
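The frequency-selection test of Equations (4) and (5) can be sketched as below: at each frequency position, the summed prediction error over the weighted blocks (plus a threshold standing in for T1 or T2) is compared against the summed magnitude of the unpredicted coefficients. The function name, block shapes, and the way the threshold is folded in are illustrative assumptions.

```python
import numpy as np

def select_frequencies(g_blocks, x_blocks, w, threshold=0.0):
    """Equations (4)-(5) sketch: at each (i,j), sum the prediction error
    |X_t - w * G_t| over all weighted blocks t and add a threshold;
    select the frequency only if that is smaller than the summed
    magnitude of the unpredicted coefficients."""
    g = np.stack(g_blocks).astype(float)
    x = np.stack(x_blocks).astype(float)
    gain = np.abs(x - w * g).sum(axis=0) + threshold   # Equation (4)
    return gain < np.abs(x).sum(axis=0)                # Equation (5)

# With a perfect weight of 0.5 the prediction error is zero,
# so every frequency position is selected.
g = [np.full((4, 4), 2.0)]
b = [np.full((4, 4), 1.0)]
mask = select_frequencies(g, b, w=np.full((4, 4), 0.5))
```

The resulting boolean mask plays the role of the per-frequency on/off decision used by the prediction unit.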

The frequency domain prediction unit 140 performs color space prediction using the prediction weights calculated by the prediction weight calculation unit 120 and the prediction frequency regions selected by the prediction frequency selection unit 130.

The frequency domain prediction unit 140 may include the weight quantization unit 142 and the prediction unit 144. The weight quantization unit 142 reduces the computational complexity of the prediction weights and enables prediction with integer arithmetic to prevent prediction-weight mismatch between the encoder and the decoder; it can be omitted when the decoder supports floating-point operations. That is, the frequency domain prediction unit 140 can perform color space prediction after quantizing the prediction weights of the selected frequency domains.

The following describes an embodiment in which the weight quantization unit 142 is used. Since most prediction weights are floating-point numbers between -1 and 1, a scaling factor is used to convert the weights into integers, and the scaling is later undone by an integer division. Furthermore, since a division requires a large amount of computation even in integer arithmetic, the weights are scaled by 2^m (m is a positive integer) so that a shift operation can be used instead. This can be expressed by Equation (6).

W(i,j) = floor(a · 2^m + 0.5)        (Equation 6)

In Equation (6), a is the actual floating-point prediction weight, and floor(b) denotes the largest integer not exceeding b.
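The integerization of Equation (6) and the shift-based prediction it enables can be sketched as follows. The choice of m and the function names are assumptions for illustration; note that Python's `>>` floors negative products toward minus infinity, a simplification compared to a real codec's sign-aware rounding.

```python
import math

M = 6  # scaling exponent m in 2^m; this particular value is an assumption

def quantize_weight(a, m=M):
    """Equation (6): integerize a floating-point weight a in (-1, 1)
    by scaling with 2^m and rounding to the nearest integer."""
    return math.floor(a * (1 << m) + 0.5)

def weighted_predict(g_coeff, wq, m=M):
    """Integer prediction value (wq * G(i,j)) >> m; the shift replaces
    the division by 2^m so no floating-point math is needed."""
    return (wq * g_coeff) >> m

wq = quantize_weight(0.5)        # 0.5 scaled by 2^6 and rounded
pred = weighted_predict(8, wq)   # integer multiply-and-shift prediction
```

Because both encoder and decoder use the same integer weight and shift, the prediction values match exactly on both sides.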

The final prediction using the above prediction weights can be performed as shown in Equation (7) (where the prediction frequencies determined for the R and B color planes may be independent of each other).

rB(i,j) = B(i,j) - ((W_B/G(i,j) · G(i,j)) >> m)
rR(i,j) = R(i,j) - ((W_R/G(i,j) · G(i,j)) >> m)        (Equation 7)

The prediction unit 144 generates, from the residual blocks transformed by the transform unit 110, data in which the redundancy between the residual signal of the reference color plane and the transform coefficients of the residual signals of the other color planes is removed. If color space prediction is used at the (i,j)-th frequency, the frequency domain prediction residual signals rB(i,j) and rR(i,j) are generated for the B and R color planes (i.e., the color planes other than the reference color plane) and entropy-encoded; if color space prediction is not used at the (i,j)-th frequency, the original residual signals B(i,j) and R(i,j) are encoded instead.
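The per-frequency switch performed by the prediction unit can be sketched as a masked application of Equation (1): at selected frequencies the residual is replaced by the prediction residual, and elsewhere the coefficient passes through. This is a floating-point simplification (no weight quantization), and the function name is an assumption.

```python
import numpy as np

def color_space_predict(G, X, w, mask):
    """Prediction unit 144 sketch: where mask is True, emit the frequency
    domain prediction residual X - w * G (Equation 1); elsewhere pass the
    original coefficient X through unchanged."""
    G = np.asarray(G, dtype=float)
    X = np.asarray(X, dtype=float)
    return np.where(mask, X - w * G, X)

# With B = 0.5 * G and a perfect weight, the selected residuals are zero,
# which is exactly the redundancy removal the scheme aims for.
G = np.full((4, 4), 2.0)
B = np.full((4, 4), 1.0)
rB = color_space_predict(G, B, w=0.5, mask=np.full((4, 4), True))
```

The resulting rB block is what would be handed to the entropy encoder in place of B.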

FIG. 8 is a diagram illustrating a color space prediction apparatus 800 according to the second embodiment of the present invention.

As shown in FIG. 8, the color space prediction apparatus 800 according to the second embodiment of the present invention includes a prediction weight calculation unit 820, a prediction frequency selection unit 830, and a frequency domain restoration unit 840.

The prediction weight calculation unit 820 determines a weighted block for the blocks of the color planes other than the prediction reference color plane from the previously decoded neighboring blocks of the current block, and calculates a prediction weight for each frequency domain for each color plane. The calculated prediction weights for each frequency domain can be transmitted to the prediction frequency selection unit 830 and the frequency domain restoration unit 840. Meanwhile, when the color space prediction apparatus 800 according to the second embodiment of the present invention is used in the image encoding apparatus according to an embodiment of the present invention, the prediction weight calculation unit 820 may determine the weighted block for the blocks of the color planes other than the prediction reference color plane from the neighboring blocks decoded for the generation of the reconstructed image.

The prediction frequency selection unit 830 calculates a prediction gain for each frequency domain using the calculated prediction weights for each frequency domain and the frequency transform coefficients of the weighted block, and selects the frequency domains to be used for color space prediction using the calculated prediction gains.

The frequency domain restoration unit 840 restores the transformed residual block from the color space prediction block using the predicted weight values of the selected frequency domain.

The frequency domain restoration unit 840 can restore the transformed residual block by adding, to the frequency coefficients of the selected frequency domains of the color planes other than the reference color plane in the color space prediction block, the values obtained by applying the prediction weights to the frequency coefficients of the selected frequency domains of the reference color plane.

Here, when the transformed residual block of the B plane is restored, the restoration obtained by rearranging Equation (1) is B(i,j) = rB(i,j) + F_B/G(G(i,j)), and the R plane can be restored similarly to the B plane.

That is, the operation of the frequency domain restoration unit 840 can be performed in the reverse order of the operation of the frequency domain prediction unit 140; the determination of the weighted block, the prediction weight for each frequency domain, the prediction gain for each frequency domain, the selection of the frequency domains, and the like have already been described for the color space prediction apparatus 100 according to the first embodiment of the present invention, so a detailed description thereof will be omitted.

In the color space prediction apparatus 800 according to the second embodiment of the present invention, the prediction weight calculation unit 820 determines the weighted block for the blocks of the color planes other than the prediction reference color plane from the previously decoded neighboring blocks, whereas the prediction weight calculation unit 120 in the color space prediction apparatus 100 according to the first embodiment of the present invention determines it from the previously encoded neighboring blocks. Apart from this difference, the operation of the prediction weight calculation unit 820 according to the second embodiment can be the same as that of the prediction weight calculation unit 120 according to the first embodiment.

After the weighted blocks are determined, the prediction frequency selection unit 830 according to the second embodiment can perform the same operation as the prediction frequency selection unit 130 in the color space prediction apparatus 100 according to the first embodiment of the present invention.

The frequency domain restoration unit 840 can restore the transformed residual block by applying the prediction values F_B/G(G(i,j)) and F_R/G(G(i,j)) of Equation (1) to rB(i,j) and rR(i,j) in the selected frequency domains, thereby recovering B(i,j) and R(i,j).
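The decoder-side restoration can be sketched as the exact inverse of the masked prediction: at selected frequencies the coefficient is recovered as rX + w · G, matching the rearranged Equation (1). The round trip below shows that predicting and then restoring recovers the original plane; names and shapes are illustrative assumptions.

```python
import numpy as np

def color_space_restore(G, rX, w, mask):
    """Frequency domain restoration unit 840 sketch: invert the prediction
    by X = rX + w * G at the selected frequencies (rearranged Equation 1);
    pass the coefficient through unchanged elsewhere."""
    G = np.asarray(G, dtype=float)
    rX = np.asarray(rX, dtype=float)
    return np.where(mask, rX + w * G, rX)

# Round trip: encoder-side prediction followed by decoder-side restoration
# recovers the original B-plane coefficients exactly.
G = np.full((4, 4), 2.0)
B = np.full((4, 4), 1.0)
mask = np.full((4, 4), True)
rB = np.where(mask, B - 0.5 * G, B)            # encoder-side residual
B_rec = color_space_restore(G, rB, 0.5, mask)  # decoder-side restore
```

Both sides must use the same weight, mask, and reference plane coefficients for the round trip to be lossless in the frequency domain.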

FIG. 9 is a block diagram schematically illustrating an image encoding apparatus according to an embodiment of the present invention.

The image encoding apparatus 900 according to an exemplary embodiment of the present invention includes a predictor 910, a subtractor 920, a transformer 930, a color space predictor 932, a color space decompressor 934, a scanner 940, an encoder 950, an inverse transformer 960, an adder 970, and a filter 980.

The input image to be encoded can be input in block units, and a block can be a macroblock. In an embodiment of the present invention, the shape of the macroblock may be fixed or may take various shapes such as M x N, where M and N are natural numbers of the form 2^n (n is an integer of 1 or more). In addition, a different block type may be used for each frame to be encoded; when various macroblock types such as M x N are possible, information about the block type is encoded for each frame, so that the block type of a frame to be decoded can be determined when the encoded data is decoded. The decision as to which block type to use can be made by encoding the current frame with various block types and selecting the type that gives the optimum efficiency, or by analyzing the characteristics of the frame and selecting the block type according to the analyzed characteristics.

To this end, the image encoding apparatus 900 may further include a block type determiner (not shown) for determining the block type and encoding the information about the block type to be included in the encoded data.

The predictor 910 predicts the current block for each color plane to generate a prediction block. That is, the predictor 910 predicts the pixel value of each pixel of the current block to be encoded in the image, and generates a prediction block having the predicted pixel value of each predicted pixel. Here, the predictor 910 can predict the current block using intra prediction or inter prediction.

The subtractor 920 subtracts the prediction block from the current block to generate a residual block. That is, the subtractor 920 calculates the difference between the pixel value of each pixel of the current block to be encoded and the predicted pixel value of each pixel of the prediction block generated by the predictor 910, thereby generating a residual block, which is a block-form residual signal.

When the transformer 930 transforms the residual block, the transform process may be included in the quantization process, in which case the transform is completed when the quantization is completed. Here, the transform method converts a spatial-domain image signal into the frequency domain using, for example, the Hadamard transform or the discrete cosine transform based integer transform (hereinafter, 'integer transform'), and various quantization techniques such as dead zone uniform threshold quantization (DZUTQ) or a quantization weighted matrix may be used as the quantization method.
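As a hedged illustration of this stage, the sketch below pairs the well-known H.264-style 4x4 integer core transform with a simple dead-zone uniform quantizer. The step size and rounding offset are arbitrary assumptions; the specification does not mandate these particular choices.

```python
import numpy as np

# H.264-style 4x4 forward integer core transform matrix (Y = CF @ X @ CF.T)
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=np.int64)

def integer_transform(residual_4x4):
    """Map a 4x4 spatial residual block to frequency coefficients."""
    return CF @ residual_4x4 @ CF.T

def dead_zone_quantize(coeffs, step, offset=0.25):
    """Uniform quantizer with a dead zone around zero (DZUTQ-style sketch):
    magnitudes below (1 - offset) * step collapse to level zero."""
    return np.sign(coeffs) * (np.abs(coeffs) / step + offset).astype(np.int64)
```

A flat residual block of all ones transforms to a single DC coefficient of 16 with every AC coefficient zero, which the quantizer then maps to a single nonzero level.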

The color space predictor 932 calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, and performs color space prediction using the prediction weight of the selected frequency domain to generate a color space prediction block of the transformed residual block. Here, the color space predictor 932 may perform the color space prediction using the color space prediction apparatus 100 according to the first embodiment of the present invention. As a detailed description thereof has been given above in the description of the color space prediction apparatus 100 according to the first embodiment of the present invention, it is omitted here.
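The per-frequency weight, gain, and frequency selection described above can be sketched as follows. The least-squares weight and the energy-reduction gain used here are stand-in definitions chosen for illustration, and the function names are assumptions; the specification's own formulas for the color space prediction apparatus 100 define the exact forms.

```python
import numpy as np

def frequency_prediction_weights(ref_blocks, cur_blocks, eps=1e-9):
    """Per-frequency least-squares weight w[u, v] mapping reference-plane
    coefficients onto current-plane coefficients, estimated over the N
    neighboring blocks; ref_blocks and cur_blocks have shape (N, 4, 4)."""
    num = np.sum(ref_blocks * cur_blocks, axis=0)
    den = np.sum(ref_blocks * ref_blocks, axis=0) + eps
    return num / den

def frequency_prediction_gain(ref_blocks, cur_blocks, weights):
    """Coefficient energy removed at each frequency when the weighted
    reference predicts the current plane; a positive value marks a
    frequency worth predicting."""
    err = cur_blocks - weights[None] * ref_blocks
    return np.sum(cur_blocks ** 2, axis=0) - np.sum(err ** 2, axis=0)

def color_space_predict(transformed_residual, ref_coeffs, weights, mask):
    """Subtract the weighted reference coefficients at the selected
    frequencies, yielding the color space prediction block."""
    out = transformed_residual.astype(float).copy()
    out[mask] -= (weights * ref_coeffs)[mask]
    return out
```

When the current-plane coefficients are a scaled copy of the reference-plane coefficients, the estimated weight recovers the scale factor and the prediction removes essentially all of the residual energy.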

Alternatively, quantization may be performed in the transformer 930; in this case, however, the color space predictor 932, rather than the transformer 930, may generate the color space prediction block and then quantize the color space prediction block.

The scanner 940 scans the coefficients of the color space prediction block generated by the color space predictor 932 to generate a coefficient sequence. At this time, the scanning scheme considers the characteristics of the transform scheme, the quantization scheme, and the block (macroblock or subblock), and the scanning order may be determined so that the scanned coefficient sequence has a minimum length. Although the scanner 940 is illustrated and described in FIG. 9 as being implemented independently of the encoder 950, the scanner 940 may be omitted and its function incorporated into the encoder 950.
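A common concrete choice for such a scan is the zigzag order, which visits coefficients along anti-diagonals so that low-frequency coefficients lead the sequence; the decoder-side inverse scanner simply reverses it. The sketch below is only illustrative, since the specification allows the scan order to depend on the transform, quantization, and block type:

```python
def _zigzag_order(n):
    """Anti-diagonal visiting order for an n x n block: even diagonals run
    bottom-left to top-right, odd diagonals top-left to bottom-right."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else -p[0]))

def zigzag_scan(block):
    """Flatten a 2-D coefficient block into a 1-D coefficient sequence."""
    return [block[u][v] for u, v in _zigzag_order(len(block))]

def inverse_zigzag(seq, n):
    """Rebuild the 2-D block from the scanned coefficient sequence."""
    block = [[0] * n for _ in range(n)]
    for coeff, (u, v) in zip(seq, _zigzag_order(n)):
        block[u][v] = coeff
    return block
```

Scanning and then inverse-scanning is lossless, which is why the decoder can restore the color space prediction block exactly from the coefficient sequence.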

The encoder 950 encodes the coefficients of the color space prediction block generated by the color space predictor 932.

The encoder 950 generates encoded data by encoding the coefficient sequence produced by scanning the coefficients of the color space prediction block generated by the color space predictor 932, or by encoding the coefficient sequence generated by the scanner 940.

As the encoding technique, entropy encoding may be used, but various other encoding techniques may be used without being limited thereto. In addition, the encoder 950 may include in the encoded data not only the bit string in which the coefficient sequence of the color space prediction block is encoded, but also various information necessary for decoding the encoded bit string. Here, the information necessary for decoding the encoded bit string may include information on the block type, information on the intra prediction mode when the prediction mode is the intra prediction mode, information on the motion vector when the prediction mode is the inter prediction mode, and information on the transform and quantization types, but may also include various other information.

The color space reconstructor 934 calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, and restores the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain.

Here, the color space reconstructor 934 may restore the transformed residual block using the color space prediction apparatus 800 according to the second embodiment of the present invention. As a detailed description thereof has been given above in the description of the color space prediction apparatus 800 according to the second embodiment of the present invention, it is omitted here.
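The reconstruction mirrors the encoder-side prediction: the weighted reference coefficients are added back at the selected frequencies. A minimal sketch, in which the array shapes, the boolean-mask idiom, and the function name are illustrative assumptions:

```python
import numpy as np

def color_space_reconstruct(cs_pred_block, ref_coeffs, weights, mask):
    """Restore the transformed residual block by adding, at each selected
    frequency, the prediction weight applied to the reference-plane
    coefficient (the inverse of the encoder-side subtraction)."""
    out = cs_pred_block.astype(float).copy()
    out[mask] += (weights * ref_coeffs)[mask]
    return out
```

Because the same weights and the same selected frequencies are derived on both sides from already-coded data, the subtraction and the addition cancel exactly and the transformed residual block is recovered without extra signaling.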

The inverse transformer 960 inversely transforms the transformed residual block restored by the color space reconstructor 934 to reconstruct the residual block. If quantization is also performed in the transformer 930, the inverse transformer 960 performs inverse quantization and inverse transform, thereby inverting the transform and quantization performed by the transformer 930. That is, when the transform in the transformer 930 is replaced by transform and quantization, the inverse transform in the inverse transformer 960 is replaced by inverse quantization and inverse transform.
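As a conceptual stand-in for the inverse transform, the sketch below inverts an H.264-style 4x4 integer core transform by plain matrix inversion in floating point. A practical codec instead folds the required scaling into the dequantization tables; this version only demonstrates that the forward mapping is exactly invertible.

```python
import numpy as np

# H.264-style 4x4 forward core transform matrix; forward: Y = CF @ X @ CF.T
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=float)
CF_INV = np.linalg.inv(CF)

def inverse_integer_transform(coeffs_4x4):
    """Recover the 4x4 residual block X from its frequency coefficients Y."""
    return CF_INV @ coeffs_4x4 @ CF_INV.T
```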

When information on the transform (or on the transform and quantization) is generated by the transformer 930, the inverse transformer 960 receives that information and inversely performs the transform (or the quantization and transform) performed by the transformer 930.

The adder 970 adds the prediction block predicted by the predictor 910 and the residual block generated by the inverse transformer 960 to reconstruct the current block.

The filter 980 filters the current block restored by the adder 970. At this time, the filter 980 reduces the blocking effects that occur at block boundaries or transform boundaries due to the block-based transform and quantization of the image.

In an embodiment of the present invention, the image encoding apparatus 900 may generate a bitstream that includes both the image encoded data generated by inter prediction encoding and the image encoded data generated by intra prediction encoding, and the information required for intra prediction decoding may additionally be included in the encoded data.

FIG. 10 is a block diagram schematically illustrating a video decoding apparatus according to an embodiment of the present invention.

Referring to FIGS. 1 to 10, a video decoding apparatus 1000 according to an embodiment of the present invention includes a decoder 1010, an inverse scanner 1020, an inverse transformer 1030, an adder 1040, a predictor 1050, a filter 1060, and a color space reconstructor 1070. In this case, the inverse scanner 1020 and the filter 1060 are not necessarily included and may be selectively omitted according to the implementation; if the inverse scanner 1020 is omitted, its function is integrated into the decoder 1010.

The decoder 1010 decodes the encoded data to reconstruct the color space prediction block. That is, when the function of the scanner 940 is integrated into the encoder 950 in the image encoding apparatus 900, the inverse scanner 1020 is likewise omitted in the image decoding apparatus 1000 and its function is integrated into the decoder 1010, so the decoder 1010 can reconstruct the color space prediction block by inversely scanning the decoded data.

In addition, by decoding the encoded data, the decoder 1010 can decode or extract not only the color space prediction block but also information necessary for decoding. The information necessary for decoding may include information on the block type, information on the intra prediction mode when the prediction mode is the intra prediction mode, information on the motion vector when the prediction mode is the inter prediction mode, and information on the transform and quantization types, but may also include various other information.

Information on the block type may be transmitted to the inverse transformer 1030 and the predictor 1050, information on the transform type (or the transform and quantization types) may be transmitted to the inverse transformer 1030, and information necessary for prediction, such as information on the intra prediction mode and information on the motion vector, may be transmitted to the predictor 1050. Information on the quantization type may also be transmitted to the color space reconstructor 1070.

After the decoder 1010 restores the coefficient sequence, the inverse scanner 1020 inversely scans the coefficient sequence to reconstruct the color space prediction block.

The inverse scanner 1020 generates the color space prediction block by inversely scanning the extracted coefficient sequence using various inverse scanning methods such as inverse zigzag scanning. In this case, the decoder 1010 may obtain information on the size of the transform, and the inverse scanner 1020 may generate the block using the corresponding inverse scanning method. Since the method in which the inverse scanner 1020 performs inverse scanning according to the transform and quantization types is simply the inverse of the method in which the scanner 940 scans the coefficients of the color space prediction block, a detailed description thereof is omitted.

The color space reconstructor 1070 calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, and restores the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain.

Here, the color space reconstructor 1070 may restore the transformed residual block from the color space prediction block using the color space prediction apparatus 800 according to the second embodiment of the present invention. As a detailed description thereof has been given above in the description of the color space prediction apparatus 800 according to the second embodiment of the present invention, and as the color space reconstructor 1070 can perform the same operation as the color space reconstructor 934, a detailed description is omitted.

The inverse transformer 1030 reconstructs the residual block by inversely transforming the restored transformed residual block. At this time, the inverse transformer 1030 may inversely transform the transformed residual block according to the transform type. Since the method in which the inverse transformer 1030 inversely transforms the transformed residual block according to the transform type is the inverse of the transform performed by the transformer 930 of the image encoding apparatus 900, a detailed description of the inverse transform method is omitted.

The predictor 1050 predicts the current block to generate a prediction block.

The predictor 1050 determines the size and type of the current block according to the block type identified by the information on the block type, and may generate a prediction block by predicting the current block using the intra prediction mode or the motion vector identified by the information necessary for prediction. At this time, the predictor 1050 may divide the current block into subblocks by the same or a similar method as the predictor 910 of the image encoding apparatus 900, and generate the prediction block by combining the prediction subblocks generated for the divided subblocks.

The adder 1040 adds the residual block restored by the inverse transformer 1030 and the prediction block generated by the predictor 1050 to restore the current block.

The filter 1060 filters the current block reconstructed by the adder 1040; the reconstructed current blocks are accumulated in picture units and stored as a reference picture in a memory (not shown) or the like, so that they can be used by the predictor 1050 when predicting the next picture.

Since the method of performing the filtering by the filter 1060 is the same as or similar to the deblocking filtering of the filter 980 of the image encoding apparatus 900, a detailed description of the filtering method is omitted.

Meanwhile, the image encoding/decoding apparatus according to an embodiment of the present invention can be implemented by combining the image encoding apparatus 900 of FIG. 9 and the image decoding apparatus 1000 of FIG. 10.

The image encoding/decoding apparatus according to an embodiment of the present invention includes an image encoder (which can be implemented using the image encoding apparatus 900) that generates a prediction block by predicting the current block for each color plane, generates a residual block by subtracting the prediction block from the current block, transforms the residual block, calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, performs color space prediction using the prediction weight of the selected frequency domain to generate a color space prediction block of the transformed residual block, and encodes the color space prediction block; and an image decoder (which can be implemented using the image decoding apparatus 1000) that decodes the encoded data to reconstruct the color space prediction block, calculates a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculates a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, restores the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain, reconstructs the residual block by inversely transforming the transformed residual block, generates a prediction block by predicting the current block, and adds the restored residual block and the prediction block to reconstruct the current block.

FIG. 11 is a flowchart illustrating a color space prediction method according to the first embodiment of the present invention.

Referring to FIGS. 1 to 11, the color space prediction method according to the first embodiment of the present invention includes a prediction weight calculation step (S1102) of determining a weighting block from the previously encoded neighboring blocks of the current block and calculating a prediction weight for each frequency domain from the weighting block, a step (S1104) of calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain, a prediction frequency selection step (S1106) of selecting the frequency domain to be used for color space prediction using the prediction gain for each frequency domain, and a frequency domain prediction step (S1108) of receiving the transformed residual block and performing color space prediction of the transformed residual block using the prediction weight of the selected frequency domain.

FIG. 12 is a flowchart illustrating a color space prediction method according to the second embodiment of the present invention.

Referring to FIGS. 1 to 12, the color space prediction method according to the second embodiment of the present invention includes a prediction weight calculation step (S1202) of determining a weighting block from the previously decoded neighboring blocks of the current block and calculating a prediction weight for each frequency domain from the weighting block, a step (S1204) of calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain, a step (S1206) of selecting the frequency domain to be used for color space prediction using the prediction gain for each frequency domain, and a step (S1208) of restoring the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain.

FIG. 13 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.

Referring to FIGS. 1 to 13, an image encoding method according to an embodiment of the present invention includes generating a prediction block by predicting the current block for each color plane (S1302), generating a residual block by subtracting the prediction block from the current block (S1304), transforming the residual block (S1306), calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block (S1308), calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction (S1310), a color space prediction step (S1312) of generating a color space prediction block of the transformed residual block using the prediction weight of the selected frequency domain, and a color space prediction block encoding step (S1314).

FIG. 14 is a flowchart illustrating a video decoding method according to an embodiment of the present invention.

Referring to FIGS. 1 to 14, an image decoding method according to an embodiment of the present invention includes decoding the encoded data to reconstruct the color space prediction block (S1402), calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block (S1404), calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction (S1406), a color space restoration step (S1408) of restoring the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain, an inverse transform step (S1410) of reconstructing the residual block by inversely transforming the transformed residual block, generating a prediction block by predicting the current block (S1412), and restoring the current block by adding the restored residual block and the prediction block (S1414).
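The tail of this decoding loop (S1408 through S1414) can be strung together in a small floating-point sketch. The 4x4 transform matrix, the weight values, and the masking idiom below are illustrative assumptions rather than the normative procedure:

```python
import numpy as np

# H.264-style 4x4 forward core transform, used here only to build a test
# round trip (forward: Y = CF @ X @ CF.T)
CF = np.array([[1,  1,  1,  1],
               [2,  1, -1, -2],
               [1, -1, -1,  1],
               [1, -2,  2, -1]], dtype=float)

def decode_block(cs_pred_block, ref_coeffs, weights, mask, prediction_block):
    """Undo the color space prediction (S1408), inverse-transform the
    residual (S1410), and add the prediction block (S1412-S1414)."""
    # S1408: add the weighted reference coefficients back at selected frequencies
    transformed_residual = cs_pred_block.astype(float).copy()
    transformed_residual[mask] += (weights * ref_coeffs)[mask]
    # S1410: inverse transform back to the spatial domain
    cf_inv = np.linalg.inv(CF)
    residual = cf_inv @ transformed_residual @ cf_inv.T
    # S1412-S1414: add the prediction block to restore the current block
    return prediction_block + residual
```

Running the encoder-side steps in reverse on the same inputs restores the original block exactly, which illustrates why the scheme is lossless up to quantization.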

The image encoding/decoding method according to an embodiment of the present invention can be realized by combining the image encoding method of FIG. 13 and the image decoding method of FIG. 14.

The image encoding/decoding method according to an embodiment of the present invention includes an encoding process of generating a prediction block by predicting the current block for each color plane, generating a residual block by subtracting the prediction block from the current block, transforming the residual block, calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, generating a color space prediction block of the transformed residual block using the prediction weight of the selected frequency domain, and encoding the color space prediction block; and a decoding process of decoding the encoded data to reconstruct the color space prediction block, calculating a prediction weight for each frequency domain using the frequency coefficients of the neighboring blocks of the current block, calculating a prediction gain for each frequency domain from the prediction weight for each frequency domain to select the frequency domain to be used for color space prediction, restoring the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain, reconstructing the residual block by inversely transforming the transformed residual block, generating a prediction block by predicting the current block, and adding the restored residual block and the prediction block to reconstruct the current block.

As described above, according to an embodiment of the present invention, redundant information between color components is adaptively removed in the frequency domain using the correlation between image components, so that the coding performance of the video compression apparatus or the image quality of the reconstructed image can be greatly improved.

While the present invention has been described with reference to what are presently considered to be practical exemplary embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. That is, within the scope of the present invention, all of the components may be selectively combined into one or more of them. In addition, although all of the components may each be implemented as independent hardware, some or all of the components may be selectively combined and implemented as a computer program having program modules that perform some or all of the functions combined in one or more pieces of hardware. The codes and code segments constituting the computer program may be easily deduced by those skilled in the art. Such a computer program can be stored in a computer-readable storage medium and read and executed by a computer, thereby realizing an embodiment of the present invention. The storage medium of the computer program may include a magnetic recording medium, an optical recording medium, a carrier wave medium, and the like.

Furthermore, the terms "comprises," "comprising," or "having," as used above, mean that the corresponding component may be present unless specifically stated otherwise, and should therefore be construed as not excluding, but as possibly further including, other components. All terms, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. Commonly used terms, such as terms defined in dictionaries, should be interpreted as being consistent with the contextual meaning of the related art, and are not to be interpreted in an ideal or overly formal sense unless expressly so defined herein.

The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention. Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

As described above, according to the video encoding/decoding method and apparatus using adaptive frequency-domain color space prediction according to an embodiment of the present invention, video data can be directly compressed without performing the conventional color conversion. In addition, the coding efficiency can be further improved by removing, using the correlation between image components, the redundant information between color components that varies depending on the encoding mode.

In addition, when encoding is performed directly in the color space of the original image, there is no image-quality loss, such as color distortion, of the kind that occurs when converting to another color space. The invention is therefore highly likely to be used in industry in fields such as digital cinema and digital archiving, where such fidelity is required.

Claims (78)

delete delete delete delete delete delete delete delete delete delete delete delete An apparatus for decoding an image, the apparatus comprising:
A decoder for decoding the encoded color space prediction block of the current block of the color plane different from the reference color plane by decoding the encoded data;
A color space reconstructor which calculates a prediction weight for each frequency domain using the frequency coefficient of a neighboring block of the current block and the frequency coefficient of the previously decoded block of the reference color plane, selects a frequency region to which the prediction weight for each frequency domain is to be applied using the prediction weight for each frequency domain, the frequency coefficient of the previously decoded block, and the frequency coefficient of the neighboring block, and restores the transformed residual block from the color space prediction block using the prediction weight of the selected frequency region;
An inverse transformer for inversely transforming the transformed residual block to reconstruct a residual block;
A predictor for generating a prediction block by predicting a current block; And
An adder for adding the restored residual block and the prediction block to restore the current block,
And an image decoding unit for decoding the image.
14. The method of claim 13,
Wherein the inverse transformer comprises:
And inverse transforms the transformed residual block after inverse-quantizing the transformed residual block.
14. The method of claim 13,
The color space reconstructor includes:
Wherein the color space prediction block is dequantized and then the prediction weight for each frequency domain is calculated.
14. The method of claim 13,
The predictive weight for each frequency domain may include:
Wherein a correlation coefficient between a frequency coefficient of a neighboring block of the current block and a frequency coefficient of a previously decoded block of the reference color plane is calculated.
14. The method of claim 13,
The predictive weight for each frequency domain may include:
Wherein the prediction weight for each frequency domain is calculated in one of a sequence unit, a frame unit, and a block unit of an image.
14. The method of claim 13,
The color space reconstructor includes:
Wherein the transformed residual block is restored by adding the frequency coefficient of the selected frequency domain of the color space prediction block and a value obtained by applying the prediction weight of the selected frequency domain to the frequency coefficient of the reference color plane.
14. The method of claim 13,
The color space reconstructor includes:
A prediction weight calculation unit for determining a weighted block from the decoded neighboring blocks of the current block and calculating the prediction weight for each frequency domain using the weighted block;
A prediction frequency selection unit for selecting a frequency region to which the prediction weight for each frequency domain is to be applied using the prediction weight for each frequency domain, the frequency coefficient of the previously decoded block, and the frequency coefficient of the neighboring block; And
And a frequency domain restoration unit for restoring the transformed residual block from the color space prediction block using the prediction weight of the selected frequency domain,
And an image decoding unit for decoding the image.
20. The method of claim 19,
The neighboring block may include:
Wherein the neighboring block is a block in the block row immediately above the block row in which the current block is located.
20. The method of claim 19,
The weighting block may include:
Wherein the weighting block is a block encoded in the same mode as the prediction mode of the current block.
20. The method of claim 19,
Wherein the prediction frequency selection unit:
Selects the frequency region to which the prediction weight for each frequency domain is to be applied based on a difference between a result of applying the prediction weight for each frequency domain to the frequency coefficient of the previously decoded block and the frequency coefficient of the previously decoded block.
20. The method of claim 19,
Wherein the frequency domain restoration unit:
Quantizes the prediction weight of the selected frequency domain and then restores the transformed residual block using the quantized prediction weight.
delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete A method of encoding an image,
Generating a prediction block by predicting a current block of a color plane different from the reference color plane;
Generating a residual block by subtracting the prediction block from the current block;
A transforming step of transforming the residual block;
A color space prediction step of calculating a prediction weight for each frequency domain using the frequency coefficient of a neighboring block of the current block and the frequency coefficient of the previously encoded block of the reference color plane, selecting a frequency region to which the prediction weight for each frequency domain is to be applied using the prediction weight for each frequency domain, the frequency coefficient of the previously encoded block, and the frequency coefficient of the neighboring block, and performing inter-color-plane prediction on the transformed residual block using the prediction weight of the selected frequency region to generate a color space prediction block of the transformed residual block; And
Encoding the color space prediction block
Wherein the image encoding method comprises:
42. The method of claim 41,
After the converting step,
And quantizing the transformed residual block.
42. The method of claim 41,
After the color space prediction step,
And quantizing the color space prediction block.
42. The method of claim 41,
The predictive weight for each frequency domain may include:
Wherein a correlation coefficient between a frequency coefficient of a neighboring block of the current block and a frequency coefficient of the previously encoded block of the reference color plane is calculated.
42. The method of claim 41,
The predictive weight for each frequency domain may include:
Is calculated in one of a sequence unit, a frame unit, and a block unit of an image.
42. The method of claim 41,
Wherein the color space prediction step comprises:
Wherein the color space prediction block is generated by subtracting, from the frequency coefficient of the selected frequency domain of the transformed residual block, a value obtained by applying the prediction weight of the selected frequency domain to the frequency coefficient of the reference color plane.
42. The method of claim 41,
Wherein the color space prediction step comprises:
A prediction weight calculation step of determining a weighting block from the previously encoded neighboring blocks of the current block and calculating the prediction weight for each frequency domain using the weighting block;
A prediction frequency selection step of selecting a frequency region to which the prediction weight for each frequency domain is to be applied, using the prediction weight for each frequency domain, the frequency coefficient of the previously encoded block, and the frequency coefficient of the neighboring block; And
A frequency domain prediction step of generating a color space prediction block of the transformed residual block from the prediction weight of the selected frequency domain,
Wherein the image encoding method comprises:
49. The method of claim 47,
The neighboring block may include:
Wherein the neighboring block is a block in the block row immediately above the block row in which the current block is located.
49. The method of claim 47,
The weighting block may include:
Wherein the weighting block is a block encoded in the same mode as the prediction mode of the current block.
49. The method of claim 47,
In the prediction frequency selection step,
The frequency region to which the prediction weight for each frequency domain is to be applied is selected based on a difference between a result of applying the prediction weight for each frequency domain to the frequency coefficient of the previously encoded block and the frequency coefficient of the previously encoded block.
49. The method of claim 47,
In the frequency domain prediction step,
The prediction weight of the selected frequency domain is quantized, and the quantized prediction weight is then used to generate the color space prediction block.
delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete delete
KR1020100063359A 2010-07-01 2010-07-01 Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof KR101673027B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020100063359A KR101673027B1 (en) 2010-07-01 2010-07-01 Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof
PCT/KR2011/004839 WO2012002765A2 (en) 2010-07-01 2011-07-01 Method and apparatus for predicting a color space, and method and apparatus for encoding/decoding an image using same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020100063359A KR101673027B1 (en) 2010-07-01 2010-07-01 Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof

Publications (2)

Publication Number Publication Date
KR20120002712A KR20120002712A (en) 2012-01-09
KR101673027B1 true KR101673027B1 (en) 2016-11-04

Family

ID=45402597

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020100063359A KR101673027B1 (en) 2010-07-01 2010-07-01 Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof

Country Status (2)

Country Link
KR (1) KR101673027B1 (en)
WO (1) WO2012002765A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6656147B2 (en) * 2013-10-18 2020-03-04 ジーイー ビデオ コンプレッション エルエルシー Multi-component image or video coding concept
GB2621915A (en) * 2022-06-16 2024-02-28 Mbda Uk Ltd Method for image encoding

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101138392B1 (en) * 2004-12-30 2012-04-26 삼성전자주식회사 Color image encoding and decoding method and apparatus using a correlation between chrominance components
US8145002B2 (en) * 2007-06-28 2012-03-27 Mitsubishi Electric Corporation Image encoding device and image encoding method

Also Published As

Publication number Publication date
WO2012002765A2 (en) 2012-01-05
KR20120002712A (en) 2012-01-09
WO2012002765A3 (en) 2012-05-03

Similar Documents

Publication Publication Date Title
US9584810B2 (en) Encoding/decoding method and device for high-resolution moving images
US9473773B2 (en) Method and apparatus for encoding frequency transformed block using frequency mask table, and method and apparatus for encoding/decoding video using same
KR100946600B1 (en) An apparatus and method for encoding digital image data in a lossless manner
US10034024B2 (en) Method and apparatus for encoding/decoding images considering low frequency components
KR20090097688A (en) Method and apparatus of encoding/decoding image based on intra prediction
KR101418104B1 (en) Motion Vector Coding Method and Apparatus by Using Motion Vector Resolution Combination and Video Coding Method and Apparatus Using Same
KR101763113B1 (en) Video Encoding/Decoding Method and Apparatus for Noise Component in Spatial Domain
KR20120009861A (en) Method and Apparatus for Encoding/Decoding of Video Data Using Expanded Skip Mode
KR101681301B1 (en) Method and Apparatus for Encoding/Decoding of Video Data Capable of Skipping Filtering Mode
KR101449683B1 (en) Motion Vector Coding Method and Apparatus by Using Motion Vector Resolution Restriction and Video Coding Method and Apparatus Using Same
EP3010233A1 (en) Method and device for subband coding frequency conversion unit, and method and device for image decoding using same
KR101673027B1 (en) Method and Apparatus for Color Space Prediction and Method and Apparatus for Encoding/Decoding of Video Data Thereof
KR101369174B1 (en) High Definition Video Encoding/Decoding Method and Apparatus
KR101681307B1 (en) Method and Apparatus for Color Space Prediction Using Multiple Frequency Domain Weighted Prediction Filter and Method and Apparatus for Encoding/Decoding of Video Data Thereof
KR101943425B1 (en) Method and Apparatus for Image Encoding/Decoding using Efficient Non-fixed Quantization
KR20190009826A (en) Method and Apparatus for Image Encoding/Decoding using Efficient Non-fixed Quantization
JP6402520B2 (en) Encoding apparatus, method, program, and apparatus
KR101575634B1 (en) High Definition Video Encoding/Decoding Method and Apparatus
KR101673026B1 (en) Method and Apparatus for Coding Competition-based Interleaved Motion Vector and Method and Apparatus for Encoding/Decoding of Video Data Thereof
KR101693284B1 (en) Method and Apparatus for Encoding/Decoding of Video Data Using Global Motion-based Enconding Structure
KR101997655B1 (en) Method and Apparatus for Image Encoding/Decoding
KR101658585B1 (en) Video Coding Method and Apparatus by Using Tool Set
KR20140019011A (en) High definition video encoding/decoding method and apparatus
KR101575638B1 (en) High Definition Video Encoding/Decoding Method and Apparatus

Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right