WO2017010850A1 - Method and apparatus for processing a video signal using a separable graph-based transform - Google Patents
- Publication number: WO2017010850A1 (application PCT/KR2016/007766)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- transform
- graph
- segment
- line graph
- vectors
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/124—Quantisation
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present invention relates to a method and apparatus for encoding and decoding a video signal using a separable graph-based transform. Specifically, the present invention relates to a method for generating a graph-based transform kernel by combining transform coefficients of a region divided based on an edge.
- Compression coding refers to a series of signal processing techniques for transmitting digitized information through a communication line or for storing in a form suitable for a storage medium.
- Media such as images, video, and audio may be targets of compression encoding.
- In particular, a technique of performing compression encoding on video is called video compression.
- Next-generation video content will be characterized by high spatial resolution, high frame rate and high dimensionality of scene representation. Processing such content would result in a tremendous increase in terms of memory storage, memory access rate, and processing power.
- a graph is a data representation form useful for describing relationship information between pixels, and a graph-based signal processing method of expressing and processing such relationship information between pixels as a graph is used.
- This graph-based signal processing is based on a graph in which each signal sample represents a vertex and the relationship of the signal is represented by a graph edge with a positive weight. Since differential signals have very different statistical characteristics depending on the prediction method and the video content, it is necessary to optimize concepts such as sampling, filtering, and transformation using graph-based signal processing.
- An object of the present invention is to provide a method of applying a graph-based transformation adaptive to a characteristic of an image signal or a differential signal.
- the present invention provides a method of generating a graph-based transform kernel by combining transform coefficients of a region divided by an edge on a 1-dimensional line graph.
- An object of the present invention is to provide a method for generating a transform kernel using a graph after generating a graph from edge information of an image.
- An object of the present invention is to provide a method of generating a transform kernel using a graph after generating a graph from segmentation information of an image.
- the present invention seeks to provide a method for generating an optimal transform kernel based on the graph characteristics of the differential block.
- the present invention provides a method of selecting whether to apply a general transform (e.g., DCT or DST) or a graph-based transform by transmitting flag information for each image segmentation unit.
- An object of the present invention is to provide a method of defining a transform index corresponding to an optimal transform kernel.
- An object of the present invention is to provide a method of generating a line graph based on at least one of an edge weight, a self-loop number, and a self-loop weight.
- An object of the present invention is to provide a method of generating a graph-based transform kernel using various types of line graphs.
- the present invention seeks to provide a method for defining and signaling a template for a graph-based transform based on at least one of edge weights, self-loop numbers, and self-loop weights.
- the present invention provides a method of applying a graph-based transform that is adaptive to the characteristics of a video signal or a differential signal.
- the present invention provides a method of generating a transform kernel using a graph after generating a graph from edge information of an image.
- the present invention provides a method of generating a transform kernel using a graph after generating a graph from segmentation information of an image.
- the present invention provides a method for generating an optimal transform kernel based on the graph characteristics of the difference block.
- the present invention provides a method of selecting whether to apply a general transform (e.g., DCT or DST) or a graph-based transform by transmitting flag information for each image segmentation unit.
- the present invention provides a method of defining a transform index corresponding to an optimal transform kernel.
- the present invention provides a method of generating a line graph based on at least one of an edge weight, a self-loop number, and a self-loop weight.
- the present invention provides a method of generating a graph-based transform kernel using various types of line graphs.
- the present invention provides a method for defining and signaling a template for a graph-based transform based on at least one of edge weights, self-loop numbers, and self-loop weights.
- a still image or a moving image is represented by a graph that can well represent characteristics of an image signal, and then encoded / decoded by applying a transform kernel generated from the graph, thereby greatly reducing the amount of compressed data for a complex image.
- the present invention ensures the flexibility to adaptively apply transforms, reduces computational complexity, enables faster adaptation to changing statistical characteristics in different video segments, and provides variability in performing transforms.
- the present invention can perform more efficient coding by providing a method of applying a graph-based transform that is adaptive to the characteristics of a video signal or a differential signal.
- the present invention can significantly reduce the overhead in transmission and transform selection of a transform matrix by defining a transform index corresponding to an optimal transform kernel.
- FIG. 1 is a schematic block diagram of an encoder in which encoding of a video signal is performed as an embodiment to which the present invention is applied.
- FIG. 2 is a schematic block diagram of a decoder in which decoding of a video signal is performed as an embodiment to which the present invention is applied.
- FIG. 3 shows examples of graphs used to model statistical relationships within 8x8 blocks in a video frame according to an embodiment to which the present invention is applied.
- FIG. 4 is a diagram illustrating two types of graphs showing a weight distribution as an embodiment to which the present invention is applied.
- FIG. 5 is a diagram for describing a process of obtaining a graph-based transformation matrix based on a 1D graph and a 2D graph as an embodiment to which the present invention is applied.
- FIG. 6 shows an example of one-dimensional graphs that may be a transformation basis for applying a separable transform as an embodiment to which the present invention is applied.
- FIG. 7 is a diagram for describing a method of applying a different separable transform to each line of a 2D graph as an embodiment to which the present invention is applied.
- FIG. 8 illustrates a schematic block diagram of an encoder for processing a graph-based signal as an embodiment to which the present invention is applied.
- FIG. 9 illustrates a schematic block diagram of a decoder for processing a graph-based signal as an embodiment to which the present invention is applied.
- FIG. 10 illustrates an internal block diagram of a graph-based transform unit according to an embodiment to which the present invention is applied.
- FIG. 11 is a flowchart illustrating a method of determining an optimal transform kernel based on edge information of an image signal according to an embodiment to which the present invention is applied.
- FIG. 12 is a flowchart illustrating a method of determining a separable graph-based transform kernel using a transform index, according to an embodiment to which the present invention is applied.
- FIGS. 13 and 14 illustrate graph examples for explaining edge information and weight information in an image signal according to embodiments of the present invention.
- FIG. 15 is a diagram for explaining that different transform kernels are applied to each row or column in a 1D line graph as an embodiment to which the present invention is applied.
- FIG. 16 is a diagram for describing a method of determining an optimal transform kernel based on at least one of a self-loop number, a self-loop weight, and an edge weight as an embodiment to which the present invention is applied.
- FIG. 17 is a flowchart illustrating a process of rearranging transform vectors constituting a transform kernel based on edge information according to an embodiment to which the present invention is applied.
- FIG. 18 is a flowchart illustrating a process of decoding a video signal through a rearrangement process of transform vectors, according to an embodiment to which the present invention is applied.
- a method of decoding a video signal using a graph-based transform comprising: parsing a transform index from the video signal; Generating a line graph based on the edge information for the target unit; Sorting the segment-specific transform vectors of the line graph based on the transform type corresponding to the transform index; Obtaining a transform kernel by rearranging the segment-specific transform vectors of the line graph according to a predetermined condition; And performing inverse transform on the target unit based on the transform kernel.
- the transform kernel is composed of a combination of transform vectors in two or more line graph segments divided based on the edge information, and is characterized in that at least two transform types are applied to the transform vectors in the two or more line graph segments.
- the transform vectors in the first segment and the second segment are characterized by being sorted in ascending order of angular frequency value.
- the intervals between the transform vectors in the same segment may be aligned.
- the present invention provides a method of encoding a video signal using graph-based transform, comprising the steps of: identifying edge information for a target unit; Generating a line graph based on the edge information; Generating a transform kernel for each row and column of the target unit based on the line graph; And performing a transform on the target unit using the transform kernel, wherein the transform kernel is composed of a combination of transform vectors in at least two line graph segments separated based on the edge information.
- in the method, at least two transform types are applied to the transform vectors in the two or more line graph segments.
- the method may further include rearranging the transform vectors constituting the transform kernel according to a predetermined condition.
- the method further comprises the step of encoding a transform index corresponding to the transform kernel.
- the present invention provides an apparatus for decoding a video signal using a graph-based transform, the apparatus comprising: a parser for parsing a transform index from the video signal; and an inverse transform unit that generates a line graph based on the edge information of the target unit, sorts the segment-specific transform vectors of the line graph based on the transform type corresponding to the transform index, obtains a transform kernel by rearranging the segment-specific transform vectors of the line graph according to a predetermined condition, and performs an inverse transform on the target unit based on the transform kernel.
- an apparatus for encoding a video signal using a graph-based transform comprising: a graph signal generation unit for identifying edge information of a target unit and generating a line graph based on the edge information; A transform matrix determiner configured to generate a transform kernel for each row and column of the target unit based on the line graph; And a transform performing unit performing a transform on the target unit using the transform kernel, wherein the transform kernel is composed of a combination of transform vectors in at least two line graph segments separated based on the edge information.
- in the apparatus, at least two transform types are applied to the transform vectors in the two or more line graph segments.
- terms used in the present invention are general terms selected to describe the invention, and may be replaced by other terms having similar meanings for more appropriate interpretation.
- signals, data, samples, pictures, frames, blocks, etc. may be appropriately replaced and interpreted in each coding process.
- partitioning, decomposition, splitting, and division may be appropriately replaced and interpreted in each coding process.
- Compression efficiency can be improved by applying a linear transform that adapts to the changing statistical properties of the signal in different parts of a video sequence.
- General statistical methods have been tried for this purpose, but they have had limited results.
- a graph-based signal processing technique is introduced as a more efficient way of modeling the statistical characteristics of a video signal for video compression.
- the orthogonal transform for coding or prediction can be derived by computing the spectral decomposition of the graph Laplacian matrix. For example, eigenvectors and eigenvalues may be obtained through the spectral decomposition.
- the present invention provides a method of generating a graph-based transform kernel by combining transform coefficients of a region divided by an edge on a 1-dimensional line graph.
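As an illustrative sketch of this idea (not the patent's exact procedure), the NumPy code below combines per-segment transforms for an 8-pixel line split by a detected edge. The segment sizes, the uniform edge weights, and the use of Laplacian eigenvectors as each segment's transform are assumptions for illustration.

```python
import numpy as np

def line_graph_laplacian(n, w=1.0):
    # Laplacian L = D - W of an n-vertex 1D line graph with uniform edge weight w
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = w
    D = np.diag(W.sum(axis=1))
    return D - W

def segment_gbt(n):
    # Transform for one segment: eigenvectors of the segment's Laplacian,
    # sorted by ascending eigenvalue (graph frequency)
    vals, vecs = np.linalg.eigh(line_graph_laplacian(n))
    return vecs[:, np.argsort(vals)]

# An edge detected between pixels 3 and 4 splits an 8-pixel row into two
# 4-vertex line-graph segments; the combined kernel is block-diagonal,
# one transform block per segment. (Segment sizes are illustrative.)
n1, n2 = 4, 4
kernel = np.zeros((n1 + n2, n1 + n2))
kernel[:n1, :n1] = segment_gbt(n1)
kernel[n1:, n1:] = segment_gbt(n2)
```

Each column of `kernel` is a transform vector; because both blocks are orthonormal, the combined kernel is orthonormal as well, so its transpose serves as the inverse transform.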
- the transform obtained from the graph may be defined as a graph-based transform (hereinafter, referred to as 'GBT').
- FIG. 1 is a schematic block diagram of an encoder in which encoding of a video signal is performed as an embodiment to which the present invention is applied.
- the encoder 100 may include an image splitter 110, a transformer 120, a quantizer 130, an inverse quantizer 140, an inverse transformer 150, a filter 160, a decoded picture buffer (DPB) 170, an inter predictor 180, an intra predictor 185, and an entropy encoder 190.
- the image divider 110 may divide an input image (or a picture or a frame) input to the encoder 100 into one or more processing units.
- the processing unit may be a Coding Tree Unit (CTU), a Coding Unit (CU), a Prediction Unit (PU), or a Transform Unit (TU).
- these terms are used only for convenience of description; the present invention is not limited to the definitions of the terms.
- the term coding unit is used as a unit for encoding or decoding a video signal, but the present invention is not limited thereto and the term may be appropriately interpreted according to the context.
- the encoder 100 may generate a residual signal by subtracting the prediction signal output from the inter predictor 180 or the intra predictor 185 from the input image signal, and the generated residual signal is transmitted to the transformer 120.
- the transformer 120 may generate a transform coefficient by applying a transform technique to the residual signal.
- the transform process may be applied to square pixel blocks of the same size, or to blocks of variable size rather than square.
- the quantization unit 130 may quantize the transform coefficients and transmit the quantized coefficients to the entropy encoding unit 190, and the entropy encoding unit 190 may entropy code the quantized signal and output the bitstream.
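As a hedged illustration of the quantization step described above (the actual quantizer design and step size are not specified here), uniform scalar quantization might look like:

```python
import numpy as np

# Illustrative uniform scalar quantization of transform coefficients.
# The step size (10.0) and the rounding rule are assumptions, not the codec's design.
def quantize(coeffs, step=10.0):
    return np.round(coeffs / step).astype(int)   # levels sent to the entropy coder

def dequantize(levels, step=10.0):
    return levels * step                          # decoder-side reconstruction

coeffs = np.array([57.3, -12.8, 3.1, -0.4])       # made-up coefficient values
levels = quantize(coeffs)
recon = dequantize(levels)
```

The quantized levels, not the raw coefficients, are what the entropy encoder would code into the bitstream; the difference between `coeffs` and `recon` is the irreversible quantization error.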
- the quantized signal output from the quantization unit 130 may be used to generate a prediction signal.
- the residual signal may be reconstructed from the quantized signal by applying inverse quantization and inverse transform through the inverse quantization unit 140 and the inverse transform unit 150 in the loop.
- a reconstructed signal may be generated by adding the reconstructed residual signal to a prediction signal output from the inter predictor 180 or the intra predictor 185.
- the filtering unit 160 applies filtering to the reconstructed signal and outputs it to a playback device or transmits it to the decoded picture buffer 170.
- the filtered signal transmitted to the decoded picture buffer 170 may be used as the reference picture in the inter predictor 180. As such, by using the filtered picture as a reference picture in the inter prediction mode, not only image quality but also encoding efficiency may be improved.
- the decoded picture buffer 170 may store the filtered picture for use as a reference picture in the inter prediction unit 180.
- the inter prediction unit 180 performs temporal prediction and / or spatial prediction to remove temporal redundancy and / or spatial redundancy with reference to a reconstructed picture.
- since the reference picture used to perform the prediction is a transformed signal that was quantized and dequantized in block units during earlier encoding/decoding, blocking artifacts or ringing artifacts may exist.
- the inter prediction unit 180 may interpolate signals between pixels in subpixel units by applying a low-pass filter to mitigate the performance degradation caused by such signal discontinuity or quantization.
- the subpixel refers to a virtual pixel generated by applying an interpolation filter
- the integer pixel refers to an actual pixel existing in the reconstructed picture.
- as the interpolation method, linear interpolation, bilinear interpolation, a Wiener filter, or the like may be applied.
- the interpolation filter may be applied to a reconstructed picture to improve the precision of prediction.
- the inter prediction unit 180 may generate interpolated pixels by applying an interpolation filter to integer pixels, and may perform prediction using an interpolated block composed of interpolated pixels as the prediction block.
- the intra predictor 185 may predict the current block by referring to samples around the block to which current encoding is to be performed.
- the intra prediction unit 185 may perform the following process to perform intra prediction. First, reference samples necessary for generating a prediction signal may be prepared. The prediction signal may be generated using the prepared reference sample. Then, the prediction mode is encoded. In this case, the reference sample may be prepared through reference sample padding and / or reference sample filtering. Since the reference sample has been predicted and reconstructed, there may be a quantization error. Accordingly, the reference sample filtering process may be performed for each prediction mode used for intra prediction to reduce such an error.
- a prediction signal generated through the inter predictor 180 or the intra predictor 185 may be used to generate a reconstruction signal or to generate a residual signal.
- FIG. 2 is a schematic block diagram of a decoder in which decoding of a video signal is performed as an embodiment to which the present invention is applied.
- the decoder 200 may include a parser (not shown), an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, a filter 240, a decoded picture buffer (DPB) 250, an inter predictor 260, and an intra predictor 265.
- the reconstructed video signal output through the decoder 200 may be reproduced through the reproducing apparatus.
- the decoder 200 may receive a signal output from the encoder 100 of FIG. 1, and the received signal may be entropy decoded through the entropy decoding unit 210.
- the inverse quantization unit 220 obtains a transform coefficient from the entropy decoded signal using the quantization step size information.
- the various embodiments described with respect to the transformer 120 of FIG. 1 may be applied to the obtained transform coefficients.
- the inverse transform unit 230 inversely transforms the transform coefficient to obtain a residual signal.
- a reconstructed signal is generated by adding the obtained residual signal to a prediction signal output from the inter predictor 260 or the intra predictor 265.
- the filtering unit 240 applies filtering to the reconstructed signal and outputs it to a playback device or transmits it to the decoded picture buffer unit 250.
- the filtered signal transmitted to the decoded picture buffer unit 250 may be used as the reference picture in the inter predictor 260.
- the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the encoder 100 may be equally applied to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the decoder, respectively.
- FIG. 3 illustrates an example of graphs used to model a statistical relationship in an 8x8 block in a video frame as an embodiment to which the present invention is applied.
- Discrete-time signal processing techniques have evolved from directly processing and filtering analog signals, and have therefore been limited by common assumptions, such as sampling and processing only regularly organized data.
- the field of video compression is basically based on the same assumptions, but is generalized to multidimensional signals.
- Signal processing based on graph representations generalizes concepts such as sampling, filtering, and the Fourier transform; it uses graphs in which each signal sample represents a vertex and signal relationships are represented by positively weighted graph edges, departing from traditional approaches. This completely decouples the signal from its acquisition process, so that characteristics such as sampling rate and sequence are entirely replaced by the characteristics of the graph.
- the graph representation can be defined by some specific graph models.
- an undirected simple graph and an undirected edge can be used.
- the undirected simple graph may refer to a graph without a self-loop or multi edges.
- G is an undirected simple graph with weights assigned to each edge
- an undirected simple graph G can be described by the three parameters shown in Equation 1 below.
- [Equation 1] G = {V, E, W}
- V denotes a set of V graph vertices
- E denotes a graph edge set
- W denotes a weight expressed by a V×V matrix.
- the weight W may be expressed as Equation 2 below.
- W i,j represents the weight of the edge (i, j), and W j,i represents the weight of the edge (j, i)
- if there is no edge connecting the vertices i and j, W i,j = 0
- likewise, W i,i = 0, since an undirected simple graph has no self-loops.
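The weight-matrix conventions above can be sketched in NumPy as follows; the edge set and weight values here are made-up examples for illustration.

```python
import numpy as np

# Weight matrix of an undirected simple graph: symmetric (W[i, j] == W[j, i]),
# zero where no edge connects the vertices, and zero on the diagonal
# (an undirected simple graph has no self-loops).
edges = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 1.0}  # assumed example edges/weights
num_vertices = 4
W = np.zeros((num_vertices, num_vertices))
for (i, j), weight in edges.items():
    W[i, j] = W[j, i] = weight
```

A zero entry such as `W[0, 2]` simply means vertices 0 and 2 are unconnected; such zero-weight edges can be removed from the graph without loss.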
- the present invention provides two embodiments of a graph type that can be used for processing 8x8 pixel blocks in an image or video.
- Each pixel is associated with a graph vertex, and the pixel value becomes the value of that graph vertex.
- the graph edge may mean a line connecting graph vertices.
- the graph edge is used to represent any form of statistical dependence in the signal, where a positive weight may indicate its strength.
- each vertex may be connected to all other vertices, and a weight of zero may be assigned to the edges connecting the unrelated or weakly associated vertices.
- an edge with a weight of zero may be completely removed.
- each vertex may be defined to be connected to the eight adjacent vertices nearest thereto.
- FIG. 4 is a diagram illustrating two types of graphs showing a weight distribution as an embodiment to which the present invention is applied.
- the vertex values of the graph are determined based on signal measurements (typically modeled as random variables), but it is necessary to select edge weights for the graph that match the characteristics of the signal.
- the colors of the lines for the graph edges represent different edge weights.
- the graph of FIG. 4 (a) shows a case having a "weak link" along a straight line, with only two types of edge weights.
- the "weak link” means that it has a relatively small edge weight.
- the graph of FIG. 4 (b) shows the distribution of edge weights covering an irregular area, and the present invention intends to provide a method of processing a signal using the distribution graph of the edge weights.
- FIG. 5 is a diagram for describing a process of obtaining a graph-based transformation matrix based on a 1D graph and a 2D graph as an embodiment to which the present invention is applied.
- FIG. 5A illustrates a 1D graph corresponding to each line of the pixel block
- FIG. 5B illustrates a 2D graph corresponding to the pixel block.
- the graph vertex is associated with each pixel of the pixel block, and the value of the graph vertex may be expressed as a pixel value.
- the graph edge may mean a line connecting graph vertices.
- the graph edge is used to represent some form of statistical dependence in the signal, and a value representing the strength may be referred to as an edge weight.
- in FIG. 5 (a), one-dimensional graphs are shown; 0, 1, 2, and 3 represent the positions of the vertices, and w 0 , w 1 and w 2 represent the edge weights between the vertices.
- Each vertex may be connected to all other vertices, and an edge weight of zero may be assigned to the edges connecting the unrelated or weakly associated vertices. However, for the sake of simplicity, the edge with an edge weight of zero can be completely removed.
- the relationship information between pixels may be represented by edge presence and edge weight values between pixels when each pixel corresponds to a vertex of the graph.
- GBT can be obtained through the following process.
- the encoder or decoder may obtain graph information from the target block of the video signal. From the obtained graph information, a Laplacian matrix L may be obtained as shown in Equation 3 below: L = D - A.
- D represents a degree matrix, for example, the degree matrix may mean a diagonal matrix including information about the order of each vertex.
- A represents an adjacency matrix that represents, by weight, a connection relationship (eg, an edge) with adjacent pixels.
- the GBT kernel can be obtained by performing eigen decomposition on the Laplacian matrix L as shown in Equation 4 below: L = U Λ U T .
- in Equation 4, L denotes the Laplacian matrix, U denotes the eigen matrix, Λ denotes the diagonal eigenvalue matrix, and U T denotes the transpose of U.
- the eigen matrix U may provide a specialized graph-based Fourier transform for a signal that fits the corresponding graph model.
- an eigen matrix U satisfying Equation 4 may mean a GBT kernel.
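The derivation in Equations 3 and 4 can be sketched numerically. As a check, a uniform line graph (all edge weights 1) is used below, whose GBT is known to coincide with the DCT type-2 basis up to sign; the helper name `gbt_kernel` is illustrative, not from the specification:

```python
import numpy as np

# Sketch of Equations 3-4: L = D - A, then eigen decomposition
# L = U @ diag(lam) @ U.T; the columns of U are the GBT basis vectors.
def gbt_kernel(W):
    D = np.diag(W.sum(axis=1))   # degree matrix
    A = W                        # adjacency (weight) matrix
    L = D - A                    # Laplacian, Equation 3
    lam, U = np.linalg.eigh(L)   # eigen decomposition, Equation 4
    return lam, U

# Uniform 8-point line graph (all edge weights 1).
N = 8
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
lam, U = gbt_kernel(W)

# DCT type-2 rows: t_k[n] = sqrt(2/N) * c_k * cos(k*pi*(2n+1)/(2N)).
n = np.arange(N)
dct2 = np.sqrt(2.0 / N) * np.array(
    [np.cos(k * np.pi * (2 * n + 1) / (2 * N)) for k in range(N)])
dct2[0] /= np.sqrt(2.0)

# Each GBT basis vector matches the DCT-2 row of the same index (up to sign).
for k in range(N):
    u, d = U[:, k], dct2[k]
    assert min(np.linalg.norm(u - d), np.linalg.norm(u + d)) < 1e-8
```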
- FIG. 6 shows an example of one-dimensional graphs that may be a transformation basis for applying a separable transform as an embodiment to which the present invention is applied.
- Embodiments of 1D graphs that can serve as a transform basis for each line can be described as follows.
- when the correlation is small for only one pixel pair, only the weight of the corresponding edge is set to a small value.
- a small edge weight may be set for the graph edge including the block boundary.
- a self-loop may or may not exist at both ends, or only one side may have a self-loop.
- FIGS. 6 (a) and 6 (b) show a case in which a self-loop exists only on one side of the 1D graph
- FIG. 6 (c) shows a case in which self-loops are present at both ends of the 1D graph
- FIG. 6 (d) shows a case where a self-loop is not present across the 1D graph.
- the self-loop represents a dependency with an adjacent vertex and may mean, for example, a self weight. That is, weights may be further added to a portion where a self loop exists.
- a separate 1D separable transform set may be defined according to the TU size.
- the transform coefficient data grows as O(N^4) with TU size for a non-separable transform, but only as O(N^2) for a separable transform. Accordingly, the following configurations are possible by combining various underlying 1D separable transforms.
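The gap between the two growth rates can be made concrete with a small count (the helper names and the chosen TU sizes are illustrative, not from the specification):

```python
# Illustrative coefficient counts for an NxN TU: a non-separable
# transform is one N^2 x N^2 matrix (O(N^4) coefficients), while a
# separable transform needs only one NxN kernel for rows and one for
# columns (O(N^2) coefficients).
def nonseparable_coeffs(N):
    return (N * N) ** 2      # one N^2 x N^2 matrix

def separable_coeffs(N):
    return 2 * N * N         # one NxN row kernel + one NxN column kernel

counts = {N: (nonseparable_coeffs(N), separable_coeffs(N))
          for N in (4, 8, 16, 32)}
# e.g. for N = 8: 4096 coefficients non-separable vs 128 separable
```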
- a template index may be signaled, and a template in which a separate weight value is additionally assigned only to the edge corresponding to the boundary may be applied instead.
- FIG. 7 is a diagram for describing a method of applying a different separable transform to each line of a 2D graph as an embodiment to which the present invention is applied.
- FIG. 7 illustrates a two-dimensional graph corresponding to a pixel block, a graph vertex is associated with each pixel of the pixel block, and the value of the graph vertex may be expressed as a pixel value.
- the line connecting the graph vertices means a graph edge.
- the graph edge is used to represent some form of statistical dependence in a signal, and a value representing the strength may be referred to as an edge weight.
- NGBT: 2D non-separable GBT
- SGBT: 1D separable GBT
- the edge weights of each edge (a ij , b kl ) can be used to create and apply a 2D non-separable GBT (NGBT) kernel.
- a 1D separable GBT may be applied to a graph composed of edge weights of the j th column b 0j , b 1j , and b 2j .
- 1D separable GBTs may be applied to each line.
- 1D SGBT may be applied to each combination.
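The separable application described above can be sketched as follows: each row is transformed with its own 1D kernel, then each column with its own 1D kernel. The function names are assumptions for illustration; for simplicity a single DCT-2-like kernel is reused for every line, though in the scheme described here each row or column would have its own graph-derived kernel:

```python
import numpy as np

# Forward separable transform: 1D SGBT per row i, then per column j.
def separable_transform(block, row_kernels, col_kernels):
    N = block.shape[0]
    tmp = np.empty_like(block, dtype=float)
    for i in range(N):                    # transform each row
        tmp[i, :] = row_kernels[i] @ block[i, :]
    out = np.empty_like(tmp)
    for j in range(N):                    # then transform each column
        out[:, j] = col_kernels[j] @ tmp[:, j]
    return out

# Inverse: undo the column transforms first, then the row transforms.
def inverse_separable_transform(coeffs, row_kernels, col_kernels):
    N = coeffs.shape[0]
    tmp = np.empty_like(coeffs, dtype=float)
    for j in range(N):
        tmp[:, j] = col_kernels[j].T @ coeffs[:, j]
    out = np.empty_like(tmp)
    for i in range(N):
        out[i, :] = row_kernels[i].T @ tmp[i, :]
    return out

# Orthonormal DCT type-2 kernel used for every line in this sketch.
N = 4
n = np.arange(N)
dct2 = np.sqrt(2.0 / N) * np.array(
    [np.cos(k * np.pi * (2 * n + 1) / (2 * N)) for k in range(N)])
dct2[0] /= np.sqrt(2.0)
kernels = [dct2] * N

rng = np.random.default_rng(0)
X = rng.standard_normal((N, N))
Y = separable_transform(X, kernels, kernels)
Xrec = inverse_separable_transform(Y, kernels, kernels)
```

Because each 1D kernel is orthonormal, the inverse recovers the block exactly (`Xrec` equals `X` up to floating-point error).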
- if one GBT template set for an NxN TU is composed of M four-connected graphs, a total of M N 2 xN 2 transformation matrices must be prepared and stored, which imposes a large memory requirement.
- since one 4-connected graph can be constructed by combining 1D graph elements, only transforms for those 1D graph elements are required, and thus the amount of memory for storing transformation matrices can be reduced.
- a variety of 4-connected 2D graphs can be generated from a limited number of 1D graph elements, so a GBT template set can be customized for each mode combination. Even if the total number of GBT templates increases, the underlying number of 1D transforms remains the same, minimizing the amount of memory required. For example, after preparing a limited number of (a i0 , a i1 , a i2 ) and (b 0j , b 1j , b 2j ) combinations, one 4-connected 2D graph may be generated by selecting one combination for each row and column.
- in this way, the GBT template set may be customized through combinations of 1D transforms.
- FIG. 8 illustrates a schematic block diagram of an encoder for processing a graph-based signal as an embodiment to which the present invention is applied.
- the encoder 800 to which the present invention is applied includes a graph-based transform unit 810, a quantizer 820, an inverse quantizer 830, an inverse transform unit 840, a buffer 850, a predictor 860, and an entropy encoding unit 870.
- the encoder 800 receives a video signal and generates a prediction error by subtracting the predicted signal output from the predictor 860 from the video signal.
- the generated prediction error is transmitted to the graph-based transform unit 810, and the graph-based transform unit 810 generates a transform coefficient by applying a transform scheme to the prediction error.
- the graph-based transform unit 810 may select a more suitable transform matrix by comparing the obtained graph-based transform matrix with the transform matrix obtained from the transform unit 120 of FIG. 1.
- the quantization unit 820 quantizes the generated transform coefficient and transmits the quantized coefficient to the entropy encoding unit 870.
- the entropy encoding unit 870 performs entropy coding on the quantized signal and outputs an entropy coded signal.
- the quantized signal output by the quantization unit 820 may be used to generate a prediction signal.
- the inverse quantizer 830 and the inverse transformer 840 in the loop of the encoder 800 may perform inverse quantization and inverse transformation on the quantized signal so that the quantized signal is restored to a prediction error.
- the reconstructed signal may be generated by adding the reconstructed prediction error to the prediction signal output by the prediction unit 860.
- the buffer 850 stores the reconstructed signal for future reference by the predictor 860.
- the prediction unit 860 may generate a prediction signal using a signal previously restored and stored in the buffer 850.
- the generated prediction signal is subtracted from the original video signal to generate a differential signal, which is transmitted to the graph-based transform unit 810.
- FIG. 9 illustrates a schematic block diagram of a decoder for processing a graph-based signal as an embodiment to which the present invention is applied.
- the decoder 900 of FIG. 9 receives a signal output by the encoder 800 of FIG. 8.
- the entropy decoding unit 910 performs entropy decoding on the received signal.
- the inverse quantization unit 920 obtains a transform coefficient from the entropy decoded signal based on a quantization step size.
- the inverse transform unit 930 obtains a difference signal by performing inverse transform on a transform coefficient.
- the inverse transform may mean an inverse transform for the graph-based transform obtained by the encoder 800.
- the reconstruction signal may be generated by adding the obtained difference signal to the prediction signal output by the prediction unit 950.
- the buffer 940 may store the reconstruction signal for future reference by the predictor 950.
- the prediction unit 950 may generate a prediction signal based on a signal previously restored and stored in the buffer 940.
- FIG. 10 illustrates an internal block diagram of a graph-based transform unit according to an embodiment to which the present invention is applied.
- the graph-based transform unit 810 may include a graph parameter determiner 811, a graph signal generator 813, a transform matrix determiner 815, and a transform performer 817.
- the graph parameter determiner 811 may extract a graph parameter in a graph corresponding to the target unit of the video signal or the difference signal.
- the graph parameter may include at least one of a vertex parameter and an edge parameter.
- the vertex parameter may include at least one of a vertex position and a vertex number
- the edge parameter may include at least one of an edge weight value and an edge weight number.
- the graph parameter may be defined as a predetermined number of sets.
- the graph parameter extracted from the graph parameter determiner 811 may be expressed in a generalized form.
- the graph signal generator 813 may generate a graph signal based on the graph parameter extracted from the graph parameter determiner 811.
- the graph signal may include a weighted or unweighted line graph.
- the line graph may be generated for each row or column of the target block.
- the transformation matrix determiner 815 may determine a transformation matrix suitable for the graph signal.
- the transformation matrix may be determined based on Rate Distortion (RD) performance.
- the term transformation matrix may be used interchangeably with transform or transform kernel.
- the transform matrix may be a value determined at the encoder and the decoder, and in this case, the transform matrix determiner 815 may derive a transform matrix suitable for the graph signal from a stored location.
- the transformation matrix determiner 815 may generate one-dimensional transform kernels for line graphs, and combine two of the one-dimensional transform kernels to generate a two-dimensional separable graph-based transform kernel.
- the transform matrix determiner 815 may determine a transform kernel suitable for the graph signal among the two-dimensional separable graph-based transform kernels based on RD (Rate Distortion) performance.
- the transform performer 817 may perform a transform by using the transform matrix obtained from the transform matrix determiner 815.
- here, the functions are described separately for each functional unit, but the present invention is not limited thereto.
- the graph-based transform unit 810 may be largely composed of a graph signal generation unit and a transform unit.
- the function of the graph parameter determiner 811 may be performed by the graph signal generator.
- the functions of the transform matrix determiner 815 and the transform performer 817 may be performed by the transform unit.
- the function of the transform unit may be divided into a transform matrix determiner and a transform performer.
- FIG. 11 is a flowchart illustrating a method of determining an optimal transform kernel based on edge information of an image signal according to an embodiment to which the present invention is applied.
- the encoder may generate or design a line graph.
- the encoder may generate a 1D graph-based transform (GBT) associated with the line graph, wherein the 1D graph-based transform (GBT) may be generated using a generalized Laplacian operator.
- the Laplacian matrix L may be obtained through Equation 5 below: L = D - A + S.
- D represents a degree matrix, for example, the degree matrix may mean a diagonal matrix including information on the order of each vertex.
- A represents an adjacency matrix that represents, by weight, a connection relationship (eg, an edge) with adjacent pixels.
- S represents a diagonal matrix representing a self-loop at the nodes of G.
- an optimal transform kernel may be obtained by performing eigen decomposition on the Laplacian matrix L as shown in Equation 6 below: L = U Λ U T .
- in Equation 6, L denotes the Laplacian matrix, U denotes the eigen matrix, Λ denotes the diagonal eigenvalue matrix, and U T denotes the transpose of U.
- the eigen matrix U may provide a specialized graph-based Fourier transform for a signal that fits the graph model.
- an eigen matrix U satisfying Equation 6 may mean a GBT kernel.
- the columns of the eigen matrix U may refer to basis vectors of the GBT. If the graph does not have a self-loop, the generalized Laplacian matrix is given by Equation 3 above.
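The generalized Laplacian of Equation 5 can be sketched directly from the three matrices named above (the helper name is an assumption for illustration):

```python
import numpy as np

# Sketch of Equation 5: L = D - A + S, where S is a diagonal matrix of
# self-loop weights; with no self-loops this reduces to Equation 3.
def generalized_laplacian(W, self_loops):
    D = np.diag(W.sum(axis=1))   # degree matrix
    A = W                        # adjacency (weight) matrix
    S = np.diag(self_loops)      # self-loop weight at each vertex
    return D - A + S

# 4-point uniform line graph with a self-loop of weight 1 at the left vertex.
N = 4
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = generalized_laplacian(W, [1.0, 0.0, 0.0, 0.0])
# L is symmetric; the self-loop raises the top-left diagonal entry to 2,
# while the unmodified right end keeps diagonal entry 1.
```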
- One embodiment of the present invention provides a method of generating an optimal transform kernel according to a characteristic of a video signal or a differential signal.
- the encoder receives a video signal and subtracts a prediction signal output from the prediction unit from the video signal to generate a difference signal (or prediction error).
- the difference signal may be transmitted to a graph-based transform unit, and the graph-based transform unit may generate a graph signal according to the characteristics of the video signal or the difference signal.
- the characteristic of the video signal or the difference signal may be represented by boundary information.
- the boundary information may include at least one of an edge weight, the number of self-loops, and a self-loop weight.
- the number of self-loops may refer to the count of self-loops or the positions of the self-loops; in the present specification, the number of self-loops may be used interchangeably with self-loop position.
- the encoder may acquire edge information from the difference signal (S1110).
- the edge information may mean an edge weight for the transform unit (TU).
- a separate coding tool, beyond the block diagrams of FIGS. 1, 2, 8, and 9, may be needed to code edge information efficiently; in that case, a block corresponding to the coding tool would be added to the block diagram.
- the present invention does not deal with a method of coding edge information, and assumes that all edge weight values have been determined by some method.
- the edge information may be transmitted to a graph-based transform unit, and the graph-based transform unit may generate a graph based on the edge information (S1120).
- the edge information may be derived from neighboring information, or predetermined edge information may be retrieved. Therefore, if the decoder can perform the same derivation, the edge information need not be transmitted.
- the encoder may generate a transform kernel for each row and column based on the graph (S1130).
- the encoder can derive a predefined transform kernel using the graph.
- the transform kernel may correspond to one of preset values, and in this case, the encoder and the decoder may know the preset values and may store them as a table, for example.
- the transform kernel may be defined for each row or column of the target block.
- the encoder may determine an optimal transform kernel through rate-distortion optimization (S1140).
- a method of generating a transform kernel is described in detail in this specification; even with the same edge information, several types of transform kernels may be generated depending on the basic kernel type to be applied (for example, DCT type 2 or DST type 7). Accordingly, step S1140 may be added, and if only one type of transform kernel is used, step S1140 and the feedback process may be omitted.
- the encoder may perform transform on the difference signal by using the optimal transform kernel and may encode transform coefficients (S1150).
- a transform index corresponding to the optimal transform kernel may be set, and the transform index may be encoded and transmitted to a decoder. If only one type of transform kernel is used, additional information coding such as transform index may be unnecessary.
- FIG. 12 is a flowchart illustrating a method of determining a separable graph-based transform kernel using a transform index, according to an embodiment to which the present invention is applied.
- the decoder may parse a transform index of a target block from a video signal (S1210).
- the transform index indicates a graph-based transform to be applied to the target block.
- the graph-based transform to be applied to the target block may mean a 1-dimensional transform kernel for each row or column.
- the step S1210 may be performed by a parser in the decoder.
- the transform index may be received for at least one unit of a coding unit, a prediction unit, or a transform unit.
- Encoders or decoders to which the present invention is applied know various transform types, and each transform type may be mapped to a transform index.
- the transform index may be determined based on at least one of the prediction mode and the size of the transform unit.
- the transform index may be configured in different combinations based on at least one of the prediction mode and the size of the transform unit. That is, different graph-based transform kernels may be applied according to the prediction mode or the size of the transform unit.
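A shared lookup structure of the kind described here can be sketched as follows. Every name and table entry below is a hypothetical illustration (the specification does not define these values): the transform index selects a (row kernel, column kernel) pair from a candidate list that depends on the prediction mode and TU size.

```python
# Hypothetical shared encoder/decoder table (all entries assumed):
# (prediction_mode, tu_size) -> list of (row_kernel, col_kernel) pairs,
# indexed by the signaled transform index.
KERNEL_TABLE = {
    ("intra_vertical", 4):   [("DST7", "DCT2"), ("DCT2", "DCT2")],
    ("intra_horizontal", 4): [("DCT2", "DST7"), ("DCT2", "DCT2")],
    ("inter", 8):            [("DCT2", "DCT2"), ("DST7", "DST7")],
}

def kernels_for(prediction_mode, tu_size, transform_index):
    # The decoder resolves the parsed transform index against the
    # candidate set for this mode/size combination.
    candidates = KERNEL_TABLE[(prediction_mode, tu_size)]
    return candidates[transform_index]

pair = kernels_for("intra_vertical", 4, 0)
```

This reflects the design point above: the same transform index can map to different kernel pairs depending on the prediction mode and transform unit size.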
- the transform index may correspond to each subblock.
- the graph-based transform is derived for each subblock according to the transform index, and different transform types may be applied to at least two or more subblocks.
- the different transformation types may include at least two of DCT, DST, Asymmetric Discrete Sine Transform (ADST), and Reverse ADST (RADST).
- the graph-based transform may be a two-dimensional separable graph-based transform kernel generated based on a combination of a plurality of one-dimensional graph-based transforms.
- the decoder may decode transform coefficients for the target block (S1220).
- the decoder may acquire edge information (S1230).
- the edge information may be derived from surrounding information or may retrieve predetermined edge information.
- the edge information may be extracted from the bitstream at the decoder.
- the decoder may generate a graph based on the edge information, and obtain an inverse transform kernel for each row or column based on at least one of the graph or the transform index (S1240).
- the inverse transform kernel for each row or column may be generated based on at least one of boundary information, a prediction mode, or a size of a transform unit.
- the boundary information refers to information for expressing a characteristic of a signal at a boundary of a block.
- the boundary information may include at least one of an edge weight, the number of self-loops, and a self-loop weight.
- the edge weight may be divided into an edge weight of the left boundary and an edge weight of the right boundary
- the self-loop weight may also be divided into a self-loop weight of the left boundary and a self-loop weight of the right boundary.
- the edge weight or the self-loop weight may be expressed as three values: strong weight, no weight, and weak weight.
- for example, the strong weight may be expressed as 2, the no-weight value as 1, and the weak weight as 0.
- the present invention is not limited thereto, and the weight value may be expressed by at least one value.
- the decoder may perform inverse transformation using the inverse transform kernel for each row or column (S1250).
- transform index parsing and transform coefficient decoding may be performed in parallel.
- acquisition of the edge information may be performed sequentially with transform index parsing and transform coefficient decoding.
- the present invention generates and uses row- and/or column-specific transforms from edge information each time, enabling more efficient coding than applying only DCT or DST to all rows and/or columns.
- the present invention can further improve compression efficiency by deriving or extracting edge information and using the same to determine the transform.
- the present invention provides a method for selecting one of various methods when generating a transform kernel using edge information, and may require additional information transmission.
- FIG. 13 to 14 illustrate graph examples for explaining edge information and weight information in an image signal according to embodiments of the present invention.
- An image consists of several objects or backgrounds and an edge exists between the objects and the backgrounds.
- Edge information can be extracted by appropriately applying various algorithms known in the literature (eg, Sobel operator, Prewitt operator, or Canny edge detector, etc.).
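As one of the detectors mentioned above, a minimal Sobel-operator sketch is shown below (a real codec would use a tuned detector and thresholds; the function name and test image are assumptions for illustration):

```python
import numpy as np

# Minimal Sobel edge-magnitude sketch: convolve with the horizontal and
# vertical Sobel kernels and combine the gradients.
def sobel_magnitude(img):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (kx * patch).sum()   # horizontal gradient
            gy = (ky * patch).sum()   # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge between objects yields large responses only
# along the step; the resulting magnitudes could be thresholded into
# the 0/1 edge weights discussed below.
img = np.zeros((6, 6))
img[:, 3:] = 255.0
mag = sobel_magnitude(img)
```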
- when the edge information of an image is available and the image is divided into rectangular blocks (e.g. 4x4, 8x8, 16x16, 32x32, 16x8, 4x8, etc.) to apply a transform, an example graph of one block is as shown in the figure.
- a vertex may mean a pixel
- edge information may mean disconnection between pixels.
- the weight may be assigned a value of 0 or 1. A value of zero indicates that there is little correlation between two pixels or a large change in pixel value, which may occur when the two pixels belong to different objects.
- diagonal connections and connections to other rows or columns separated by two or more hops may be omitted. Since such connections do not exist, a one-dimensional separable transform may be applied in the row direction and the column direction, respectively.
- a transform kernel for a one-dimensional line graph represented by a rectangular box A may be applied to the second row in FIG. 15.
- FIG. 15 is a diagram for explaining that different transform kernels are applied to each row or column in a 1D line graph as an embodiment to which the present invention is applied.
- in the 1D line graph A of the second row, the edge weight between the fifth pixel and the sixth pixel is zero. Accordingly, the one-dimensional line graph A of the second row may be divided into two regions (or segments) at that edge, so that the corresponding GBT kernel may be composed of 5x5 DCT type-2 coefficients and 3x3 DCT type-2 coefficients.
- the 5x5 DCT type-2 matrix and the 3x3 DCT type-2 matrix are represented by T 5x5 and T 3x3 , respectively, and the 5x3 zero matrix and the 3x5 zero matrix are represented by O 5x3 and O 3x5 , respectively.
- the corresponding 8x8 GBT kernel may be determined as in Equations 7 and 8 below, by combining T 5x5 , T 3x3 , O 5x3 , and O 3x5 in block-diagonal form; each row vector of the resulting kernel in Equations 7 and 8 is a 1x8 row vector.
- the coefficients for the k th row vector in the NxN DCT type-2 kernel may be determined as in Equation 9 below: t k [n] = sqrt(2/N) · c k · cos(k·π·(2n+1)/(2N)), where c 0 = 1/sqrt(2) and c k = 1 otherwise.
- the k th row vector may be regarded as a basis vector for extracting a signal component having an angular frequency of k ( ⁇ / N).
- an image codec utilizes the property that most energy is concentrated in the low-frequency region, so that most transform coefficient values in the high-frequency region become zero through transform and quantization.
- data compression is then performed through a technique such as run-length coding. Accordingly, the present invention can arrange the row vectors in ascending order based on the angular frequency value k(π/N), so that the low-frequency components come first.
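The construction of Equations 7-9 and the frequency-ascending ordering can be sketched for the 5+3 split of this example. The helper names are assumptions, and one tie-breaking rule (left segment first) is chosen here for concreteness:

```python
import numpy as np

# Equation 9: NxN DCT type-2 kernel, t_k[n] = sqrt(2/N)*c_k*cos(k*pi*(2n+1)/(2N)).
def dct2_kernel(N):
    n = np.arange(N)
    T = np.sqrt(2.0 / N) * np.array(
        [np.cos(k * np.pi * (2 * n + 1) / (2 * N)) for k in range(N)])
    T[0] /= np.sqrt(2.0)
    return T

# Build the block-diagonal kernel of Equations 7-8 from the per-segment
# DCT-2 blocks, then sort the 1x8 row vectors by angular frequency k*pi/N
# (ascending), breaking ties in favor of the left segment.
def segmented_gbt_kernel(seg_lens):
    total = sum(seg_lens)
    rows = []                            # (frequency, segment index, row)
    offset = 0
    for s, N in enumerate(seg_lens):
        T = dct2_kernel(N)
        for k in range(N):
            row = np.zeros(total)        # the zero matrices O of Eqs. 7-8
            row[offset:offset + N] = T[k]
            rows.append((k * np.pi / N, s, row))
        offset += N
    rows.sort(key=lambda r: (r[0], r[1]))
    return np.array([r[2] for r in rows])

G = segmented_gbt_kernel([5, 3])         # the 5+3 split of line graph A
```

Because the per-segment blocks are orthonormal and have disjoint supports, the resulting 8x8 kernel is orthonormal; its first two rows are the DC vectors of the left and right segments.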
- the present invention may be aligned by applying the following criteria.
- priority may be given to the segment disposed on the left side of the graph. For example, when the angular frequencies are equal, a row vector for (0/5)π may be placed before a row vector for (0/3)π.
- a priority can be given to a row vector having a larger N value.
- a row vector for (0/5) ⁇ may be placed before a row vector for (0/3) ⁇ .
- a priority may be given to a row vector having a smaller N value.
- a row vector for (0/3) ⁇ may be placed before a row vector for (0/5) ⁇ .
- row vectors belonging to the same segment may be arranged as far apart as possible. For example, suppose one of (1/3)π and (3/9)π must be selected next, and the row vector for (2/9)π was placed immediately before; then (1/3)π is selected first. However, if the row vector placed immediately before belongs to a different segment from the row vectors to be placed, the criteria of the first or second example may be applied.
- row vectors belonging to the same segment may be arranged as close as possible.
- when a basic kernel other than DCT type-2 is applied, an angular frequency corresponding to that transform equation should be used.
- the coefficients of the k th row vector may be determined as in Equation 10 below (e.g., for DST type-7, t k [n] = sqrt(4/(2N+1)) · sin(π·(2k+1)·(n+1)/(2N+1)), whose angular frequency is (2k+1)π/(2N+1)).
- when the angular frequencies are equal, priority is given to the row vector of the left segment in the one-dimensional line graph.
- FIG. 16 is a diagram for describing a method of determining an optimal transform kernel based on at least one of the number of self-loops, a self-loop weight, and an edge weight as an embodiment to which the present invention is applied.
- the DST type-7 transformation may be obtained from a graph in which a self loop is added to the left vertex of the line graph.
- when a transform kernel is generated for a graph in which a self-loop is added to the left vertex of a 4-point line graph, a 4x4 DST type-7 transform can be obtained.
- as in Equations 7 and 8, when the row vectors of the matrix formed by combining a 5x5 DST type-7 transform kernel, zero matrices, and a 3x3 DCT type-2 transform kernel are sorted by the magnitude of the angular frequency, the entire 8x8 GBT kernel can be obtained.
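The DST type-7 statement above can be checked numerically: the GBT of a 4-point uniform line graph with a unit self-loop on the left vertex matches the 4x4 DST type-7 basis up to sign (the self-loop weight 1 is the standard choice in this construction; this is a verification sketch, not part of the specification):

```python
import numpy as np

# 4-point uniform line graph with a unit self-loop on the left vertex.
N = 4
W = np.zeros((N, N))
for i in range(N - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
S = np.zeros(N)
S[0] = 1.0                                      # self-loop on the left vertex
L = np.diag(W.sum(axis=1)) - W + np.diag(S)     # Equation 5: L = D - A + S
lam, U = np.linalg.eigh(L)                      # GBT basis in the columns of U

# DST type-7 rows: t_k[n] = sqrt(4/(2N+1)) * sin(pi*(2k+1)*(n+1)/(2N+1)).
n = np.arange(N)
dst7 = np.sqrt(4.0 / (2 * N + 1)) * np.array(
    [np.sin(np.pi * (2 * k + 1) * (n + 1) / (2 * N + 1)) for k in range(N)])

# Each GBT basis vector matches the DST-7 row of the same index (up to sign).
for k in range(N):
    u, d = U[:, k], dst7[k]
    assert min(np.linalg.norm(u - d), np.linalg.norm(u + d)) < 1e-8
```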
- the coding mode may include at least one of an intra angular prediction mode and a transform unit size.
- a final transform kernel for one row or column may be obtained.
- the edge information of the image may be extracted by analyzing the original image, or may be extracted by analyzing the difference image.
- the unit for transmitting the edge information may be set in various ways.
- edge information may be transmitted in units of frames or slices, and may be transmitted in units of blocks (eg, CU, PU, TU, etc.).
- by signaling flag information at each transmission level indicating whether the GBT is applied, and determining whether to transmit edge information on that basis, the amount of edge information transmitted can be reduced.
- edge information may be derived from a reference picture. Since the reference block pointed to by the motion vector is very similar to the current block, the edge information can be derived by applying an edge detection algorithm to the reference block, and the corresponding graph can then be constructed to obtain the GBT for the current block.
- the reference picture may be a reconstructed picture or may be difference data with respect to the reference picture.
- the difference data may be separately stored in a reconstruction process of the corresponding reference picture.
- a graph indicating connection or disconnection between pixels is generated from the location information or boundary information of each object, and then the transform kernel for each block can be obtained through the above-described GBT generation process.
- the graph may be constructed by disconnecting a corresponding connection of the graph between pixels belonging to different objects.
- a boundary of the CU or PU may approximately reflect edge characteristics of the image. Therefore, when a boundary of a CU or PU is included in the TU, the above-described GBT generation method may be applied after constructing a graph reflecting the boundary. For example, when a boundary of a CU or a PU is included in the TU, the connection to the part meeting the boundary may be disconnected.
- flag information indicating whether to apply a GBT generated in the above-described manner may be signaled at various levels (e.g., frame, slice, CU, PU, TU, etc.), and at least one level may be selected for the optimal transform.
- the encoder may apply both general transforms (e.g. DCT type-2, DST type-7, etc.) and the graph-based transform (GBT), and the transform with the least cost can be specified via a flag or index.
- FIG. 17 is a flowchart illustrating a process of rearranging transform vectors constituting a transform kernel based on edge information according to an embodiment to which the present invention is applied.
- the encoder may check edge information about the target unit (S1710).
- the encoder may generate a line graph based on the edge information (S1720).
- the encoder may arrange the transform vectors for each segment of the line graph (S1730).
- the segment-specific transform vectors may mean transform vectors corresponding to two or more line graph segments divided based on the edge information. At least two transform types may be applied to the transform vectors in the two or more line graph segments.
- the encoder may obtain a transform kernel by rearranging transform vectors for each segment according to a predetermined condition (S1740).
- various embodiments may be applied to the predetermined condition.
- when the two or more line graph segments are referred to as a first segment on the left side and a second segment on the right side with respect to the edge, the transform vectors in the first segment and the second segment may be sorted in ascending order of angular frequency value.
- priority may be given to the line graph segment located on the left side.
- the spacing between transform vectors belonging to the same segment may also be taken into account in the ordering.
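The ordering conditions can be made concrete with a worked example (an illustration, using a 5-point left segment and a 3-point right segment): list each transform vector's angular frequency k·π/N, sort ascending, and break ties in favor of the left segment.

```python
import numpy as np

# Angular frequencies k*pi/N for each segment's transform vectors.
left = [("L", k, k * np.pi / 5) for k in range(5)]    # first (left) segment
right = [("R", k, k * np.pi / 3) for k in range(3)]   # second (right) segment

# Ascending frequency; on a tie, the left segment comes first
# (False sorts before True, so "L" wins ties).
order = sorted(left + right, key=lambda t: (t[2], t[0] != "L"))
labels = [(seg, k) for seg, k, _ in order]
# Resulting order of (segment, k):
# (L,0) (R,0) (L,1) (R,1) (L,2) (L,3) (R,2) (L,4)
```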
- the encoder may perform transformation on the target unit using the transformation kernel (S1750).
- the encoder may encode and transmit a transform index corresponding to at least one of a transform for a target unit, a transform for a line graph, or a transform for a segment.
- FIG. 18 is a flowchart illustrating a process of decoding a video signal through a rearrangement process of transform vectors, according to an embodiment to which the present invention is applied.
- the decoder may parse a transform index from a video signal (S1810).
- the transform index may correspond to at least one of a transform for a target unit, a transform for a line graph, or a transform for a segment.
- the decoder may sort the segment-specific transform vectors of the line graph based on the transform type corresponding to the transform index (S1820).
- the decoder may obtain a transform kernel by rearranging transform vectors for each segment of the line graph according to a predetermined condition (S1830).
- the transform vectors in the first segment and the second segment may be sorted in ascending order of angular frequency value.
- the decoder may perform inverse transform on the target unit using the transform kernel (S1840).
- the transform kernel may be composed of a combination of transform vectors in at least two line graph segments divided based on the edge information, and at least two transform types may be applied to the transform vectors in the at least two line graph segments.
- the line graph may be modeled on a prediction residual signal generated through intra prediction or inter prediction, and an optimal transform kernel may be adaptively selected and used according to characteristics of the prediction residual signal.
- the transform kernel generated through each line graph may be selectively applied using various combinations for the horizontal and vertical directions, which may be signaled through additional information.
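The separable horizontal/vertical application can be sketched as below; the per-direction kernels would in practice be derived from the row and column line graphs, and the combination actually used would be signaled, so the toy orthonormal matrices here are placeholders.

```python
import numpy as np

def separable_transform(block, K_h, K_v):
    """Apply the vertical kernel to columns and the horizontal kernel
    to rows: coefficients = K_v^T @ block @ K_h."""
    return K_v.T @ block @ K_h

def separable_inverse(coeffs, K_h, K_v):
    """Inverse, assuming both kernels have orthonormal columns."""
    return K_v @ coeffs @ K_h.T

# Toy orthonormal kernel; a real codec would pick each direction's kernel
# from its own line graph and signal the chosen combination.
K = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
block = np.array([[4.0, 2.0], [1.0, 3.0]])
coeffs = separable_transform(block, K, K)
block_rec = separable_inverse(coeffs, K, K)
```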
- the embodiments described herein may be implemented and performed on a processor, microprocessor, controller, or chip.
- the functional units illustrated in FIGS. 1, 2, 8, 9, and 10 may be implemented by a computer, a processor, a microprocessor, a controller, or a chip.
- the decoder and encoder to which the present invention is applied may be used to process video signals and data signals in a multimedia broadcasting transmitting and receiving device, a mobile communication terminal, a home cinema video device, a digital cinema video device, a surveillance camera, a video chat device, a real-time communication device such as a video communication device, a mobile streaming device, a storage medium, a camcorder, a video on demand (VoD) service providing device, an internet streaming service providing device, a three-dimensional (3D) video device, a video telephony device, a medical video device, and the like.
- the processing method to which the present invention is applied can be produced in the form of a program executed by a computer, and can be stored in a computer-readable recording medium.
- Multimedia data having a data structure according to the present invention can also be stored in a computer-readable recording medium.
- the computer readable recording medium includes all kinds of storage devices for storing computer readable data.
- the computer-readable recording medium may include, for example, a Blu-ray disc (BD), a universal serial bus (USB), a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device.
- the computer-readable recording medium also includes media embodied in the form of a carrier wave (e.g., transmission over the Internet).
- the bit stream generated by the encoding method may be stored in a computer-readable recording medium or transmitted through a wired or wireless communication network.
Claims (15)
- A method of encoding a video signal using a graph-based transform, the method comprising: checking edge information for a target unit; generating a line graph based on the edge information; generating a transform kernel for each row and column of the target unit based on the line graph; and performing a transform on the target unit using the transform kernel, wherein the transform kernel is composed of a combination of transform vectors in two or more line graph segments divided based on the edge information, and at least two transform types are applied to the transform vectors in the two or more line graph segments.
- The method of claim 1, further comprising rearranging the transform vectors constituting the transform kernel according to a predetermined condition.
- The method of claim 2, wherein, when the two or more line graph segments are referred to, based on the edge information, as a first segment on the left and a second segment on the right, the transform vectors in the first segment and the second segment are sorted in ascending order of angular frequency value.
- The method of claim 3, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, priority is given to the line graph segment located on the left.
- The method of claim 3, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, priority is given to the wider line graph segment.
- The method of claim 3, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, the transform vectors are arranged so that the intervals between transform vectors in the same segment become wider.
- The method of claim 1, further comprising encoding a transform index corresponding to the transform kernel.
- A method of decoding a video signal using a graph-based transform, the method comprising: parsing a transform index from the video signal; generating a line graph based on edge information for a target unit; sorting segment-specific transform vectors of the line graph based on a transform type corresponding to the transform index; obtaining a transform kernel by rearranging the segment-specific transform vectors of the line graph according to a predetermined condition; and performing an inverse transform on the target unit based on the transform kernel.
- The method of claim 8, wherein the transform kernel is composed of a combination of transform vectors in two or more line graph segments divided based on the edge information, and at least two transform types are applied to the transform vectors in the two or more line graph segments.
- The method of claim 8, wherein, when the two or more line graph segments are referred to, based on the edge information, as a first segment on the left and a second segment on the right, the transform vectors in the first segment and the second segment are sorted in ascending order of angular frequency value.
- The method of claim 10, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, priority is given to the line graph segment located on the left.
- The method of claim 10, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, priority is given to the wider line graph segment.
- The method of claim 10, wherein, when the angular frequency values of the transform vectors in the first segment and the second segment are equal, the transform vectors are arranged so that the intervals between transform vectors in the same segment become wider.
- An apparatus for encoding a video signal using a graph-based transform, the apparatus comprising: a graph signal generation unit configured to check edge information for a target unit and generate a line graph based on the edge information; a transform matrix determination unit configured to generate a transform kernel for each row and column of the target unit based on the line graph; and a transform unit configured to perform a transform on the target unit using the transform kernel, wherein the transform kernel is composed of a combination of transform vectors in two or more line graph segments divided based on the edge information, and at least two transform types are applied to the transform vectors in the two or more line graph segments.
- An apparatus for decoding a video signal using a graph-based transform, the apparatus comprising: a parsing unit configured to parse a transform index from the video signal; and an inverse transform unit configured to generate a line graph based on edge information for a target unit, sort segment-specific transform vectors of the line graph based on a transform type corresponding to the transform index, obtain a transform kernel by rearranging the segment-specific transform vectors of the line graph according to a predetermined condition, and perform an inverse transform on the target unit based on the transform kernel.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020187002053A KR20180021101A (ko) | 2015-07-15 | 2016-07-15 | 분리 가능한 그래프 기반 변환을 이용하여 비디오 신호를 처리하는 방법 및 장치 |
US15/744,665 US10499061B2 (en) | 2015-07-15 | 2016-07-15 | Method and device for processing video signal by using separable graph-based transform |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562192578P | 2015-07-15 | 2015-07-15 | |
US62/192,578 | 2015-07-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017010850A1 true WO2017010850A1 (ko) | 2017-01-19 |
Family
ID=57758062
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/007766 WO2017010850A1 (ko) | 2015-07-15 | 2016-07-15 | 분리 가능한 그래프 기반 변환을 이용하여 비디오 신호를 처리하는 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10499061B2 (ko) |
KR (1) | KR20180021101A (ko) |
WO (1) | WO2017010850A1 (ko) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180220158A1 (en) * | 2015-07-21 | 2018-08-02 | Lg Electronics Inc. | Method and device for processing video signal using graph-based transform |
US10841581B2 (en) | 2016-07-14 | 2020-11-17 | Arris Enterprises Llc | Region specific encoding and SAO-sensitive-slice-width-adaptation for improved-quality HEVC encoding |
IT201600122898A1 (it) * | 2016-12-02 | 2018-06-02 | Ecole Polytechnique Fed Lausanne Epfl | Metodi e apparati per codificare e decodificare immagini o flussi video digitali |
US11134272B2 (en) * | 2017-06-29 | 2021-09-28 | Qualcomm Incorporated | Memory reduction for non-separable transforms |
WO2019231206A1 (ko) | 2018-05-30 | 2019-12-05 | 디지털인사이트주식회사 | 영상 부호화/복호화 방법 및 장치 |
US11432014B2 (en) * | 2019-10-25 | 2022-08-30 | Qualcomm Incorporated | Parametric graph-based separable transforms for video coding |
US11412258B2 (en) * | 2020-02-12 | 2022-08-09 | Tencent America LLC | Line graph transforms (LGTs) using 8-bit and 10-bit cores |
US11405647B2 (en) * | 2020-02-18 | 2022-08-02 | Tencent America LLC | Primary transforms using 8-bit and 10-bit cores |
US11272212B2 (en) * | 2020-05-12 | 2022-03-08 | Tencent America LLC | Tuned line graph transforms |
US11575937B2 (en) * | 2020-07-24 | 2023-02-07 | Tencent America LLC | Methods for efficient application of LGT |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110093532A (ko) * | 2010-02-12 | 2011-08-18 | 삼성전자주식회사 | 그래프 기반 화소 예측을 이용한 영상 부호화/복호화 시스템 및 방법 그리고 깊이 맵 부호화 시스템 및 방법 |
JP2014007477A (ja) * | 2012-06-21 | 2014-01-16 | Research Organization Of Information & Systems | 濃淡画像符号化装置及び復号装置 |
WO2015009039A1 (ko) * | 2013-07-15 | 2015-01-22 | 삼성전자 주식회사 | 비디오 코딩에서 사선 모드의 인트라 예측 향상을 위한 방법 |
KR20150046353A (ko) * | 2010-07-09 | 2015-04-29 | 퀄컴 인코포레이티드 | 인트라 예측 모드들의 서브세트 및 대응하는 방향 변환들을 이용한 비디오 코딩 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7216140B1 (en) * | 2000-09-30 | 2007-05-08 | Intel Corporation | Efficient implementation of n-point DCT, n-point IDCT, SA-DCT and SA-IDCT algorithms |
KR100586882B1 (ko) * | 2004-04-13 | 2006-06-08 | 삼성전자주식회사 | 모션 스케일러빌리티를 지원하는 코딩 방법 및 장치 |
US7565021B2 (en) * | 2005-03-01 | 2009-07-21 | Microsoft Corporation | Efficient implementation of block-based transform on graphics processing unit |
JP2007096431A (ja) * | 2005-09-27 | 2007-04-12 | Matsushita Electric Ind Co Ltd | 任意の変換比率を有するデジタル・ビデオ・フォーマット下方変換装置及び方法 |
KR20110135787A (ko) * | 2010-06-11 | 2011-12-19 | 삼성전자주식회사 | 엣지-적응 변환을 이용한 영상 부호화/복호화 시스템 및 방법 |
KR101641863B1 (ko) * | 2011-10-19 | 2016-07-22 | 주식회사 케이티 | 영상 부호화/복호화 방법 및 그 장치 |
2016
- 2016-07-15 WO PCT/KR2016/007766 patent/WO2017010850A1/ko active Application Filing
- 2016-07-15 US US15/744,665 patent/US10499061B2/en active Active
- 2016-07-15 KR KR1020187002053A patent/KR20180021101A/ko unknown
Non-Patent Citations (1)
Title |
---|
PAVEZ, EDUARDO ET AL.: "GTT: Graph Template Transforms with Applications to Image Coding", PICTURE CODING SYMPOSIUM (PCS, 3 June 2015 (2015-06-03), Cairns Australia, XP055294452 * |
Also Published As
Publication number | Publication date |
---|---|
KR20180021101A (ko) | 2018-02-28 |
US20180213233A1 (en) | 2018-07-26 |
US10499061B2 (en) | 2019-12-03 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16824765; Country of ref document: EP; Kind code of ref document: A1 |
WWE | Wipo information: entry into national phase | Ref document number: 15744665; Country of ref document: US |
NENP | Non-entry into the national phase | Ref country code: DE |
ENP | Entry into the national phase | Ref document number: 20187002053; Country of ref document: KR; Kind code of ref document: A |
122 | Ep: pct application non-entry in european phase | Ref document number: 16824765; Country of ref document: EP; Kind code of ref document: A1 |