CN111543054B - Image encoding method, image decoding method and device thereof - Google Patents
- Publication number
- CN111543054B (application CN201880084999.9A)
- Authority
- CN
- China
- Prior art keywords
- current
- coding unit
- block
- sample
- current block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the region being a block, e.g. a macroblock
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
Abstract
An image decoding method includes: obtaining information on a transform coefficient of a current block from a bitstream; determining a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block; obtaining a prediction block of the current block including a prediction sample of the current sample generated using the determined filter; obtaining a residual block of the current block based on the obtained information on the transform coefficient of the current block; and restoring the current block based on the prediction block of the current block and the residual block of the current block.
Description
Technical Field
A method and apparatus according to an embodiment can encode or decode an image by using coding units of various shapes included in the image. The method and apparatus according to an embodiment include an intra prediction method and an apparatus therefor.
Background
As hardware capable of reproducing and storing high-resolution or high-quality image content is developed and deployed, the demand for a codec that efficiently encodes or decodes such content is increasing. Encoded image content may be reproduced through decoding. Recently, methods for efficiently compressing image content, such as high-resolution or high-quality image content, have been implemented, for example, by processing the image to be encoded via an arbitrary method.
Various data units may be used to compress an image, and containment relationships may exist among these data units. Data units may be divided by various methods to determine their sizes for image compression, and encoding or decoding may be performed on the image once data units optimized for the features of the image are determined.
Disclosure of Invention
Technical scheme
The image decoding method according to the embodiment includes: obtaining information on a transform coefficient of a current block from a bitstream; determining a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block; obtaining a prediction block for a current block comprising prediction samples for a current sample generated using the determined filter; obtaining a residual block of the current block based on the obtained information on the transform coefficient of the current block; and restoring the current block based on the prediction block of the current block and the residual block of the current block.
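As a rough illustration of the per-sample filter selection in the decoding steps above, the following Python sketch picks a filter from a sample's distance to the reference line and the block size. Every helper name and threshold here is a hypothetical placeholder for illustration, not something specified by the patent.

```python
# Illustrative sketch only: "choose_filter" and its threshold are assumptions,
# not rules defined by the disclosure.

def choose_filter(distance, block_size):
    """Pick a smoothing strength from the sample-to-reference distance
    and the block size (larger distance -> stronger smoothing)."""
    return "strong" if distance > block_size // 2 else "weak"

def filter_map(width, height):
    """Build, for each sample position, the filter label that would be used
    to generate its prediction sample (vertical-mode distance assumed)."""
    result = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            distance = y + 1  # distance to the upper reference line
            result[y][x] = choose_filter(distance, height)
    return result

block = filter_map(4, 4)
```

In this sketch, samples near the reference line get the weak filter and samples far from it get the strong one, which mirrors the distance dependence the method describes.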
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: the type of the filter for the current sample and the coefficient of the filter are determined based on at least one of a distance between the current sample and a reference sample in the current block and a size of the current block.
The type of the filter may be one of a low pass filter, a Gaussian filter, a bilateral filter, a uniform filter, a bilinear interpolation filter, a cubic filter, and a discrete cosine transform (DCT) filter.
The number of taps of the filter for the current sample may be a predetermined value, or may be determined based on at least one of a distance between the current sample and the reference sample and a size of the current block. The predetermined value may be an integer equal to or greater than 4.
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: determining a plurality of filters for the current block based on at least one of a size of the current block and a distance between a sample point in the current block and a reference sample point; and determining a filter corresponding to the current sample point among a plurality of filters for the current block.
The determining of the plurality of filters for the current block based on at least one of the size of the current block and the distance between the current sample point and the reference sample point in the current block may include: determining the plurality of filters for the current block based on a ratio between the height or width of the current block and the distance between a sample point in the current block and a reference sample point.
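The ratio-based selection above could, for example, be realized as follows. The thresholds 0.25 and 0.75 and the three-filter setup are illustrative assumptions, not values taken from the disclosure.

```python
def filter_index_from_ratio(distance, block_height, num_filters=3):
    """Map the ratio distance / block_height onto one of num_filters
    filter indices; the thresholds below are illustrative assumptions."""
    ratio = distance / block_height
    if ratio < 0.25:
        return 0                  # weakest smoothing near the reference line
    if ratio < 0.75:
        return 1                  # intermediate smoothing
    return num_filters - 1        # strongest smoothing far from the reference
```

For an 8-sample-tall block, a sample one row below the reference line (ratio 0.125) would use filter 0, while a sample seven rows down (ratio 0.875) would use the last filter.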
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: determining a first reference sample point corresponding to a sample point in a current block based on an intra prediction mode of the current block; determining a plurality of filters for the current block based on a distance between the sample point and a first reference sample point; and determining a filter corresponding to the current sample point among a plurality of filters for the current block.
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: determining the number of filters for the current block based on the size of the current block and determining filters for the current block corresponding to the number of filters; and determining a filter for the current sample among the filters for the current block based on a distance between the current sample and the reference sample and a size of the current block.
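A minimal sketch of the two steps above: first choosing how many filters a block gets from its size, then indexing into them by distance. The particular size thresholds and filter counts are assumptions for illustration only.

```python
def num_filters_for_block(width, height):
    """Illustrative assumption: larger blocks get more candidate filters."""
    size = max(width, height)
    return 2 if size <= 8 else 4

def pick_filter_index(distance, width, height):
    """Split the block depth evenly among the available filters and
    return the index of the filter for the given distance."""
    n = num_filters_for_block(width, height)
    depth = max(width, height)
    return min(n - 1, distance * n // depth)
```

An 8x8 block would thus carry two candidate filters, while a 16x16 block would carry four, with samples deeper in the block mapped to higher (stronger) filter indices.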
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: the filter for the current sample is further determined based on at least one of an intra prediction mode of the current block and a shape of the current block.
The step of determining the filter for the current sample point further based on at least one of an intra prediction mode of the current block and a shape of the current block may include:
when the intra prediction mode of the current block is a predetermined intra prediction mode, a filter for the current sample is determined based on at least one of a distance between the current sample and a reference sample in the current block and a size of the current block.
The step of determining the filter for the current sample point further based on at least one of an intra prediction mode of the current block and a shape of the current block may include:
when the width of the current block is less than or equal to a predetermined first value and the height of the current block is less than or equal to a predetermined second value, a filter for the current sample is determined based on at least one of a distance between the current sample and a reference sample in the current block and a size of the current block.
The determining of the filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and a size of the current block may include: when the distance between the current sample and the reference sample is smaller than a predetermined value, a first filter for the current sample is determined, and when the distance between the current sample and the reference sample is greater than the predetermined value, a second filter for the current sample is determined, and the smoothing strength of the first filter may be smaller than that of the second filter.
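The two-filter rule above can be sketched as follows. The disclosure only requires that the second filter smooth more strongly than the first, so the particular kernels and threshold here are assumptions.

```python
# Assumed example kernels: the only property the text requires is that the
# second filter smooths more strongly than the first.
WEAK_FILTER = [1 / 8, 6 / 8, 1 / 8]    # mild low-pass, used near the reference
STRONG_FILTER = [1 / 4, 2 / 4, 1 / 4]  # stronger low-pass, used farther away

def filter_for_distance(distance, threshold):
    """First filter below the threshold distance, second filter above it."""
    return WEAK_FILTER if distance < threshold else STRONG_FILTER
```

Both kernels sum to 1, so neither changes the DC level of the prediction; they differ only in how much high-frequency content they suppress.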
An image encoding method according to an embodiment includes: determining a filter for a current sample point based on at least one of a distance between the current sample point and a reference sample point in the current block and a size of the current block; generating a prediction block of the current block including a prediction sample point of the current sample point generated using the determined filter; and encoding information regarding a transform coefficient of the current block based on the prediction block of the current block.
An image decoding apparatus according to an embodiment includes a processor configured to: obtain information on a transform coefficient of a current block from a bitstream; determine a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block; obtain a prediction block of the current block including a prediction sample of the current sample generated using the determined filter; obtain a residual block of the current block based on the obtained information on the transform coefficient of the current block; and restore the current block based on the prediction block of the current block and the residual block of the current block.
According to an embodiment of the present disclosure, a computer program regarding an image decoding method may be recorded on a computer-readable recording medium.
Drawings
Fig. 1a illustrates a block diagram of an image decoding apparatus according to various embodiments.
Fig. 1b illustrates a flow diagram of an image decoding method according to various embodiments.
Fig. 1c illustrates a block diagram of an image decoder, in accordance with various embodiments.
Fig. 1d illustrates a block diagram of an image decoding apparatus according to various embodiments.
Fig. 2a illustrates a block diagram of an image encoding device according to various embodiments.
Fig. 2b illustrates a flow diagram of an image encoding method according to various embodiments.
Fig. 2c illustrates a block diagram of an image decoder, according to various embodiments.
Fig. 2d illustrates a block diagram of an image encoding device according to various embodiments.
Fig. 3 illustrates a process of determining at least one coding unit as an image decoding apparatus divides a current coding unit according to an embodiment.
Fig. 4 illustrates a process in which an image decoding apparatus divides coding units having a non-square shape to determine at least one coding unit according to an embodiment.
Fig. 5 illustrates a process in which an image decoding apparatus according to an embodiment divides a coding unit based on at least one of block shape information and divided shape mode information.
Fig. 6 illustrates a method of determining a predetermined coding unit from among an odd number of coding units by an image decoding apparatus according to an embodiment.
Fig. 7 illustrates an order in which a plurality of coding units are processed when an image decoding apparatus divides a current coding unit to determine the plurality of coding units according to an embodiment.
Fig. 8 illustrates a process of determining that a current coding unit is divided into odd-numbered coding units when the coding units cannot be processed in a predetermined order by an image decoding apparatus according to an embodiment.
Fig. 9 illustrates a process of determining at least one coding unit as the image decoding apparatus divides the first coding unit according to an embodiment.
Fig. 10 illustrates that the shape in which the second coding unit can be divided is limited when the second coding unit having the non-square shape determined by dividing the first coding unit satisfies the predetermined condition according to the embodiment.
Fig. 11 illustrates a process in which an image decoding apparatus divides a coding unit having a square shape when the division shape mode information cannot indicate that the coding unit is divided into four square shapes, according to an embodiment.
Fig. 12 illustrates that a processing order of a plurality of coding units according to an embodiment may vary according to a process of dividing coding units.
Fig. 13 illustrates a process in which the depth of a coding unit is determined as the shape and size of the coding unit vary when the coding unit is recursively divided to determine a plurality of coding units, according to an embodiment.
Fig. 14 illustrates a depth that can be determined according to the shape and size of a coding unit and an index (hereinafter, PID) for distinguishing the coding unit according to an embodiment.
Fig. 15 illustrates determining a plurality of coding units from a plurality of predetermined data units included in a picture according to an embodiment.
Fig. 16 illustrates a processing block used as a criterion for determining a determination order of reference coding units included in a picture according to an embodiment.
Fig. 17 is a diagram for explaining an intra prediction mode according to an embodiment.
Fig. 18 is a flowchart for explaining a method of generating a predicted sample of a current sample by using different filters based on at least one of a distance between the current sample and a reference sample and a size of a current block, according to an embodiment of the present disclosure.
Fig. 19 is a diagram for explaining that an encoding (decoding) order between encoding units is determined as forward or reverse based on an encoding order flag, and a right reference line or an upper reference line may be used for intra prediction according to the determined encoding (decoding) order according to an embodiment of the present disclosure.
Best mode for carrying out the invention
A video decoding method according to various embodiments includes: obtaining information on a transform coefficient of a current block from a bitstream; determining a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block; obtaining a prediction block of a current block including prediction samples of current samples generated using the determined filter; obtaining a residual block of the current block based on the obtained information on the transform coefficient of the current block; and restoring the current block based on the prediction block of the current block and the residual block of the current block.
A video encoding method according to various embodiments includes: determining a filter for a current sample point in the current block based on at least one of a distance between the current sample point and a reference sample point in the current block and a size of the current block; generating a prediction block of the current block including prediction samples of the current sample point generated using the determined filter; and encoding information regarding a transform coefficient of the current block based on the prediction block of the current block.
A video decoding apparatus according to various embodiments includes a processor configured to: obtain information on a transform coefficient of a current block from a bitstream; determine a filter for a current sample point in the current block based on at least one of a distance between the current sample point and a reference sample point in the current block and a size of the current block; obtain a prediction block of the current block including prediction samples of the current sample point generated using the determined filter; obtain a residual block of the current block based on the obtained information on the transform coefficient of the current block; and restore the current block based on the prediction block of the current block and the residual block of the current block.
According to another aspect of the present disclosure, a computer-readable recording medium has recorded thereon a program for executing the method according to various embodiments.
Detailed Description
Advantages and features of the disclosed embodiments, and methods of achieving them, will become more apparent with reference to the following detailed description and the accompanying drawings. However, the present disclosure is not limited to the embodiments disclosed below and may be embodied in various forms; the embodiments are provided only to make the present disclosure complete and to fully convey the scope of the invention to those skilled in the art to which the present disclosure pertains.
Terms used in the specification will be described briefly, and the disclosed embodiments will be described in detail.
The terms used in this specification are general terms selected, in consideration of their functions in the present disclosure, from among terms currently in wide use, but their meanings may change according to the intentions of those skilled in the art, the emergence of new technology, and the like. In addition, some terms may be arbitrarily selected by the applicant, in which case the meanings of those terms will be described in detail in the pertinent description. Therefore, the terms used in the present disclosure should be defined based on their meanings and the description throughout the specification, not simply on their names.
Unless an expression used in the singular has a clearly different meaning in context, it includes the plural.
In the specification, when a certain part "includes" a certain constituent element, the part may further include other constituent elements rather than excluding them, unless there is an explicit description to the contrary.
In addition, the term "unit" used in the specification means a software or hardware component, and a "unit" performs a certain function. However, the term "unit" is not limited to software or hardware. A "unit" may be configured to reside in an addressable storage medium or may be configured to run on one or more processors. Thus, as an example, a "unit" includes components such as software components, object-oriented software components, class components, and task components, and may include processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, or variables. The functions provided within the components and "units" may be combined into a smaller number of components and "units" or may be further divided into additional components and "units".
According to an embodiment of the present disclosure, a "unit" may be implemented with a processor and a memory. The term "processor" should be interpreted broadly to include general-purpose processors, central processing units, microprocessors, digital signal processors, controllers, microcontrollers, state machines, and the like. In some environments, "processor" may refer to an application-specific integrated circuit, a programmable logic device, a field-programmable gate array, or the like. The term "processor" may also refer to a combination of processing devices, such as a combination of a digital signal processor and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a digital signal processor core, or any other combination of such devices.
The term "memory" should be broadly interpreted to include any electronic component capable of storing electronic information. The term memory may refer to various types of processor-readable media such as random access memory, read only memory, non-volatile random access memory, programmable read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, flash memory, magnetic or optical data storage, registers, and the like. A memory is said to be in electronic communication with a processor if the processor can read information from, and/or write information to, the memory. A memory integrated in the processor is in electronic communication with the processor.
Hereinafter, "image" may mean a still image, such as a still frame of a video, or a moving image, such as the video itself.
Hereinafter, "sample point" refers to data assigned to a sampling position of an image, i.e., data to be processed. For example, pixel values of an image in the spatial domain or transform coefficients in the transform domain may be sample points. A unit including at least one sample point may be defined as a block.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the present disclosure pertains can easily carry out the embodiments. In addition, in order to clearly describe the present disclosure in the drawings, portions irrelevant to the description are omitted.
Hereinafter, an image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method will be described in detail with reference to figs. 1 to 19. A method of determining a data unit of an image according to an embodiment is described with reference to figs. 3 to 16, and an image encoding or decoding method and apparatus therefor, which determine a filter based on at least one of a distance to a reference sample point and a size of a current block according to an embodiment and adaptively perform intra prediction using the determined filter, are described with reference to figs. 1, 2, and 17 to 19.
Hereinafter, an image encoding/decoding method of adaptively performing intra prediction based on various shapes of coding units according to an embodiment of the present disclosure and an apparatus thereof will be described with reference to fig. 1 and 2.
Fig. 1a is a block diagram of an image decoding apparatus according to various embodiments.
The image decoding apparatus 100 according to various embodiments may include an obtainer 105, an intra predictor 110, and an image decoder 115.
The obtainer 105, the intra predictor 110, and the image decoder 115 may include at least one processor. In addition, the obtainer 105, the intra predictor 110, and the image decoder 115 may include a memory storing instructions to be executed by at least one processor. The image decoder 115 may be implemented in hardware separate from the obtainer 105 and the intra predictor 110, or may include the obtainer 105 and the intra predictor 110.
The obtainer 105 may obtain information about a transform coefficient of the current block from a bitstream. The obtainer 105 may obtain information on a prediction mode of the current block and information on an intra prediction mode of the current block from the bitstream.
The obtainer 105 may obtain information indicating whether the prediction mode of the current block is an intra prediction mode or an inter prediction mode. The information regarding the intra prediction mode of the current block may be information regarding the intra prediction mode applied to the current block from among a plurality of intra prediction modes. For example, the intra prediction mode may be one of a DC mode, a planar mode, and at least one angular mode having a prediction direction. The angular modes may include a horizontal mode, a vertical mode, and diagonal modes, and may also include modes having predetermined directions other than the horizontal, vertical, and diagonal directions. For example, the number of angular modes may be 65 or 33.
When the prediction mode of the current block is an intra prediction mode, the intra predictor 110 may be activated.
The intra predictor 110 may perform intra prediction on the current block based on the intra prediction mode of the current block. The intra predictor 110 may determine a reference sample for the current sample among the reference samples based on the position of the current sample in the current block and the intra prediction mode of the current block, and may generate a predicted sample value of the current sample using the reference sample for the current sample. Here, the reference samples may include samples of a reference line at a left or upper portion adjacent to the current block. The reference samples for the current sample may include at least one sample neighboring the current block among samples of a reference line at a left or upper portion of the current block. When there is only one reference sample for the current sample, the intra predictor 110 may use the reference sample to generate a predicted sample value for the current sample. Meanwhile, when there are two or more reference samples for the current sample, the intra predictor 110 may generate a predicted sample of the current sample by applying a filter to sample values of the two or more reference samples for the current sample. Here, the coefficients of the filter may be integers. In order to determine the coefficients of the filter as integers, scaling is performed on the coefficients of the filter, and the scaled coefficients of the filter may be used. When scaling is performed on the coefficients of the filter and the scaled coefficients of the filter are used, a process of de-scaling may thereafter be performed according to the degree of scaling of the coefficients of the filter. The accuracy of the filter may be 1/32 fractional pixel accuracy.
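The integer scaling and subsequent de-scaling of filter coefficients described above can be illustrated as follows. The 6-bit scale factor (coefficients multiplied by 64) is an assumption chosen for the sketch, not a value stated in the disclosure.

```python
SHIFT = 6                  # assumed scaling: float taps multiplied by 64
OFFSET = 1 << (SHIFT - 1)  # rounding offset applied before de-scaling

def apply_integer_filter(ref_samples, int_coeffs):
    """Filter reference samples with integer coefficients, then de-scale
    the accumulator by the same factor with round-to-nearest."""
    acc = sum(c * s for c, s in zip(int_coeffs, ref_samples))
    return (acc + OFFSET) >> SHIFT

# A 4-tap filter whose float taps would be [0.25, 0.25, 0.25, 0.25]:
coeffs = [16, 16, 16, 16]  # 0.25 * 64 per tap
pred = apply_integer_filter([100, 104, 108, 112], coeffs)
```

Working in scaled integers keeps the prediction loop free of floating-point arithmetic, and the de-scaling shift restores the original magnitude of the sample values.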
The intra predictor 110 may determine a filter for the current sample based on at least one of a size of the current block and a distance between the current sample and the reference sample. The intra predictor 110 may determine at least one filter candidate applicable to the current sample, and may determine the filter for the current sample from among the at least one filter candidate based on at least one of the distance between the current sample and the reference sample in the current block and the size of the current block. Here, the distance between the current sample and the reference sample may be the distance to a reference sample located in the vertical direction of the current sample or the distance to a reference sample located in the horizontal direction of the current sample; in this case, the distance between the current sample and the reference sample may correspond to a position coordinate value of the current sample in the current block. Accordingly, it can be seen that the intra predictor 110 determines the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample.
The intra predictor 110 may determine the type of the filter and the coefficients of the filter for the current sample based on at least one of the distance between the current sample and the reference sample in the current block and the size of the current block. Here, the type of the filter may be one of a low-pass filter, a Gaussian filter, a bilateral filter, a uniform filter, a bilinear interpolation filter, a cubic filter, a [1,2,1] filter, and a discrete cosine transform (DCT) filter. Here, the DCT filter may be a 4-tap DCT-based interpolation filter (DCT-IF) for compensating for sub-pixel motion of the chrominance component. In addition, a new filter may be generated by combining two or more filters and added as a filter type; moreover, without being limited to the types listed above, various types of filters may serve as the filter type for the current sample.
The number of taps of the filter for the current sample may be a predetermined value. Here, the predetermined value may be 4, that is, the filter for the current sample may be a 4-tap filter; however, the number of taps is not limited thereto and may be any of various integer values of 1 or more, preferably 2 or more.
In addition, the number of taps of the filter for the current sample may be determined based on at least one of the distance between the current sample and the reference sample and the size of the current block. For example, when the distance between the current sample and the reference sample is less than a predetermined value, the intra predictor 110 may determine the number of taps of the filter to be a predetermined first number of taps; when the distance is greater than the predetermined value, the intra predictor 110 may determine the number of taps to be a predetermined second number of taps. Here, the predetermined first number of taps may be smaller than the predetermined second number of taps.
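The tap-count rule above can be sketched as a simple threshold test; the particular threshold and tap counts below are assumed for illustration and are not specified by the text:

```python
def filter_tap_count(distance, threshold=4, near_taps=2, far_taps=4):
    """Fewer taps when the current sample is close to the reference line
    (first number of taps), more taps when it is far (second number of taps).
    All three default values are illustrative assumptions."""
    return near_taps if distance < threshold else far_taps

print(filter_tap_count(1))  # near the reference samples -> 2
print(filter_tap_count(8))  # far from the reference samples -> 4
```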
The intra predictor 110 may determine a plurality of filters for the current block based on at least one of a size of the current block and a distance between at least one sample point in the current block and a reference sample point. In other words, the plurality of filters for the current block may include at least one filter candidate that may be used for the current sample point.
The intra predictor 110 may determine a filter corresponding to the current sample among the plurality of filters for the current block. Here, the intra predictor 110 may determine the filter corresponding to the current sample among the plurality of filters for the current block based on the ratio between the size (height or width) of the current block and the distance between the current sample and the reference sample (that is, the position coordinate of the current sample in the current block).
The intra predictor 110 may determine a first reference sample corresponding to a sample in the current block based on the intra prediction mode of the current block. For example, the intra predictor 110 may determine, as the first reference sample, the reference sample at which an extension line drawn from the current sample along the intra prediction direction according to the intra prediction mode of the current block intersects the reference line. The intra predictor 110 may determine a plurality of filters for the current block based on the distance between the sample and the first reference sample, and may determine a filter corresponding to the current sample among the plurality of filters for the current block.
The intra predictor 110 may determine the number of filters for the current block based on the size of the current block and determine filters for the current block corresponding to that number. For example, when the height or width of the current block is less than a predetermined value, the image decoding apparatus 100 may determine the number of filters for the current block to be one. When the number of filters for the current block is determined to be one, the intra predictor 110 may determine the filter for the current block based on the size of the current block and the current intra prediction mode. In other words, the intra predictor 110 may determine a threshold based on the size of the current block, determine an intra prediction mode index difference value based on the index differences between the current intra prediction mode and the vertical and horizontal modes, and determine the filter for the current block by comparing the determined intra prediction mode index difference value with the threshold.
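The single-filter decision above can be sketched as follows; the mode indices, thresholds, and filter labels are all assumptions for illustration (the text does not fix any of these values):

```python
HOR_IDX, VER_IDX = 18, 50  # horizontal/vertical indices in a 67-mode layout (assumed)

def pick_single_filter(block_w, block_h, mode_idx):
    """Compare a size-dependent threshold against the distance of the
    intra mode index from the vertical and horizontal mode indices."""
    threshold = 8 if min(block_w, block_h) >= 32 else 14  # assumed values
    mode_diff = min(abs(mode_idx - HOR_IDX), abs(mode_idx - VER_IDX))
    # apply smoothing only when the mode is far from pure H/V prediction
    return "smoothing_filter" if mode_diff > threshold else "no_filter"

print(pick_single_filter(8, 8, 50))   # vertical mode -> no_filter
print(pick_single_filter(8, 8, 34))   # far from H and V -> smoothing_filter
```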
In addition, the intra predictor 110 may determine the first filter as the filter for the current block when the index value of the intra prediction mode of the current block is odd, and may determine the second filter as the filter for the current block when the index value is even.
The intra predictor 110 may determine, based on the size of the current block, the range of distances between the reference samples and the samples to which each filter is applied. For example, when the height and width of the current block are both greater than or equal to 32, the distance between a sample of the current block and the reference sample may be [0, 2) where the filter f0 is used, [2, 4) where the filter f1 is used, [4, 8) where the filter f2 is used, [8, 16) where the filter f3 is used, and [16, size) where the filter f4 is used. Otherwise, when the height or width of the current block is less than 32, the distance between a sample of the current block and the reference sample may be [0, 1) where the filter f0 is used, [1, 2) where the filter f1 is used, [2, 3) where the filter f2 is used, [3, 4) where the filter f3 is used, and [4, size) where the filter f4 is used.
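The two range tables above can be transcribed directly as a lookup; only the helper name is an assumption:

```python
def filter_index(distance, width, height):
    """Return which of the five filters f0..f4 covers a given
    sample-to-reference distance, per the two range tables above."""
    if width >= 32 and height >= 32:
        bounds = [2, 4, 8, 16]   # f0:[0,2) f1:[2,4) f2:[4,8) f3:[8,16) f4:[16,size)
    else:
        bounds = [1, 2, 3, 4]    # f0:[0,1) f1:[1,2) f2:[2,3) f3:[3,4) f4:[4,size)
    for i, b in enumerate(bounds):
        if distance < b:
            return i
    return 4                     # f4 handles the remaining distances in the block

print(filter_index(5, 32, 32))   # 5 lies in [4, 8)    -> 2 (filter f2)
print(filter_index(5, 16, 32))   # 5 lies in [4, size) -> 4 (filter f4)
```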
The intra predictor 110 may determine a filter for the current sample among the filters for the current block based on at least one of a distance between the current sample and the reference sample and a size of the current block.
The intra predictor 110 may further determine a filter to be used for the current sample based on at least one of an intra prediction mode of the current block and a shape of the current block.
For example, when the intra prediction mode of the current block is a predetermined intra prediction mode, the intra predictor 110 may determine a filter for the current sample point based on at least one of a distance between the current sample point and the reference sample point in the current block and a size of the current block. Here, the predetermined intra prediction mode may be one of angular modes other than the DC mode and the planar mode. In detail, the predetermined intra prediction mode may be one of the remaining modes except for the diagonal mode among the angular modes.
For example, when the prediction mode of the current block is the horizontal mode or the vertical mode, the intra predictor 110 may perform intra prediction to generate the prediction samples of the current block without applying any filter (e.g., a bilinear interpolation filter or a [1,2,1] reference sample filter) to the reference samples. When the prediction mode of the current block is a diagonal mode having an angle that is a multiple of 45 degrees, the intra predictor 110 may perform intra prediction by using a [1,2,1] reference sample filter to generate the prediction samples of the current block. When the prediction mode of the current block is an angular mode other than the above modes, the intra predictor 110 may determine the filter for the current sample based on at least one of the distance between the current sample and the reference sample in the current block and the size of the current block.
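The three-way decision just described can be sketched as below; the concrete mode indices follow a 67-mode layout and are assumptions, not values fixed by the text:

```python
DIAGONAL_MODES = {2, 34, 66}   # 45-degree-multiple diagonals (assumed indices)
HOR_MODE, VER_MODE = 18, 50    # pure horizontal / vertical (assumed indices)

def reference_filtering(mode):
    """No reference filtering for pure H/V modes, a [1,2,1] smoothing
    filter for 45-degree diagonal modes, and the distance/size-adaptive
    filter for the remaining angular modes."""
    if mode in (HOR_MODE, VER_MODE):
        return "none"
    if mode in DIAGONAL_MODES:
        return "[1,2,1]"
    return "adaptive"          # chosen per sample from distance and block size

print(reference_filtering(18))  # horizontal -> none
print(reference_filtering(34))  # diagonal   -> [1,2,1]
print(reference_filtering(40))  # other angular -> adaptive
```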
The intra predictor 110 may further determine a filter for a current sample in the current block based on whether the current block is square or rectangular, and based on at least one of a distance between the current sample and a reference sample and a size of the current block.
In addition, the intra predictor 110 may further determine a filter for the current sample based on a ratio of a height and a width of the current block, and based on at least one of a distance between the current sample and a reference sample in the current block and a size of the current block.
When the width of the current block is less than or equal to a predetermined first value and the height of the current block is less than or equal to a predetermined second value, the intra predictor 110 may determine a first filter for the current sample point based on at least one of a distance between the current sample point and the reference sample point in the current block and a size of the current block. In addition, when the width of the current block is greater than a predetermined first value or the height of the current block is greater than a predetermined second value, the intra predictor 110 may determine a second filter for the current sample based on at least one of a distance between the current sample and the reference sample in the current block and the size of the current block.
The intra predictor 110 may determine a first filter for the current sample when a distance between the current sample and the reference sample is less than a predetermined value, and the intra predictor 110 may determine a second filter for the current sample when the distance between the current sample and the reference sample is greater than the predetermined value. Here, the smoothing strength of the first filter may be smaller than that of the second filter.
The intra predictor 110 may obtain a prediction block of the current block, which includes prediction samples of the current sample generated using the determined filter.
The image decoder 115 may obtain a residual block of the current block based on information regarding the transform coefficient of the current block. In other words, the image decoder 115 may perform inverse quantization and inverse transform based on information on transform coefficients of the current block to obtain residual samples of a residual block with respect to the current block from the bitstream.
The image decoder 115 may restore the current block based on the prediction block of the current block and the residual block of the current block. The image decoder 115 may generate a restored sample in the current block by using sample values of prediction samples in the prediction block of the current block and sample values of residual samples in the residual block of the current block, and may generate a restored block of the current block based on the restored sample.
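The reconstruction step above amounts to adding each residual sample to the co-located prediction sample; a minimal sketch (the clipping to the sample bit depth is standard practice and assumed here, not stated in the text):

```python
def reconstruct_block(pred, resid, bit_depth=8):
    """Restore a block: each restored sample is the clipped sum of the
    co-located prediction and residual samples."""
    hi = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), hi) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]

pred  = [[120, 130], [140, 150]]
resid = [[ -5,  10], [200, -160]]
print(reconstruct_block(pred, resid))  # sums clipped to [0, 255]
```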
Meanwhile, the image decoding apparatus 100 may determine the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample; it may also obtain, from the bitstream, flag information indicating whether intra prediction is to be performed adaptively, and may determine, based on the flag information, whether to determine the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample. Here, the flag information may be obtained per block, in particular per largest coding unit, or may be obtained per frame.
In addition, the image decoding apparatus 100 may obtain flag information commonly applied to the luminance component and the chrominance component, or may obtain flag information applied to the luminance component and the chrominance component respectively.
Similarly, the image decoding apparatus 100 may determine a filter set commonly applied to the luminance component and the chrominance component, or may determine a filter set applied per component.
In addition, the image decoding apparatus 100 may determine whether to determine a filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample without obtaining flag information from the bitstream. For example, when the prediction mode of the current block is a predetermined intra prediction mode, the image decoding apparatus 100 may determine to adaptively perform intra prediction by determining a filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample.
In addition, the image decoding apparatus 100 may determine whether to adaptively perform intra prediction based on the reference samples used for filtering and the weighting values of the filter by using information of neighboring blocks, without obtaining flag information from the bitstream. For example, for a neighboring block of the current block, the image decoding apparatus 100 may determine a filter for a first sample in the neighboring block based on the size of the neighboring block and the distance between the first sample and a reference sample, and may determine, based on flag information of the neighboring block, whether to determine the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample, wherein the flag information of the neighboring block indicates whether intra prediction is adaptively performed based on the filter for the first sample in the neighboring block. In addition, when the size of the current block is a predetermined first block size, the image decoding apparatus 100 may determine the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample, and may perform intra prediction based on the determined filter. When the size of the current block is a predetermined second block size, the image decoding apparatus 100 may perform existing intra prediction without determining the filter for the current sample based on the size of the current block and the distance between the current sample and the reference sample.
The image decoding apparatus 100 may determine the filter for the current sample based on at least one of the size of the current block and the distance between the current sample and the reference sample, and may perform intra prediction by combining the intra-prediction-based encoding/decoding tool for the current sample with a similar encoding/decoding tool for intra prediction. In addition, the image decoding apparatus 100 may assign priorities among a plurality of encoding/decoding tools for intra prediction and perform intra prediction according to those priorities. In other words, when an encoding/decoding tool having a high priority is used, an encoding/decoding tool having a low priority may not be used, and when the high-priority tool is not used, the low-priority tool may be used.
Fig. 1b illustrates a flow diagram of an image decoding method according to various embodiments.
In operation S105, the image decoding apparatus 100 may obtain information regarding a transform coefficient of a current block from a bitstream.
In operation S110, the image decoding apparatus 100 may determine a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block.
In operation S115, the image decoding apparatus 100 may obtain a prediction block of the current block, which includes prediction samples of the current samples generated by using the determined filter.
In operation S120, the image decoding apparatus 100 may obtain a residual block of the current block based on information regarding the transform coefficient of the current block.
In operation S125, the image decoding apparatus 100 may restore the current block based on the prediction block and the residual block of the current block.
Fig. 1c illustrates a block diagram of an image decoder 6000 according to various embodiments.
The image decoder 6000 according to various embodiments performs operations performed when image data is decoded by the image decoder 115 of the image decoding apparatus 100.
Referring to fig. 1c, the entropy decoder 6150 parses encoded image data to be decoded and encoding information required for decoding from the bitstream 6050. The encoded image data is quantized transform coefficients. The inverse quantizer 6200 and the inverse transformer 6250 recover residual data from the quantized transform coefficients.
The intra predictor 6400 performs intra prediction per block. The intra predictor 6400 of fig. 1c may correspond to the intra predictor 110 of fig. 1A.
The inter predictor 6350 performs inter prediction per block by using a reference image obtained from the restored picture buffer 6300. Data of the spatial domain of a block of the current image may be restored by adding the residual data to the prediction data of each block generated by the intra predictor 6400 or the inter predictor 6350, and the deblocking unit 6450 and the sample adaptive offset performer 6500 may output a filtered restored image 6600 by performing in-loop filtering on the restored data of the spatial domain. In addition, restored images stored in the restored picture buffer 6300 may be output as reference images.
In order for a decoder (not shown) of the image decoding apparatus 100 to decode image data, the step-by-step operations of the image decoder 6000 according to various embodiments may be performed per block.
Fig. 1d illustrates a block diagram of the image decoding apparatus 100 according to an embodiment.
The image decoding apparatus 100 according to the embodiment may include a memory 120 and at least one processor 125 connected to the memory 120. The operation of the image decoding apparatus 100 according to the embodiment may operate as a separate processor or may operate according to the control of a central processor. In addition, the memory 120 of the image decoding apparatus 100 may store data received from the outside and data generated by the processor. The processor 125 of the image decoding apparatus 100 may obtain information on the current block from the bitstream, and determine a filter for the current sample point based on at least one of a distance between the current sample point and a reference sample point in the current block and a size of the current block, and obtain a predicted block of the current block including predicted sample points of the current sample point generated using the determined filter, and obtain a residual block of the current block based on the information on the transform coefficients of the current block, and restore the current block based on the predicted block and the residual block of the current block.
Fig. 2a illustrates a block diagram of an image encoding device according to various embodiments.
The image encoding apparatus 150 according to various embodiments may include an intra predictor 155 and an image encoder 160.
The intra predictor 155 and the image encoder 160 may include at least one processor. In addition, the intra predictor 155 and the image encoder 160 may include a memory storing instructions to be executed by the at least one processor. The image encoding apparatus 150 may be implemented by hardware separate from the intra predictor 155 and the image encoder 160, or may include the intra predictor 155 and the image encoder 160.
The intra predictor 155 may determine a filter for a current sample in the current block based on at least one of a distance between the current sample and a reference sample and a size of the current block.
The intra predictor 155 may generate a prediction block of the current block including prediction samples of the current samples generated using the filter for the current samples.
The intra predictor 155 may determine a type of a filter and a coefficient of the filter for a current sample based on at least one of a distance between the current sample and a reference sample in the current block and a size of the current block.
The number of taps of the filter for the current sample point may be a predetermined value. Here, the predetermined value may be an integer of 4 or more. In addition, the number of taps of the filter for the current sample may be determined based on at least one of a distance between the current sample and the reference sample and a size of the current block.
The intra predictor 155 may determine a plurality of filters for the current block based on at least one of a size of the current block and a distance between a sample point in the current block and a reference sample point.
The intra predictor 155 may determine a filter corresponding to the current sample point among a plurality of filters for the current block.
The intra predictor 155 may determine a first reference sample corresponding to a sample in the current block based on the intra prediction mode of the current block. The intra predictor 155 may determine a plurality of filters for the current block based on a distance between a sample point in the current block and a first reference sample point. The intra predictor 155 may determine a filter corresponding to the current sample point among a plurality of filters for the current block.
The intra predictor 155 may determine the number of filters for the current block based on the size of the current block and determine a filter for the current block corresponding to the number of filters, and the intra predictor 155 may determine the filter for the current sample among the filters for the current block based on at least one of a distance between the current sample and a reference sample and the size of the current block.
When the intra prediction mode of the current block is a predetermined intra prediction mode, the intra predictor 155 may determine a filter for the current sample point based on at least one of a distance between the current sample point and the reference sample point in the current block and a size of the current block.
The intra predictor 155 may determine a first filter for the current sample when the distance between the current sample and the reference sample is less than a predetermined value, and may determine a second filter for the current sample when the distance is greater than the predetermined value. Here, the smoothing strength of the first filter may be smaller than that of the second filter.
The image encoder 160 may encode information regarding transform coefficients of the current block based on a prediction block of the current block. In other words, the image encoder 160 may generate a residual block of the current block based on the original block of the current block and the predicted block of the current block, and encode information regarding a transform coefficient of the current block by transforming and quantizing the residual block of the current block. The image encoder 160 may encode information regarding a prediction mode of the current block and information regarding an intra prediction mode of the current block.
The image encoder 160 may generate a bitstream including information on a transform coefficient of the current block and output the bitstream.
Fig. 2b is a flow diagram of an image encoding method according to various embodiments.
In operation S150, the image encoding apparatus 150 may determine a filter for the current sample based on at least one of a distance between the current sample and the reference sample and a size of the current block.
In operation S155, the image encoding apparatus 150 may generate a prediction block of the current block including the prediction samples of the current sample generated using the determined filter. In operation S160, the image encoding apparatus 150 may encode information regarding a transform coefficient of the current block based on the prediction block of the current block.
Fig. 2c illustrates a block diagram of an image encoder, in accordance with various embodiments.
The image encoder 7000 according to various embodiments performs operations performed when image data is encoded by the image encoder 160 of the image encoding apparatus 150.
In other words, the intra predictor 7200 performs intra prediction on the current picture 7050 on a block basis, and the inter predictor 7150 performs inter prediction on a block basis by using the current picture 7050 of each block and a reference picture obtained from the restored picture buffer 7100.
Residual data may be generated by subtracting prediction data for each block output from the intra predictor 7200 or the inter predictor 7150 from data of a coded block for the current image 7050, and the transformer 7250 and the quantizer 7300 may output block-wise quantized transform coefficients by performing transformation and quantization on the residual data. The intra predictor 7200 of fig. 2c may correspond to the intra predictor 155 of fig. 2A.
The inverse quantizer 7450 and the inverse transformer 7500 may restore residual data of a spatial domain by performing inverse quantization and inverse transformation on the quantized transform coefficients. The residual data of the restored spatial domain may be added to the prediction data for each block output from the intra predictor 7200 or the inter predictor 7150 to be restored as data of the spatial domain for the block of the current image 7050. The deblocking unit 7550 and the sample adaptive offset performer generate a filtered restored image by performing in-loop filtering on the restored spatial domain data. The generated restored image is stored in the restored picture buffer 7100. The restored picture stored in the restored picture buffer 7100 can be used as a reference picture for inter prediction of another picture. The entropy encoder 7350 may entropy encode the quantized transform coefficients, and the entropy-encoded coefficients may be output as a bitstream 7400.
In order for the image encoder 7000 according to various embodiments to be applied to the image encoding apparatus 150, the step-by-step operation of the image encoder 7000 according to various embodiments may be performed in blocks.
Fig. 2d is a block diagram of the image encoding device 150 according to an embodiment.
The image encoding apparatus 150 according to the embodiment may include a memory 165 and at least one processor 170 connected to the memory 165. The operations of the image encoding apparatus 150 according to the embodiment may be performed by separate processors or under the control of a central processor. In addition, the memory 165 of the image encoding apparatus 150 may store data received from the outside and data generated by the processor.
The processor 170 of the image encoding apparatus 150 may determine a filter for a current sample in the current block based on at least one of the distance between the current sample and a reference sample and the size of the current block, generate a prediction block of the current block including the prediction sample of the current sample generated using the determined filter, and encode information regarding the transform coefficients of the current block based on the prediction block of the current block.
Hereinafter, the division of the coding unit will be described in detail according to an embodiment of the present disclosure.
First, one picture may be divided into one or more slices. A slice may be a sequence of one or more largest coding units (coding tree units, CTUs). A concept to be distinguished from the largest coding unit (CTU) is the largest coding block (coding tree block, CTB).
A largest coded block (CTB) refers to an N × N block including N × N samples (N is an integer). Each color component may be divided into one or more largest coded blocks.
When a picture has three sample arrays (sample arrays of the Y, Cr, and Cb components), the largest coding unit (CTU) is a unit including the largest coding block of luma samples, the two corresponding largest coding blocks of chroma samples, and the syntax structures used to encode the luma samples and the chroma samples. When a picture is a monochrome picture, the largest coding unit is a unit including the largest coding block of monochrome samples and the syntax structures used to encode the monochrome samples. When a picture is coded in color planes separated for each color component, the largest coding unit is a unit including the samples of the corresponding picture and the syntax structures used to encode them.
One largest coding block (CTB) may be divided into MxN coding blocks (coding blocks) including MxN samples (M and N are integers).
When a picture has sample arrays of the Y, Cr, and Cb components, a coding unit (CU) is a unit including a coding block of luma samples, the two corresponding coding blocks of chroma samples, and the syntax structures used to encode the luma samples and the chroma samples. When a picture is a monochrome picture, a coding unit is a unit including a coding block of monochrome samples and the syntax structures used to encode the monochrome samples. When a picture is coded in color planes separated for each color component, a coding unit is a unit including the samples of the corresponding picture and the syntax structures used to encode them.
As described above, the largest coding block and the largest coding unit are concepts distinguished from each other, and likewise the coding block and the coding unit are concepts distinguished from each other. In other words, a (largest) coding unit refers to a data structure including the (largest) coding block of the corresponding samples and the syntax structure corresponding thereto. However, since it will be understood by those of ordinary skill in the art that a (largest) coding unit or a (largest) coding block refers to a block of a predetermined size including a predetermined number of samples, in the following description the largest coding block and the largest coding unit, or the coding block and the coding unit, are referred to without distinction unless otherwise stated.
A picture may be divided into largest coding units (CTUs). The size of the largest coding unit may be determined based on information obtained from the bitstream. The shape of the largest coding unit may be a square of a uniform size, but is not limited thereto.
For example, information about the maximum size of a luma coding block may be obtained from a bitstream. For example, the information on the maximum size of the luma coding block may indicate that the maximum size of the luma coding block is one of 16 × 16, 32 × 32, 64 × 64, 128 × 128, and 256 × 256.
For example, information on a difference between the maximum size of a luma coding block that can be divided into two and the size of the luma maximum coding unit may be obtained from a bitstream. The information on the luma block size difference may indicate the difference in size between the luma maximum coding unit and the largest luma coding block that can be divided into two. Accordingly, when the information on the maximum size of the luma coding block divisible into two, obtained from the bitstream, is combined with the information on the luma block size difference, the size of the luma maximum coding unit can be determined. When the size of the luma maximum coding unit is known, the size of the chroma maximum coding unit may also be determined. For example, when the Y:Cb:Cr ratio is 4:2:0 according to the color format, the size of a chroma block may be half the size of the corresponding luma block, and similarly, the size of the chroma maximum coding unit may be half the size of the luma maximum coding unit.
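The size derivation described above can be sketched as follows. This is a minimal illustration, not the patent's actual syntax: the function names are invented, and interpreting the signaled size difference as a log2 shift is an assumption made for the example.

```python
# Hypothetical sketch of the size derivation described above. Names are
# illustrative; treating the size difference as a log2 shift is an assumption.
def luma_max_coding_unit_size(max_binary_split_luma_size: int,
                              size_difference_log2: int) -> int:
    """Combine the signaled maximum binary-splittable luma block size with
    the signaled size difference to recover the luma maximum coding unit size."""
    return max_binary_split_luma_size << size_difference_log2

def chroma_max_coding_unit_size(luma_max_size: int) -> int:
    """For a 4:2:0 color format, each chroma dimension is half the luma one."""
    return luma_max_size // 2

assert luma_max_coding_unit_size(64, 1) == 128
assert chroma_max_coding_unit_size(128) == 64
```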
According to the embodiment, information on the maximum size of a luma coding block capable of binary splitting is obtained from a bitstream, and thus the maximum size of the binary-splittable luma coding block may be variably determined. In contrast, the maximum size of a luma coding block capable of ternary splitting may be fixed. For example, the maximum size of a luma coding block that can be ternary-split in an I slice may be 32 × 32, and the maximum size of a luma coding block that can be ternary-split in a P slice or a B slice may be 64 × 64.
In addition, the maximum coding unit may be hierarchically divided into coding units based on division shape mode information obtained from the bitstream. As the division shape mode information, at least one of information indicating whether to perform a quad split, information indicating whether to perform a multi-split, information indicating a division direction, and information indicating a division type may be obtained from the bitstream.
For example, the information indicating whether to perform a quad split may indicate whether the current coding unit is to be quad-split (QUAD_SPLIT) or not.
When the current coding unit is not quad-split, the information indicating whether to perform a multi-split may indicate whether the current coding unit is no longer divided (NO_SPLIT) or is binary/ternary-split.
When the current coding unit is binary-split or ternary-split, the division direction information indicates whether the current coding unit is divided in the horizontal direction or the vertical direction.
When the current coding unit is divided in the horizontal or vertical direction, the division type information indicates whether the current coding unit is binary-split or ternary-split.
According to the division direction information and the division type information, the division mode of the current coding unit may be determined. The division mode when the current coding unit is binary-split in the horizontal direction may be determined as binary horizontal division (SPLIT_BT_HOR); when it is ternary-split in the horizontal direction, as ternary horizontal division (SPLIT_TT_HOR); when it is binary-split in the vertical direction, as binary vertical division (SPLIT_BT_VER); and when it is ternary-split in the vertical direction, as ternary vertical division (SPLIT_TT_VER).
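The mapping from division direction and division type to a division mode can be sketched as a small lookup. The mode names follow the text above; the function signature is illustrative.

```python
from enum import Enum

class SplitMode(Enum):
    SPLIT_BT_HOR = "binary horizontal"
    SPLIT_TT_HOR = "ternary horizontal"
    SPLIT_BT_VER = "binary vertical"
    SPLIT_TT_VER = "ternary vertical"

def split_mode(direction: str, split_type: str) -> SplitMode:
    """Combine division direction ("hor"/"ver") and division type
    ("binary"/"ternary") into a division mode, per the text above."""
    table = {
        ("hor", "binary"): SplitMode.SPLIT_BT_HOR,
        ("hor", "ternary"): SplitMode.SPLIT_TT_HOR,
        ("ver", "binary"): SplitMode.SPLIT_BT_VER,
        ("ver", "ternary"): SplitMode.SPLIT_TT_VER,
    }
    return table[(direction, split_type)]

assert split_mode("ver", "ternary") is SplitMode.SPLIT_TT_VER
```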
The image decoding apparatus 100 can obtain the division shape mode information from one binary string (bin string) in the bitstream. The form of the bitstream received by the image decoding apparatus 100 may include a fixed-length binary code, a unary code, a truncated unary code, a predetermined binary code, and the like. A binary string represents information as a sequence of binary digits and may consist of at least one bit. The image decoding apparatus 100 may obtain the division shape mode information corresponding to the binary string based on a division rule. Based on one binary string, the image decoding apparatus 100 may determine whether to quad-split the coding unit, whether not to divide it, or the division direction and the division type.
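As an illustrative sketch only (the actual binarization is defined by the division rule and may differ), one possible mapping from a bin string to a division shape mode could look like this. The bin order and meanings here are assumptions, not the patent's binarization.

```python
def read_split_bins(bits):
    """Hypothetical parse: first bin = quad split?, second bin = split at all?,
    third bin = direction, fourth bin = binary/ternary. Codes are illustrative."""
    it = iter(bits)
    if next(it):                       # quad split
        return "SPLIT_QT"
    if not next(it):                   # no further split
        return "NO_SPLIT"
    direction = "VER" if next(it) else "HOR"
    kind = "TT" if next(it) else "BT"
    return f"SPLIT_{kind}_{direction}"

assert read_split_bins([1]) == "SPLIT_QT"
assert read_split_bins([0, 0]) == "NO_SPLIT"
assert read_split_bins([0, 1, 1, 0]) == "SPLIT_BT_VER"
```

Note how the bin string is consumed left to right, so shorter strings (e.g. a single quad-split bin) terminate the parse early, which is the usual property of truncated-unary-style codes.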
The coding unit may be smaller than or equal to the maximum coding unit. For example, the maximum coding unit is a coding unit having the maximum size, and thus is also one of the coding units. When the division shape mode information on the maximum coding unit indicates that no division is performed, the coding unit determined from the maximum coding unit has the same size as the maximum coding unit. When the division shape mode information on the maximum coding unit indicates division, the maximum coding unit may be divided into coding units. Also, when the division shape mode information on a coding unit indicates division, the coding unit may be divided into smaller coding units. However, the division of the image is not limited thereto, and the maximum coding unit and the coding unit may not be distinguished. The division of the coding unit is described in more detail with reference to fig. 3 to 16.
In addition, one or more prediction blocks for prediction may be determined from the coding unit. The prediction block may be equal to or smaller than the coding unit. In addition, one or more transform blocks for transform may be determined from the coding unit. The transform block may be equal to or smaller than the coding unit.
The shapes and sizes of the transform block and the prediction block may be unrelated to each other.
According to another embodiment, prediction may be performed by using the coding unit itself as a prediction block. In addition, transformation may be performed by using the coding unit itself as a transform block.
The division of the coding unit is described in more detail with reference to fig. 3 to 16. The current block and the neighboring block of the present disclosure may each indicate one of a maximum coding unit, a coding unit, a prediction block, and a transform block. Also, the current block or the current coding unit is a block on which decoding or encoding is currently performed, or a block on which division is currently performed. A neighboring block may be a block reconstructed before the current block. The neighboring block may be spatially or temporally adjacent to the current block. The neighboring block may be located at one of the lower left side, the left side, the upper right side, the right side, and the lower right side of the current block.
Fig. 3 illustrates a process in which the image decoding apparatus 100 divides a current coding unit to determine at least one coding unit according to an embodiment.
The block shape may include 4N × 4N, 4N × 2N, 2N × 4N, 4N × N, N × 4N, 32N × N, N × 32N, 16N × N, N × 16N, 8N × N, or N × 8N. Here, N may be a positive integer. The block shape information is information indicating at least one of a shape, a direction, a width-to-height ratio, and a size of the coding unit.
The shape of a coding unit may include a square and a non-square. When the width and the height of the coding unit are equal (i.e., when the block shape of the coding unit is 4N × 4N), the image decoding apparatus 100 may determine the block shape information of the coding unit as a square. Otherwise, the image decoding apparatus 100 may determine the block shape information of the coding unit as a non-square.
When the width and the height of the coding unit are different (i.e., when the block shape of the coding unit is 4N × 2N, 2N × 4N, 4N × N, N × 4N, 32N × N, N × 32N, 16N × N, N × 16N, 8N × N, or N × 8N), the image decoding apparatus 100 may determine the block shape information of the coding unit as a non-square. When the shape of the coding unit is non-square, the image decoding apparatus 100 may determine the ratio of the width to the height in the block shape information of the coding unit to be one of 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, 1:32, and 32:1. Also, based on the width and the height of the coding unit, the image decoding apparatus 100 can determine whether the coding unit is in the horizontal direction or the vertical direction. Also, the image decoding apparatus 100 may determine the size of the coding unit based on at least one of the width, the height, and the area of the coding unit.
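The block shape information described above can be sketched as a simple function of the coding unit's dimensions. This is a simplified illustration; the field names are invented for the example.

```python
def block_shape_info(width: int, height: int) -> dict:
    """Derive block shape information (shape, direction, width:height ratio)
    from a coding unit's dimensions, per the description above. A sketch;
    assumes the dimensions are powers of two so the ratio divides evenly."""
    if width == height:
        return {"shape": "square", "direction": None, "ratio": (1, 1)}
    direction = "horizontal" if width > height else "vertical"
    if width > height:
        ratio = (width // height, 1)
    else:
        ratio = (1, height // width)
    return {"shape": "non-square", "direction": direction, "ratio": ratio}

assert block_shape_info(64, 64)["shape"] == "square"
assert block_shape_info(32, 8) == {"shape": "non-square",
                                   "direction": "horizontal", "ratio": (4, 1)}
```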
According to an embodiment, the image decoding apparatus 100 may determine the shape of the coding unit using the block shape information, and may determine in which form the coding unit is divided using the division shape mode information. In other words, the division method of the coding unit indicated by the division shape mode information may be determined according to which block shape the block shape information used by the image decoding apparatus 100 indicates.
The image decoding apparatus 100 may obtain the division shape mode information from the bitstream. However, it is not limited thereto, and the image decoding apparatus 100 and the image encoding apparatus 150 may determine predetermined division shape mode information based on the block shape information. The image decoding apparatus 100 may determine division shape mode information predetermined for the maximum coding unit or the minimum coding unit. For example, for the maximum coding unit, the image decoding apparatus 100 may determine the division shape mode information as a quad split. Also, for the minimum coding unit, the image decoding apparatus 100 may determine the division shape mode information as "no division". Specifically, the image decoding apparatus 100 may determine the size of the maximum coding unit to be 256 × 256. The image decoding apparatus 100 may determine the predetermined division shape mode information as the quad split. The quad split is a division shape mode that bisects both the width and the height of the coding unit. The image decoding apparatus 100 may obtain a coding unit of size 128 × 128 from a maximum coding unit of size 256 × 256 based on the division shape mode information. In addition, the image decoding apparatus 100 may determine the size of the minimum coding unit to be 4 × 4. The image decoding apparatus 100 can obtain division shape mode information indicating "no division" for the minimum coding unit.
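The predetermined decisions for the maximum and minimum coding units can be sketched as follows. The example sizes (256 and 4) come from the paragraph above; the function name and the `None` convention for "parse from the bitstream" are assumptions for illustration.

```python
def default_split_mode(cu_size: int, max_cu_size: int = 256,
                       min_cu_size: int = 4):
    """Predetermined division decisions for the maximum/minimum coding units,
    per the example above (a sketch with the example sizes)."""
    if cu_size == max_cu_size:
        return "SPLIT_QT"      # quad split bisects both width and height
    if cu_size == min_cu_size:
        return "NO_SPLIT"      # the minimum coding unit is never divided
    return None                # otherwise, obtain the mode from the bitstream

assert default_split_mode(256) == "SPLIT_QT"
assert default_split_mode(4) == "NO_SPLIT"
assert 256 // 2 == 128         # quad split of 256x256 yields 128x128 units
```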
According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit has a square shape. For example, the image decoding apparatus 100 may determine, according to the division shape mode information, whether to not divide the square coding unit, to divide it vertically, to divide it horizontally, or to divide it into four coding units. Referring to fig. 3, when the block shape information of the current coding unit 300 indicates a square shape, the decoder 120 may not divide the coding unit 310a having the same size as the current coding unit 300, based on division shape mode information indicating no division, or may determine the coding units 310b, 310c, 310d, 310e, 310f, etc., divided based on division shape mode information indicating a predetermined division method.
Referring to fig. 3, according to an embodiment, the image decoding apparatus 100 may determine two coding units 310b by dividing the current coding unit 300 in the vertical direction based on division shape mode information indicating division in the vertical direction. The image decoding apparatus 100 may determine two coding units 310c by dividing the current coding unit 300 in the horizontal direction based on division shape mode information indicating division in the horizontal direction. The image decoding apparatus 100 may determine four coding units 310d by dividing the current coding unit 300 in the vertical and horizontal directions based on division shape mode information indicating division in the vertical and horizontal directions. According to an embodiment, the image decoding apparatus 100 may determine three coding units 310e by dividing the current coding unit 300 in the vertical direction based on division shape mode information indicating a ternary division in the vertical direction. The image decoding apparatus 100 may determine three coding units 310f by dividing the current coding unit 300 in the horizontal direction based on division shape mode information indicating a ternary division in the horizontal direction. However, the division shapes that can be used to divide a square coding unit should not be construed as being limited to the above-described shapes, but may include various shapes that can be indicated by the division shape mode information. Predetermined division shapes for dividing a square coding unit are specifically described below with reference to various embodiments.
Fig. 4 illustrates a process in which the image decoding apparatus 100 divides a non-square shaped coding unit to determine at least one coding unit according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may use block shape information indicating that the current coding unit has a non-square shape. The image decoding apparatus 100 may determine whether to divide the non-square current coding unit or to divide the current coding unit in a predetermined method based on the division form mode information. Referring to fig. 4, when the block shape information of the current coding unit 400 or 450 indicates a non-square shape, the image decoding apparatus 100 may determine a coding unit 410 or 460 having the same size as that of the current coding unit 400 or 450 according to partition form mode information indicating no partitioning, or may determine divided coding units 420a, 420b, 430a, 430b, 430c, 470a, 470b, 480a, 480b, and 480c based on partition form mode information indicating a predetermined partition method. A predetermined division method of dividing a non-square coding unit is specifically described below with reference to various embodiments.
According to an embodiment, the image decoding apparatus 100 may determine the form of the divided coding units using the division form mode information, in which case the division form mode information may indicate the number of at least one coding unit generated by dividing the coding units. Referring to fig. 4, when the partition form mode information indicates that the current coding unit 400 or 450 is divided into two coding units, the image decoding apparatus 100 may determine two coding units 420a, 420b or 470a, 470b included in the current coding unit by dividing the current coding unit 400 or 450 based on the partition form mode information.
According to an embodiment, when the image decoding apparatus 100 divides the non-square shaped current coding unit 400 or 450 based on the division form mode information, the image decoding apparatus 100 may divide the current coding unit in consideration of the position of the long side of the non-square shaped current coding unit 400 or 450. For example, the image decoding apparatus 100 may determine a plurality of coding units by dividing the current coding unit 400 or 450 in a direction of dividing a long side of the current coding unit 400 or 450 in consideration of the shape of the current coding unit 400 or 450.
According to an embodiment, when the division form mode information indicates that the coding unit is divided (triple division) into odd blocks, the image decoding apparatus 100 may determine odd-numbered coding units included in the current coding unit 400 or 450. For example, when the division form mode information indicates that the current coding unit 400 or 450 is divided into three coding units, the image decoding apparatus 100 may divide the current coding unit 400 or 450 into three coding units 430a, 430b and 430c or 480a, 480b, 480c.
According to an embodiment, the ratio of the width to the height of the current coding unit 400 or 450 may be 4:1 or 1:4. When the ratio of the width to the height is 4:1, the width is longer than the height, so the block shape information may indicate the horizontal direction. When the ratio of the width to the height is 1:4, the width is shorter than the height, so the block shape information may indicate the vertical direction. The image decoding apparatus 100 may determine to divide the current coding unit into an odd number of blocks based on the division shape mode information. Also, the image decoding apparatus 100 may determine the division direction of the current coding unit 400 or 450 based on the block shape information of the current coding unit 400 or 450. For example, when the current coding unit 400 is in the vertical direction, the image decoding apparatus 100 may determine the coding units 430a, 430b, and 430c by dividing the current coding unit 400 in the horizontal direction. In addition, when the current coding unit 450 is in the horizontal direction, the image decoding apparatus 100 may determine the coding units 480a, 480b, and 480c by dividing the current coding unit in the vertical direction.
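The rule that a non-square unit is split across its long side can be sketched as below. The 1:2:1 sub-block ratio matches the later observation that the center unit may differ in size from the others, but is an assumption made for this illustration.

```python
def ternary_split(width: int, height: int):
    """Split a non-square coding unit into three sub-units along its long
    side, as described above. The 1:2:1 size ratio is assumed for the sketch."""
    if height > width:       # vertical-direction block: split horizontally
        q = height // 4
        return [(width, q), (width, 2 * q), (width, q)]
    q = width // 4           # horizontal-direction block: split vertically
    return [(q, height), (2 * q, height), (q, height)]

# A vertical 8x32 unit is split horizontally; the center unit is larger.
assert ternary_split(8, 32) == [(8, 8), (8, 16), (8, 8)]
# The sub-unit widths of a horizontal unit sum back to the original width.
assert sum(w for w, _ in ternary_split(32, 8)) == 32
```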
According to an embodiment, the image decoding apparatus 100 may determine an odd number of coding units included in the current coding unit 400 or 450, and the determined coding units may not necessarily have the same size. For example, the size of a predetermined coding unit 430b or 480b of the determined odd-numbered coding units 430a, 430b, 430c, 480a, 480b, and 480c may be different from the sizes of the other coding units 430a, 430c, 480a, and 480c. In other words, the coding units that can be determined by dividing the current coding unit 400 or 450 may have a plurality of types of sizes, and the odd-numbered coding units 430a, 430b, 430c, 480a, 480b, and 480c may each have a different size according to circumstances.
According to an embodiment, when the division shape mode information indicates that a coding unit is divided into an odd number of blocks, the image decoding apparatus 100 may determine the odd number of coding units included in the current coding unit 400 or 450, and then may place a predetermined restriction on at least one coding unit among the odd number of coding units generated by the division. Referring to fig. 4, the image decoding apparatus 100 may make the decoding process for the centrally located coding unit 430b or 480b among the three coding units 430a, 430b, and 430c or 480a, 480b, and 480c generated by dividing the current coding unit 400 or 450 different from the decoding process for the other coding units 430a, 430c, 480a, and 480c. For example, unlike the other coding units 430a, 430c, 480a, and 480c, the image decoding apparatus 100 may restrict the centrally located coding unit 430b or 480b from being further divided, or from being divided more than a predetermined number of times.
Fig. 5 illustrates a process in which the image decoding apparatus 100 divides a coding unit based on at least one of block shape information and division form mode information according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may determine to divide the square-shaped first coding unit 500 into a plurality of coding units or to not divide based on at least one of the block shape information and the division form mode information. According to an embodiment, when the division form mode information indicates that the first encoding unit 500 is divided in the horizontal direction, the image decoding apparatus 100 may determine the second encoding unit 510 by dividing the first encoding unit 500 in the horizontal direction. The first coding unit, the second coding unit, and the third coding unit used according to an embodiment are terms used to understand the context of division between coding units. For example, the second coding unit may be determined by dividing the first coding unit, and the third coding unit may be determined by dividing the second coding unit. Hereinafter, it will be understood that the relationship between the first coding unit, the second coding unit and the third coding unit used is based on the above-described features.
According to an embodiment, the image decoding apparatus 100 may determine to divide the determined second coding unit 510 into a plurality of coding units or to not divide based on the division form mode information. Referring to fig. 5, the image decoding apparatus 100 may divide the non-square shaped second coding unit 510 determined by dividing the first coding unit 500 into at least one third coding unit 520a, 520b, 520c, 520d, etc., or may not divide the second coding unit 510 based on the division form mode information. The image decoding apparatus 100 may obtain division form mode information and may divide the first encoding unit 500 into a plurality of second encoding units (e.g., 510) of various forms based on the obtained division form mode information, and the second encoding unit 510 may be divided according to the manner in which the first encoding unit 500 is divided based on the division form mode information. According to an embodiment, when the first encoding unit 500 is divided into the second encoding units 510 based on the partition form mode information regarding the first encoding unit 500, the second encoding units 510 may also be divided into third encoding units (e.g., 520a, 520b, 520c, 520d, etc.) based on the partition form mode information regarding the second encoding units 510. In other words, the coding units may be recursively divided based on the division form pattern information about each coding unit. Therefore, a square-shaped coding unit may be determined among non-square-shaped coding units, or a non-square-shaped coding unit may be determined by recursively dividing the square-shaped coding unit.
Referring to fig. 5, a predetermined coding unit (e.g., a centrally located coding unit or a square coding unit) among the odd number of third coding units 520b, 520c, and 520d determined by dividing the non-square second coding unit 510 may be recursively divided. According to an embodiment, the square third coding unit 520b, which is one of the odd number of third coding units 520b, 520c, and 520d, may be divided in the horizontal direction into a plurality of fourth coding units. The non-square fourth coding unit 530b or 530d, one of the plurality of fourth coding units 530a, 530b, 530c, and 530d, may be further divided into a plurality of coding units. For example, the non-square fourth coding unit 530b or 530d may be subdivided into an odd number of coding units. Methods that can be used to recursively divide coding units are described later through various embodiments.
According to an embodiment, the image decoding apparatus 100 may divide the third encoding units 520a, 520b, 520c, and 520d into a plurality of encoding units, respectively, based on the division form mode information. Also, the image decoding apparatus 100 may determine not to divide the second encoding unit 510 based on the division form mode information. The image decoding apparatus 100 may divide the non-square-shaped second encoding unit 510 into an odd number of third encoding units 520b, 520c, and 520d according to an embodiment. The image decoding apparatus 100 may apply a predetermined restriction to a predetermined number of third encoding units among the odd-numbered third encoding units 520b, 520c, and 520d. For example, the image decoding apparatus 100 may limit the encoding unit 520c located at the center among the odd-numbered third encoding units 520b, 520c, and 520d to be no longer divided or to be divided a settable number of times.
Referring to fig. 5, the image decoding apparatus 100 may restrict a centrally located coding unit 520c among odd-numbered third coding units 520b, 520c, and 520d included in the non-square-shaped second coding unit 510 from being divided any more, restrict a division form (e.g., divided into only four coding units or divided in a form corresponding to the form in which the second coding unit 510 is divided), or restrict the number of times of division (e.g., divided only n times, n > 0). However, the limitation of the centrally located coding unit 520c is only a simple embodiment, and should not be construed as being limited to the above-described embodiment, and should be construed as including various limitations that the centrally located coding unit 520c can be decoded to be different from the other coding units 520b and 520d.
According to an embodiment, the image decoding apparatus 100 may obtain division form mode information for dividing the current coding unit from a predetermined position within the current coding unit.
Fig. 6 illustrates a method in which the image decoding apparatus 100 determines a predetermined coding unit from an odd number of coding units according to an embodiment.
Referring to fig. 6, partition form mode information of the current coding units 600 and 650 may be obtained from samples located at predetermined positions (e.g., the samples 640 and 690 located at the centers) among a plurality of samples included in the current coding units 600 and 650. However, the predetermined position within the current coding unit 600 at which at least one of such division form mode information can be obtained should not be restrictively interpreted as a central position shown in fig. 6, but the predetermined position may be interpreted as including various positions (e.g., uppermost, lowermost, left, right, left upper, left lower, right upper, or right lower, etc.) that may be included within the current coding unit 600. The image decoding apparatus 100 may obtain the division form mode information obtained from a predetermined position to determine whether to divide the current coding unit into coding units of various shapes and sizes or to determine not to divide.
According to an embodiment, the image decoding apparatus 100 may select one of the coding units when the current coding unit is divided into a predetermined number of coding units. There may be various methods of selecting one from among a plurality of coding units, which are described later with reference to various embodiments below.
According to an embodiment, the image decoding apparatus 100 may divide a current coding unit into a plurality of coding units and determine a coding unit of a predetermined position.
According to an embodiment, the image decoding apparatus 100 may use information indicating respective positions of the odd-numbered coding units to determine a centrally located coding unit among the odd-numbered coding units. Referring to fig. 6, the image decoding apparatus 100 may divide the current encoding unit 600 or the current encoding unit 650 to determine an odd number of encoding units 620a, 620b, and 620c or an odd number of encoding units 660a, 660b, and 660c. The image decoding apparatus 100 may determine the central encoding unit 620b or the central encoding unit 660b using information about the positions of the odd-numbered encoding units 620a, 620b, and 620c or the odd-numbered encoding units 660a, 660b, and 660c. For example, the image decoding apparatus 100 may determine the positions of the encoding units 620a, 620b, and 620c based on information indicating the positions of predetermined sampling points included in the encoding units 620a, 620b, and 620c to determine the encoding unit 620b located at the center. Specifically, the image decoding apparatus 100 may determine the positions of the encoding units 620a, 620b, and 620c based on the information indicating the positions of the samples 630a, 630b, and 630c at the upper left ends of the encoding units 620a, 620b, and 620c to determine the encoding unit 620b located at the center.
According to an embodiment, the information indicating the positions of the samples 630a, 630b, and 630c included at the upper left ends of the coding units 620a, 620b, and 620c, respectively, may include information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture. According to an embodiment, that information may also include information indicating the widths or heights of the coding units 620a, 620b, and 620c included in the current coding unit 600, and the widths or heights may be equivalent to information indicating the differences between the coordinates of the coding units 620a, 620b, and 620c within the picture. That is, the image decoding apparatus 100 may determine the coding unit 620b located at the center by directly using the information about the positions or coordinates of the coding units 620a, 620b, and 620c within the picture, or by using the information about the widths or heights of the coding units corresponding to the differences between the coordinates.
According to an embodiment, the information indicating the position of the sample 630a at the upper left end of the upper coding unit 620a may indicate (xa, ya) coordinates, the information indicating the position of the sample 630b at the upper left end of the center coding unit 620b may indicate (xb, yb) coordinates, and the information indicating the position of the sample 630c at the upper left end of the lower coding unit 620c may indicate (xc, yc) coordinates. The image decoding apparatus 100 may determine the center coding unit 620b using the coordinates of the upper-left samples 630a, 630b, and 630c included in the coding units 620a, 620b, and 620c, respectively. For example, when the coordinates of the upper-left samples 630a, 630b, and 630c are sorted in ascending or descending order, the coding unit 620b including the coordinates (xb, yb) of the sample 630b located at the center may be determined as the coding unit located at the center among the coding units 620a, 620b, and 620c determined by dividing the current coding unit 600. However, the coordinates indicating the positions of the upper-left samples 630a, 630b, and 630c may be coordinates indicating absolute positions within the picture; furthermore, the (dxb, dyb) coordinates, i.e., information indicating the relative position of the upper-left sample 630b of the center coding unit 620b with respect to the position of the upper-left sample 630a of the upper coding unit 620a, and the (dxc, dyc) coordinates, i.e., information indicating the relative position of the upper-left sample 630c of the lower coding unit 620c with respect to the position of the upper-left sample 630a of the upper coding unit 620a, may also be used.
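The center-selection step can be sketched as sorting the upper-left sample coordinates and taking the middle element. The unit names and coordinate values below are assumed purely for illustration of the fig. 6 arrangement.

```python
def center_coding_unit(units: dict) -> str:
    """Pick the centrally located unit among an odd number of coding units by
    sorting their upper-left sample coordinates, as described above.
    `units` maps a unit name to its upper-left (x, y) coordinate."""
    ordered = sorted(units, key=lambda name: (units[name][1], units[name][0]))
    return ordered[len(ordered) // 2]

# Three units stacked vertically (fig. 6 style; coordinates are assumed):
units = {"620a": (0, 0), "620b": (0, 16), "620c": (0, 48)}
assert center_coding_unit(units) == "620b"
```

Sorting by (y, x) handles both the vertically stacked units of coding unit 600 and the side-by-side units of coding unit 650, since the middle element is the same either way for an odd count.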
Also, the method of determining the encoding unit of the predetermined position by using information indicating the position of the sampling point included in the encoding unit as the coordinates of the sampling point should not be construed as being limited to the above-described method, but as various arithmetic methods that can use the coordinates of the sampling point.
According to an embodiment, the image decoding apparatus 100 may divide the current coding unit 600 into a plurality of coding units 620a, 620b, and 620c, and may determine a coding unit according to a predetermined criterion from among the coding units 620a, 620b, and 620c. For example, the image decoding apparatus 100 may select the coding unit 620b, which has a size different from the sizes of the others, from among the coding units 620a, 620b, and 620c.
According to an embodiment, the image decoding apparatus 100 may determine the width and height of each of the encoding units 620a, 620b, and 620c using the (xa, ya) coordinates as information indicating the position of the sample 630a at the upper left end of the upper-end encoding unit 620a, the (xb, yb) coordinates as information indicating the position of the sample 630b at the upper left end of the central encoding unit 620b, and the (xc, yc) coordinates as information indicating the position of the sample 630c at the upper left end of the lower-end encoding unit 620c. The image decoding apparatus 100 can determine the respective sizes of the encoding units 620a, 620b, and 620c using the coordinates (xa, ya), (xb, yb), and (xc, yc) indicating the positions of the encoding units 620a, 620b, and 620c. According to an embodiment, the image decoding apparatus 100 may determine the width of the upper-end encoding unit 620a as the width of the current encoding unit 600. The image decoding apparatus 100 may determine the height of the upper-end encoding unit 620a as yb-ya. According to an embodiment, the image decoding apparatus 100 may determine the width of the central encoding unit 620b as the width of the current encoding unit 600. The image decoding apparatus 100 may determine the height of the central encoding unit 620b as yc-yb. According to an embodiment, the image decoding apparatus 100 may determine the width or height of the lower-end encoding unit using the width or height of the current encoding unit and the widths and heights of the upper-end encoding unit 620a and the central encoding unit 620b. The image decoding apparatus 100 may determine the coding unit having a size different from the sizes of the other coding units based on the determined widths and heights of the coding units 620a, 620b, and 620c. Referring to fig. 6, the image decoding apparatus 100 may determine the central encoding unit 620b, which has a size different from the sizes of the upper-end encoding unit 620a and the lower-end encoding unit 620c, as the encoding unit at the predetermined position. However, the process described above, in which the image decoding apparatus 100 determines the coding unit having a size different from the sizes of the other coding units, is only one embodiment of determining a coding unit at a predetermined position using the sizes of coding units determined based on sample coordinates, and thus various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
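The size computation described above can be sketched as follows. This is a purely illustrative sketch, not part of the claimed embodiments; the function and variable names are assumptions, and a 1:2:1 vertical split of a 64x64 current coding unit is used as the example:

```python
# Hypothetical sketch: given the top-left sample y-coordinates (ya, yb, yc) of
# three coding units stacked vertically inside a current coding unit, recover
# each unit's size and pick the one whose size differs from the others, as
# described for the coding units 620a, 620b, and 620c.

def sizes_of_vertical_units(current_w, current_h, ya, yb, yc):
    """Return (width, height) of the upper, central, and lower coding units."""
    upper = (current_w, yb - ya)                 # height from the next unit's top edge
    center = (current_w, yc - yb)
    lower = (current_w, current_h - (yc - ya))   # remainder of the current unit
    return [upper, center, lower]

def unit_with_different_size(sizes):
    """Index of the unit whose size differs from the other two, or None."""
    for i, s in enumerate(sizes):
        if all(s != t for j, t in enumerate(sizes) if j != i):
            return i
    return None

# A 64x64 current coding unit split 1:2:1 vertically (heights 16, 32, 16).
sizes = sizes_of_vertical_units(64, 64, 0, 16, 48)
assert sizes == [(64, 16), (64, 32), (64, 16)]
assert unit_with_different_size(sizes) == 1   # the central unit, like 620b
```

The horizontal split of fig. 6 (units 660a, 660b, and 660c) is symmetric, with widths recovered from the x-coordinates xd, xe, and xf instead.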
The image decoding apparatus 100 can determine the width or height of each of the encoding units 660a, 660b, and 660c using the (xd, yd) coordinates as information indicating the position of the sample 670a at the upper left end of the left encoding unit 660a, (xe, ye) coordinates as information indicating the position of the sample 670b at the upper left end of the center encoding unit 660b, and (xf, yf) coordinates as information indicating the position of the sample 670c at the upper left end of the right encoding unit 660c. The image decoding apparatus 100 may determine the size of each of the encoding units 660a, 660b, and 660c using (xd, yd), (xe, ye), and (xf, yf) as coordinates indicating the positions of the encoding units 660a, 660b, and 660c.
According to an embodiment, the image decoding apparatus 100 may determine the width of the left encoding unit 660a as xe-xd. The image decoding apparatus 100 may determine the height of the left encoding unit 660a as the height of the current encoding unit 650. According to an embodiment, the image decoding apparatus 100 may determine the width of the central encoding unit 660b as xf-xe. The image decoding apparatus 100 may determine the height of the central encoding unit 660b as the height of the current encoding unit 650. According to an embodiment, the image decoding apparatus 100 may determine the width or height of the right encoding unit 660c using the width or height of the current encoding unit 650 and the widths and heights of the left encoding unit 660a and the central encoding unit 660b. The image decoding apparatus 100 may determine the coding unit having a size different from the sizes of the other coding units based on the determined widths and heights of the coding units 660a, 660b, and 660c. Referring to fig. 6, the image decoding apparatus 100 may determine the central coding unit 660b, which has a size different from the sizes of the left coding unit 660a and the right coding unit 660c, as the coding unit at the predetermined position. However, the process in which the image decoding apparatus 100 determines the coding unit having a size different from the sizes of the other coding units is only one embodiment of determining a coding unit at a predetermined position using the sizes of coding units determined based on sample coordinates, and thus various processes of determining a coding unit at a predetermined position by comparing the sizes of coding units determined according to predetermined sample coordinates may be used.
However, the position of the sample considered for determining the position of the coding unit should not be construed as being limited to the upper left end; it may rather be construed that information on the position of any sample included in the coding unit may be used.
According to an embodiment, the image decoding apparatus 100 may select a coding unit at a predetermined position from among an odd number of coding units determined by dividing the current coding unit, in consideration of the shape of the current coding unit. For example, if the current coding unit has a non-square shape whose width is greater than its height, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the horizontal direction. In other words, the image decoding apparatus 100 may determine one of the coding units located at different positions in the horizontal direction and impose restrictions on that coding unit. If the current coding unit has a non-square shape whose height is greater than its width, the image decoding apparatus 100 may determine the coding unit at the predetermined position along the vertical direction. In other words, the image decoding apparatus 100 may determine one of the coding units located at different positions in the vertical direction and impose restrictions on that coding unit.
According to an embodiment, the image decoding apparatus 100 may determine the coding unit at the predetermined position among the even number of coding units using information indicating respective positions of the even number of coding units. The image decoding apparatus 100 may determine an even number of coding units by dividing (binary-split) the current coding unit, and may determine a coding unit at a predetermined position using information on positions of the even number of coding units. The specific process may correspond to the process of determining the coding unit located at the predetermined position (e.g., the central position) from the odd number of coding units, which is described in detail with reference to fig. 6, and thus, will not be described again.
According to an embodiment, when a current coding unit having a non-square shape is divided into a plurality of coding units, a coding unit at a predetermined position may be determined from among the plurality of coding units using predetermined information about the coding unit at the predetermined position in the dividing process. For example, the image decoding apparatus 100 may determine the coding unit at the center from among the plurality of coding units into which the current coding unit is divided, using at least one of block shape information and division form mode information stored in a sample included in the central coding unit during the division process.
Referring to fig. 6, the image decoding apparatus 100 may divide the current coding unit 600 into the plurality of coding units 620a, 620b, and 620c based on the division form mode information, and may determine the coding unit 620b at the center from among the plurality of coding units 620a, 620b, and 620c. Further, the image decoding apparatus 100 may determine the encoding unit 620b at the center in consideration of the position from which the division form mode information is obtained. That is, the division form mode information may be obtained from the sample 640 at the center of the current coding unit 600, and, when the current coding unit 600 is divided into the plurality of coding units 620a, 620b, and 620c based on the division form mode information, the coding unit 620b including the sample 640 may be determined as the coding unit at the center. However, the information for determining the centrally located coding unit should not be construed as being limited to the division form mode information; various kinds of information may be used in determining the centrally located coding unit.
According to an embodiment, the predetermined information for identifying the coding unit at the predetermined position is obtainable from a predetermined sample included in the coding unit to be determined. Referring to fig. 6, the image decoding apparatus 100 may determine a coding unit at a predetermined position (e.g., the centrally located coding unit among the plurality of divided coding units) from among the plurality of coding units 620a, 620b, and 620c determined by dividing the current coding unit 600, using the division form mode information obtained from a sample at a predetermined position within the current coding unit 600 (e.g., a sample located at the center of the current coding unit 600). That is, the image decoding apparatus 100 may determine the sample located at the predetermined position in consideration of the block shape of the current coding unit 600, may determine the coding unit 620b including the sample from which predetermined information (e.g., division form mode information) can be obtained among the plurality of coding units 620a, 620b, and 620c determined by dividing the current coding unit 600, and may apply a predetermined restriction to the coding unit 620b. Referring to fig. 6, according to an embodiment, the image decoding apparatus 100 may determine the sample 640 located at the center of the current encoding unit 600 as a sample from which the predetermined information can be obtained, and may apply a predetermined restriction to the encoding unit 620b including the sample 640 in the decoding process. However, the position of the sample from which the predetermined information can be obtained should not be construed as being limited to the position described above; it may be construed as the position of any sample included in the encoding unit 620b determined for setting the restriction.
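The rule above — the coding unit containing the sample at the center of the current coding unit is treated as the central coding unit — can be sketched as follows. This is an illustrative sketch only; the function name and the rectangle representation are assumptions, not part of the patent:

```python
# Hypothetical sketch: after a three-way split, find the coding unit that
# contains the sample at the center of the current coding unit. That unit
# plays the role of the central coding unit 620b containing the sample 640.

def unit_containing_sample(units, sx, sy):
    """units: list of (x, y, w, h) rectangles; index of the one containing (sx, sy)."""
    for i, (x, y, w, h) in enumerate(units):
        if x <= sx < x + w and y <= sy < y + h:
            return i
    return None

# A 64x64 current coding unit split horizontally into heights 16/32/16; its
# center sample (32, 32) falls inside the middle unit.
units = [(0, 0, 64, 16), (0, 16, 64, 32), (0, 48, 64, 16)]
assert unit_containing_sample(units, 32, 32) == 1
```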
According to an embodiment, the position of the sample from which the predetermined information can be obtained may be determined according to the shape of the current coding unit 600. According to an embodiment, the block shape information may indicate whether the shape of the current coding unit is square or non-square, and the position of the sample from which the predetermined information can be obtained may be determined according to that shape. For example, the image decoding apparatus 100 may determine a sample located on a boundary that divides at least one of the width and the height of the current coding unit in half as the sample from which the predetermined information can be obtained, using at least one of the information on the width and the information on the height of the current coding unit. As another example, when the block shape information on the current coding unit indicates a non-square shape, the image decoding apparatus 100 may determine one of the samples adjacent to the boundary dividing the long side of the current coding unit in half as the sample from which the predetermined information can be obtained.
According to an embodiment, when dividing a current coding unit into a plurality of coding units, the image decoding apparatus 100 may determine a coding unit located at a predetermined position among the plurality of coding units using the division form mode information. According to an embodiment, the image decoding apparatus 100 may obtain the division form mode information from a sample at a predetermined position included in a coding unit, and may divide the plurality of coding units generated by dividing the current coding unit using the division form mode information obtained from the samples at the predetermined positions respectively included in the plurality of coding units. In other words, the coding units may be recursively divided using the division form mode information obtained from the sample at the predetermined position included in each coding unit. The recursive partitioning process of the coding unit has been described in detail with reference to fig. 5, and thus will not be described again.
According to an embodiment, the image decoding apparatus 100 may determine at least one coding unit by dividing a current coding unit, and may determine an order of decoding the at least one coding unit according to a predetermined block (e.g., the current coding unit).
Fig. 7 illustrates an order in which a plurality of coding units are processed when the image decoding apparatus 100 divides a current coding unit to determine the plurality of coding units according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 700 in the vertical direction to determine the second encoding units 710a and 710b, divide the first encoding unit 700 in the horizontal direction to determine the second encoding units 730a and 730b, or divide the first encoding unit 700 in both the vertical direction and the horizontal direction to determine the second encoding units 750a, 750b, 750c, and 750d, according to the division form mode information.
Referring to fig. 7, the image decoding apparatus 100 may determine an order such that the second encoding units 710a and 710b determined by dividing the first encoding unit 700 in the vertical direction are processed in the horizontal direction 710c. The image decoding apparatus 100 may set the processing order of the second encoding units 730a and 730b determined by dividing the first encoding unit 700 in the horizontal direction to the vertical direction 730c. The image decoding apparatus 100 may determine to process the second coding units 750a, 750b, 750c, and 750d determined by dividing the first coding unit 700 in the vertical direction and the horizontal direction according to a predetermined order in which the coding units located in one line are processed and then the coding units located in the next line are processed (e.g., a raster scan order or the z-scan order 750e).
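A z-scan traversal of the kind referenced above can be sketched as follows. This is an illustrative sketch under the assumption of a square area recursively quartered down to equal-sized blocks; the function name is not from the patent:

```python
# Hypothetical sketch of a z-scan order: within a square area, the four
# quadrants are visited top-left, top-right, bottom-left, bottom-right,
# recursively, producing the "Z" pattern suggested by order 750e.

def z_scan(x, y, size, min_size):
    """Top-left corners of min_size blocks inside a size x size square at (x, y)."""
    if size == min_size:
        return [(x, y)]
    half = size // 2
    order = []
    for dy in (0, half):
        for dx in (0, half):          # Z pattern within each level of recursion
            order += z_scan(x + dx, y + dy, half, min_size)
    return order

# An 8x8 area scanned as four 4x4 blocks visits the quadrants in a "Z".
assert z_scan(0, 0, 8, 4) == [(0, 0), (4, 0), (0, 4), (4, 4)]
```

A raster scan, by contrast, would simply walk all blocks of one line before moving to the next line.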
According to an embodiment, the image decoding apparatus 100 may recursively divide the encoding units. Referring to fig. 7, the image decoding apparatus 100 may determine a plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d by dividing the first coding unit 700, and may recursively divide each of the determined coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d. The method of dividing the plurality of coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may correspond to the method of dividing the first coding unit 700. Accordingly, the coding units 710a, 710b, 730a, 730b, 750a, 750b, 750c, and 750d may each be independently divided into a plurality of coding units. Referring to fig. 7, the image decoding apparatus 100 may divide the first coding unit 700 in the vertical direction to determine the second coding units 710a and 710b, and may further determine whether or not to divide each of the second coding units 710a and 710b independently.
According to an embodiment, the image decoding apparatus 100 may divide the left second encoding unit 710a into the third encoding units 720a and 720b in the horizontal direction, and may not divide the right second encoding unit 710b.
According to an embodiment, the processing order of the coding units may be determined according to the division process of the coding units. In other words, the processing order of the divided coding units may be determined based on the processing order of the coding units before division. The image decoding apparatus 100 may determine the processing order of the third encoding units 720a and 720b determined by dividing the left second encoding unit 710a independently of the right second encoding unit 710b. Since the third encoding units 720a and 720b are determined by dividing the left second encoding unit 710a in the horizontal direction, the third encoding units 720a and 720b may be processed in the vertical direction 720c. Also, since the order in which the left second encoding unit 710a and the right second encoding unit 710b are processed corresponds to the horizontal direction 710c, the right second encoding unit 710b may be processed after the third encoding units 720a and 720b included in the left second encoding unit 710a are processed in the vertical direction 720c. The above description merely explains the process of determining the processing order of each coding unit from the coding units before division; it should not be construed as being limited to the above embodiment, and should be construed as being applicable to various methods in which the coding units determined by being divided into various forms can be independently processed in a predetermined order.
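The rule that the processing order of divided coding units follows the order of the units before division can be sketched as a tree flattening. This is an illustrative sketch only; the tree representation and labels are assumptions, not structures from the patent:

```python
# Hypothetical sketch: a division tree is a pair (label, children), where
# children are listed in the processing order of the units before division.
# Flattening the tree depth-first yields the decoding order described above.

def process_order(unit):
    """Flatten a division tree into the order the leaf coding units are processed."""
    label, children = unit
    if not children:
        return [label]
    order = []
    for child in children:   # children already in pre-division processing order
        order += process_order(child)
    return order

# First unit 700 split vertically into 710a/710b; 710a further split
# horizontally into 720a/720b. 720a and 720b are processed before 710b.
tree = ("700", [("710a", [("720a", []), ("720b", [])]), ("710b", [])])
assert process_order(tree) == ["720a", "720b", "710b"]
```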
Fig. 8 illustrates a process of determining that a current coding unit is to be divided into an odd number of coding units when the image decoding apparatus 100 cannot process the coding units in a predetermined order, according to an embodiment.
According to an embodiment, the image decoding apparatus 100 determines that the current coding unit is divided into an odd number of coding units based on the obtained block shape information and division form mode information. Referring to fig. 8, the first coding unit 800 having a square shape may be divided into the non-square second coding units 810a and 810b, and the second coding units 810a and 810b may be independently divided into the third coding units 820a, 820b, 820c, 820d, and 820e. According to an embodiment, among the second encoding units, the image decoding apparatus 100 may divide the left encoding unit 810a in the horizontal direction to determine the plurality of third encoding units 820a and 820b, and may divide the right encoding unit 810b into the odd number of third encoding units 820c, 820d, and 820e.
According to an embodiment, the image decoding apparatus 100 may determine whether the third encoding units 820a, 820b, 820c, 820d, and 820e can be processed in a predetermined order, to determine whether any coding unit is divided into an odd number of coding units. Referring to fig. 8, the image decoding apparatus 100 may recursively divide the first encoding unit 800 to determine the third encoding units 820a, 820b, 820c, 820d, and 820e. The image decoding apparatus 100 may determine, based on the division form mode information, whether the first coding unit 800, the second coding units 810a and 810b, or the third coding units 820a, 820b, 820c, 820d, and 820e are divided into an odd number of coding units. For example, the coding unit located on the right side among the second coding units 810a and 810b may be divided into the odd number of third coding units 820c, 820d, and 820e. The processing order of the plurality of coding units included in the first coding unit 800 may be a predetermined order (e.g., the z-scan order 830), and the image decoding apparatus 100 may determine whether the third coding units 820c, 820d, and 820e determined by dividing the right second coding unit 810b into an odd number satisfy the condition of being processable in the predetermined order.
According to an embodiment, the image decoding apparatus 100 may determine whether the third encoding units 820a, 820b, 820c, 820d, and 820e included in the first encoding unit 800 satisfy the condition of being processable in the predetermined order, the condition being related to whether at least one of the width and the height of the second encoding units 810a and 810b is divided in half by the boundaries of the third encoding units 820a, 820b, 820c, 820d, and 820e. For example, the third encoding units 820a and 820b determined by dividing the height of the non-square left second encoding unit 810a in half satisfy the condition. The boundaries of the third coding units 820c, 820d, and 820e determined by dividing the right second coding unit 810b into three coding units do not divide the width or height of the right second coding unit 810b in half, and thus it may be determined that the third coding units 820c, 820d, and 820e do not satisfy the condition. The image decoding apparatus 100 may determine that such failure to satisfy the condition indicates a discontinuity (disconnection) of the scan order, and may determine, based on the determination result, that the right second encoding unit 810b is divided into an odd number of encoding units. According to an embodiment, when a coding unit is divided into an odd number of coding units, the image decoding apparatus 100 may apply a predetermined restriction to the coding unit at a predetermined position among the divided coding units; since the content of the restriction or the predetermined position has been described in detail with reference to various embodiments, the description thereof is omitted.
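The half-division condition above can be sketched as a simple boundary check. This is an illustrative sketch under the assumption that child sizes along the split direction are known; the function name is not from the patent:

```python
# Hypothetical sketch of the condition check: the condition is satisfied when
# some internal boundary between sub-units splits the parent dimension exactly
# in half. A 1:2:1 three-way split places no boundary at the midpoint, which
# is how an odd split such as 820c, 820d, and 820e is detected.

def boundaries_halve_parent(parent_size, child_sizes):
    """True if an internal boundary of the children halves parent_size."""
    edge = 0
    for size in child_sizes[:-1]:   # positions of the internal boundaries
        edge += size
        if 2 * edge == parent_size:
            return True
    return False

# A binary split of a 32-high unit meets the condition (like 820a/820b);
# a 1:2:1 three-way split of the same unit violates it (like 820c-820e).
assert boundaries_halve_parent(32, [16, 16]) is True
assert boundaries_halve_parent(32, [8, 16, 8]) is False
```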
Fig. 9 illustrates a process in which the image decoding apparatus 100 divides the first coding unit 900 to determine at least one coding unit according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 900 based on division form mode information obtained from a receiver (not shown). The square first coding unit 900 may be divided into four coding units having a square shape or into a plurality of non-square shaped coding units. For example, referring to fig. 9, when the first coding unit 900 is square and the partition form mode information indicates a partition of coding units in a non-square form, the image decoding apparatus 100 may partition the first coding unit 900 into a plurality of non-square coding units. Specifically, when the division form mode information indicates that an odd number of coding units are determined by dividing the first coding unit 900 in the horizontal direction or the vertical direction, the image decoding apparatus 100 may divide the square-shaped first coding unit 900 into the odd number of coding units, that is, the second coding units 910a, 910b, and 910c determined by dividing in the vertical direction or the second coding units 920a, 920b, and 920c determined by dividing in the horizontal direction.
According to an embodiment, the image decoding apparatus 100 may determine whether the second coding units 910a, 910b, 910c, 920a, 920b, and 920c included in the first coding unit 900 satisfy the condition of being processable in a predetermined order, the condition being related to whether at least one of the width and the height of the first coding unit 900 is divided in half by the boundaries of the second coding units 910a, 910b, 910c, 920a, 920b, and 920c. Referring to fig. 9, the boundaries of the second coding units 910a, 910b, and 910c determined by dividing the square first coding unit 900 in the vertical direction do not divide the width of the first coding unit 900 in half, and thus it may be determined that the first coding unit 900 does not satisfy the condition of being processable in the predetermined order. Also, the boundaries of the second coding units 920a, 920b, and 920c determined by dividing the square first coding unit 900 in the horizontal direction do not divide the height of the first coding unit 900 in half, and thus it may be determined that the first coding unit 900 does not satisfy the condition of being processable in the predetermined order. The image decoding apparatus 100 may determine that such failure to satisfy the condition indicates a discontinuity (disconnection) of the scan order, and may determine, based on the determination result, that the first encoding unit 900 is divided into an odd number of encoding units. According to an embodiment, when a coding unit is divided into an odd number of coding units, the image decoding apparatus 100 may apply a predetermined restriction to the coding unit at a predetermined position among the divided coding units; since the content of the restriction or the predetermined position has been described in detail with reference to various embodiments, the description thereof is omitted.
According to an embodiment, the image decoding apparatus 100 may divide the first coding unit to determine various forms of coding units.
Referring to fig. 9, the image decoding apparatus 100 may divide the square first coding unit 900, or the non-square first coding unit 930 or 950, into coding units of various forms.
Fig. 10 illustrates that, according to an embodiment, when a non-square second encoding unit determined by dividing the first encoding unit 1000 satisfies a predetermined condition, the forms into which the second encoding unit can be divided are limited by the image decoding apparatus 100.
According to an embodiment, the image decoding apparatus 100 may determine to divide the square first coding unit 1000 into the non-square second coding units 1010a, 1010b, 1020a, and 1020b based on division form mode information obtained from a receiver (not shown). The second encoding units 1010a, 1010b, 1020a, and 1020b may be independently divided. Thus, the image decoding apparatus 100 may determine whether or not to divide each of the second coding units 1010a, 1010b, 1020a, and 1020b into a plurality of coding units, based on the division form mode information of that second coding unit. According to an embodiment, the image decoding apparatus 100 may determine the third coding units 1012a and 1012b by dividing, in the horizontal direction, the non-square left second coding unit 1010a determined by dividing the first coding unit 1000 in the vertical direction. However, after dividing the left second encoding unit 1010a in the horizontal direction, the image decoding apparatus 100 may restrict the right second encoding unit 1010b so that it cannot be divided in the same horizontal direction as the dividing direction of the left second encoding unit 1010a. If the third coding units 1014a and 1014b were determined by dividing the right second coding unit 1010b in the same direction, the left second coding unit 1010a and the right second coding unit 1010b would each be independently divided in the horizontal direction to determine the third coding units 1012a, 1012b, 1014a, and 1014b. However, this is the same result as the image decoding apparatus 100 dividing the first encoding unit 1000 into the four square second encoding units 1030a, 1030b, 1030c, and 1030d based on the division form mode information, which may be inefficient from the viewpoint of image decoding.
According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 1022a, 1022b, 1024a, and 1024b by dividing, in the vertical direction, the non-square second encoding unit 1020a or 1020b determined by dividing the first encoding unit 1000 in the horizontal direction. However, when one of the second coding units (e.g., the upper-end second coding unit 1020a) is divided in the vertical direction, the image decoding apparatus 100 may, for the reason described above, restrict the other second coding unit (e.g., the lower-end second coding unit 1020b) so that it cannot be divided in the same vertical direction as the direction in which the upper-end second coding unit 1020a is divided.
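The restriction described in this passage reduces to a simple rule, sketched below. This is an illustrative sketch only; the function and the string direction labels are assumptions, not from the patent:

```python
# Hypothetical sketch of the sibling-split restriction: once one non-square
# second coding unit has been split in a given direction, splitting its
# sibling in the same direction is disallowed, since the result would merely
# reproduce the four-way square split of the first coding unit.

def sibling_split_allowed(first_split_dir, sibling_split_dir):
    """first_split_dir: direction already used on one second coding unit."""
    return sibling_split_dir != first_split_dir

# 1010a (from a vertical split of 1000) was split horizontally, so the right
# unit 1010b may not also be split horizontally.
assert sibling_split_allowed("horizontal", "horizontal") is False
assert sibling_split_allowed("horizontal", "vertical") is True
```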
Fig. 11 illustrates a process in which the image decoding apparatus 100 divides a coding unit of a square shape when the division form mode information cannot indicate the division into four coding units of a square shape according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may determine the second encoding units 1110a, 1110b, 1120a, 1120b, and the like by dividing the first encoding unit 1100 based on the division form mode information. The division form mode information may include information on the various forms into which a coding unit can be divided; in some cases, however, it may not include information for dividing a coding unit into four square coding units. According to such division form mode information, the image decoding apparatus 100 cannot divide the square first coding unit 1100 into the four square second coding units 1130a, 1130b, 1130c, and 1130d. Based on the division form mode information, the image decoding apparatus 100 may determine the non-square second encoding units 1110a, 1110b, 1120a, 1120b, and the like.
According to an embodiment, the image decoding apparatus 100 may independently divide each of the non-square second encoding units 1110a, 1110b, 1120a, 1120b, and the like. Each of the second encoding units 1110a, 1110b, 1120a, and 1120b may be divided in a predetermined order through a recursive method, which may be a dividing method corresponding to the method of dividing the first encoding unit 1100 based on the division form mode information.
For example, the image decoding apparatus 100 may determine the square-shaped third encoding units 1112a and 1112b by dividing the left-side second encoding unit 1110a in the horizontal direction, and may determine the square-shaped third encoding units 1114a and 1114b by dividing the right-side second encoding unit 1110b in the horizontal direction. Further, the image decoding apparatus 100 can determine the square-shaped third encoding units 1116a, 1116b, 1116c, and 1116d by dividing both the left-side second encoding unit 1110a and the right-side second encoding unit 1110b in the horizontal direction. In this case, the coding unit may be determined in the same form as that of the second coding units 1130a, 1130b, 1130c and 1130d in which the first coding unit 1100 is divided into four square shapes.
According to another example, the image decoding apparatus 100 may determine the square third encoding units 1122a and 1122b by dividing the upper-end second encoding unit 1120a in the vertical direction, and may determine the square third encoding units 1124a and 1124b by dividing the lower-end second encoding unit 1120b in the vertical direction. Further, the image decoding apparatus 100 may determine the square third encoding units 1126a, 1126b, 1126c, and 1126d by dividing both the upper-end second encoding unit 1120a and the lower-end second encoding unit 1120b in the vertical direction. In this case, the coding units may be determined in the same form as the division of the first coding unit 1100 into the four square second coding units 1130a, 1130b, 1130c, and 1130d.
Fig. 12 illustrates that the processing order among a plurality of coding units according to an embodiment may be changed according to the division process of the coding units.
According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1200 based on the division form mode information. When the block shape is square and the division form mode information indicates that the first encoding unit 1200 is divided in at least one of the horizontal direction and the vertical direction, the image decoding apparatus 100 may determine the second encoding units (e.g., 1210a, 1210b, 1220a, 1220b, etc.) by dividing the first encoding unit 1200. Referring to fig. 12, the non-square second coding units 1210a, 1210b, 1220a, and 1220b determined by dividing the first coding unit 1200 only in the horizontal direction or only in the vertical direction may each be independently divided based on the division form mode information of that second coding unit. For example, the image decoding apparatus 100 may determine the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing, in the horizontal direction, the second encoding units 1210a and 1210b generated by dividing the first encoding unit 1200 in the vertical direction, and may determine the third encoding units 1226a, 1226b, 1226c, and 1226d by dividing, in the vertical direction, the second encoding units 1220a and 1220b generated by dividing the first encoding unit 1200 in the horizontal direction. The dividing process of the second encoding units 1210a, 1210b, 1220a, and 1220b has been described in detail with reference to fig. 11, and thus the description thereof is omitted.
According to an embodiment, the image decoding apparatus 100 may process the encoding units in a predetermined order. The features of processing coding units in a predetermined order have already been described with reference to fig. 7, and thus will not be described in detail. Referring to fig. 12, the image decoding apparatus 100 may divide the square first coding unit 1200 to determine four square third coding units 1216a, 1216b, 1216c, and 1216d, or 1226a, 1226b, 1226c, and 1226d. According to an embodiment, the image decoding apparatus 100 may determine the processing order of the third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d according to the form in which the first encoding unit 1200 is divided.
According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 1216a, 1216b, 1216c, and 1216d by dividing the second encoding units 1210a and 1210b, which are generated by dividing in the vertical direction, in the horizontal direction, respectively, and may process the third encoding units 1216a and 1216c included in the left-side second encoding unit 1210a in the vertical direction first, and then process the third encoding units 1216b and 1216d included in the right-side second encoding unit 1210b in the vertical direction in order 1217.
According to an embodiment, the image decoding apparatus 100 may determine the third encoding units 1226a, 1226b, 1226c, and 1226d by dividing the second encoding units 1220a and 1220b generated by dividing in the horizontal direction in the vertical direction, respectively, and may process the third encoding units 1226a and 1226b included in the upper-end second encoding unit 1220a in the vertical direction first, and then process the third encoding units 1226c and 1226d included in the lower-end second encoding unit 1220b in the vertical direction in order 1227.
Referring to fig. 12, the square-shaped third encoding units 1216a, 1216b, 1216c, 1216d, 1226a, 1226b, 1226c, and 1226d may be determined by dividing the second encoding units 1210a, 1210b, 1220a, and 1220b, respectively. The second coding units 1210a and 1210b determined by dividing in the vertical direction and the second coding units 1220a and 1220b determined by dividing in the horizontal direction are divided in forms different from each other; however, according to the third coding units determined afterwards, the first coding unit 1200 is consequently divided into coding units of the same shape. Accordingly, even when a plurality of coding units of the same shape are determined as a result of recursively dividing coding units through different procedures based on the division form mode information, the image decoding apparatus 100 may process the plurality of coding units of the same shape in orders different from each other.
Fig. 13 illustrates a process of determining a depth of a coding unit as a shape and a size of the coding unit are changed when the coding unit is recursively divided to determine a plurality of coding units according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may determine the depth of a coding unit according to a predetermined reference. For example, the predetermined reference may be the length of the long side of the coding unit. When the length of the long side of the current coding unit is 1/2^n (n > 0) times the length of the long side of the coding unit before division, the image decoding apparatus 100 may determine that the depth of the current coding unit is increased by n relative to the depth of the coding unit before division. Hereinafter, coding units whose depths are increased are expressed as coding units of lower depths.
Referring to fig. 13, according to an embodiment, based on block shape information indicating a square shape (e.g., the block shape information may indicate '0: SQUARE'), if it is assumed that the size of the square-shaped first coding unit 1300 is 2N × 2N, the second coding unit 1302 determined by dividing the width and height of the first coding unit 1300 to 1/2 may have a size of N × N. Further, the third coding unit 1304, determined by dividing the width and height of the second coding unit 1302 to 1/2, may have a size of N/2 × N/2. In this case, the width and height of the third coding unit 1304 correspond to 1/4 times the width and height of the first coding unit 1300, respectively. When the depth of the first coding unit 1300 is D, the depth of the second coding unit 1302, whose width and height are 1/2 times those of the first coding unit 1300, may be D+1, and the depth of the third coding unit 1304, whose width and height are 1/4 times those of the first coding unit 1300, may be D+2.
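The depth relationship described above can be sketched in a few lines of Python (an illustrative sketch with assumed sample sizes, not part of the patent; the function name `coding_unit_depth` is hypothetical): the depth grows by n whenever the long side shrinks to 1/2^n of the long side of the coding unit before division.

```python
from math import log2

def coding_unit_depth(base_depth, base_long_side, current_long_side):
    """Depth increases by n when the long side shrinks to 1/2^n of the original."""
    n = log2(base_long_side / current_long_side)
    assert n >= 0 and n == int(n), "long side must shrink by a power of two"
    return base_depth + int(n)

# A 2N x 2N first coding unit 1300 with N = 32 and depth D = 0:
D = 0
print(coding_unit_depth(D, 64, 64))  # first coding unit 1300  -> 0 (D)
print(coding_unit_depth(D, 64, 32))  # second coding unit 1302 -> 1 (D + 1)
print(coding_unit_depth(D, 64, 16))  # third coding unit 1304  -> 2 (D + 2)
```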
According to an embodiment, the image decoding apparatus 100 may divide the non-square-shaped first encoding unit 1310 or 1320 and determine the second encoding unit 1312 or 1322, the third encoding unit 1314 or 1324, and the like of lower depths, based on block shape information indicating a non-square shape (e.g., the block shape information may indicate '1: NS_VER', which indicates a non-square shape whose height is longer than its width, or '2: NS_HOR', which indicates a non-square shape whose width is longer than its height).
The image decoding apparatus 100 may determine the second encoding unit (e.g., the second encoding units 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the first encoding unit 1310 having the size of N × 2N. In other words, the image decoding apparatus 100 may divide the first encoding unit 1310 in the horizontal direction to determine the second encoding unit 1302 having the size of N × N or the second encoding unit 1322 having the size of N × N/2, or may divide the first encoding unit 1310 in the horizontal direction and the vertical direction to determine the second encoding unit 1312 having the size of N/2 × N.
According to an embodiment, the image decoding apparatus 100 may determine the second coding unit (e.g., 1302, 1312, 1322, etc.) by dividing at least one of the width and the height of the first coding unit 1320 having a size of 2N × N. That is, the image decoding apparatus 100 may determine the second coding unit 1302 having an N × N size or the second coding unit 1312 having an N/2 × N size by dividing the first coding unit 1320 in the vertical direction, or may determine the second coding unit 1322 having an N × N/2 size by dividing the first coding unit 1320 in the horizontal direction and the vertical direction.
According to an embodiment, the image decoding apparatus 100 may determine the third coding unit (e.g., 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the second coding unit 1302 having an N × N size. In other words, the image decoding apparatus 100 may determine the third coding unit 1304 having a size of N/2 × N/2 by dividing the second coding unit 1302 in the vertical direction and the horizontal direction, or may determine the third coding unit 1314 having a size of N/4 × N/2, or the third coding unit 1324 having a size of N/2 × N/4.
According to an embodiment, the image decoding apparatus 100 may determine the third encoding unit (e.g., 1304, 1314, 1324, etc.) by dividing at least one of the width and height of the second encoding unit 1312 having the size of N/2 × N. That is, the image decoding apparatus 100 may determine the third encoding unit 1304 having the size of N/2 × N/2 or the third encoding unit 1324 having the size of N/2 × N/4 by dividing the second encoding unit 1312 in the horizontal direction, or may determine the third encoding unit 1314 having the size of N/4 × N/2 by dividing the second encoding unit 1312 in the vertical direction and the horizontal direction.
According to an embodiment, the image decoding apparatus 100 may also determine the third coding unit (e.g., 1304, 1314, 1324, etc.) by dividing at least one of the width and the height of the second coding unit 1322 having a size of N × N/2. In other words, the image decoding apparatus 100 may determine the third coding unit 1304 having a size of N/2 × N/2 or the third coding unit 1314 having a size of N/4 × N/2 by dividing the second coding unit 1322 in the vertical direction, or may determine the third coding unit 1324 having a size of N/2 × N/4 by dividing the second coding unit 1322 in the vertical direction and the horizontal direction.
According to an embodiment, the image decoding apparatus 100 may divide the square-shaped coding unit (e.g., 1300, 1302, or 1304) in the horizontal direction or the vertical direction. For example, the first coding unit 1310 having an N × 2N size may be determined by dividing the first coding unit 1300 having a 2N × 2N size in the vertical direction, or the first coding unit 1320 having a 2N × N size may be determined by dividing the first coding unit 1300 in the horizontal direction. According to an embodiment, when the depth is determined based on the length of the longest side of a coding unit, the depth of a coding unit determined by dividing the first coding unit 1300 having a size of 2N × 2N in the horizontal direction or the vertical direction may be the same as the depth of the first coding unit 1300.
According to an embodiment, the width and height of the third encoding unit 1314 or 1324 may correspond to 1/4 times those of the first encoding unit 1310 or 1320. When the depth of the first coding unit 1310 or 1320 is D, the depth of the second coding unit 1312 or 1322, whose width and height are 1/2 times those of the first coding unit 1310 or 1320, may be D+1, and the depth of the third coding unit 1314 or 1324, whose width and height are 1/4 times those of the first coding unit 1310 or 1320, may be D+2.
Fig. 14 illustrates a depth that may be determined according to the shape and size of coding units and a partial index (hereinafter, PID) for distinguishing the coding units, according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may determine the second encoding units having various shapes by dividing the first encoding unit 1400 having a square shape. Referring to fig. 14, the image decoding apparatus 100 may divide the first encoding unit 1400 in at least one of a vertical direction and a horizontal direction according to the division form mode information to determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d. That is, the image decoding apparatus 100 may determine the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, 1406d based on the division form mode information for the first encoding unit 1400.
According to an embodiment, the depths of the second encoding units 1402a, 1402b, 1404a, 1404b, 1406a, 1406b, 1406c, and 1406d determined based on the division form mode information for the square-shaped first encoding unit 1400 may be determined based on the lengths of their long sides. For example, the length of one side of the square-shaped first coding unit 1400 is the same as the length of the long side of the non-square-shaped second coding units 1402a, 1402b, 1404a, and 1404b, and thus the first coding unit 1400 and the non-square-shaped second coding units 1402a, 1402b, 1404a, and 1404b may be regarded as having the same depth D. In contrast, when the image decoding apparatus 100 divides the first coding unit 1400 into the four square-shaped second coding units 1406a, 1406b, 1406c, and 1406d based on the division form mode information, the length of one side of the square-shaped second coding units 1406a, 1406b, 1406c, and 1406d is 1/2 times the length of one side of the first coding unit 1400, and thus the depth of the second coding units 1406a, 1406b, 1406c, and 1406d may be D+1, i.e., one depth deeper than the depth D of the first coding unit 1400.
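Since the depth is keyed to the length of the long side, a binary split of a square leaves the depth unchanged while a four-way split deepens it by one, as described above. A hedged Python sketch (the sample sizes are assumed and the helper name `depth_from_long_side` is hypothetical, not part of the patent):

```python
from math import log2

def depth_from_long_side(parent_depth, parent_w, parent_h, child_w, child_h):
    """Depth increases only when the LONG side of the child is shorter."""
    parent_long = max(parent_w, parent_h)
    child_long = max(child_w, child_h)
    return parent_depth + int(log2(parent_long / child_long))

D = 0
# Binary vertical split of the 2N x 2N unit 1400 (N = 32) into N x 2N halves
# (1402a, 1402b): the long side is still 2N, so the depth stays D.
print(depth_from_long_side(D, 64, 64, 32, 64))  # -> 0
# Four-way split into N x N units (1406a-1406d): the side halves, depth D + 1.
print(depth_from_long_side(D, 64, 64, 32, 32))  # -> 1
```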
According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1410, whose height is greater than its width, into a plurality of second encoding units 1412a, 1412b, 1414a, 1414b, and 1414c in the horizontal direction according to the division form mode information. According to an embodiment, the image decoding apparatus 100 may divide the first encoding unit 1420, whose width is greater than its height, into a plurality of second encoding units 1422a, 1422b, 1424a, 1424b, and 1424c in the vertical direction according to the division form mode information.
According to an embodiment, the depths of the second encoding units 1412a, 1412b, 1414a, 1414b, 1414c, 1422a, 1422b, 1424a, 1424b, and 1424c, which are determined according to the division form mode information for the non-square-shaped first encoding unit 1410 or 1420, may be determined based on the lengths of their long sides. For example, the length of one side of the square-shaped second coding units 1412a and 1412b is 1/2 times the length of the long side of the non-square-shaped first coding unit 1410 whose height is greater than its width, and thus the depth of the square-shaped second coding units 1412a and 1412b is D+1, one depth deeper than the depth D of the non-square-shaped first coding unit 1410.
Further, the image decoding apparatus 100 may divide the non-square-shaped first coding unit 1410 into odd-numbered second coding units 1414a, 1414b, and 1414c based on the division form mode information. The odd number of second coding units 1414a, 1414b, and 1414c may include non-square shaped second coding units 1414a and 1414c and square shaped second coding units 1414b. In this case, the length of the long sides of the non-square-shaped second coding units 1414a and 1414c and the length of one side of the square-shaped second coding unit 1414b are equal to 1/2 times the length of one side of the first coding unit 1410, and thus the depths of the second coding units 1414a, 1414b, and 1414c are depths D +1 which are deeper by one depth than the depth D of the first coding unit 1410. The image decoding apparatus 100 may determine the depths of the coding units related to the non-square-shaped first coding unit 1420 whose width is greater than the height in a manner corresponding to the manner of determining the depths of the coding units related to the first coding unit 1410.
According to an embodiment, in determining the index (PID) for distinguishing the divided coding units, when the sizes of the odd number of divided coding units are not all the same, the image decoding apparatus 100 may determine the index based on the size ratio between the coding units. Referring to fig. 14, among the odd number of divided coding units 1414a, 1414b, and 1414c, the width of the centrally located coding unit 1414b may be the same as the width of the other coding units 1414a and 1414c, but its height may be twice the height of the other coding units 1414a and 1414c. In other words, in this case, the centrally located coding unit 1414b may correspond to two of the other coding units 1414a and 1414c. Therefore, according to the scanning order, if the PID of the centrally located coding unit 1414b is 1, the PID of the coding unit 1414c located next in order may be 3, increased by 2 rather than by 1. In other words, there may be a discontinuity in the index values. According to an embodiment, the image decoding apparatus 100 may determine whether the odd number of divided coding units have the same size based on whether such a discontinuity exists in the indexes for distinguishing the divided coding units.
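The discontinuity check described above can be illustrated with a short Python sketch (the heights and helper names are assumptions for illustration, not the patent's syntax): when PIDs count minimal-size slots in scan order, the double-height central unit makes the sequence jump from 1 to 3.

```python
def assign_pids(unit_heights, min_height):
    """Assign a PID to each coding unit, counting minimal-height slots in scan order."""
    pids, slot = [], 0
    for height in unit_heights:
        pids.append(slot)
        slot += height // min_height  # a double-height unit occupies two slots
    return pids

def has_discontinuity(pids):
    """True if consecutive PIDs ever differ by more than 1."""
    return any(b - a != 1 for a, b in zip(pids, pids[1:]))

# Odd split of 1410 into 1414a (h=16), 1414b (h=32), 1414c (h=16):
pids = assign_pids([16, 32, 16], 16)
print(pids)                     # [0, 1, 3] -- PID jumps by 2 after the center
print(has_discontinuity(pids))  # True  -> the divided units differ in size
# Even split into 1412a and 1412b of equal size shows no discontinuity:
print(has_discontinuity(assign_pids([32, 32], 32)))  # False
```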
According to an embodiment, the image decoding apparatus 100 may determine whether to divide using a specific division form based on a value of an index for distinguishing a plurality of coding units determined by dividing a current coding unit. Referring to fig. 14, the image decoding apparatus 100 may determine even-numbered encoding units 1412a and 1412b or determine odd-numbered encoding units 1414a, 1414b, and 1414c by dividing the rectangular-shaped first encoding unit 1410 having a height greater than a width. The image decoding apparatus 100 can distinguish a plurality of coding units using the PID of each coding unit. According to an embodiment, the PID may be obtained from a sample at a predetermined position (for example, a sample at the upper left end) in each coding unit.
According to an embodiment, the image decoding apparatus 100 may determine a coding unit at a predetermined position among the divided coding units using the index for distinguishing the coding units. According to an embodiment, when the division form mode information for the rectangular first coding unit 1410, whose height is greater than its width, indicates that the first coding unit 1410 is divided into three coding units, the image decoding apparatus 100 may divide the first coding unit 1410 into the three coding units 1414a, 1414b, and 1414c. The image decoding apparatus 100 may allocate an index to each of the three coding units 1414a, 1414b, and 1414c. The image decoding apparatus 100 may compare the indexes of the respective coding units to determine the central coding unit among the odd number of divided coding units. Based on the indexes of the coding units, the image decoding apparatus 100 may determine the coding unit 1414b, having the index corresponding to the center value among the indexes, as the centrally located coding unit among the coding units determined by dividing the first coding unit 1410. According to an embodiment, in determining the indexes for distinguishing the divided coding units, when the sizes of the coding units are not all the same, the image decoding apparatus 100 may determine the indexes based on the size ratio between the coding units. Referring to fig. 14, the width of the coding unit 1414b generated by dividing the first coding unit 1410 is the same as the width of the other coding units 1414a and 1414c, but its height may be twice the height of the other coding units 1414a and 1414c. In this case, if the PID of the centrally located coding unit 1414b is 1, the PID of the coding unit 1414c located next in order may be 3, increased by 2.
As in this case, when the index increment changes although the indexes otherwise increase uniformly, the image decoding apparatus 100 may determine that the current coding unit is divided into a plurality of coding units including a coding unit having a size different from that of the other coding units. According to an embodiment, when the division form mode information indicates that division is performed into an odd number of coding units, the image decoding apparatus 100 may divide the current coding unit in a form in which the coding unit at a predetermined position among the odd number of coding units (e.g., the central coding unit) has a size different from that of the other coding units. In this case, the image decoding apparatus 100 may determine the central coding unit having the different size using the indexes (PIDs) of the coding units. However, the above-mentioned indexes and the size and position of the coding unit at the predetermined position are merely examples for explaining an embodiment and should not be construed as limiting; various indexes and various positions and sizes of coding units may be used.
According to an embodiment, the image decoding apparatus 100 may use a predetermined data unit that starts recursive division of a coding unit.
Fig. 15 illustrates determining a plurality of coding units based on a plurality of predetermined coding units included in a picture according to an embodiment.
According to an embodiment, the predetermined data unit may be defined as a data unit at which the recursive division of a coding unit using the division form mode information starts. In other words, the predetermined data unit may correspond to a coding unit of the uppermost depth used in determining the plurality of coding units into which the current picture is divided. Hereinafter, for convenience of explanation, such a predetermined data unit is referred to as a reference data unit.
According to an embodiment, the reference data unit may have a predetermined size and shape. According to an embodiment, the reference coding unit may include M × N samples. Here, M and N may be equal to each other, and may be integers expressed as powers of 2. In other words, the reference data unit may have a square or non-square shape, and may later be divided into an integer number of coding units.
According to an embodiment, the image decoding apparatus 100 may divide the current picture into a plurality of reference data units. According to an embodiment, the image decoding apparatus 100 may divide each of the plurality of reference data units, into which the current picture is divided, using the division form mode information for the respective reference data unit. Such a division process of the reference data unit may correspond to a division process using a quad-tree (quad-tree) structure.
According to an embodiment, the image decoding apparatus 100 may determine in advance a minimum size that a reference data unit included in a current picture may have. Thus, the image decoding apparatus 100 may determine reference data units having various sizes equal to or greater than the minimum size, and may determine at least one coding unit using the partition form mode information based on the determined reference data units.
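The quad-tree division of a picture region into reference data units, bounded below by the predetermined minimum size, can be sketched as follows (a hypothetical illustration: `should_divide` stands in for the division form mode information parsed from the bitstream, and the sizes are assumed):

```python
def quadtree_divide(x, y, size, min_size, should_divide):
    """Return (x, y, size) leaves of a quad-tree over a square region,
    never dividing below min_size."""
    if size <= min_size or not should_divide(x, y, size):
        return [(x, y, size)]
    half = size // 2
    leaves = []
    for dy in (0, half):          # visit the four sub-blocks in raster order
        for dx in (0, half):
            leaves += quadtree_divide(x + dx, y + dy, half, min_size, should_divide)
    return leaves

# Divide a 64x64 region once (only the full-size block is divided):
leaves = quadtree_divide(0, 0, 64, 16, lambda x, y, s: s == 64)
print(leaves)  # [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```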
Referring to fig. 15, image decoding apparatus 100 may use square-shaped reference coding section 1500 or non-square-shaped reference coding section 1502. According to an embodiment, the shape and size of a reference coding unit may be determined by various data units (e.g., sequence, picture, slice segment, maximum coding unit, etc.) that may include at least one reference coding unit.
According to an embodiment, a receiver (not shown) of the image decoding apparatus 100 may obtain, from the bitstream, at least one of information regarding the shape of the reference coding unit and information regarding the size of the reference coding unit according to the various data units. The process of determining at least one coding unit included in the square-shaped reference coding unit 1500 has been described in detail through the process of dividing the current coding unit 300 of fig. 3, and the process of determining at least one coding unit included in the non-square-shaped reference coding unit 1502 has been described in detail through the process of dividing the current coding unit 400 or 450 of fig. 4; thus, detailed descriptions thereof are omitted.
According to an embodiment, the image decoding apparatus 100 may use an index for identifying the size and shape of the reference coding unit in order to determine the size and shape of the reference coding unit according to some data units predetermined based on a predetermined condition. In other words, a receiver (not shown) may obtain, from the bitstream, only the index for identifying the size and shape of the reference coding unit for each data unit satisfying a predetermined condition (e.g., a data unit having a size less than or equal to a slice, such as a slice, slice segment, or maximum coding unit) among the various data units (e.g., sequence, picture, slice, slice segment, maximum coding unit, etc.). The image decoding apparatus 100 may determine the size and shape of the reference data unit for each data unit satisfying the predetermined condition using the index. When the information regarding the shape of the reference coding unit and the information regarding the size of the reference coding unit are obtained from the bitstream and used for each relatively small data unit, the use efficiency of the bitstream may be poor; therefore, instead of directly obtaining the information regarding the shape and the size of the reference coding unit, only the index may be obtained and used. In this case, at least one of the size and the shape of the reference coding unit corresponding to the index indicating the size and shape of the reference coding unit may be predetermined. In other words, the image decoding apparatus 100 may select at least one of the predetermined sizes and shapes of reference coding units based on the index, thereby determining at least one of the size and shape of the reference coding unit included in the data unit serving as the reference for obtaining the index.
According to an embodiment, the image decoding apparatus 100 may use at least one reference coding unit included in one maximum coding unit. In other words, the maximum coding unit of the divided image may include at least one reference coding unit, and the coding unit may be determined by a recursive division process of each reference coding unit. According to an embodiment, at least one of the width and the height of the maximum coding unit may correspond to an integer multiple of at least one of the width and the height of the reference coding unit. According to an embodiment, the size of the reference coding unit may be a size obtained by dividing the maximum coding unit n times according to a quad-tree structure. In other words, the image decoding apparatus 100 may determine the reference coding unit by dividing the maximum coding unit n times according to the quad-tree structure, and may divide the reference coding unit based on at least one of block shape information and information on a division form mode according to various embodiments.
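The size relationship stated above reduces to halving each side once per quad-tree division; a one-function sketch with assumed example sizes (the function name is hypothetical):

```python
def reference_unit_size(max_unit_size, n):
    """Side length after dividing the maximum coding unit n times along a quad-tree."""
    return max_unit_size >> n  # each division halves the width and the height

print(reference_unit_size(128, 0))  # 128: reference unit equals the maximum unit
print(reference_unit_size(128, 2))  # 32: maximum unit divided twice
```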
Fig. 16 illustrates a processing block as a reference for determining the determination order of reference coding units included in a picture 1600 according to an embodiment.
According to an embodiment, the image decoding apparatus 100 may determine at least one processing block for dividing a picture. A processing block is a data unit including at least one reference coding unit into which an image is divided, and the at least one reference coding unit included in the processing block may be determined in a specific order. In other words, the determination order of the at least one reference coding unit determined in each processing block may correspond to one of various types of orders in which reference coding units may be determined, and the determination order of the reference coding units may differ from processing block to processing block. The determination order of the reference coding units determined for each processing block may be one of various orders such as raster scan (raster scan), Z-scan (Z-scan), N-scan (N-scan), up-right diagonal scan (up-right diagonal scan), horizontal scan (horizontal scan), and vertical scan (vertical scan); however, the determinable order should not be construed as being limited to these scan orders.
According to an embodiment, the image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information on the size of the processing block. The image decoding apparatus 100 may determine the size of at least one processing block included in the image by obtaining information on the processing block from the bitstream. The size of such a processing block may be a predetermined size of a data unit indicated by the information on the size of the processing block.
According to an embodiment, a receiver (not shown) of the image decoding apparatus 100 may obtain information on the size of the processing block from the bitstream according to each predetermined data unit. For example, information on the size of the processing block is available from a bitstream according to data units of an image, a sequence, a picture, a slice segment, and the like. In other words, the receiver (not shown) may obtain information on the size of the processing block from the bit stream according to the plurality of data units, respectively, and the image decoding apparatus 100 may determine the size of at least one processing block for dividing the picture using the obtained information on the size of the processing block, which may be an integer multiple of the size of the reference coding unit.
According to an embodiment, the image decoding apparatus 100 may determine the sizes of the processing blocks 1602 and 1612 included in the picture 1600. For example, the image decoding apparatus 100 may determine the size of the processing block based on information on the size of the processing block obtained from the bitstream. Referring to fig. 16, the image decoding apparatus 100 may determine the lateral size of the processing blocks 1602 and 1612 to be four times the lateral size of the reference coding unit and the longitudinal size to be four times the longitudinal size of the reference coding unit according to an embodiment. The image decoding apparatus 100 may determine an order in which at least one reference coding unit is determined within at least one processing block.
According to an embodiment, the image decoding apparatus 100 may determine the respective processing blocks 1602 and 1612 included in the picture 1600 based on the sizes of the processing blocks, and may determine the determination order of at least one reference coding unit included in the processing blocks 1602 and 1612. According to an embodiment, the determination of the reference coding unit may comprise a determination of a size of the reference coding unit.
According to an embodiment, the image decoding apparatus 100 may obtain information on a determined order of at least one reference coding unit included in at least one processing block from a bitstream, and may determine an order in which the at least one reference coding unit is determined based on the obtained information on the determined order. The information on the determined order may be determined according to the order or direction in which the reference coding units within the processing block are determined. In other words, the order in which the reference coding units are determined may be independently determined in each processing block.
According to an embodiment, the image decoding apparatus 100 may obtain information on the determined order of the reference coding units from the bitstream according to each specific data unit. For example, a receiver (not shown) may obtain information on a determined order of reference coding units from a bitstream according to each data unit of an image, a sequence, a picture, a slice segment, a processing block, and the like. The information on the determined order of the reference coding units indicates the determined order of the reference coding units within the processing block, and thus, the information on the determined order can be obtained from each specific data unit including an integer number of the processing blocks.
The image decoding apparatus 100 may determine at least one reference coding unit based on the order determined according to an embodiment.
According to an embodiment, a receiver (not shown) may obtain, from the bitstream, information on the determination order of the reference coding units as information related to the processing blocks 1602 and 1612, and the image decoding apparatus 100 may determine the determination order of at least one reference coding unit included in the processing blocks 1602 and 1612 and determine at least one reference coding unit included in the picture 1600 based on the determined order. Referring to fig. 16, the image decoding apparatus 100 may determine the determination orders 1604 and 1614 of at least one reference coding unit for the respective processing blocks 1602 and 1612. For example, when information on the determination order of the reference coding units is obtained for each processing block, the determination order of the reference coding units may differ between the processing blocks 1602 and 1612. When the determination order 1604 related to the processing block 1602 is a raster scan (raster scan) order, the reference coding units included in the processing block 1602 may be determined according to the raster scan order. In contrast, when the determination order 1614 related to the other processing block 1612 is the reverse of the raster scan order, the reference coding units included in the processing block 1612 may be determined in the reverse of the raster scan order.
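The two determination orders contrasted above (raster scan for processing block 1602 and its reverse for processing block 1612) can be sketched as follows, with an assumed 4 × 4 grid of reference coding units per processing block:

```python
def raster_order(cols, rows):
    """(row, col) positions in raster scan order: left to right, top to bottom."""
    return [(r, c) for r in range(rows) for c in range(cols)]

def reference_unit_order(cols, rows, reverse=False):
    """Determination order of reference coding units within one processing block."""
    order = raster_order(cols, rows)
    return order[::-1] if reverse else order

# Processing block 1602: raster order starts at the top-left reference unit.
print(reference_unit_order(4, 4)[:3])                # [(0, 0), (0, 1), (0, 2)]
# Processing block 1612: the reverse order starts at the bottom-right unit.
print(reference_unit_order(4, 4, reverse=True)[:3])  # [(3, 3), (3, 2), (3, 1)]
```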
The image decoding apparatus 100 may decode the determined at least one coding unit according to an embodiment. The image decoding apparatus 100 can decode an image according to the reference encoding unit determined by the above-described embodiment. The method of decoding the reference coding unit may include various methods of decoding an image.
According to an embodiment, the image decoding apparatus 100 may obtain, from a bitstream, and use block shape information indicating a shape of a current coding unit or partition form mode information indicating a method of dividing the current coding unit. The partition form mode information may be included in a bitstream related to various data units. For example, the image decoding apparatus 100 may use partition form mode information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header. Further, the image decoding apparatus 100 may obtain, from the bitstream, a syntax element corresponding to the partition form mode information for each maximum coding unit, reference coding unit, or processing block and use the syntax element.
Hereinafter, a method of determining a division rule according to an embodiment of the present disclosure is described in detail.
The image decoding apparatus 100 may determine a division rule of an image. The division rule may be predetermined between the image decoding apparatus 100 and the image encoding apparatus 150. The image decoding apparatus 100 may determine the division rule of the image based on information obtained from the bitstream. The image decoding apparatus 100 may determine the division rule based on information obtained from at least one of a sequence parameter set, a picture parameter set, a video parameter set, a slice header, and a slice segment header. The image decoding apparatus 100 may determine the division rule differently according to a frame, a slice, a temporal layer, a maximum coding unit, or a coding unit.
The image decoding apparatus 100 may determine the division rule based on the block shape of the coding unit. The block shape may include the size, shape, ratio of width to height, and direction of the coding unit. The image encoding apparatus 150 and the image decoding apparatus 100 may determine the division rule based on the block shape of the coding unit in advance. However, the disclosure is not limited thereto. The image decoding apparatus 100 may determine the division rule based on information obtained from the bitstream received from the image encoding apparatus 150.
The shape of the coding unit may include a square and a non-square. When the width and the height of the coding unit are equal, the image decoding apparatus 100 may determine the shape of the coding unit to be square. When the width and the height of the coding unit are different, the image decoding apparatus 100 may determine the shape of the coding unit to be non-square.
The size of the coding unit may include various sizes such as 4×4, 8×4, 4×8, 8×8, 16×4, 16×8, ..., and 256×256. The size of the coding unit may be classified according to the length of the long side, the length of the short side, or the area of the coding unit. The image decoding apparatus 100 may apply the same division rule to coding units classified into the same group. For example, the image decoding apparatus 100 may classify coding units having the same long-side length as having the same size. Also, the image decoding apparatus 100 may apply the same division rule to coding units having the same long-side length.
The ratio of the width to the height of the coding unit may include 1:2, 2:1, 1:4, 4:1, 1:8, 8:1, 1:16, 16:1, and the like. Also, the direction of the coding unit may include a horizontal direction and a vertical direction. The horizontal direction may indicate a case where the width of the coding unit is greater than the height. The vertical direction may indicate a case where the width of the coding unit is smaller than the height.
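The block-shape attributes described above (square vs. non-square, direction, and the long-side length used for grouping) can be sketched as follows. The helper name and return format are hypothetical, chosen only for illustration.

```python
def block_shape(width, height):
    """Classify a coding unit's shape and direction, and report the long-side
    length under which coding units share the same division rule (sketch)."""
    shape = "square" if width == height else "non-square"
    if width > height:
        direction = "horizontal"
    elif width < height:
        direction = "vertical"
    else:
        direction = "none"
    return shape, direction, max(width, height)

# An 8x8 unit is square; a 16x4 unit is non-square and horizontal, and it
# shares long-side length 16 (and thus the same division rule) with 4x16.
assert block_shape(8, 8) == ("square", "none", 8)
assert block_shape(16, 4) == ("non-square", "horizontal", 16)
assert block_shape(4, 16)[2] == block_shape(16, 4)[2]
```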
The image decoding apparatus 100 may adaptively determine the division rule based on the size of the coding unit. The image decoding apparatus 100 may determine the allowable partition form modes differently based on the size of the coding unit. For example, the image decoding apparatus 100 may determine whether division is allowed based on the size of the coding unit. The image decoding apparatus 100 may determine the division direction according to the size of the coding unit. The image decoding apparatus 100 may determine an allowable partition type according to the size of the coding unit.
The division rule determined based on the size of the coding unit may be a division rule predetermined between the image encoding apparatus 150 and the image decoding apparatus 100. Also, the image decoding apparatus 100 may determine the division rule based on information obtained from the bitstream.
The image decoding apparatus 100 may adaptively determine the division rule based on the location of the coding unit. The image decoding apparatus 100 may adaptively determine the division rule based on the position occupied by the coding unit in the image.
Also, the image decoding apparatus 100 may determine the division rule so as to prevent coding units generated by different division paths from having the same block shape. However, without being limited thereto, coding units generated by different division paths may have the same block shape. Coding units generated by different division paths may have different decoding processing orders. The decoding processing order has already been described with reference to fig. 12, and thus will not be described again.
Hereinafter, with reference to fig. 17 to 18, an image encoding/decoding method and apparatus that determines a filter for a current sample in a current block based on at least one of a distance between the current sample and a reference sample and a size of the current block and performs intra prediction based on the determined filter will be described in detail.
Fig. 17 is a diagram for explaining an intra prediction mode according to an embodiment.
Referring to fig. 17, the intra prediction mode according to the embodiment may include a planar mode (mode 0) and a direct current mode (mode 1). In addition, the intra prediction mode may include an angle mode (modes 2 to 66) having a prediction direction. The angular mode may include a diagonal mode (mode 2 or 66), a horizontal mode (mode 18), and a vertical mode (mode 50).
In the above, the intra prediction mode according to the embodiment has been described with reference to fig. 17. However, the intra prediction modes are not limited thereto and may take various forms by adding a new intra prediction mode or removing an existing intra prediction mode, and it can be understood by those of ordinary skill in the art that the mode number of each intra prediction mode may vary from case to case.
Fig. 18 is a diagram for explaining a method in which an image decoding apparatus generates predicted samples of a current sample by using different filters based on at least one of a distance between the current sample and a reference sample and a size of a current block, according to an embodiment of the present disclosure.
Referring to fig. 18, the image decoding apparatus 100 may generate prediction sample values regarding samples in the current block by using reference samples 1820 in order to perform intra prediction on the current block 1800.
For example, the image decoding apparatus 100 may determine the predicted sample value p_x,y of the current sample 1810 of the current block 1800 according to Equation 1, by using the reference sample crossing the extension line 1830 of the prediction direction of the intra prediction mode of the current block 1800 from the current sample 1810 and its neighboring samples. Here, x and y may refer to the x-coordinate and the y-coordinate of the current sample based on the position of the upper-left sample of the current block.
[Equation 1]

p_x,y = f_k,0 × a_-1 + f_k,1 × a_0 + f_k,2 × a_1 + f_k,3 × a_2
At this time, f_k,i may refer to the (i+1)-th (0 <= i <= 3) filter coefficient of the (k+1)-th filter f_k in a filter set including M+1 (M is an integer) filters. a_0 may refer to the sample value of the reference sample crossing the extension line 1830 from the current sample 1810, a_-1 may refer to the sample value of the reference sample located directly to the left of that reference sample, a_1 may refer to the sample value of the reference sample located directly to the right of that reference sample, and a_2 may refer to the sample value of the reference sample located second to the right of that reference sample.
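Equation 1 can be sketched in integer arithmetic as follows. The 7-bit normalization (coefficients summing to 128) and the rounding offset are assumptions based on common codec practice and on the example coefficient sets given later in the text; they are not stated at this point in the patent.

```python
def predict_sample(refs, coeffs, shift=7):
    """Apply Equation 1: a weighted sum of the four reference samples
    a_-1, a_0, a_1, a_2, assuming coefficients sum to 2**shift (128)."""
    a_m1, a0, a1, a2 = refs
    f0, f1, f2, f3 = coeffs
    acc = f0 * a_m1 + f1 * a0 + f2 * a1 + f3 * a2
    return (acc + (1 << (shift - 1))) >> shift  # rounded fixed-point result

# With all four reference samples equal, the prediction reproduces that value,
# because the coefficients sum to 128.
assert predict_sample((100, 100, 100, 100), (12, 99, 18, -1)) == 100
assert predict_sample((50, 50, 50, 50), (-2, 126, 4, 0)) == 50
```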
In the above, generating the predicted sample value of the current sample by using the reference sample values a_-1, a_0, a_1, and a_2 has been described according to Equation 1, but the disclosure is not limited thereto, and it may be understood by those of ordinary skill in the art that the predicted value of the current sample may be generated by using sample values of various reference samples adjacent to each other based on a_0. For example, a predicted sample value of the current sample may be generated by using a_-2, a_-1, a_0, and a_1. In addition, a predicted sample value of the current sample may be generated by using a_0, a_1, a_2, and a_3.
The image decoding apparatus 100 may determine the filter to be used for the current sample 1810 among a plurality of filters f_k (0 <= k <= M) based on the distance between the current sample 1810 and the reference sample and the size of the current block 1800. For example, the image decoding apparatus 100 may previously determine the range of samples in which each of the plurality of filters f_k (0 <= k <= M) is used by multiplying the size of the current block by a predetermined ratio, and determine the filter used in the range of samples including the current sample 1810 as the filter for the current sample 1810. For example, the number of filters included in the filter set may be four. In other words, the filter set may include filters f0, f1, f2, and f3.
When the distance between the current sample 1810 and the reference sample located above the current sample 1810 is in [0, size/4), or the distance between the current sample 1810 and the reference sample located to the left of the current sample 1810 is in [0, size/4), the image decoding apparatus 100 may determine f0 as the filter for the current sample 1810. Here, size may be the size (height or width) of the current block 1800. When the distance to the upper reference sample is in [size/4, size/2) and the distance to the left reference sample is in [size/4, size), or the distance to the left reference sample is in [size/4, size/2) and the distance to the upper reference sample is in [size/4, size), the image decoding apparatus 100 may determine f1 as the filter for the current sample 1810. When the distance to the upper reference sample is in [size/2, 3×size/4) and the distance to the left reference sample is in [size/2, size), or the distance to the left reference sample is in [size/2, 3×size/4) and the distance to the upper reference sample is in [size/2, size), the image decoding apparatus 100 may determine f2 as the filter for the current sample 1810.

When the distance to the upper reference sample is in [3×size/4, size) and the distance to the left reference sample is in [3×size/4, size), the image decoding apparatus 100 may determine f3 as the filter for the current sample 1810.
In other words, referring to fig. 18, when the current sample 1810 is located in the first region 1850, the image decoding apparatus 100 may determine f0 as the filter for the current sample 1810. When the current sample 1810 is located in the second region 1860, the image decoding apparatus 100 may determine f1 as the filter for the current sample 1810. When the current sample 1810 is located in the third region 1870, the image decoding apparatus 100 may determine f2 as the filter for the current sample 1810. When the current sample 1810 is located in the fourth region 1880, the image decoding apparatus 100 may determine f3 as the filter for the current sample 1810.
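Under a simplified reading in which the distance to the nearer reference line is min(x, y), the quarter-size region rule above can be sketched as follows. The helper is hypothetical, and the full rule in the text also conditions on the farther of the two distances.

```python
def select_filter_index(x, y, size):
    """Pick the filter index for sample (x, y) in a size-by-size block,
    following the quarter-size regions of fig. 18 (simplified sketch)."""
    d = min(x, y)              # distance to the nearer reference line
    if d < size // 4:
        return 0               # f0: region closest to the reference samples
    if d < size // 2:
        return 1               # f1
    if d < 3 * size // 4:
        return 2               # f2
    return 3                   # f3: farthest region

# For a 16x16 block the region boundaries fall at distances 4, 8, and 12.
assert select_filter_index(0, 0, 16) == 0
assert select_filter_index(5, 10, 16) == 1
assert select_filter_index(15, 14, 16) == 3
```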
Here, among the smoothing strengths of the filters f0, f1, f2, and f3, the smoothing strength of the filter f0 used for samples closest to the reference samples may be the smallest, and the smoothing strength of the filter f3 used for samples farthest from the reference samples may be the largest.
In the above, performing intra prediction on the current block by using four filters has been described with reference to fig. 18, but the disclosure is not limited thereto, and it may be understood by those of ordinary skill in the art that the image decoding apparatus 100 may perform intra prediction on the current block by using various numbers of filters. Here, the range of samples using each filter may be determined differently based on the number of filters. For example, referring to fig. 18, when the number of filters for the current block 1800 is determined to be three, the image decoding apparatus 100 may merge the third region 1870 and the fourth region 1880, and when the current sample 1810 is located in the third region 1870 or the fourth region 1880, the image decoding apparatus 100 may determine f3 as the filter for the current sample 1810.
In addition, the image decoding apparatus 100 may previously determine the range of samples for which each of the filters f_k (0 <= k <= M) is used based on the distance between a sample in the current block and the reference samples, and determine the filter used in the range of samples including the current sample as the filter for the current sample.

For example, when the minimum of the vertical and horizontal distances between the current sample and the reference samples is less than 4, the image decoding apparatus 100 may determine f0 as the filter for the current sample. When the minimum distance is greater than or equal to 4 and less than 8, the image decoding apparatus 100 may determine f1 as the filter for the current sample. When the minimum distance is greater than or equal to 8 and less than 16, the image decoding apparatus 100 may determine f2 as the filter for the current sample. When the minimum distance is greater than or equal to 16 and less than 32, the image decoding apparatus 100 may determine f3 as the filter for the current sample. When the minimum distance is greater than or equal to 32 and less than 64, the image decoding apparatus 100 may determine f4 as the filter for the current sample. When the minimum distance is greater than or equal to 64, the image decoding apparatus 100 may determine f5 as the filter for the current sample. In this case, the image decoding apparatus 100 may change the number of filters for the current block according to the size of the current block.
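The threshold table above (4, 8, 16, 32, 64) maps the minimum of the vertical and horizontal reference distances to one of six filters f0 to f5. A sketch, with the function name chosen for illustration:

```python
import bisect

def filter_by_min_distance(dx, dy):
    """Map the minimum of the vertical/horizontal reference distances to a
    filter index 0..5 using the thresholds 4, 8, 16, 32, and 64 (sketch)."""
    return bisect.bisect_right([3, 7, 15, 31, 63], min(dx, dy))

assert filter_by_min_distance(2, 40) == 0   # min distance 2, below 4 -> f0
assert filter_by_min_distance(9, 12) == 2   # min distance 9, in [8, 16) -> f2
assert filter_by_min_distance(70, 80) == 5  # min distance 70, >= 64 -> f5
```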
It has been described that the image decoding apparatus 100 may previously determine the range of samples for which each of the plurality of filters f_k (0 <= k <= M) is used based on the minimum of the vertical and horizontal distances between the current sample and the reference samples, but the disclosure is not limited thereto. It may be understood by those of ordinary skill in the art that the image decoding apparatus 100 may determine the distance from the current sample to the reference sample crossing the extension line of the prediction direction based on the intra prediction mode of the current block, and previously determine the range of samples for which each of the plurality of filters f_k (0 <= k <= M) is used based on that distance.

In addition, it has been described that the image decoding apparatus 100 may previously determine the range of samples for which each of the plurality of filters f_k (0 <= k <= M) is used by multiplying the size of the current block by a predetermined ratio, or based on the distance between a sample in the current block and the reference samples, but the disclosure is not limited thereto. As can be understood by those of ordinary skill in the art, the image decoding apparatus 100 may determine the range of samples for which each of the plurality of filters is used by various methods based on at least one of the size of the current block and the distance between a sample in the current block and the reference samples.
In addition, the image decoding apparatus 100 may determine the number of filters for the current block based on the size (height or width) of the current block and determine filters corresponding to that number. For example, if the size of the current block is greater than or equal to a predetermined size, the image decoding apparatus 100 may determine the filters f0, f1, f2, ..., and fM-1 as the filters for the current block. When the size of the current block is smaller than the predetermined size, the image decoding apparatus 100 may determine the number of filters for the current block to be a predetermined number K (K is an integer) smaller than M. In this case, the image decoding apparatus 100 may determine the predetermined number K of filters according to a predetermined combination among the possible combinations of the filters f0, f1, f2, ..., and fM-1. For example, the image decoding apparatus 100 may determine the filters f0, f1, f2, ..., and fK-1 as the filters for the current block. For example, when the number of filters for the current block is determined to be two, the image decoding apparatus 100 may determine the filters f0 and f1 as the filters for the current block. In addition, when the number of filters for the current block is determined to be two, the image decoding apparatus 100 may determine the filters f0 and fM-1 as the filters for the current block.
For example, the image decoding apparatus 100 may determine a 4-tap filter set including filters f0, f1, f2, and f3 for performing intra prediction on the current block as follows. For example, the image decoding apparatus 100 may determine the coefficients of the filter f0 as {-2, 126, 4, 0}. (The coefficients of a filter refer to the coefficients applied at a predetermined fractional pixel position.) The image decoding apparatus 100 may determine the coefficients of the filter f1 as {12, 99, 18, -1}. The image decoding apparatus 100 may determine the coefficients of the filter f2 as {21, 82, 23, 2}. The image decoding apparatus 100 may determine the coefficients of the filter f3 as {31, 63, 33, 1}. Here, the filter f0 may be the filter whose smoothing strength is the weakest among the filters f0, f1, f2, and f3, and the filter f3 may be the filter whose smoothing strength is the strongest among them. Meanwhile, the coefficient values of the filters are not limited to those listed above, and those skilled in the art will appreciate that the coefficient values may be used with slight changes (e.g., +1 to 5, -1 to 5).
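A quick consistency check on the example coefficient sets: each sums to 128, so the filters preserve level under a 7-bit normalization (an assumption based on common codec practice), and the weight remaining on the center tap a_0 decreases from f0 to f3, which matches the stated ordering of smoothing strengths. The "leak" metric below is an illustrative proxy, not from the text.

```python
# The four example coefficient sets listed above, keyed by filter name.
FILTERS = {
    "f0": (-2, 126, 4, 0),
    "f1": (12, 99, 18, -1),
    "f2": (21, 82, 23, 2),
    "f3": (31, 63, 33, 1),
}

# Each set sums to 128 (7-bit normalization assumption).
for name, coeffs in FILTERS.items():
    assert sum(coeffs) == 128, name

# A rough proxy for smoothing strength: how much weight leaks away from the
# center tap a_0. f0 keeps almost all weight on a_0 (sharp, weak smoothing),
# while f3 spreads the weight across neighbors (strong smoothing).
leak = {name: 128 - c[1] for name, c in FILTERS.items()}
assert leak["f0"] < leak["f1"] < leak["f2"] < leak["f3"]
```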
Meanwhile, the image decoding apparatus 100 may determine the number of reference samples used to determine the predicted sample value of the current sample based on the distance between the current sample and the reference sample. The image decoding apparatus 100 may perform filtering on the current samples by using reference samples corresponding to the number of reference samples.
For example, when the distance between the current sample and the reference sample is greater than or equal to a predetermined distance, the image decoding apparatus 100 may generate the predicted sample value of the current sample by performing filtering on the current sample using M reference samples. When the distance between the current sample and the reference sample is less than the predetermined distance, the image decoding apparatus 100 may generate the predicted sample value of the current sample by performing filtering on the current sample using fewer than M reference samples. For example, when the distance between the current sample and the reference sample is less than the predetermined distance, the image decoding apparatus 100 may generate the predicted sample value of the current sample by performing filtering using one or two reference samples. When the distance between the current sample and the reference sample is greater than or equal to the predetermined distance, the image decoding apparatus 100 may generate the predicted sample value of the current sample by performing filtering using four or more reference samples.
The image decoding apparatus 100 may determine the number of reference samples used to perform filtering on the current sample by adjusting the number of taps of the filter. For example, when the distance between the current sample and the reference sample is greater than or equal to a predetermined distance, the image decoding apparatus 100 may determine a filter having 4 or more taps as the filter for the current sample. When the distance between the current sample and the reference sample is less than the predetermined distance, the image decoding apparatus 100 may determine a 1-tap filter or a 2-tap filter as the filter for the current sample. Here, instead of performing filtering by using a 1-tap filter, the image decoding apparatus 100 may not perform filtering at all.
In addition, the image decoding apparatus 100 may determine the number of reference samples for performing filtering on the current sample by fixing the number of taps of the filter and adjusting coefficient values of some filters.
For example, the image decoding apparatus 100 may determine the number of taps of the filter as 4, and determine the coefficients of the filter for the current sample as {0, 128, 0, 0} when the distance between the current sample and the reference sample is less than a predetermined distance. When the distance between the current sample and the reference sample is greater than or equal to the predetermined distance, the image decoding apparatus 100 may determine the coefficients of the filter for the current sample as {32, 63, 31, 1}.
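Fixing the tap count at four while zeroing some coefficients effectively shortens the filter, as the following sketch illustrates. The 4-term form {0, 128, 0, 0} of the near-distance filter is an assumption here; the text as published lists only three values.

```python
def effective_taps(coeffs):
    """Count the taps that actually contribute (nonzero coefficients)."""
    return sum(1 for c in coeffs if c != 0)

# Near the reference line the nominal 4-tap filter degenerates to a plain
# copy of a_0; far from it, all four taps contribute.
near = (0, 128, 0, 0)
far = (32, 63, 31, 1)
assert effective_taps(near) == 1
assert effective_taps(far) == 4
```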
The image decoding apparatus 100 may determine a filter for a current sample from among filters for the current block based on a position of the current sample in the current block, an intra prediction mode, and a size of the current block.
For example, the image decoding apparatus 100 may determine to use the first filter when the x-axis coordinate value of the current sample point is smaller than a predetermined value or the y-axis coordinate value of the current sample point is smaller than a predetermined value, and may determine to use the second filter otherwise.
Here, the first filter may have a weaker smoothing strength than the second filter and a sharper characteristic. Here, the predetermined value may be 8, but is not limited thereto, and may be one of various multiples of 4.
When the index value of the intra prediction mode of the current block is less than or equal to the index value of the mode 34, the image decoding apparatus 100 may determine whether the width of the current block is less than or equal to the first value and the height of the current block is less than or equal to the second value. Here, the first value may be smaller than the second value. For example, the first value may be 16 and the second value may be 32.
When the index value of the intra prediction mode of the current block is greater than the index value of the mode 34, the image decoding apparatus 100 may determine whether the width of the current block is less than or equal to a first value and the height of the current block is less than or equal to a second value. Here, the first value may be smaller than the second value. The first value may be 16 and the second value may be 32.
The image decoding apparatus 100 may determine the filters for the current block based on the width and the height of the current block. For example, when the width of the current block is less than or equal to a first value and the height of the current block is less than or equal to a second value, the image decoding apparatus 100 may determine f0 and f1 as the filters for the current block. When the width of the current block is greater than the first value or the height of the current block is greater than the second value, the image decoding apparatus 100 may determine f2 and f3 as the filters for the current block.
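The width/height rule above can be sketched as follows, using the example first and second values 16 and 32 from the preceding paragraphs. The function name and the returned filter-name pairs are illustrative.

```python
def filters_for_block(width, height, first=16, second=32):
    """Choose the filter pair for a block from its width and height (sketch;
    thresholds follow the example first/second values 16 and 32)."""
    if width <= first and height <= second:
        return ("f0", "f1")    # small block: weaker-smoothing pair
    return ("f2", "f3")        # larger block: stronger-smoothing pair

assert filters_for_block(16, 32) == ("f0", "f1")
assert filters_for_block(32, 32) == ("f2", "f3")
assert filters_for_block(8, 64) == ("f2", "f3")
```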
Fig. 19 is a diagram for explaining that an encoding (decoding) order between encoding units is determined as forward or reverse based on an encoding order flag, and a right reference line or an upper reference line according to the determined encoding (decoding) order can be used for intra prediction according to an embodiment of the present disclosure.
Referring to fig. 19, a maximum coding unit 1950 is divided into a plurality of coding units 1956, 1958, 1960, 1962, 1968, 1970, 1972, 1974, 1980, 1982, 1984, and 1986. The maximum coding unit 1950 corresponds to the highest node 1900 of the tree structure. Further, the plurality of coding units 1956, 1958, 1960, 1962, 1968, 1970, 1972, 1974, 1980, 1982, 1984, and 1986 correspond to the plurality of nodes 1906, 1908, 1910, 1912, 1918, 1920, 1922, 1924, 1930, 1932, 1934, and 1936, respectively. Upper segment coding order flags 1902, 1914, and 1926 indicating the coding order in the tree structure correspond to arrows 1952, 1964, and 1976, and lower segment coding order flags 1904, 1916, and 1928 correspond to arrows 1954, 1966, and 1978.
The upper segment coding order flag indicates the coding order of the two coding units located at the upper stage among the four coding units of the same depth. When the upper segment coding order flag is 0, coding is performed in the forward direction. In contrast, when the upper segment coding order flag is 1, coding is performed in the reverse direction.
Similarly, the lower segment coding order flag indicates the coding order of the two coding units located at the lower stage among the four coding units of the same depth. When the lower segment coding order flag is 0, coding is performed in the forward direction. In contrast, when the lower segment coding order flag is 1, coding is performed in the reverse direction.
For example, the upper segment coding order flag 1914 is 0, and thus the coding order between the coding units 1968 and 1970 is determined in the forward direction, from left to right. In addition, the lower segment coding order flag 1916 is 1, and thus the coding order between the coding units 1972 and 1974 is determined in the reverse direction, from right to left.
According to an embodiment, the upper segment coding order flag and the lower segment coding order flag may be set to have the same value. For example, when the upper segment coding order flag 1902 is determined to be 1, the lower segment coding order flag 1904 corresponding to the upper segment coding order flag 1902 may also be determined to be 1. Since the values of both the upper segment coding order flag and the lower segment coding order flag are then determined by a single bit, the amount of coding order information is reduced.
According to an embodiment, the upper segment and lower segment coding order flags of the current coding unit may be determined by referring to at least one of the upper segment and lower segment coding order flags applied to a coding unit having a depth smaller than that of the current coding unit. For example, the upper segment coding order flag 1926 and the lower segment coding order flag 1928 applied to the coding units 1980, 1982, 1984, and 1986 may be determined based on the lower segment coding order flag 1916 applied to the coding units 1972 and 1974. Accordingly, the upper segment coding order flag 1926 and the lower segment coding order flag 1928 may be determined to have the same value as the lower segment coding order flag 1916. Because the values of the upper segment and lower segment coding order flags are derived from an upper coding unit of the current coding unit, coding order information need not be obtained from the bitstream, and thus the amount of coding order information is also reduced.
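The flag semantics can be sketched as follows for the four same-depth coding units, indexed 0 to 3 in raster order. This is an illustrative helper; flag value 0 means forward (left to right) and 1 means reverse, as described above.

```python
def order_of_four(upper_flag, lower_flag):
    """Return the processing order of four same-depth coding units
    (0..3 in raster order) given the upper/lower segment coding order flags."""
    upper = [0, 1] if upper_flag == 0 else [1, 0]
    lower = [2, 3] if lower_flag == 0 else [3, 2]
    return upper + lower

# Flags 0/1, as for coding units 1968-1974 in fig. 19: the upper pair is
# coded forward and the lower pair is coded in reverse.
assert order_of_four(0, 1) == [0, 1, 3, 2]
# Equal flags carried by a single bit reverse (or keep) both pairs together.
assert order_of_four(1, 1) == [1, 0, 3, 2]
```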
Here, samples included in the right neighboring coding unit 1958 decoded before the current coding unit 1986 and samples included in the upper neighboring coding units 1980 and 1982 may be used, and thus the image decoding apparatus 100 may perform prediction according to an embodiment of the present disclosure by using the samples of the right neighboring coding unit 1958 (a right reference line) and the samples of the upper neighboring coding units 1980 and 1982 (an upper reference line).
In other words, a method and apparatus for determining a filter for a current sample based on at least one of the size of the current block and the distance between the current sample and a reference sample, and for adaptively performing intra prediction based on the determined filter, have been described with reference to figs. 17 to 18. There, intra prediction based on reference samples adjacent to the upper side or left side of the current block was described on the premise that encoding and decoding are performed according to the existing coding order of coding units. However, the disclosure is not limited thereto, and it may be understood by those of ordinary skill in the art that, as shown in fig. 19, intra prediction may be performed based on reference samples adjacent to the upper side or right side of the current block when the encoding/decoding order between some adjacent coding units is reversed so that a right coding unit is decoded before a left coding unit.
According to various embodiments of the present disclosure, prediction accuracy may be improved by adaptively determining the filter used for a current sample based on the distance between the current sample and the reference sample, and a prediction block with a natural pattern may be generated.
In other words, the prediction accuracy can be improved by using a filter having a small smoothing strength and a sharp characteristic for a sample close to the reference sample. In addition, a prediction block of a natural pattern can be generated by using a filter having a strong smoothing strength for samples far from the reference sample.
In addition, a sudden prediction error can be corrected by generating a prediction block having a natural pattern and high prediction accuracy, and thus transform efficiency can be improved.
Thus far, various embodiments of the present disclosure have been described. It will be understood by those of ordinary skill in the art that the present disclosure may be implemented in modified shapes without departing from the essential characteristics of the present disclosure. The embodiments should be considered in descriptive sense only and not for purposes of limitation. Therefore, the scope of the present disclosure is defined not by the detailed description of the present disclosure but by the appended claims, and all differences within the scope will be construed as being included in the present disclosure.
The embodiments of the present disclosure can be written as computer programs and can be implemented with general-purpose digital computers that execute the programs using a computer readable recording medium. Examples of the computer readable recording medium include magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.), optical recording media (e.g., CD-ROMs, or DVDs), and the like.
Claims (3)
1. An image decoding method, comprising:
obtaining residual data of the current block from a bitstream;
determining a first filter of the plurality of filters as a filter for obtaining a predicted sample of a current sample based on a size of the current block when an x-coordinate value representing a position of the current sample on an x-axis in the current block is less than or equal to a predetermined value;
determining a second filter of the plurality of filters as a filter for obtaining a predicted sample of the current sample based on the size of the current block when the x-coordinate value is greater than the predetermined value;
obtaining a predicted sample of a current sample using a plurality of reference samples and a plurality of coefficients of the determined filter, wherein the plurality of reference samples are obtained based on an intra prediction mode of the current block and a position of the current sample;
obtaining a prediction block of a current block including prediction samples of current samples;
obtaining a residual block of the current block based on the obtained residual data of the current block; and
restoring the current block based on the prediction block of the current block and the residual block of the current block,
wherein the smoothing intensity of the first filter is smaller than the smoothing intensity of the second filter.
2. An image encoding method, comprising:
determining a first filter of the plurality of filters as a filter for obtaining a predicted sample of a current sample based on a size of the current block when an x-coordinate value representing a position of the current sample on an x-axis in the current block is less than or equal to a predetermined value;
determining a second filter of the plurality of filters as a filter for obtaining a predicted sample of the current sample based on the size of the current block when the x-coordinate value is greater than the predetermined value;
obtaining a predicted sample of a current sample using a plurality of reference samples and a plurality of coefficients of the determined filter, wherein the plurality of reference samples are obtained based on an intra prediction mode of the current block and a position of the current sample;
generating a prediction block for a current block including prediction samples for the current sample; and
encoding information regarding a transform coefficient of the current block based on a prediction block of the current block,
wherein the smoothing intensity of the first filter is smaller than the smoothing intensity of the second filter.
3. An image decoding apparatus comprising:
a processor configured to:
residual data of the current block is obtained from the bitstream,
determining a first filter among a plurality of filters as a filter for obtaining a predicted sample of a current sample based on a size of the current block when an x-coordinate value representing a position of the current sample on an x-axis in the current block is less than or equal to a predetermined value,
determining a second filter of the plurality of filters as a filter for obtaining a predicted sample of the current sample based on the size of the current block when the x-coordinate value is greater than the predetermined value,
obtaining a predicted sample of a current sample using a plurality of reference samples and a plurality of coefficients of the determined filter, wherein the plurality of reference samples are obtained based on an intra prediction mode of the current block and a position of the current sample;
obtaining a prediction block for a current block comprising prediction samples for the current sample,
obtaining a residual block of the current block based on the obtained residual data of the current block, and restoring the current block based on the prediction block of the current block and the residual block of the current block,
wherein the smoothing intensity of the first filter is smaller than the smoothing intensity of the second filter.
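The decoding flow recited in the claims can be sketched as follows. This is a hedged illustration, not the claimed implementation: the filter coefficients, the predetermined x-coordinate value, and the function names are all hypothetical. It shows only the claimed rule (first filter with weaker smoothing when the x-coordinate is at most the predetermined value, second filter otherwise) and the final restoration of the current block from prediction plus residual.

```python
FIRST_FILTER = [-1, 9, 9, -1]   # weaker smoothing (illustrative coefficients)
SECOND_FILTER = [2, 6, 6, 2]    # stronger smoothing (illustrative coefficients)

def filter_for_sample(x, predetermined_value):
    """First filter when x <= predetermined value, second filter otherwise,
    mirroring the selection rule in claims 1-3."""
    return FIRST_FILTER if x <= predetermined_value else SECOND_FILTER

def reconstruct_block(pred_block, residual_block, bit_depth=8):
    """Restore the current block as prediction + residual, clipped to the
    valid sample range for the given bit depth."""
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, p + r)) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred_block, residual_block)]
```

The claims additionally condition the selection on the size of the current block; in this sketch that dependence would enter through the choice of `predetermined_value`, which is left as a parameter.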
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310134934.3A CN116156165A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
CN202310153413.2A CN116156166A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762579255P | 2017-10-31 | 2017-10-31 | |
US62/579,255 | 2017-10-31 | ||
PCT/KR2018/013114 WO2019088700A1 (en) | 2017-10-31 | 2018-10-31 | Image encoding method and device and image decoding method and device |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310153413.2A Division CN116156166A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
CN202310134934.3A Division CN116156165A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111543054A CN111543054A (en) | 2020-08-14 |
CN111543054B true CN111543054B (en) | 2023-02-28 |
Family
ID=66332420
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201880084999.9A Active CN111543054B (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and device thereof |
CN202310153413.2A Pending CN116156166A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
CN202310134934.3A Pending CN116156165A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310153413.2A Pending CN116156166A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
CN202310134934.3A Pending CN116156165A (en) | 2017-10-31 | 2018-10-31 | Image encoding method, image decoding method and apparatus thereof |
Country Status (3)
Country | Link |
---|---|
KR (1) | KR102539068B1 (en) |
CN (3) | CN111543054B (en) |
WO (1) | WO2019088700A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR102618498B1 (en) | 2018-03-08 | 2023-12-27 | 삼성전자주식회사 | Video decoding method and apparatus, and video encoding method and apparatus |
IL313309A (en) | 2018-06-11 | 2024-08-01 | Samsung Electronics Co Ltd | Encoding method and apparatus therefor, and decoding method and apparatus therefor |
CN110944211B (en) * | 2019-11-15 | 2022-07-08 | 腾讯科技(深圳)有限公司 | Interpolation filtering method, device, medium and electronic device for intra-frame prediction |
WO2024155166A1 (en) * | 2023-01-20 | 2024-07-25 | 엘지전자 주식회사 | Method and device for image encoding/decoding, and storage medium for storing bitstream |
WO2024155168A1 (en) * | 2023-01-20 | 2024-07-25 | 엘지전자 주식회사 | Image encoding/decoding method and device, and recording medium for storing bitstreams |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20110123651A (en) * | 2010-05-07 | 2011-11-15 | 한국전자통신연구원 | Apparatus and method for image coding and decoding using skip coding |
KR20120140181A (en) * | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | Method and apparatus for encoding and decoding using filtering for prediction block boundary |
US10129542B2 (en) * | 2013-10-17 | 2018-11-13 | Futurewei Technologies, Inc. | Reference pixel selection and filtering for intra coding of depth map |
CN106688238B (en) * | 2013-10-17 | 2019-12-17 | 华为技术有限公司 | Improved reference pixel selection and filtering for intra-depth map coding |
JP6324510B2 (en) * | 2014-05-23 | 2018-05-16 | 華為技術有限公司Huawei Technologies Co.,Ltd. | Method and apparatus for prior prediction filtering for use in block prediction techniques |
US10148953B2 (en) * | 2014-11-10 | 2018-12-04 | Samsung Electronics Co., Ltd. | System and method for intra prediction in video coding |
WO2017043786A1 (en) | 2015-09-10 | 2017-03-16 | 엘지전자 주식회사 | Intra prediction method and device in video coding system |
EP3393126A4 (en) * | 2016-02-16 | 2019-04-17 | Samsung Electronics Co., Ltd. | Intra-prediction method for reducing intra-prediction errors and device for same |
KR102346713B1 (en) * | 2016-04-12 | 2022-01-03 | 세종대학교산학협력단 | Method and apparatus for processing a video signal based on intra prediction |
CN106170093B (en) * | 2016-08-25 | 2020-01-07 | 上海交通大学 | Intra-frame prediction performance improving coding method |
- 2018-10-31 KR KR1020207011248A patent/KR102539068B1/en not_active Application Discontinuation
- 2018-10-31 WO PCT/KR2018/013114 patent/WO2019088700A1/en active Application Filing
- 2018-10-31 CN CN201880084999.9A patent/CN111543054B/en active Active
- 2018-10-31 CN CN202310153413.2A patent/CN116156166A/en active Pending
- 2018-10-31 CN CN202310134934.3A patent/CN116156165A/en active Pending
Also Published As
Publication number | Publication date |
---|---|
WO2019088700A1 (en) | 2019-05-09 |
CN116156165A (en) | 2023-05-23 |
KR20200066638A (en) | 2020-06-10 |
CN111543054A (en) | 2020-08-14 |
CN116156166A (en) | 2023-05-23 |
KR102539068B1 (en) | 2023-06-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111543054B (en) | Image encoding method, image decoding method and device thereof | |
CN112514402B (en) | Method and apparatus for image encoding and method and apparatus for image decoding | |
KR102514436B1 (en) | Image encoding method and apparatus, and image decoding method and apparatus | |
KR102672759B1 (en) | Method and Apparatus for video encoding and Method and Apparatus for video decoding | |
KR102471075B1 (en) | Encoding method and apparatus therefor, and decoding method and apparatus therefor | |
CN110870309B (en) | Image encoding method and apparatus, and image decoding method and apparatus | |
KR102444295B1 (en) | Video encoding method and encoding device, and video decoding method and decoding device considering hardware design | |
CN111213377A (en) | Method and apparatus for video decoding and method and apparatus for video encoding | |
CN113574879A (en) | Image encoding method and apparatus, image decoding method and apparatus | |
CN112385219B (en) | Method and apparatus for image encoding and method and apparatus for image decoding | |
KR102606290B1 (en) | Method and apparatus for image encoding, and method and apparatus for image decoding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||